Cascon 2008, Richmond Hill, Ontario, Canada

This digest was created in real-time during the meeting, based on the speaker's presentation(s) and comments from the audience. The content should not be viewed as an official transcript of the meeting, but only as an interpretation by a single individual. Lapses, grammatical errors, and typing mistakes may not have been corrected. Questions about content should be directed to the originator. The digest has been made available for purposes of scholarship, posted on the Coevolving Innovations web site by David Ing.

Agenda

Introduction:  Dennis Smith

(SEI, Carnegie-Mellon University)

SOA has had a significant impact on software development

  • But no central compass for where SOA is moving; a lot tends to be vendor-driven, with their own sets of tools and approaches
  • Gap: there needs to be a clear focus on research and challenges, otherwise research will be overlooked

Research

  • Research agenda presents a taxonomy of issues
  • Based on an ideal lifecycle for service-oriented systems

Grace Lewis:  A Research Agenda for Service-Oriented Architecture

(SEI, Carnegie-Mellon University)

Have been developing this research agenda for two years

Patterns in case studies:

  • The clearer the alignment between business and technology, the better

A. SOA problem and solution space, three areas

  • 1. Domain area: e.g. health industry has created a lot of standards
  • 2. Context:  architecture, organization, personnel
  • 3. Business drivers: every organization has defined its own reasons for SOA

B. Planning space:  Service strategy

C. Solution space

  • Engineering
  • Business
  • Operations
  • Cross-cutting (the above 3)

Mapping between phases, activities and indicators

e.g. engineering research topic: architecture and design

  • Commentary:  SOA is being stretched beyond its limits
  • Initially it was asynchronous; now people want it to be faster, with higher security
  • Thus SOA needs to evolve to meet needs
  • Case studies:  a lot were for internal process improvement, which meant that all elements were under the control of a single organization
    • But when going across organizations, we'll have to talk about service usability

Within architecture, an example:  in context-aware SOA

  • Services can be selected based on a user's invocation context, requirements and profiles (see the sketch below)
  • Current efforts: there are a lot of service discovery mechanisms, but none allow location-based services, semantic discovery, adaptive services or dynamic composition
  • OASIS 2007 seems to be runtime
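
As a rough, hypothetical illustration of the idea (not from the presentation), a context-aware selection step might filter a registry's candidate services against the caller's location and profile before binding; the class and field names below are invented:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of context-aware service selection: from a list of
// candidate service descriptions, keep the ones that satisfy the caller's
// invocation context and pick the best remaining candidate.
public class ContextAwareSelector {

    /** A service description as it might appear in a registry (illustrative only). */
    public static class ServiceDescription {
        final String endpoint;
        final String region;        // where the service is hosted or advertised
        final double maxLatencyMs;  // latency the provider claims to meet

        ServiceDescription(String endpoint, String region, double maxLatencyMs) {
            this.endpoint = endpoint;
            this.region = region;
            this.maxLatencyMs = maxLatencyMs;
        }
    }

    /** The caller's invocation context: location plus a requirement from its profile. */
    public static class InvocationContext {
        final String region;
        final double requiredLatencyMs;

        InvocationContext(String region, double requiredLatencyMs) {
            this.region = region;
            this.requiredLatencyMs = requiredLatencyMs;
        }
    }

    public Optional<ServiceDescription> select(List<ServiceDescription> candidates,
                                               InvocationContext ctx) {
        return candidates.stream()
                .filter(s -> s.region.equals(ctx.region))               // location-based match
                .filter(s -> s.maxLatencyMs <= ctx.requiredLatencyMs)   // profile requirement
                .min(Comparator.comparingDouble(s -> s.maxLatencyMs));  // prefer the fastest
    }
}
```

Semantic discovery, adaptive services and dynamic composition would go well beyond this kind of attribute matching, which is where the challenges below come in.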

Example challenges and gaps for context awareness

  • What is context?  How is it best modeled and represented?

Another area to discuss:  Quality Assurance and Testing

  • e.g. system testing, which means end to end, but system components are distributed, and may not be available
  • Lots of tools, but they don't cover end to end, and they assume you have control (e.g. access to source code)
  • Some research into grey box testing, which is similar to this

Challenges and gaps

  • Could we provide a certification process?
  • How to specify test cases?
  • Alternatively, need to recognize that it's not possible to do end-to-end testing

Another example, in operations:  Monitoring

  • Service-oriented systems are diverse and distributed by nature
  • Lots of communities are doing research here:  from BPM community, autonomic community, self-healing systems ...
  • With third party markets, SLAs will become more important, but how can we monitor them?
  • How can service-oriented systems adapt at runtime?

Cross-cutting topic:  training and engineering

  • What's the link between service science and SOA?
  • Service science applies more in the business sense
  • Educational programs in many countries:  is it the next cool program?
  • Research manifesto

Challenge and gaps:

  • How does it fit into the curriculum?  Or do we need a new cross-disciplinary program?
  • Are there some things that we can extract from service science to use in SOA?

Conclusions:

  • Challenges for SOA for use in "advanced" ways:  semantics, dynamic discovery and composition, real time applications
  • Need support for business side, as third-party services
  • e.g. seec.com, insurance and financial services
  • Also need some more non-vendor-sponsored surveys, based on users using SOA
  • e.g. what does a SOA governance document look like?
  • Needs to be more collaborative work beyond industry and academia; they don't just go book buying and making reservations

[Questions]

Difference between SOA and enterprise architecture?

  • EA community sees SOA as a way to implement EA
  • Community works together, e.g. FEA and SOA community work together

SOA 2.0, as SOA on the web, or distributed?

  • SOA 2.0 is more event-driven, whereas Web 2.0 is more about users
  • SOA is being pushed beyond what it was created for

Web 2.0 abandoned standards, e.g. Google Maps defines its own ways, maybe better than before.  Will we have the same problem with SOA 2.0?

  • Yes, probably
  • Different vendors will go different directions, to improve performance, or improve reliability, etc.

In 10 years there will be another standard; as academics we should focus on the invariants

  • SOA concepts will stay constant, but the technologies will change
  • Web services-based versus CORBA (e.g. Credit Suisse?)

Lots of WS-* standards; they form parts of a technology on which SOA can live, but they're not SOA ... working with them leads out of the WS stack

Interoperability


Hausi Muller, Runtime Monitoring of Service-Oriented Systems: Implications for Maintenance and Evolution

(University of Victoria)

Coming from the autonomic computing side, monitor a lot of things, analyze, change the system

Have so many cycles on boxes not being used; since we have spare cycles, we should do something with them, and runtime monitoring is something we can do

  • Counter to green community:  the more we monitor, the more data, the more power we use
  • Believe that we need to do more monitoring in all computer systems, not just monitor

Steve Mills IBM white paper, June 2007:  The Rise of the Dynamic Value Set

  • Web services mean we can put service providers together and orchestrate value sets
  • This will require more dynamic monitoring
  • Governance will come into play

IBM Service Integration Maturity Model (SIMM)

Need to adapt:  both anticipated and unanticipated

  • Anticipated adaptation:  run-time contexts known at design-time
  • For unanticipated adaptation, can recognize and compute at run time
  • Purely unanticipated self-adaptive systems are rare

Need a new way to think about systems:

  • In SIMM, ...
  • ...towards the left, more architecture-centric design and views;
  • ...towards the right, need more control-centric design and views for SOA
  • An associated move from satisfying requirements (architecture-centric) towards regulating requirements (control-centric)

This change can't start from scratch, need to define and instrument measurements

Lewis and Smith ICSM FoSM 2008:  Governance is the main inhibitor to SOA adoption

  • Have (a) design time governance, and (b) run time governance
  • Need to instrument the standards and the systems

Feedback loops are at the heart of dynamical systems, ubiquitous in natural and engineered systems, but not so much in computer systems

What can we do to monitor dynamical SOA systems?

  • For run-time assurance, move testing to run time
  • Need to specify requirements as the environment evolves (a minimal monitoring-loop sketch follows)
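
A minimal sketch of the kind of run-time feedback loop being described, assuming a simple monitor-analyze-adapt cycle over one SLA metric; the class names and the adaptation action are hypothetical:

```java
import java.util.function.DoubleSupplier;

// Hypothetical feedback loop for a service-oriented system: observe a
// service-level metric, compare it against an SLA threshold, and trigger
// an adaptation (e.g. add a replica, reroute requests) when violated.
public class SlaFeedbackLoop implements Runnable {

    private final DoubleSupplier responseTimeMs;  // probe: measured response time
    private final double slaThresholdMs;          // agreed service level
    private final Runnable adaptation;            // effector: the adaptation action

    public SlaFeedbackLoop(DoubleSupplier responseTimeMs,
                           double slaThresholdMs,
                           Runnable adaptation) {
        this.responseTimeMs = responseTimeMs;
        this.slaThresholdMs = slaThresholdMs;
        this.adaptation = adaptation;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            double observed = responseTimeMs.getAsDouble();  // monitor
            if (observed > slaThresholdMs) {                 // analyze
                adaptation.run();                            // plan + execute
            }
            try {
                Thread.sleep(1_000);                         // sampling period
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

Making a loop like this explicit, rather than hiding it inside a component, is the kind of control-centric design the talk argues for.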

Research issues

  • How different are maintainability concerns for self-adaptive SOA systems, compared to static, non-adaptive SOA systems?

Research challenges:

  • Model construction:  for every feedback loop, have a model; for every requirement, have a model; have had lots of models for performance
  • Need to learn how to manage and leverage uncertainty
  • The more dynamic the systems, the greater the need to make control loops explicit:  less information-hiding, packaging

Conclusions:

  • Traditional value chains give rise to interconnected dynamic value nets
  • Need to push monitoring to unprecedented levels
  • Move from architecture-centric to control-centric SOA orchestration
  • Distinguish between anticipated and unanticipated control
  • Need dynamic modeling and lots of instrumentation

[Questions]

How far from unanticipated systems?

  • We're there
  • SLAs higher for premium customers


David Ing, SSMED and SOA: Service Science, Management, Engineering and Design and Service Oriented Architecture

[presentation posted at http://coevolving.com/commons/20081030_Cascon_Ing_SSME_SOA ]

Chris Brealey, Challenges for Service Component Architecture (SCA) as an Option for Service-Oriented Systems Development

(IBM)

SOA logical architecture model

  • Interaction services, hit the web, or sensors and actuators
  • Processes
  • Information
  • Partner services, third party
  • Business application services: EJBs, POJOs (plain old Java objects)
  • Access services (to old legacy)

SCA and programming models:

Leads to thinking about assembly, but there's more

  • Have to deploy, run, model

What are we looking for in a good programming model?

  • At the foundations, open standards, e.g. W3C, OASIS, OMG, IETF
  • Next up, complementary architectures:  can't displace Web 2.0 or event-oriented systems; have to engage with them
  • Transports and protocols:  any respectable SOA programming model has to support a wide variety of technologies
  • Need a separation between business concerns and qualities of service:  security, reliability, transaction and identity
  • Formalize business services:  should have a thing called a service (am working on a service versioning task force, on runtime), including service consumers, a business message (with asynchrony), and a service provider
  • Culture:  SOA won't change culture, but does need to enable loose coupling, asset reuse, ease of governance

Open SCA:

  • Service Component Architecture is a spec, where SOA is a style, working through OASIS
  • Alive and well in WebSphere products; "classic SCA" was proprietary
  • Open SOA collaboration started in 2004, working on specs
  • At the end of 2005, version 0.9 was published
  • In 2007, Open SCA 1.0 was submitted to OASIS
  • At the same time, the Apache Foundation started Tuscany; it's a runtime that also forms the core of the WebSphere SOA runtime
  • Now focusing on development tools

Core concepts of what SCA is

  • A service has an interface, could be WSDL or Java to describe the business things it provides
  • Also specifies the languages on which it can operate:  binding
  • Can bind intent (some quality of service expressed abstractly, e.g. reliability), policy (security, normally not configured by developers, e.g. the gory details of public and private keys) and policy set
  • Component configures an implementation into a service:  a choice of implementation (e.g. in Java or BPEL or ...)

Can then compose and wire components into a composite

  • Could be recursive ... and have similar intents, policies and policy sets (see the sketch below)
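
A minimal sketch of these concepts in Java, assuming the OSOA SCA annotations used by Tuscany-era implementations; QuoteService, RiskService and QuoteComponent are invented names, and the bindings, intents and policy sets would normally be declared in the accompanying .composite file rather than in the code:

```java
import org.osoa.sca.annotations.Reference;
import org.osoa.sca.annotations.Service;

// Hypothetical service interface: describes the business operations offered.
// In SCA this interface could equally be described in WSDL instead of Java.
interface QuoteService {
    double quote(String customerId, double amount);
}

// Hypothetical downstream service this component depends on.
interface RiskService {
    double riskFactor(String customerId);
}

// The component: a Java implementation configured to provide QuoteService.
// Wiring it to a RiskService provider, and composing it with other
// components into a (possibly recursive) composite, happens in the
// composite descriptor, not in the business logic.
@Service(QuoteService.class)
public class QuoteComponent implements QuoteService {

    @Reference  // satisfied by whatever the composite wires in
    protected RiskService riskService;

    public double quote(String customerId, double amount) {
        return amount * riskService.riskFactor(customerId);
    }
}
```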

Have a formal definition of ways to do callbacks: just defining who you're calling, and how they call you back, without impacting your business logic

Challenges:

  • Batch:  as important today as ever, but there's no magic 2 a.m. to 6 a.m. window when you could lock out users, because someone in Thailand will want access
    • May need to break up, refactor
    • Might want to refactor as an invokable system for SOA
    • Batch on services, versus services on batch
  • SCA could provide a useful programming model, where one could inject events
  • Versioning:  when a consumer requests a change, what's the governance process?  Can others vote on changes?  Do they replace the existing service?  Do they sunset old services?  How to establish a trace back to requirements?

[Questions]

What document could be used to explain this in a university second-year course?

  • Specs, about an inch thick, wouldn't suggest reading it over a weekend
  • Could start with the David Chappell article, 25 pages

A lot of the challenge in versioning is social, not technical

[break]


Scott Tilley, Towards a Soft Solution to Hard Problems in SOA Testing

(Florida Institute of Technology)

Also addresses autonomic computing (cycles not being used), and cloud computing

Motivation:  describe SOA testing agenda, in the broader SOA agenda

  • Testing isn't as well addressed as design or software engineering
  • A lot of unknowns

Will focus on one area researched, regression testing, with example in HadoopUnit

  • Attempting to offer solutions, not just problems

SOA is evolutionary development

  • Can look to control theory, outsourcing
  • Thus, people do need to know a lot, but it is evolutionary

What part of a multi-layered system are we testing?

  • Even unit testing needs a better description
  • Think of testing as a vertical slice, but the topology will change from situation to situation

3 broad areas:

  • Testing governance: enforcement and management of policy and procedures, SLAs
  • Testing underlying technologies:  e.g. CORBA to implement a SOA-based system
  • Gap analysis:  Applying traditional testing techniques, which applicable and which need to change
    • e.g. black box testing (service consumer, service provider) is different from white box testing (are you looking at the WSDL document, or traffic, or the SLA?)
    • e.g. unit testing:  is the unit a single service, or a composite of services, or a coarse-grained service, or (from software engineering) the implementation of one or more services?

Regression testing:  difficult, important

  • If agile, test-driven development, there is constant shifting between unit testing and development
  • Regression testing is running a lot of unit tests to make sure that changes haven't broken anything else
  • Online versus batch is discussed, because they may run overnight (and take a long time)
  • Thus, a lot of work on reducing the number of tests:  optimization, but don't want to over-prune
  • Can we reduce the execution time instead of reducing the number of test cases?  Suppose we had cloud computing?

HadoopUnit:  Distributed execution framework for JUnit test cases

  • Hadoop provides data processing in a distributed manner
  • Goal: regression testing, not in batch mode
  • Want continuous regression testing in the cloud / cluster / grid

Hadoop:  "bring the computation to the data", to manage terabytes of data

  • Replicate copies of the processing to the data
  • Students will only work on things that are easy and free:  commodity hardware, free operating system, open source
  • Frees programmer from having to know about distributed computing
  • Hierarchical file system, split and scattered over data store
  • Data blocks are replicated to several nodes
  • Map/Reduce algorithm by Google in 2004
  • It was used for indexing and crawling; we're applying it to testing

HadoopUnit

  • Programmer only has to write the map and reduce (see the sketch below)

Used HadoopUnit to test Hadoop itself
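
A hypothetical sketch of the idea (not the actual HadoopUnit source): a Hadoop mapper whose input records are JUnit test class names; each map call runs one class with JUnitCore on whichever node the task is scheduled, and emits a pass/fail summary that a trivial reducer (or none at all) can collect:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

// Hypothetical "run a JUnit test class per input record" mapper, in the
// spirit of HadoopUnit: the map phase spreads test execution across the
// cluster, so total regression-test time shrinks without pruning tests.
public class TestRunnerMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable offset, Text testClassName, Context context)
            throws IOException, InterruptedException {
        String name = testClassName.toString().trim();
        try {
            // The test classes (and the code under test) must be on the task classpath.
            Class<?> testClass = Class.forName(name);
            Result result = JUnitCore.runClasses(testClass);
            String summary = result.wasSuccessful()
                    ? "PASS (" + result.getRunCount() + " tests)"
                    : "FAIL (" + result.getFailureCount() + " failures)";
            context.write(new Text(name), new Text(summary));
        } catch (ClassNotFoundException e) {
            context.write(new Text(name), new Text("ERROR: class not found"));
        }
    }
}
```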

Issues

What if we don't have Hadoop?

  • Maybe use some P2P setup, or use a commercial offering like Amazon EC2
  • No guidelines on how to use map/reduce and set it up


Kostas Kontogiannis, In-Context Challenges and Research Topics in SOA

(U. Waterloo)

Conflicting reports:  by 2011, 40% of companies will have SOA, yet survey reports show a 5% decline over the last year

  • Too much to be done, too few people to do it

Solution target

  • Should aim for a simplified model for development of business services (less interested in developing new ones, more interested in composing them)
  • Assembly and deployment of solutions built as networks of services
  • Increased agility and flexibility
  • Protection of business logic assets by shielding them from low-level technology changes
  • Improve testability, stability

Challenges: start from the SOA logical architecture model

Key research challenges:

  • SOA programming model
  • Role of events in SOA, i.e. event-driven
  • Runtime infrastructure and system management
  • Metadata and semantics
  • UI and human oriented SOA: variety of clients and devices
  • SOA tooling for modeling, design, testing, maintenance:  RSA goes the right way by supporting plugins

SOA programming models have already been discussed

  • It's a collection of models, techniques and methodologies for implementing services and assembling them into solutions
  • Simplify abstractions, so that non-technical people can interact
  • Abstractions for service composition languages:  we have BPEL, but how many people can write BPEL?
  • Business analysis and simulation tools:  before deploying the solution, what can we find out about how it will work?

Start with SCA models and implementations

  • Leads to domain-specific languages
  • Generate annotated code
  • Configures to platform-specific application and infrastructure code
  • Links to or assembles the infrastructure to perform SOA management

Programming model issues:

  • Definition of a service, have done a lot of work, more to be done

Tooling

Open Discussion

What's after SOA?

  • SEI has a report on systems of systems



2008/10/30 Cascon Workshop on SOA Research Challenges: Current Progress and Future Challenges