Why SOA and MDM didn't go together
Friday, December 31, 2010
Why SOA and MDM didn't go together
Monday, December 20, 2010
- When does Moore's Law go away?
- When is it really a cloud?
- What decides where a service is deployed?
- Why can't it be deployed to my phone?
Monday, December 13, 2010
MDM provides 3 core facilities
- Cross-referencing of core entities between systems
- Standardisation around the critical "matching" attributes
- Synchronisation of attributes modified within multiple systems
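A minimal sketch of the first facility, assuming nothing about any particular product (all names here are mine, not any vendor's API): the cross-reference is essentially a map from each system's local key to one master identifier.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of an MDM cross-reference: each source system's
// local identifier for a core entity maps to a single master identifier.
public class CustomerXref {

    // (system, localId) -> masterId
    private final Map<String, String> xref = new HashMap<>();

    private static String key(String system, String localId) {
        return system + ":" + localId;
    }

    // Record that a system's local record is the same real-world entity
    public void link(String system, String localId, String masterId) {
        xref.put(key(system, localId), masterId);
    }

    // Resolve a local record to its master identifier, or null if unmatched
    public String masterIdFor(String system, String localId) {
        return xref.get(key(system, localId));
    }
}
```

With that index in place the other two facilities, standardising the matching attributes and synchronising changes, operate against the master identifier rather than any single system's key.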
Friday, December 10, 2010
I disagree; the stagnation of Java and its issues very much started under Sun, as the JavaSE 6 debacle showed. The mistake that Oracle have actually made is leaving the same mentality and people in charge of Java rather than actually looking to refresh the leadership and focus it more on the Java market rather than an internal view of what that market should be.
So I don't blame Oracle for this debacle in the same way as I don't blame Oracle for putting JAX-WS into JavaSE or the massive amount of time that JavaSE 7 has taken. The reality is that Java lost its direction and started chasing "Joe-sixpack" and while Sun paid lip-service to Open Source they actually meant "their" open source when it came to Java rather than opening up to Apache.
As someone who championed, and still champions, Java as an environment it has been sad to see how intellectually stunted Java has become in the last 5 years and how myopic its leadership has been. That leadership appears to have made it through the acquisition pretty much unscathed and the attitudes have if anything become more hardline and more myopic due to the protection of a larger parent company.
Java needs new leadership; the current fiasco and the comments on the votes show that the current Java leadership in Oracle has the same problems of consensus building and intellectual direction as it had 5 years ago. Oracle has some fantastic intellectuals and some great leaders who can build consensus in the Java community, but the bravest thing for them to do now would be to open up the door and appoint a leadership team from outside, potentially one that includes real representation from the major players and industry.
Oracle aren't the problem, they've just inherited the problem child and let the bad behaviour continue
Monday, November 15, 2010
- Can your IT estate be described as a series of discrete elements?
- Can each of these elements be easily maintained within their business context?
- Can each of these elements be simply described?
- Long term evolution over short term expediency
- Architectural clarity over coding efficiency
- Business strategy over IT strategy
- Drive IT costs in-line with their business value
- Drive IT technology selection in-line with the business value
- Create clear upgrade boundaries between different business value areas
- Manage IT based on the different business value areas
Friday, October 29, 2010
Well, let's compare
Windows 7 Mobile
The difference remains: Apple have the confidence to show you people actually using their phone; Microsoft have the confidence not to show you people actually using the phone, but doing other stuff to pretend that their phone is cool.
If people can't even fake an advert to make a phone look useable, what does that say about the devices themselves?
Sunday, October 17, 2010
People screw up MDM programmes by forgetting what MDM actually is. The first bit is that people look at the various different MDM packages and then really miss the point. Whether it's Oracle UCM, Informatica, SAP MDM, IBM MDM, Initiate, TIBCO or anything else, people look at it and go...
"Right, so this is what we do, what else can we fit into it?"
Now this is a stupid way to behave but it's what most people do. They use the MDM piece as the starting point. The reality is that MDM is only about two things
- The cross references of the core entity between all the systems in the IT estate
- The data quality and governance around the core entity to uplift its business value
- Things - assets, accounts, parts, products, etc
- Parties - individuals, organisations, customers, suppliers, etc
- Locations - postal addresses, email address, web addresses, physical address, geo locations, etc
- Parties to Things
- Parties to Locations
- Things to Locations
- (Parties to Things) to Locations (e.g. a person's specific account address)
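The model above can be sketched as types, with the relationships as first-class entities in their own right (type names are illustrative, not from any MDM product):

```java
// Illustrative sketch of the core entity model: three root types, with
// the relationships as entities themselves so that "(Parties to Things)
// to Locations" can be expressed directly.
public class MdmModel {
    public interface Party {}      // individuals, organisations, customers, suppliers
    public interface Thing {}      // assets, accounts, parts, products
    public interface Location {}   // postal, email, web, physical, geo

    // "Parties to Things" is an entity in its own right...
    public record PartyThing(Party party, Thing thing) {}

    // ...so the address on one person's specific account is simply
    // "(Parties to Things) to Locations"
    public record PartyThingLocation(PartyThing ownership, Location location) {}
}
```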
Thursday, October 14, 2010
Having the "services" clean though is pointless if what you have under the covers is just the same old crud with some REST or WS-* lipstick on top; you actually have to have an implementation that is clean all the way down or you are still screwed.
The BSB Specification was based around that principle of doing one thing well, and the whole point of the DSB/BSB split is to keep it simple.
This then becomes the real issue: it's actually really hard to architect and deliver simply. In the MDM space for instance you see MDM solutions that morph into MDM + ODS + Reference Data Management solutions. "Clean" ERP installations are destroyed by customisation and the Java solution gets some crufty bolt-ons because "it was easier to do it there". The delivery builds the blob with lipstick on it and suddenly we are no better off.
Why does this happen? Well, more and more I believe it's because the SIMPLE pictures that describe a business architecture are either not drawn at all or are abandoned because of their simplicity. People, architects especially, don't like putting in place the rigour and control that is required to deliver a simple solution; it's much easier to deliver a blob and let people cope with it in support. Simplicity isn't a valued commodity because it doesn't allow people to show off their understanding of complexity.
"Je n'ai fait celle-ci plus longue que parce que je n'ai pas eu le loisir de la faire plus courte.
--I have only made this letter rather long because I have not had time to make it shorter."
Pascal, Lettres provinciales, 16, Dec. 14, 1656. Cassell's Book of Quotations, London, 1912, p. 718.
Simplicity takes time and effort and the end result is much more satisfying, easier to explain, easier to maintain and easier to use. Most people however take the easy route to complexity.
Monday, September 06, 2010
So you carry around your movies, in conjunction with your Apple cloud service, and thus it is all on demand as you travel around your life. Get in the car and the kids can get it, get home and you can use the TV. Go to a friend's house and it is all sorted.
Seriously, is it that hard to spot? Apple TV isn't a product, it's just a dongle that can be integrated into TVs based on the iPod, iPhone and iPad dominance. AirPlay is the real thing; Apple aren't aiming at the set top box market or even the movie rental market, they want the last metre between you and the display.
Saturday, July 03, 2010
Why is MDM important for SOA? Well there are a couple of reasons.
- MDM stops you having to do Single Canonical Form
- MDM helps you start from a point of federation
- Digital Landfill MDM
- Operational MDM
- Federated MDM
Monday, June 28, 2010
1) They ran out of stock
So I pointed out that they'd accepted the order from me, given me a fixed delivery date (the order went in at 2am on the Thursday and was confirmed at 9:05am). This seemed rather bizarre that they'd run out of stock that quickly
That is when it got a lot more interesting.
Then I was told that
2) "The phones have all been recalled as they've got an antenna problem and they keep crashing"
I pushed on this just to check and it was confirmed that the reason Orange don't have any stock is because there has been a recall due to the antenna issue. The call centre drone said that the iPhone antenna issue was one thing and also that the phones kept on crashing.
I asked why, if it was a result of a recall why they hadn't emailed me about it, the reason was that
3) Their system was so overloaded that it couldn't handle the volumes.
I pointed out that they regularly seem to spam in pretty large volumes but apparently the iPhone is much higher in volume.
There was no hint of an apology and the stock line was "7-10 days or a full refund".
So either it's straight Orange incompetence with a rubbish excuse or there are some major league iPhone 4 issues.
Friday, June 25, 2010
One area that I've seen consistently as a problem over the years though is down to how the package vendors have thought about physical and electronic addresses. When the packages were created there was really only one set of important addresses: physical addresses. Phone numbers were pretty much fixed to those premises and email was a million miles away from the mind. This means that the data models tend to look at electronic addresses as very much second class citizens, normally as some form of child table rather than as a core entity.
The trouble is that as packages are being updated I'm seeing this same mistake being made again with some of the new technology models being used by vendors (AIA from Oracle appears to make the mistake). The reality is that the model is pretty simple
That really is it. There are two key points here
- Treat all actors as a single root type (Party) then hang other types off that one
- Do the same for Locations
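A minimal sketch of those two points (type names are illustrative, not any vendor's model): one Party root for every actor, and electronic addresses as Locations in their own right rather than child tables of a postal address.

```java
// Illustrative model: all actors hang off a single Party root, and all
// address types hang off a single Location root.
public class PartyModel {
    public abstract static class Party {}
    public static class Individual extends Party {}
    public static class Organisation extends Party {}

    public abstract static class Location {}
    public static class PostalAddress extends Location {}
    public static class EmailAddress extends Location {}  // first class, not a child table

    // Any Party can be linked to any Location, whatever the subtypes
    public record PartyLocation(Party party, Location location) {}
}
```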
Monday, June 21, 2010
What stopped us shifting? Well, a little bit of compliance, which we might have overcome, but the big stopper was the tin-huggers.
Tin-huggers are people who live by the old adage "I don't understand the software, I don't understand the hardware but I can see the flashing lights" which I've commented on before.
Tin-huggers love their tin, they love the network switches, they love the CPU counts and worrying about "shared", "dedicated", "virtualised" and all of those things. They love having to manually upgrade memory and having to select storage months or years in advance. Above all of these things they love the idea that there is a corner of some data centre that they could take their tin-hugging mates into and point and say "that is my stuff".
Tin-huggers hate clouds because they don't know where the data centre is and their tin-hugger mates would laugh at them and say "HA! Google/Amazon/Microsoft/etc own that tin, you've just got some software". This makes the tin-hugger sad and so the tin-hugger will do anything they can to avoid the cloud. This means they'll play the FUDmeister card to the max and in this they have a real card to play...
Tin-huggers are the only ones who work in hardware infrastructure design, software people couldn't give a stuff.
This means its all tin-huggers making the infrastructure decisions, so guess what? Cloud is out.
Tin-huggers are yet another retarding force on IT. Sometimes the software folks can get it out and work with the business but too often the tin-hugging FUDmeistering is enough to scare the business back into its box.
It's time to build a nice traditional bypass right through the tin and into the cloud and let the tin-huggers protest from their racks as we demolish them from underneath their feet.
Sunday, June 20, 2010
- Art v Engineering is still the big problem - 90%+ of people in IT aren't brilliant, a large proportion aren't even good
- Contracts really matter - without them everything becomes tightly bound no matter what people claim about "dynamism"
- No technology has ever, or will ever, deliver a magnitude increase in performance
- The hardest part of IT is integrating systems and services, not integrating people. People are good at context shifting and vagueness; good interfaces are fantastic optimisers but even average user interfaces can be worked around by users.
- It's as hard to do a large scale programme with a large integration bent as it was 5 years ago.
- There are fewer really good enterprise-qualified developers; they've got "dynamic" language experience and struggle, or worse bring the dynamic language in and everyone else struggles
- Vendors have been left to their own devices which has meant less innovation, less challenge and higher costs as the Open Source crowd waste time on pet projects that aren't going to dent enterprise budgets
- Define your interfaces, nail them down, get them right
- Test like a bastard to those interfaces
- Most people in IT appear to know bugger all about it. Now I continue to be surprised at how little people who work in IT read about what is going on, but I was really surprised at how little traction REST had.
- EVERYTHING is manual and there is still no standardised way to communicate on WTF you are doing between teams
- Detail the Business Services - including the "nearby" Business Services that aren't in your scope - this tells people at a high level what you are, and aren't, doing
- Create a "Business Catalogue" that details the fine detailed capabilities that the programme will be delivering.
- Map the catalogue to the Out of the Box (OOTB) functionalities in the package
- Create a strong governance approach around managing changes to the catalogue
- Document the capabilities using use cases. This gives you the explicit definition of scope that you need the enterprise to accept. These use cases are documented based on what the package does OOTB, rather than being requirement gathering exercises
Saturday, April 24, 2010
First off some ground rules, what I mean by this is that if you are in a key review position then you should be setting the expectations on what you consider to be good. So before people even start creating the stuff you are going to review spend 5 minutes with them just giving them some context on what you are looking for. This might be as simple as outlining where their piece fits into the broader picture or just making sure they have the right clarity on how they should be structuring what they have been set to do. This initial piece will save you a huge amount of pain later on.
So now when you get to the actual review you should at least be talking more about the content than wasting time telling someone that they've not created it properly and have to do some major rework.
So on into the review. I'm assuming here that you don't use design/requirement/code reviews to bollock people, as that would be completely counter-productive. If there are big issues pull them aside 1-on-1 later and have the discussion. So, that said, how do you get people to learn from their mistakes?
The key here is language. There are some great phrases and some bad phrases. Let's say that someone has written something down that just isn't clear; you can say
a) "This just isn't clear what you are trying to say"
b) "I'm confused around this bit, could you explain what you mean?"
Now the former says "crap work", the second says "it's probably me but let's just check". 9 times out of 10 they'll explain in detail what they mean and you can say the magic words
"Great, now I understand it, you might like to write out what you've just said so no-one else gets confused"
Now let's say they've got something plain wrong. You can say either
a) "That's just wrong"
b) "Umm what would the implications be if we do this?"
Then with b) you go into a discussion where you challenge them with points like "I see, but wouldn't X apply here?". This way you get to find out if it's a mistake or they are actually a bit thick. If you do the former then you'll never get to know.
Now let's say there is an area where you realise that something you've done isn't clear and the person you are reviewing would benefit if it was clarified (for instance there is a diagram missing which would help explain their area). This is where you get to make the reviewee feel really good AND get work off your plate. The point here is to say something like
"I've just realised that I really should have created a diagram about Y by now as that would help you explain this area. I tell you what could you have a go at creating it and then we'll make sure that everyone sees it once we've got it right"
Here if you are a senior reviewer you are not only helping the person, and getting work away from yourself, you are really making the reviewee want to demonstrate that they can do a good job. That is the main aim with reviews. Catch the errors, help people improve and keep up morale. Kicking people in reviews for errors just doesn't make sense.
Pull the problems onto yourself, have the reviewee explain them and hopefully (if they aren't a muppet) they'll come to the right answer themselves, they'll think you are a great coach and they'll want to work harder for you.
The same does not apply to managers when reviewing project plans that are rubbish, they must be beaten about the head with a stick.
Tuesday, April 06, 2010
Okay, so I talked about Anti-Principles, so now I thought I'd talk about the final thing I like to list out in the principles sections of the projects I do: the non-principles. This might sound like an odd concept but it's one that has really paid dividends for me over the years. While principles say what you should do and anti-principles say what you shouldn't, the non-principles have a really powerful role.
A non-principle is something that you don't give a stuff about. You are explicitly declaring that it's not of importance or consideration when you are making a decision.
While you can evaluate something against a principle to see if it is good, or against an anti-principle to see if it is bad, the objective of the non-principles is to make clear the things that shouldn't be evaluated against at all. In Freakonomics Steven Levitt talks about "received wisdom" and how it's often wrong. I like to list out pieces in the non-principles that are those pieces of received wisdom and detail why they aren't in fact relevant.
Scenario 1 - Performance isn't the issue
A while ago I worked on a system where they were looking at making changes to an existing system. A mantra I kept hearing was that performance was a big problem. People were saying that the system was slow and that any new approach would have to be much quicker. So I went hunting for the raw data. The reality was that the current process, which consisted of about 8 stages and one call to a 3rd party system, was actually pretty quick. The automated testing could run through the process in about 6 seconds, with 5 of those being taken up by the 3rd party system (which couldn't be changed); in other words the system itself was firing back pages in around 125 milliseconds, which is lightning quick.
So in this case a core non-principle was performance optimisation. The non-principle was
"Any new approach will not consider performance optimisation as a key criteria"
This isn't an anti-principle as clearly building performant systems is a good thing but for our programme it was a non-principle as our SLA allowed us to respond in 3 seconds per page (excluding that pesky 3rd party call) so things that improved other core metrics (maintainability, cost of delivery, speed of delivery, etc) and sacrificed a little performance were okay.
Scenario 2 - Data Quality isn't important
The next programme was one that was looking to create a new master for product and sales information, this information was widely seen as being of a very poor quality in the organisation and there was a long term ambition to make it better. The first step however was to create the "master index" that cross referenced all the information in the existing systems so unified reports could be created in the global data warehouse.
Again everyone muttered on about data quality and in this case they were spot on. The final result of the programme was to indicate some serious gaps in the financial reporting. However at this first phase I ruled out data quality as being a focus. The reason for this was that it was impossible to start accurately attacking the data quality problems until we had a unified view of the scale of the problem. This required the master index to be created. The master index was the thing that indicated that a given product from one system was the same as one sold from another and that the customer in one area was the same as the customer in another. Once we had that master index we could then start cleaning up the information from a global perspective rather than messing around at a local level and potentially creating bigger problems.
So the non-principle for phase 1 was
"Data Quality improvement will not be considered as an objective in Phase 1, pre-existing data issues will be left as is and replicated into the phase 1 system"
This non-principle actually turned out to be a major boon as not only did it reduce the work required in Phase 1 it meant that the reports that could be done at the end of phase 1 really indicated the global scale of the problem and were already able to highlight discrepancies. Had we done some clean up during the process it wouldn't have been possible to prove that it wasn't a partial clean-up that was causing the issues.
Scenario 3 - Business Change is not an issue
The final example I'll use here is a package delivery programme. The principles talked about delivering a vanilla package solution while the anti-principles talked of the pain of customisation. The non-principle outlined however a real underpinning philosophy of the programme. We knew that business change was required, hell we'd set out on the journey saying that we were going to do it. Therefore we had a key non-principle
"Existing processes will not be considered when looking at future implementation"
Now this might sound harsh and arrogant but this is exactly what made the delivery a success. The company had recognised that they were buying a package because it was a non-differentiating area and that doing the leading practice from the package was where they wanted to get to. This made the current state of the processes irrelevant for the programme and made business change a key deliverable. This didn't however mean that business change was something we should consider when looking at process design. We knew that there had to be change, the board had signed off on that change and we were damned well going to deliver that change.
This non-principle helped to get the package solution out in a very small timeframe and made sure that upgrades and future extensions would be simple. It also made sure that everyone was focusing on delivering the change and not bleating about how things were done today.
So the point here is that the non-principles are very context specific and are really about documenting the received wisdom that is wrong from the programme perspective. The non-principles are the things that will save you time by cutting short debates and removing pointless meetings (for instance in Scenario 1 a whole stream of work was shut down because performance was downgraded in importance). Non-principles clearly state what you will ignore; they don't say what is good or bad because you shouldn't measure against them (e.g. in Scenario 3 it turned out that one of the package processes was identical to an existing process, this was a happy coincidence and not a reason to deliberately modify the package).
So when you are looking at your programme remember to document all three types of principles
- The Principles - what good looks like and is measured against
- The Anti-Principles - what bad looks like and is measured against
- The non-principles - what you really couldn't give a stuff about
All three have power and value and missing one of them out will cause you pain.
Friday, February 19, 2010
The problem is that there is another concept that is rarely listed, what are your anti-principles?
In the same way as Anti-Patterns give you pointers when it's all gone wrong, Anti-Principles are the things that you will actively aim to avoid during the programme.
So in an SOA programme people will fire up principles of "Loose Coupling", "Clear Interfaces" and the like but they often won't list the Anti-Principles. These are often more important than the Principles. These are the things that indicate danger and disaster.
So what are good (bad?) SOA anti-principles?
Thursday, February 04, 2010
To my mind that viewpoint is just like the fake-Agile people who don't document because they can't be arsed, rather than because they've developed hugely high quality elements that are self-documenting. It's basically saying that everyone has to wait until the system is operable before you can say what it does. This is the equivalent of changing the requirements to fit the implementation.
Now I'm not saying that requirements don't change, and I'm not advocating waterfall, what I'm saying is that as a proportion of time allocated in an SOA programme the majority of the specification and design time should be focused on the contracts and interactions between services and the minority of time focused around the design of how those services meet those contracts. There are several reasons for this
- Others rely on the contracts, not the design. The cost of getting these wrong is exponential based on the number of providers. With the contracts in place and correct then people can develop independently which significantly speeds up delivery times and decreases risk
- Testing is based around the contracts not the design. The contract is the formal specification, its what the design has to meet and its this that should be used for all forms of testing
- The design can change but still stay within the contract - this was the point of the last post
As people rush into design and deliberately choose approaches that let them do as little as possible to formally separate areas, enable concurrent development and provide contractual guarantees, they are just creating problems for themselves that professionals should avoid.
Contracts matter, designs are temporary.
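That split can be put in code terms: the contract test below depends only on the interface, so the design behind it can be replaced without consumers, or their tests, changing. The calculator and its numbers are purely illustrative.

```java
// Sketch of "test to the contract, not the design".
public class ContractFirst {

    // The contract that consumers and tests depend on: 20% tax, rounded down
    public interface TaxCalculator {
        long taxInPence(long netInPence);
    }

    // One design that meets the contract...
    public static class SimpleCalculator implements TaxCalculator {
        public long taxInPence(long netInPence) { return netInPence / 5; }
    }

    // ...and a later redesign: the contract, and so the tests, are unchanged
    public static class TableDrivenCalculator implements TaxCalculator {
        public long taxInPence(long netInPence) { return (netInPence * 20) / 100; }
    }

    // The contract test runs against any implementation
    public static boolean meetsContract(TaxCalculator calc) {
        return calc.taxInPence(1000) == 200 && calc.taxInPence(0) == 0;
    }
}
```

Swapping SimpleCalculator for TableDrivenCalculator is a design change that stays within the contract, which is exactly why the cost of getting the contract wrong dwarfs the cost of getting the design wrong.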
SOA is often talked about as helping this evolutionary approach as services are easier to change. But is the reality that actually IT is hindered by this myth of evolution? Should we reject evolution and instead take up arms with the Intelligent design mob?
I say yes, and what made me think that was reading from Richard Dawkins in The Greatest Show on Earth: The Evidence for Evolution where he points out that quite simply evolution is rubbish at creating decently engineered solutions
When we look at animals from the outside, we are overwhelmingly impressed by the elegant illusion of design. A browsing giraffe, a soaring albatross, a diving swift, a swooping falcon, a leafy sea dragon invisible among the seaweed [....] - the illusion of design makes so much intuitive sense that it becomes a positive critical effort to put critical thinking into gear and overcome the seductions of naive intuition. That's when we look at animals from the outside. When we look inside, the impression is the opposite. Admittedly, an impression of elegant design is conveyed by simplified diagrams in textbooks, neatly laid out and colour-coded like an engineer's blueprint. But the reality that hits you when you see an animal opened up on a dissecting table is very different [....] a haphazard mess is what we actually see when we open a real chest.
This matches my experience of IT. The interfaces are clean and sensible. The design docs look okay but the code is a complete mess and the more you prod the design the more gaps you find between it and reality.
The point is that actually we shouldn't sell SOA from the perspective of evolution of the INSIDE at all we should sell it as an intelligent design approach based on the outside of the service. Its interfaces and its contracts. By claiming internal elements as benefits we are actually undermining the whole benefits that SOA can actually deliver.
In other words the point of SOA is that the internals are always going to be a mess and we are always going to reach a point where going back to the whiteboard is a better option than the rubbish internal wiring that we currently have. This mentality would make us concentrate much more on the quality of our interfaces and contracts and much less on technical myths of evolution and dynamism which inevitably lead into a pit of broken promises and dreams.
So I'm calling it. Any IT approach that claims it enables evolution of the internals in a dynamic and incremental way is fundamentally hokum and snake oil. All of these approaches will fail to deliver the long term benefits and will create the evolutionary mess we see in the engineering disaster which is the human eye. Only by starting from a perspective of outward clarity and design, and relegating internal behaviour to the position of a temporary implementation, will we start to create IT estates that genuinely demonstrate some intelligent design in IT.
PS. I'd like to claim some sort of award for claiming Richard Dawkins supports Intelligent Design
Monday, January 25, 2010
Starting with the business architecture, that means picking your approach to defining the business services. Now you could use my approach or something else, but whatever you do it needs to be consistent across the project, and across the enterprise if you are doing a broader transformation programme.
On requirements its about structuring those requirements against the business architecture and having a consistent way of matching the requirements against the services and capabilities so you don't get duplication.
These elements are about people, processes and documentation; they really aren't hard to set up and it's very important that you do this so your documentation is in a consistent format that flows through to delivery and operations.
The final area is the technical standards, and this is the area where there really is the least excuse. Saying "but it's REST" and claiming that everything will be dynamic is a cop-out and it really just means you are lazy. So in the REST world what you need to do is
- Agree how you are going to publish the specifications to the resources, how will you say what a "GET" does and what a "POST" does
- Create some exemplar "services"/resources with the level of documentation required for people to use them
- Agree a process around Mocking/Proxying to enable people to test and verify their solutions without waiting for the final solution
- Agree the test process against the resources and how you will verify that they meet the fixed requirements of the system at that point in time
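The Mocking/Proxying step, for instance, can be as simple as an in-memory stand-in that honours the agreed GET/POST semantics so consumers can build and test before the real resource exists. This is a sketch with assumed URIs and behaviour, not a prescription:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative mock of a REST resource honouring agreed semantics:
// POST creates and returns the new URI, GET returns the representation.
public class MockOrderResource {
    private final Map<String, String> orders = new HashMap<>();
    private int nextId = 1;

    // POST /orders -> create a new resource and return its URI
    public String post(String representation) {
        String uri = "/orders/" + nextId++;
        orders.put(uri, representation);
        return uri;
    }

    // GET /orders/{id} -> return the representation, or null (a 404)
    public String get(String uri) {
        return orders.get(uri);
    }
}
```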
So with REST there are things that you have to do as a project and programme and they take time and experience and you might get them wrong and need them updating. If you've chosen to go Web Services however and you haven't documented your standards then to be frank you really shouldn't be working in IT.
So in the Web Service world it really is easy. First off, do you want to play safe and solid or do you need lots of callbacks in your Web Services? If you are willing to cope without callbacks then you start off with the easy ones
- WS-I Basic Profile 1.1
- WSDL 1.1
- SOAP 1.1
Next up you need to decide if you are going WS-* and if so what do you want
- WS-Security - which version, which spec
- WS-RM - which version, which spec
- WS-TX - you're kidding, right?
The other piece is to agree on HTTP as your standard transport mechanism. Seriously, it's 2010 and it's about time that people stopped muttering "performance" and proposing an alternative solution of messaging. If you have real performance issues then go tailored and go binary, but 99.999% of the time this would be pointless and you are better off using HTTP/S.
You can define all of these standards before you start a programme and on the technical side there really is little excuse in the REST world and zero excuse in the WS-* world not to do this.
Saturday, January 09, 2010
I think one of the still largely unrecognized issues is that developers really should be designing services as RPC interfaces, always. Then different service interface schemes, such as SOAP, HTTP (REST et al.), Jini, etc., can more easily become a "deployment" technology introduction instead of a "foundation" technology implementation that greatly limits how and under what circumstances a service can be used. Programming language/platform IDEs make it too easy to "just use" a single technology, and then everything melds into a pile of 'technology' instead of a 'service'.
The point here is that conceptually RPC is very easy for everyone to understand, and at the highest levels it provides a consistent view. Now, before people shriek "But RPC sucks", I'll go through how it will work.
First off, let's take a simple three-service system where, from an "RPC" perspective, we have the following:
The Sales Service which has capabilities for "Buy Product" and "Get Forecast"
The Finance Service which has capabilities for "Report Sale" and "Make Forecast"
The Logistics Service which has capabilities for "Ship Product" and "Get Delivery Status"
There is also a customer who can "Receive Invoice"
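As a sketch, this conceptual model could be written down as plain RPC-style interfaces. All the class and method names below are hypothetical; they simply transliterate the capabilities listed above:

```python
from abc import ABC, abstractmethod

# Hypothetical RPC-style interfaces: one class per service,
# one method per capability, exactly as listed above.

class SalesService(ABC):
    @abstractmethod
    def buy_product(self, product_id): ...
    @abstractmethod
    def get_forecast(self): ...

class FinanceService(ABC):
    @abstractmethod
    def report_sale(self, sale): ...
    @abstractmethod
    def make_forecast(self): ...

class LogisticsService(ABC):
    @abstractmethod
    def ship_product(self, order_id): ...
    @abstractmethod
    def get_delivery_status(self, order_id): ...

class Customer(ABC):
    @abstractmethod
    def receive_invoice(self, invoice): ...
```

Nothing here says how a capability is delivered; that is exactly the point, as the following sections show.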
Now we get into the conceptual design stage and we want to start talking through how these various services work and we use an "RPC language" to start working out how things happen.
RPC into Push
When we call "Make Forecast" on the Finance Service it needs to ask the Sales Service for its Forecast and therefore does a "Get Forecast" call on the Sales Service. We need the Forecast to be updated daily.
Now when we start working through this at a systems level we see that the Finance team's mainframe solution is really old and creaky, but it handles batch processing really well. Therefore, given our requirement for a daily forecast, what we do is take a nightly batch out of the CRM solution and push it into the mainframe. Conceptually we are still doing exactly what the RPC language says, in that the data the mainframe is processing has been obtained from the Sales area, but instead of making an RPC call to get that information we have decided in implementation to do it via Batch, FTP and ETL.
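To make that concrete, here is a minimal sketch, assuming JSON flat files as the batch format and with made-up function names; the conceptual "Get Forecast" call becomes a nightly file drop (standing in for the FTP and ETL steps) that the mainframe job picks up:

```python
import json
import pathlib
import tempfile

def export_sales_forecast(crm_rows, batch_dir):
    """Nightly batch job: dump the CRM forecast rows to a flat file."""
    path = pathlib.Path(batch_dir) / "forecast.json"
    path.write_text(json.dumps(crm_rows))
    return path

def mainframe_load(path):
    """Mainframe side: pick up the batch file and total the forecast."""
    rows = json.loads(pathlib.Path(path).read_text())
    return sum(row["amount"] for row in rows)

# Conceptually this is Finance calling Sales "Get Forecast"; in
# implementation the same data moves as a nightly batch file.
with tempfile.TemporaryDirectory() as batch_dir:
    batch_file = export_sales_forecast([{"amount": 10}, {"amount": 5}], batch_dir)
    total = mainframe_load(batch_file)
```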
RPC into Events
The next piece that is looked at is the sales-to-invoice process. Here the challenge is that historically there has been a real delay in getting invoices out to customers and it needs to be tightened up much more. Previously a batch was sent at the end of each day to the logistics and finance departments and they ran their own processing. This has led to problems with customers being invoiced for products that aren't shipped and a 48-hour delay in getting invoices out.
The solution is to run an event-based system where Sales sends out an event on a new Sale, which is received by both Finance and the Logistics department. The Logistics department then ships the product (Ship Product), after which it sends a "Product Shipped" event which results in the Finance department sending the invoice.
So while we have the conceptual view in RPC-speak, we have an implementation that is in Event-speak.
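The same idea in code: a toy in-memory event bus (the bus and the topic names are all hypothetical) showing the conceptual "Report Sale" call becoming a "NewSale" event, with the invoice only going out once the "Product Shipped" event has fired:

```python
from collections import defaultdict

# Minimal in-memory event bus; a real system would use messaging
# middleware, but the conceptual RPC view is unchanged.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

invoices_sent = []
bus = EventBus()

# Logistics ships on a new sale, then raises its own event.
bus.subscribe("NewSale", lambda sale: bus.publish("ProductShipped", sale))
# Finance only invoices once the product has actually shipped.
bus.subscribe("ProductShipped", lambda sale: invoices_sent.append(sale["order_id"]))

bus.publish("NewSale", {"order_id": "A-1"})
```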
RPC into REST
The final piece is buying the products and getting the delivery status against an order. The decision was made to do this via REST on a shiny new website. Products are resources (of course); you add them to a shopping basket (by POSTing the URI of the product into the basket), this shopping basket then gets paid for and becomes an Order. The Order has a URI and you simply GET it to retrieve the status.
So conceptually it's RPC but we've implemented it using REST.
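A toy in-process sketch of that REST style (no real HTTP library here, and every URI and name is made up for illustration): POSTing a product URI into a basket adds it, checking out turns the basket into an Order, and you GET the order URI for its status:

```python
# Toy resource store standing in for the website's REST API.
class ShopAPI:
    def __init__(self):
        self.baskets = {}   # basket URI -> list of product URIs
        self.orders = {}    # order URI  -> order state

    def post(self, uri, body):
        # POST a product URI into a basket resource.
        if uri.startswith("/baskets/"):
            self.baskets.setdefault(uri, []).append(body["product_uri"])
            return uri
        # POST a basket to /orders: checkout, the basket becomes an Order.
        if uri == "/orders":
            order_uri = "/orders/%d" % (len(self.orders) + 1)
            self.orders[order_uri] = {
                "items": self.baskets.pop(body["basket_uri"]),
                "status": "PAID",
            }
            return order_uri
        raise ValueError("unknown resource: " + uri)

    def get(self, uri):
        # GET the order resource to read its status.
        return self.orders[uri]["status"]

shop = ShopAPI()
shop.post("/baskets/b1", {"product_uri": "/products/42"})
order_uri = shop.post("/orders", {"basket_uri": "/baskets/b1"})
status = shop.get(order_uri)   # "PAID"
```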
Conceptual v Delivery
The point here is that we can extend this approach of thinking about things in RPC terms through an architecture and people can talk to each other in this RPC language without having to worry about the specific implementation approach. By thinking of simply "Services" and "Capabilities" and mentally placing them as "Remote" calls from one service to another we can construct a consistent architectural model.
Once we've agreed on this model, that this is what we want to deliver, we are then able to design the services using the most appropriate technology approach. I'd contend that there really aren't any other conceptual models that work consistently. A Process Model assumes steps, a Data Model assumes some sort of entity relationship, a REST model assumes it's all resources, and an Event model assumes it's all events. Translating between these different conceptual models is much harder than jumping from a conceptual RPC model that just assumes Services and Capabilities, with the Services "containing" the capabilities.
So the basic point is that architecture, and particularly business architecture, should always be RPC in flavour. It's conceptually easier to understand and it's the easiest method to transcribe into different implementation approaches.