Monday, May 18, 2009

Silverlight, ActiveX, AIR, Sandbox, and Offline

1. Silverlight 2 stand-alone with ActiveX

A Silverlight 2 stand-alone HTML application (it is better to rename the .html to .hta, so the host is MSHTA.EXE), using JavaScript (file mode, hence full trust) to call ActiveX. Note that this approach is not cross-platform.
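A minimal sketch of that JavaScript glue layer (Scripting.FileSystemObject is a real Windows ProgID; the helper names are mine, and the ActiveX call only works when the host is MSHTA.EXE or IE in full-trust file mode):

```javascript
// Guard for the full-trust host: ActiveXObject only exists under
// MSHTA.EXE / IE, so everywhere else we degrade gracefully.
function hasActiveX() {
  return typeof ActiveXObject !== "undefined";
}

// Hypothetical helper: read a local file through ActiveX when the host
// allows it, otherwise return null instead of throwing.
function readLocalFile(path) {
  if (!hasActiveX()) {
    return null; // not running under MSHTA/IE, so no file access
  }
  var fso = new ActiveXObject("Scripting.FileSystemObject");
  var stream = fso.OpenTextFile(path, 1); // 1 = ForReading
  var text = stream.ReadAll();
  stream.Close();
  return text;
}
```

The Silverlight UI would then call into this glue via the HTML bridge; the guard is what keeps the same page loadable (minus file access) outside the hta host.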

2. If the requirement is not offline, but access to files and ports, we can use a special ActiveX with JavaScript.

3. Silverlight 3 OOB is not that useful

--a. Silverlight 3 OOB (out of browser) is within the browser sandbox, so it cannot access files, etc.
--b. Silverlight 3 OOB cannot use HTML/JavaScript, so it cannot be combined with the hta technique.

4. The above is not cross-platform (though for an intranet it is good that it does not require installing any "extra" stuff); for cross-platform, AIR is the way to go.

5. Is it possible to use both AIR and pure Silverlight 3? Not sure: (i) Can Silverlight 3 be invoked from AIR? (ii) Silverlight cannot invoke AIR (sandbox), but it can save to storage, and AIR can poll that. (iii) Silverlight cannot mash up AIR. (iv) AIR perhaps can mash up Silverlight. As a result, it looks like we can use AIR as the "shell" (base application) and invoke Silverlight screens (because Silverlight is in a sandbox, it does not know about other instances; so the shell must be very lightweight and able to be instantiated many times without performance issues).
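Item (ii), the "Silverlight writes, AIR polls the storage" hand-off, could look roughly like this. The storage read is injected, so this is only the polling skeleton; all names here are my own illustration, not a real AIR or Silverlight API:

```javascript
// Build a poller: readStorage is a stand-in for reading whatever shared
// location Silverlight wrote to (isolated storage, a file, ...), and
// onMessage is what the AIR shell does with a new message.
function createPoller(readStorage, onMessage) {
  var lastSeen = null;
  return function pollOnce() {
    var value = readStorage();
    if (value !== null && value !== lastSeen) {
      lastSeen = value;
      onMessage(value); // the AIR shell reacts to the new message
      return true;
    }
    return false; // nothing new this tick
  };
}
```

In the AIR shell, pollOnce would be driven by a timer (e.g. setInterval); the lastSeen check is what keeps one written message from being dispatched on every tick.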

6. For serial ports etc., we need a generic "proxy" that translates the port to a socket; JavaScript can then use the socket. Note that this makes the solution not portable. So, if we do not need the offline feature, we can simply develop small ActiveXes to access things outside the sandbox, use Silverlight for the UI, and use JavaScript to glue things together.

The so-called process is actually parts of the architecture, and the so-called architecture is actually parts (use cases) of the process

Both of them are about three things:

1. use cases
2. data fields
3. architecture use cases (transaction-failure handling; network-performance plus offline client, reports, direct database jobs; security, auditing)

Key understanding:

1. use cases: this is the art. The core of the art is to find the representative use cases, so that there are only a few of them but they cover all data fields. Also note that they need to be described in UI terms; however, they are use cases -- using UI language, and eventually doing UI design, does not mean you need to jump into UI design at the very beginning.

2. data fields: this is the science. All computing is based on this, including object-oriented programming -- the core of the domain model is in the data model. However, note that this does not mean you need to jump into physical database design at the very beginning.

3. Architecture use cases. They are use cases. "Architecture" is not magic; it must be organized into use cases. Users can and must understand them -- that is, if you organize them into use cases.

Monday, February 16, 2009

documents and paper computer


In addition to "use cases" and "data fields (data points)", another key word is "documentation". I treat "documentation" as if it were source code, and as a result, it corresponds to the real system directly -- I treat it as a "paper computer".

By doing this, there is no "process". Note that "use cases/business rules" and "data fields (data points)/technical approaches" (I believe "implementation strategies" is too hard to say quickly in meetings, so I traded that phrase for "technical approaches") are the top-most level of the structure ("architecture") of a system.

Note that we can discuss some "technical approaches" with users. All of these are key words in software support/development.

Also note that "technical approaches" includes the technical items in "use cases/business rules"; for example, "transactions", "network (web services, report server, EIM, remote)", "security", and "audit" are "use cases/business rules".

Tuesday, February 10, 2009

the wording I am using: "use cases" and "data points"

I know I change a lot, and I play with words a lot.

Now, again, I am changing the wording of the top level thinking “formula”. They are "use cases" and "data points".

Yes, I am putting "rules" and "implementation strategy" back where they belong – at a lower level.

I changed “common data” to “data points”, because “common data” is too abstract to users. I want the wording easy to understand, intuitively.

Also, “points” kind of intuitively leads people into details, closer to implementation. This is especially true when you use "screens" for "use cases".

I guess I will keep using them for a while.

Now, you may say: why, why are you paying so much attention to the wording? It is important. Without good wording, you will not say these things as often; then you will be misled, or, worse, you will mislead. Saying something, saying it aloud, saying it aloud in front of a team -- these are very powerful actions, and they definitely affect your other actions and behaviors.

Monday, January 26, 2009

Use "rules" and "non-typical implementation strategy" to replace "use cases" and "common data" respectively


For a technical team, the concept (or "wording") of "rules" is better than "use cases", because "rules" is more technical. Such users are more attentive and more tolerant of details. It is more "cut to the chase". Note that here I correct my previous blogs: I now believe the concept of "rules" corresponds to/replaces "use cases" (for technical teams), instead of replacing "common data".

For a technical team, the concept of "implementation strategy" corresponds to/replaces the concept of "common data".

Note that "rules" and "implementation strategy" are more "mixed": "rules" has "common data" factors, and "implementation strategy" has "use cases" elements. However, their respective "starting points" are "use cases" and "common data".

I know it is very confusing, but it is actually simple in action: for a technical team, always keep a list, with "rules" and "implementation strategy" as your cheat sheet; then you will be safe and can proactively respond to everything.

Just one more bit of detail. Usually there are too many details, too many "implementation strategies" – i.e., the list can be too long to be effective as a "cheat sheet". So the secret is to scan through the common, usual, routine "implementation strategies" and identify the uncommon, non-typical ones.

You may ask: what are the criteria for "common" and "uncommon" (i.e. "special", "non-typical") ones? Easy: the 90% rule (I know, usually people say the 80% rule – but in technical areas, we need to be tougher).

Of course, to pay attention to the non-typical ones, you have to master the common ones first, even if, most of the time, you do not need to do them yourself, since you only pay attention to the non-typical ones – it sounds like a paradox, but that is the secret of delegation, or of helping others to help yourself – and that is actually your value as an experienced professional.

Saturday, January 24, 2009

thumb typing is good for IT professionals

Diagrams and spreadsheets are bad, plain text is good, and therefore thumb typing is good for IT professionals

The key is that drawing is not that important. If drawing were really that important, then thumb typing would be useless, and we would have to use a tablet or paper.

I find that drawing is not that important. Diagrams are totally overvalued. After all, the general computing thesis is that what you can describe, you can realize in computing; it is not about what you can draw or click. For software engineering, structured English (or any structured natural language) is king, not diagrams, not spreadsheets. Whenever you find your documents are mostly diagrams and spreadsheets, you should treat it as a red flag: something is wrong.

It took me so long to realize the importance of thumb typing. I will strongly recommend that everybody in my family begin to use thumb typing.

I know young people have been texting for a while. However, I am not talking about texting. I am not that generation; I am not interested in texting. Frankly, I believe it is simply a temporary anti-culture, combined with the temporary limitations of computing devices – I am talking about those texting abbreviations – BFF, best friends forever, what the heck!

Even in thumb typing, I prefer whole words. To me, using anything but the most common abbreviations is a sure indicator that the author is inconsiderate and therefore not professional, period. I know this does not mean much to a teenager. I totally understand; believe me, I was young once too :-)

I also know a lot of business people have been using PDA for a long time. However, I am not sure most of them are using them to take notes.

As a result, although it took me a while to plunge into thumb typing, I believe I am still among the not-too-many people who have recognized the value of thumb typing and are doing it seriously.

[OK, after I wrote the above, I googled thumb typing, and I found a lot of materials, see below. Some of them were written in 1999 -- it means that I am soooo late! However, I can still say that those are pioneers. From what I have observed, I am not that late in using it as an everyday routine note-taking device to replace paper-based note-taking.

It says a lot of things, but all I feel I really need to know is the fact that "we can": it is possible to use thumb typing to think-write. There are also two other tips:

a. Note that the left thumb holding Num can give you an uppercase letter for the right thumb.
b. When typing numbers, the left thumb holds the Alt key and the right thumb pushes the number keys.

---Just type the quick brown fox jumps over the lazy dog.

---I will try more aggressively to be able to do blind/touch typing with thumbs.

Also, I want to share that, in the beginning, on my BlackBerry, I just sent myself emails as notes. Very soon, I found out that it was a bad idea. I could not find my notes! They were buried among those emails.

Now, I am using "Tasks" to take notes, and periodically select and copy them into emails as a backup. Of course, "Tasks" is now the first icon on my BlackBerry.

There is another thing that sucks. The company has a group security policy: the BlackBerry locks itself after several minutes. I have to enter a password to unlock it every time I need to take notes. I tried to ask them to make the timeout longer, but you know the result. Sigh.

For blind/touch typing, I am not sure whether I need a physical keyboard or a simulated one. I am using an old BlackBerry. The new BlackBerry model is like an iPhone, with a larger screen but a simulated keyboard. I do not have the new model; I guess that is a blessing for typing.

Thursday, January 15, 2009

thumb typing all the time -- on blackberry for to-do-item notes

Thumb typing, or thumb-and-index typing, is not that difficult. You can do it pretty fast after a few hours. The problem is the software – there is no auto spell-checking and no auto-correction on the BlackBerry (perhaps I missed something).

Thumb typing is faster than a writing pad with real-time recognition, but I guess we need both, because we still need drawing. Mouse drawing is not good; a touch screen is useful here.

Anyway, thumb typing is worth it. I know I am slow to pick it up, but I will begin to really do thumb typing on my BlackBerry to take notes, instead of using pen and paper.

This means I will use thumb typing all the time. Note that I have electronic notes that I take in VIM (not MS Word, since VIM also has spell checking – it does not have auto-correction, or I should say I am not using that; I am not sure whether VIM has auto-correction or not). Those notes are detailed and have a lot of content – that is why I moved to electronic notes for them a long time ago. I always laugh at my colleagues who are still using paper for that kind of notes. It is amazing: a lot of them are still doing that, taking notes on paper!

However, I have been using paper for to-do-item notes all the time. These notes are short, just reminders. Also, they must be extremely portable, not tied to computers.

Now, I am going to move these to the BlackBerry. I cannot draw anymore, but for to-do-item notes I do not do drawing anyway.

Note that I blog this because I believe this kind of thing is important for computer professionals. Too many computer professionals are too far behind in using computers!

Sunday, January 11, 2009

Personal Paperless Evolution: Use more laptop, printing pdf on paper, and thumb typing notes

As a new year resolution, starting now, I will always use electronic notes, instead of making notes in the margins of paper materials -- this means paper materials will officially be disposable at any time.

Today, I cleaned up my paper materials; there are only a few of them, and I will gradually move their notes to electronic notes. They are not critical; I have already been doing this. It is just a "clearing house" activity, I guess.

More explanations:

1. The key to doing this is to use my laptop more often (the battery is OK; I just need to stop my bad habit of preferring to read paper materials and make notes on them). When I travel or am at home, I will use my BlackBerry and my Kindle more often -- even when I have to use paper materials (OK, I still prefer reading on paper to reading on a computer; the Kindle is fine, but it cannot handle PDF), at least I must use thumb typing on the BlackBerry or Kindle to take notes.

2. As mentioned above, the Kindle is a big disappointment to me. I was hoping it could help me realize the "paperless revolution" in my home office and my personal life. It did not. Now I have to re-do it with an evolutionary approach.

Now I know: a "reader" should have a larger screen (perhaps by folding it, or better, bending it, without the middle dividing line) so that it can handle PDF without any reformatting. It must be a touch screen with stylus support, to avoid thumb typing. Also, it should have a folding keyboard for typing. In short, it is a tablet computer with an electronic-paper screen.

I almost regret buying the Kindle; I should have gone for the iLiad 1000, which has a price tag close to $900. However, the screen of the iLiad is still small and not convenient for typing (does it support an external keyboard? not sure); further, it does not have color, which is important for some business documents. Also, it does not support video.

This leads to a question: why an e-paper screen? Because it is easy on the eyes and it saves battery -- neither of which is that important for business documents.

So, after all, I guess for PDF documents, let's keep them on the laptop (or a tablet, if you have one); for "ordinary" books, use the Kindle. So I guess my decision on the Kindle was an OK one, although the Kindle is indeed disappointing -- the technology is moving slower than I expected. As a result, we need to adapt to the current state of the technology: printing PDFs on paper (sorry, trees, we still need to cut you!), and thumb typing.

Summary of 2 "blocks" and their sub-blocks

Summary of my previous summary

I will do things according to the following 2 "blocks" and their sub-blocks:

1. “Common data”: this leads to "OUR" Object, UI, Relational, and their mappings -- OR mapping and OU mapping.

2. "Use cases": this leads to UI and "facade" -- in Siebel, "business service" and "workflow", they then lead to networking (which in turn includes 4 parts: web service, EIM, reporting, and remote), logging, and security.

Saturday, January 10, 2009

summary and new start

I reviewed all my blogs of the past few years – that is one advantage of blogging over private notes: it forces you to face your history of ideas! I found that sometimes I contradicted myself, I debated with friends, and I changed my mind. The important thing is not the conclusions, but the arguments, the process of arguing, and, more importantly, the understanding, insights, and friendship.

OK, here is another immediate self-contradiction: now I want to have a summary ("conclusions"!), so that we can invent more in the coming year(s).

1. "Philosophy": science and technology, regardless of modern and ancient, are one. The saying that modern science is totally different from ancient science and from technology is a myth. At a high level, they are the same. We can and should use science-thinking ("scientific thinking") in technology, and vice versa.

2. "Time" -- the basic sequence

“Analysis/Design/Coding/Testing” corresponds to “Problems/Theory/Solving Problems using the theory/Testing”.

Note: in "problems", you have both "common data" and "use cases", not just "use cases".

3. "Space" -- the basic structure (or “architecture” -- but I prefer small words in science/logic, instead of big words in engineering)

“Common data” (this leads to "OUR" Object, UI, Relational, and their mappings -- OR mapping and OU mapping) and “use cases”. "Use cases" leads to UI and "facade" -- in Siebel, "business service" and "workflow", they then lead to networking (which in turn includes 4 parts: web service, EIM, reporting, and remote), logging, and security.

Note: "Common data" is a "new" phrase I have begun to use. I discussed/debated a similar topic some time ago in my blogs, "data model vs. domain model". I believe "data" is more user-friendly and, well, manager-friendly. I add "common" before "data" to emphasize that the concept inherently has an element of "design", "structuring", or, well, "modeling". In short, it is a compromise between "data model" and "domain model" -- but I still do not like "domain model": it is too programmer-oriented, not user-friendly. Users are totally confused when you use the phrase "domain model", but they know, I mean really know, "data". Note that it is not just wording. Inherently, "domain model" carries too much "technical" baggage. The "domain model" concept belongs to a more limited technical context, instead of being a top-level concept that can be used together with the concept of "use cases".

A special note: I notice that there is a mapping between "time" and "space", or a common spot in both "time" and "space": the "use cases" (UIs and facades) map to "testing". This is understandable: the so-called "space" is the basic structure of software, which must follow the basic structure of human logic, which is what the so-called "time" is all about.

4. Technical team dynamics (or "team culture" or "team leadership" or "team morale" -- pick the phrase you like):

a. Basic estimate of human capability, science, technology, and education: a good high school new graduate or an average college new graduate can start to work productively within two weeks on any computer technologies, within a good team and with good mentoring.

b. Rotating/pro-vertical (avoid single threading) and lean (cut self-reinforcing communication overhead)

Single threading is bad. When a project is late, you cannot put more people on the bottlenecks. Worse, it damages team morale. It is the basis of unfairness, laziness, and even outright blackmailing. Based on my observation, it is the root cause of more than 80% of problems in IT.

The irony is that, out of that 80%, more than half comes from the "solution" to the problem: heavy communication overhead. It is a self-reinforcing overhead -- people first compartmentalize the team, creating a hunger for information, and then they "coordinate". Gradually, a lot of people cannot do things anymore; they only "communicate" and "coordinate", while devaluing the whole profession into production-line work, with extremely heavy, costly, error-prone, and misleading overhead. The double irony is that this "solution" actually aggravates the "single threading" problem.

We can do better. We need, can, and should be more effective by being lean. However, to do that, we need to find a better way to solve the single threading issue.

Observing other professional fields, we can find that the real solution is the opposite of the current "solution": instead of limiting, we expand the scope – every developer should have at least 2 or 3 "fields" (remember that even an average college new graduate can pick up a field within two weeks!), and then we can rotate the fields within the team. Also, when we do "division of labor" in a certain project, we should always prefer "vertical" division (end-to-end of a vertical slice), instead of "horizontal" layered division (e.g., one person on the UI, another person on the business layer).

c. “Document driven”, meeting minutes, source control, and source control friendly format. The essence of the “document driven” is that all documents, including meeting minutes, user oriented documents, and even “private” or “personal” notes, should be treated as “source code”.

There are two big challenges.

One is that people want to keep some “private” and “personal” notes – this will be resolved by rotating mentioned above.

Another one is the "source control friendly format": some user-oriented documents, for example "requirements" or "user manuals", are not easy to put into a version-control-friendly format. Currently, too often they are in MS Word together with spreadsheets, both of which are ugly and horrible to version-control-friendly minds! – This is actually pretty easy to resolve with web technology (wiki etc.). I hope the corporate world will catch up on this quickly.

d. “Process” must be simple, and not arbitrary. In more detailed terms: it must be in place before projects. Preferably, its wording should be consistent with common usage in the industry, like "analysis", "design", "coding", "testing", “use cases”, and “common data”.

Friday, December 12, 2008

A Paper on TDD and Karl Popper

G. Concas et al. (Eds.): XP 2007, LNCS 4536, pp. 253–256, 2007.
© Springer-Verlag Berlin Heidelberg 2007
Epistemological Justification of
Test Driven Development in Agile Processes

Francesco Gagliardi
Department of Physical Sciences — University of Naples Federico II
Via Cintia — I-80126 Napoli, Italy


In this paper we outline a methodological similarity between test driven software development and scientific theories evolution. We argue that falsificationism and its modus tollens are foundational concepts for both software engineering and scientific method. In this perspective we propose an epistemological justification of test driven development using theoretical reasons and empirical evidences.


Software Testing; TDD; Agile Programming; Epistemology; Falsificationism; Modus Tollens.


-------------------------------------------short quotes

It is proved that an ideal test suite exists for any program, but unfortunately it is
also proved that there is no constructive criterion (i.e. algorithm) to derive a test suite
satisfying that property.

The incomputability of the ideal test suite is the primary cause of the existence of several empirical criteria to define the test suite, such as category partition, boundary analysis, special values detection, and so on.


This firmly links software development to methodologies of natural sciences and to
epistemology, in particular to theory of falsificationism by Popper. And we need to
adopt this perspective if we want to increase the comprehension of methodology and
practice of software development.

In this perspective TDD is the unique ‘scientific’ methodology to develop software
systems because it uses the falsificationism and embraces continuous testing.

Summarizing, the success of TDD in agile processes is based on the rediscovery
of the scientific method.
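The "empirical criteria" mentioned in the quotes are easy to make concrete. For boundary analysis, given an invented discount rule with thresholds at 100 and 1000, the interesting test inputs sit just below, at, and just above each threshold (the rule itself is my own example, not from the paper):

```javascript
// A toy rule with two thresholds; boundary analysis probes its edges.
function discountRate(amount) {
  if (amount < 0) throw new RangeError("amount must be non-negative");
  if (amount < 100) return 0;     // below the first threshold
  if (amount < 1000) return 0.05; // mid tier
  return 0.1;                     // top tier
}

// Boundary analysis: test at and adjacent to each threshold, plus the
// invalid edge below zero, instead of arbitrary "typical" values.
var boundaryProbes = [-1, 0, 99, 100, 999, 1000];
```

Since no algorithm can hand us the ideal test suite, criteria like this are heuristics for choosing the inputs most likely to falsify the program.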

Sunday, November 23, 2008

Read, baby, read

It is toward the end of the year, holidays are approaching!

Looking back: since the end of last year, I have been carrying out my resolution to journey out of the box of the software process (be it UP or TDD), and back to the richness of life.

More reading, and not just reading about MS technologies, or Siebel technologies, or Unix and the Internet. Much more than that. Reading is nice; it is like watching TV or going to the theater or the movies. Reading does not need to be a burden, even non-fiction reading.

Software engineering is not a "field" or "discipline"; it is an inter-disciplinary area. Do not let those processes fool you. You can get more useful techniques from the philosophy of science (and other areas) than from those theories of software processes.

Expand our readings. Read, baby, read.

Another site with a similar recommendation of readings.

It seems that "testers" are deep thinkers; shame on "project managers", "analysts", "architects", "developers", and "programmers"! -- of course, I am NOT a tester, so I do not really care about the testing-specific things.

Systems Thinking
Quality Software Management, Vol. 1: Systems Thinking
1991, Gerald M. Weinberg
An Introduction to General Systems Thinking
1975, Gerald M. Weinberg
Secrets of Consulting: A Guide to Giving and Getting Advice Successfully
1986, Gerald M. Weinberg
General Principles of Systems Design
1988, Gerald M. Weinberg, Daniela Weinberg

Tools of Critical Thinking
1997, David A. Levy
Exploring Requirements: Quality Before Design
1989, Don Gause, Gerald M. Weinberg
How to Solve It
1945, George Polya
How to Read and Do Proofs
1990, Daniel Solow

Ways People Think and Learn
Cognition in the Wild
1996, Edwin Hutchins
Thinking and Deciding
1994, Jonathan Baron
Lateral Thinking: Creativity Step by Step
1990, Ed De Bono
The Social Life of Information
2000, John Seely Brown, Paul Duguid
Things That Make Us Smart: Defending Human Attributes in the Age of the Machine
1993, Donald Norman

Scientific Thinking
The Sciences of the Artificial, 3rd Ed.
1996, Herbert A. Simon
Conjectures and Refutations: The Growth of Scientific Knowledge
1992, Karl Popper
Theory and Evidence: The Development of Scientific Reasoning
1996, Barbara Koslowski
Abductive Inference: Computation, Philosophy, Technology
1996, John R. Josephson, Susan G. Josephson
The Pleasure of Finding Things Out
1999, Richard Feynman
Science as a Questioning Process
1996, Nigel Sanitt
Administrative Behavior, 4th ed.
1997, Herbert Simon

Software Testing
Testing Computer Software
1992, Cem Kaner, Hung Quoc Nguyen, Jack Falk
Software Testing: A Craftsman's Approach
1995, Paul C. Jorgensen
Bad Software: What to Do When Software Fails
1999, Cem Kaner, David Pels

Example of an Implicit Specification
The Windows Interface Guidelines for Software Design
1995, Microsoft

Teamwork and Communication in a Technical Team
Quality Software Management, Vol. 3: Congruent Action
1994, Gerald M. Weinberg

Saturday, October 18, 2008

Karl Popper and Software Engineering

Karl Popper and Software Engineering

I believe that now -- in 2008 -- a pure "practitioner" point of view in software engineering is hitting a dead end. We need an "academic" or conceptual revolution. The revolution is to throw out inductionism in software engineering. Karl Popper did it in the philosophy of science. We need to expand that to software engineering.

In my previous blog, I proposed the following theory:

“Analysis/Design/Coding/Testing” corresponds to “understanding problems and anomalies/theory/solving issues using the theory/ experiment-test the theory”. Now, I am going to make it even simpler: “Analysis/Design/Coding/Testing” corresponds to “Problems/Theory/Solving Problems using the theory/Testing”.

You may say it is just a word game. I say it is a huge, revolutionary change. Now, "analysis" means understanding problems, not "gathering" those so-called "requirements" anymore! In meetings with users, asking "what is the problem" and "how do we solve the problem" is always more effective than asking "what are the requirements"! In fact, we all know that "requirements gathering" is the source of all problems in software development. Now we know why – OK, pun intended!

I googled it (Karl Popper and Software Engineering). Surprise, surprise, I am not the only one! A deep thinker at MS!

The following is a short quote (please go to the above URL for a complete read, it is fun!):

Testing as a science
Computer software is similar to a scientific hypothesis; both are inherently fallible. The basic framework of the software debugging process is analogous to the trial and error practice that advances scientific hypotheses. Computer software is simply technical conjecture. Test engineers attempt to refute the assumption of flawless software through rigorous tests mostly designed to prove that defects exist (falsification).

The falsification process sharply contrasts with data-driven approaches such as induction and deduction, which attempt to justify validity based on the repetition of facts. Justificatory approaches mostly attempt to support claims through confirmation. Confirmation primarily endeavors to validate the error-free uncertainty by using specific data that demonstrates proper functionality. This approach clearly only proves that software functions correctly under certain conditions; it does not prove that the software is error free.
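The falsificationist mindset in the quote translates directly into test code: instead of confirming typical inputs, a test tries to refute a claim with awkward ones. In this invented example, the conjecture "the result is always within [0, 100]" survives infinities but is refuted by NaN:

```javascript
// The conjecture under test: clampPercent always returns a value
// within [0, 100].
function clampPercent(x) {
  return Math.min(100, Math.max(0, Math.round(x)));
}

// Falsification: probe awkward inputs rather than typical ones, and
// collect every input that refutes the conjecture.
var probes = [0, 50, 100, -1, 101, NaN, Infinity, -Infinity];
var counterexamples = probes.filter(function (x) {
  var y = clampPercent(x);
  return !(y >= 0 && y <= 100); // the conjecture fails for this input
});
// NaN propagates through Math.round/min/max, so the conjecture is refuted.
```

A confirmation-style test using only 0, 50, and 100 would have "validated" this function; the probe designed to refute it is what exposes the defect.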

Sunday, October 05, 2008

Problem Solving Areas (PSAs), Problem Solving Lean Process (PSLP), Analysis/Design/Coding/Testing and General Problem Solving


Ya, you may say that all this blog is about is names; but good names indicate the maturity of a theory.

1. I changed the 8 core "techniques" into 8 core "Problem Solving Areas" (PSAs). See the 8 PSAs at the bottom of this blog.

2. In this blog, I changed the process name to "problem solving lean process": it is a lean process, with problem solving as its focus.

3. I retreat from my noble fight against "Analysis/Design/Coding/Testing"; now I believe it is a paraphrase of, or a parallel mechanism to, the "problem solving process": thinking about problems, forming a theory, solving issues using the theory, and running experiments for the theory. By doing this, I have "saved" analysis/design/coding/testing and, more importantly, saved myself from craziness! As a result of this realization, on the one hand, I am now a big fan of the micro-waterfall, because it is the same micro-mechanism as any problem solving process. On the other hand, I do believe the micro-waterfall is flexible; any step of this micro-waterfall reflects the whole – for example, in analysis, there is testing already.

---------------------------8 PSAs

1. "OR mapping" (including "UOW", Unit of Work, equiv. to Siebel's BO)
2. ("O") "entity": rules in entities, rules engine(******), scripting in entities
3. ("U") "Web2 UI": ("ActiveX" and "Ajax")
4. "Data Binding" ("OU mapping")

5. "transaction" (and non-transaction, with "facades" and "workflows") (business service and workflows, tasks. Also here: "unit testing" ******)

6. "networking (at application level, as "connected applications")"
--i. async and sync EAI (background) "messaging" for integration, which needs "user push notification" (activities or an area on every screen)
--ii. batch EIM
--iii. offline web client
--iv. reporting server
7. "logging (that is runtime-configurable)"
8. "security"("authorization" and "authentication")

Saturday, August 02, 2008

Seven square number of Problem-Solving-Items (PSI)

What are problem solving items?

As a first-level approximation, they are requirements + “architecture” requirements.
However, they also include technical design items, source control items, testing items, and support items. They include all problems. For details on why I put all of them together, please read my previous blogs.

Of course, we all know that our corporate world does not like the word “problems”; we find all sorts of different words for it: “issues”, “challenges”, etc. I do not like those other words; I like the word “problem” – I like all the honesty, history, weight, and power that go with it. However, I do understand and appreciate the needs of the corporate world; so, I add a positive word to it, and add a “unit” word, hence the phrase “problem-solving-items”.

What is the “7-square”?

Psychology shows that the short-term memory of the human brain can only handle about 7 items. The optimum is half of that, around 3. That is the reason that when we speak or write, we use 1, 2, 3, or 4; not often do we use more than 4. If we really need more, we re-organize them into two levels: 1, 2, 3, and under each item, a, b, c.
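The re-organization trick in the last sentence can be sketched mechanically; a toy Python helper, where the limit of 7 is the only input taken from the text:

```python
def two_level_outline(items, size=7):
    """Reorganize a flat list into at most `size` groups of at most
    `size` items each: 1, 2, 3 on top; a, b, c under each item."""
    if len(items) > size * size:
        raise ValueError("more than size*size items needs a third level")
    # smallest group size that keeps the top level within `size` groups
    per_group = -(-len(items) // size) if items else 1  # ceil division
    return [items[i:i + per_group] for i in range(0, len(items), per_group)]

# 49 items become 7 groups of 7 -- the "49 PSIs" shape
groups = two_level_outline(list(range(49)))
assert len(groups) == 7 and all(len(g) == 7 for g in groups)
```

Choosing the group size by ceiling division guarantees the top level never exceeds `size` groups, which is exactly the two-level 7-then-a-b-c scheme.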

You may believe it is trivial. I do not believe so. I believe the universe is “consistent” or even “anthropic” – see . As a result, I believe those 1, 2, 3 things are “consistent” with those “thesis”, “antithesis”, and “synthesis” in Hegel’s philosophy.

I apologize for dragging you into this physics-philosophy-psychology indulgence; that is my weakness. But I am also very practical and pragmatic, so let’s get back to the real business.

IT (Information Technology) is about handling large amounts of information. Its complexity comes from the large number of seemingly unrelated items. As a result, we have to maximize the capacity of human minds. In short, we have to use 7, not 3; further, we have to use two levels, instead of just one. And we have to use the two levels all at once – we have to treat the two levels as if they were one, i.e. we have to treat 49 items all at once, and not make the two levels too restrictive.

In short, next time you think about IT, just think about “49 PSIs”!

There are no categorical differences between the "problem solving" in sciences and that in engineering disciplines.

Further, the best minds, or, at least best documented best minds, due to the very nature of basic sciences, are not in engineering, but in basic sciences.

As a result, in addition to Lean literatures, I will resume my love for reading philosophy and history of sciences.

Friday, August 01, 2008

Problem Solving Logic-Centric Lean Process (PSLCLP)

I believe I am in a creativity storm.

I believe I found out the problem of all “software processes” and “IT project management”. It is the “inductionism”, we need a “deductionism” paradigm shift just as Karl Popper did in philosophy of science.

I believe the key is indeed “requirement gathering” – the starting point of the waterfall, it is wrong, everything is wrong.

We need to replace it with “problem”. The whole thing should be “problem solving”.

The name of the process should be Problem Solving Logic-Centric Lean Process (PSLCLP), I know, it is long. The key is to remember “problem solving”.

testing, testable, Lean, science, and Occam's razor

The effort to apply Lean to software makes me think, very hard.

It is about the core waterfall, by which I mean: analysis, design, implementation, test, and deployment.

I want to eradicate them. To me, those concepts are simply a pretentious way to say that we are working toward a new system.

TDD is good in the sense that it has already started to break this core waterfall: it says, let’s put aside “analysis, design, coding, and testing”; let’s treat everything as testing – that certainly is disruptive and revolutionary!

Of course, the problem is that this revolutionary concept is mixed with, and therefore buried or at least eclipsed by, other concepts like “stories” -- but that is another story, of course.

Then, the concept in philosophy of science hits me: “testable".

Usually we believe software engineering is engineering, and therefore we use metaphors from other engineering. However, perhaps we should think about software more in terms of science – philosophy of science, logic of science, and the logic-psychology of problem solving?

After all, modern engineering is based on science; and recent science is more and more “big science”, i.e., engineering based science – science of man-made.

After we cut the fat off the waterfall, we can see so many new things, or, new old things!

More about why Lean and science: heard about Occam's razor? Science is the leanest enterprise of human beings!

Some blogs I got when I googled Ockham's razor and six sigma:

Wednesday, July 30, 2008

How to apply LCLP on Siebel CRM development

It is logic centric; then, what is the logic?

It is the 8 core techniques, of course – as promised :-) it is all just about re-packaging.

A little further now though. Because of LCLP, we can justify that we jump on those 8 core techniques directly.

Because we do it directly, we can immediately split those into a few dozen items.

Also, “techniques” are too “doer-centric”, so, let’s change it into “special design areas”.

So, the result of applying LCLP on Siebel CRM development is that we directly jump to a few dozen “special design areas” – the groupings are still those 8 core techniques, of course.


---- LOV, constrained LOV, static and dynamic, how to make greatParent-parent-child LOV.

---- State model

---- ldap/asi adapter: because Siebel does not use a generic database account, even when we use LDAP/ASI, we still have database accounts. The adapters will help with the sync.

---- …………

LCLP and cheese without fat

This is basically a comment I left on:

Based on the comments on the previous blog, I have to change ALP (Architecture-centric Lean Process) to LCLP (Logic-Centric Lean Process; note that I cannot use LLP).

Compared with Lean, I feel TDD or agile is not enough, even though I am willing to go along with TDD, since it is the closest we have for now in the software industry.

I would like to remove all the concepts that lead to waterfall. For example, the so-called "requirement gathering". It implies a step in the waterfall. Requirement gathering is simply communication with users. I deny the concept of "requirement gathering", because there are no special ways in "requirement gathering" -- compared with "design" (or, a better wording, "logic"), it is vague, and the faster you can jump to design, the better. "Requirement gathering" is simply an inefficient or less-than-optimal design. All this means: let's talk about screen flows, screen layouts, data entities and related business rules, security, notifications, logging, installation, source control, etc., directly, and as technically and specifically as users or the audience can understand, with no certain order other than the necessity of the logic. Cut to the cheese, lean: it is the cheese without fat.

How do you like that, cheese without fat :-)

--------------------------A note on July 31
I know I go to extremes sometimes; that is my weakness – but it is also my strength: by doing that, I get to the bottom of things quickly.

I am trying to eradicate all waterfall concepts, even just traces of them. This exposes the need for order.

I said that there is “no certain order other than the necessity of the logic”; the tone is certainly mostly negative. After thinking about it, I know I need to emphasize the positive side of it also.

The key is, after removing the waterfall, we can see the orders introduced by the “necessity of the logic” more clearly, and therefore can follow those orders, and do things more efficiently and more effectively.

Actually, that is the whole point of removing waterfall. Waterfall ordering eclipses the real orders: it oversimplifies things to one dimension.

This leads to a very counter-intuitive statement: the waterfall process is actually most harmful for large projects, where you cannot simplify things like that; even with many iterations, it simply does not work.

Typically, in a mid-to-large sized project, there are a few dozen “dimensions” (you can “group” them into two or three groups, but those groupings are vague). Experienced team leads or good project managers pick up those dimensions quickly, and follow them up through the whole project.

Along each dimension, there are pretty clear patterns, and among those dimensions, there are also some vague but useful patterns. Ya, those dimensions and patterns are “technical”, but good leads or project managers do not just know some “project management” terminology; they are technical enough to know those dimensions and patterns.

Sunday, June 22, 2008

How to lead a project: the opposite of “waterfall” is “architecture-centric”, not “iterations”

How to lead a project

We must lead a project around the concept of “architecture”: “architecture design”, “high level design”, or “design”. However, note that because “architecture” has too many overloaded meanings, I use the wording “high level design” whenever I can.

The waterfall concept, that we do software in the order of requirement gathering, analysis, design, coding, testing, deployment, and support, is simply wrong. However, the opposite of “waterfall” is “architecture-centric”, not “iterations”. "Iterations" are simply small waterfalls. Many small wrongs do not correct one big wrong. Further, the wrongness of the waterfall concept is not its rigidity; it is its emptiness and pretentiousness – its tendency to encourage people to think nonsense – things that cannot add value to the project (its rigidity is actually fine if you can deal with its emptiness and pretentiousness). We need to replace the “waterfall” concept with the concept of the “architecture” and the “dependencies” in the architecture. Note that “architecture” and “dependencies” are very rich concepts.

1. The project is within a bigger architecture.
2. The project itself has its own internal architecture, or, high level design.

3. The high level design of the project has the following parts:

a. “Functional” (in Siebel, we call it “configuration”). It includes OR mapping and OU mapping (i.e. ORU mapping) and rules. Here we have two important notes:

(i) Functional is part of the architecture. Some people may say that the “functional part” is not part of an architecture design (i.e. high level design). That is not true. OR mapping, OU mapping/binding, and the rule engine are certainly parts of an architecture design (high level design); the usage patterns of ORU mapping and the rule engine are also part of an architecture design (high level design). It is actually very easy to see -- once we stop seeing things through waterfall glasses, and begin to see things from an “architecture-centric” point of view: there is no “requirement gathering” or “analysis”, there are just vague designs, and designs are simply vague or unfinished systems.

(ii) UI must be easy for automated testing. Some people may say that testing is not part of a high level design. Wrong! Testing is part of the architecture, just as logging is part of the architecture. To ensure that, automated UI testing must be part of the early development.

b. “Architectural” (in Siebel, we call it “Integration and conversion”). It includes “transaction” (in Siebel, BS and WF), “networking” (in Siebel, EAI/web service, EIM, offline client, reporting server), logging, and security. Again, an important note here is that automated unit testing is part of a high level design. Here, automated unit testing is included in the concept of “transaction” -- a good high level design should always make the system automated-unit-testable at the transaction level.

c. “Admin”: this can be treated as an extension of b.

(i) Source control is an extension of automated unit testing; this is because it is through automated unit testing that source control is “attached” to the architecture and therefore must be treated as part of an architecture. Using source control is part of a high level design.

(ii) In Siebel, when we set up a development environment, we need to jump through hoops in setting up the “offline client”.

So, here “admin” includes “deployment admin” and “system admin”. Note that by getting into "system admin", I begin to accomplish my new year resolution that I will consolidate my knowledge into a complete vertical stack, bottom up, from hardware, OS, to browser scripting.
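The claim in 3b, that automated unit testing at the transaction (business service/facade) level is part of the architecture, can be sketched in a few lines of Python; the facade, repository, and business rule below are all hypothetical names invented for illustration:

```python
class FakeOrderRepo:
    """In-memory stand-in for the OR-mapping layer (hypothetical)."""
    def __init__(self):
        self.rows = {}

    def save(self, order_id, data):
        self.rows[order_id] = data

class OrderFacade:
    """Transaction-level entry point. Testing here, rather than through
    the UI, is what keeps the architecture automated-unit-testable."""
    def __init__(self, repo):
        self.repo = repo

    def place_order(self, order_id, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")  # business rule
        self.repo.save(order_id, {"qty": qty, "status": "placed"})
        return "placed"

# an automated test against the facade: no UI, no real database needed
repo = FakeOrderRepo()
assert OrderFacade(repo).place_order("A-1", 2) == "placed"
assert repo.rows["A-1"]["status"] == "placed"
```

Because the facade takes the repository as a dependency, the transaction boundary is testable in isolation, which is the design property the post argues must be decided at high-level-design time.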

Thursday, June 12, 2008

The differences between ALP and TDD

ALP is architecture-centric. It requires finishing the architecture design in the first few days (for most projects), or the first few weeks (for really large projects).

ALP treats “stories” or “use cases” as vague materials. It directly focuses on three things: (a) the OR mapping -- key concepts (glossary) and (logical) database tables-columns, and (b) the OU mapping (or binding) -- key concepts (glossary) and screens and buttons, and (c) rules – key concepts, screens-buttons, database tables-columns, and business logic, in the contexts of typical scenarios.

You may say that TDD can do those things also, or, further, good TDDs do those things also. That is true. The same can be said to RUP. That is why it took me so long to make this explicit. However, I feel it is important to make it explicit, all good processes are alike; however, there are too many bad TDDs and bad RUPs.

Friday, June 06, 2008

Architecture-centric Lean Process, ALP

The name “Team Capability Model” is not well-known, so, it seems that we have to keep using the word “process” – so, here is a new name for it: Architecture-centric Lean Process, ALP.

In addition to the content of the concept, I like the name also. It has some tensions built-in (“Lean” usually means less “architecture”), and, it sounds official and formal.

Sunday, June 01, 2008

An Architecture-Driven Team Capability Model

After a few years in written form in my notes and in the blog, I believe the “8 core computing techniques” are mature enough to have a more official or formal name, so that they can be used in any documents in the “corporate culture”.

I put some thought on it; the best way for it to "break into" the corporate culture is to use it as a "capability model" (note that it is not just a “process” -- it has much more content). Because it is all about architecture, it is an architecture-driven capability model.

1. A person’s or a team’s CAPABILITY is determined by its ARCHITECTURE CAPABILITY.
---- I know, this is a peculiar usage of the word “architecture”. Please see below.

2. A person’s or a team’s ARCHITECTURE CAPABILITY equals:

“Business knowledge”
+ “Technologies”

All of those are elements of architecture:

(a) “Business Knowledge” simply means (i) OR mapping, (ii) OU mapping, and (iii) Rules.

(b) “Technologies” simply means (i) Transactions (note: being capable of unit testing at this level is, like logging, part of the architecture; it is NOT just a feature of a “process”), (ii) Networks, (iii) Logging, and (iv) Security.

(c) “Processes” simply means the “order of doing things”, based on the DEPENDENCIES in the architecture.

Tuesday, May 13, 2008

Lean vs UP or TDD


I wrote: if you know those (8 core) techniques, you surely already know those "processes". "Processes" is simply experiences you gain when you use those "8 core techniques".


I wrote: we should refuse to talk about "software process" -- we should use lean process in software directly, instead of UP or even TDD.

I am now more convinced.

The key is, being “lean” requires that we must do things as close or direct as possible to where the real values are happening.

The “8 core techniques” are where the real values are happening.

So, we should replace all those TDD or UP with 8 core techniques, and call it “lean”.

You may ask: even if we know the content of those 8 core techniques, we need to ask: to what category do they belong?

The answer: they are architecture elements.

So, a “lean” process means that we deal with architecture elements directly, iteratively and incrementally – which means, as long as you are making progress and you know about it and you can provide hard evidence for it, then, in whatever order, do whatever is necessary.

Lean process is very different from TDD and UP. Obviously, it is “heavier” than TDD, because it has a “default architecture”, and is totally architecture driven.

Also, it “includes” or “tolerates” or "consumes" the concepts of “requirements”, “use cases”, etc., because all of those concepts are simply iterations toward “screens and buttons” and “façade methods”. “Testing” is a real concept in “lean”, but that is simply because testing is inherently part of the architecture, just like logging is. Note that “lean” is better, because now you always know the context, and know that it can be extremely flexible yet can always focus on the real result.

It also "includes" or "consumes" DDD – all those mappings (OR and OU) and related rules are DDD, or, you may say that DDD is the iteration toward OR and OU. Again, “lean” is better, because you know the context, and you know it can be extremely flexible yet can always focus on the real result.

Saturday, May 03, 2008

Dynamics and ontology of blogging

I use blogging as a note-taking mechanism. Doing it this way helps me to separate private details from the issues themselves immediately. It makes my thinking more objective; it keeps me continuously theorizing, and hence thinking continuously.

I blog on technical, medicine, business, and culture topics.

The above line is an example of "immediate theorizing". On the one hand, the above line reminds me that I have a few blogs, so it serves the function of note taking. On the other hand, it makes me think about the classification/ontology of blogging.

State of the handhelds

1) Why am I interested in it?

A new wave of computer innovation is coming, and handheld is a key component.

More specifically, handheld devices have reached the point where they are more powerful than a PC was when the PC took off. GPS, mobile email and calendar, phone, camera, instant messaging, mobile web, mobile office productivity -- you can think, get feedback, and write things down anytime and anywhere -- your brain is always active and, more importantly, at peak level. Soon voice, handwriting, and image processing will enhance them; then, we will have the equivalent of "office" for "general site automation".

To maintain a wide perspective, we should keep in mind that there is an academic field, HCI (human computer interaction), for handheld development. Also, there are a few high-tech fields related to handhelds: "putting viewing devices on glasses", "multi-touch screens", "voice, handwriting, and image processing", and "natural language processing".

2) In general, there are two technologies: one is touch screen (palm, apple), one is physical keyboard (blackberry).

Note that perhaps we should talk about human needs first. However, from an HCI point of view, human needs and technologies have a hen-and-egg relation. You may say that needs are the mother of all innovations. However, do not forget that human body and behaviors are themselves "built-in technology systems"!

We put technologies first here, for a simple "shallow" reason: because there are only two of them for now, it is easier to introduce them, and we can use the concepts in the discussion of human needs.

3) There are many levels of input needs. Depending on the technologies, you can have different separations. For easier thinking and comparison, we always try to force a "canonical" level system.

Note that we only talk about the "write" side; the "read" side is easy from HCI design perspective.

A. No-hand (driving on local streets): voice processing is the key here.

B. One-hand (driving on the highway, walking in a familiar area; this can be further divided into left hand and right hand): I am very surprised that it seems that very few people think about this explicitly. This is a very important category; it should not be buried in category C (below) or category A.

C. Two-hands but in a constrained space or with other environmental constraints, by thumb typing and handwriting. Note that thumb typing can do a lot – actually the draft of this blog was written by thumb typing. However, handwriting can do even more – it is almost a shame that after more than 30 years of large-scale use of computers, the main input method is still typewriter technology – the keyboard. It is possible that handwriting and voice processing will make their breakthrough via handhelds.

Note that this seems to be the major focus for BlackBerry. You can make a sub-division by slide-out keyboard. The iPhone is between B and C.

D. Two-hands optimal. In this category, a flexible or folding keyboard is the way to go.

Friday, May 02, 2008

Silverlight, JSON, and Entity Framework and .net Being the Primary Enterprise Platform

JSON support is ready.

Silverlight is coming.

Entity Framework: it is more complicated. Perhaps it is not as good as NHibernate, the port from Java's Hibernate, just like Spring Framework and other such ports (for one thing, EF uses a code-generation approach -- an ugly thing, isn't it). However, the key is, the official, recommended architecture on .net is exactly the same as lightweight Java's.

Soon .Net will surpass Java/javascript.

However, the power of Silverlight is that it is cross-platform. I hope the Linux world can take advantage of this -- however, regardless of Linux/Windows, Java is now officially behind.

Silverlight marks the start of the end – now, .net can begin to claim that it is a superior platform. We are seeing a historical turning point.

A dominating platform that is not open-source based is very scary – it is not that I am brainwashed by open-source arguments; it is my many years of real-world experience: companies come and go, and if you do not have the source code, it is trouble.

You may say, that is only for the application level. For the system level, it is OK that you do not have source code. Actually, for many years, the Java VM was not open source. I have to agree.

However, I have to say that considering our dependency on computers, it is still scary; too much is at stake. I seriously believe that eventually the governments will do something to guarantee some healthy competition between open source and MS.

All in all, while I will continue to pay attention to Linux (and Unix in general), open-source C/C++, and Java, I have to say that perhaps we should welcome the fact that .net is going to be better than Java (Java is going to be the underdog) from now on, especially as this accompanies the fact that M$ is adopting, encouraging, and enforcing a more and more non-mort culture.

I do not regret entering .net -- after all those years, finally I can brag about it to my Java colleagues.

You may say, is it even possible that M$ is enforcing a non-mort culture? I believe so. It is in the best interest of MS. It is called market segmentation. M$ will certainly keep its low-end, mort market. However, it will also establish its high-end market. The key is that its key selling point is that its high-end market is a continuum of its low-end market. Another key point is that the high end must lead the low end – this is a technical necessity – it seems that M$ finally gets this, just recently: you can simplify a good system to make it easier, but it is technically impossible to “complexify” a bad system to make it good. So, more accurately, combining the two key points together: M$’s key selling point is that M$’s low-end market is the continuation of its high-end market.

REST and data

Here is my previous blog about REST:

I wish I could say that I wrote my blog before all of those:

Thursday, May 01, 2008

Use lean process directly, instead of UP or even TDD

Vikas's insight on lean process and TDD is wonderful. We need to apply "lean", "6-sigma", etc. to "software process".

Further, because there is so much baggage in "software process", I (tentatively) suggest that we should refuse to talk about "software process" -- we should use lean process in software directly, instead of UP or even TDD.

MS and Morts: and

My friend Vikas has a good post about a also-very-good-post about an interesting post (click the links, you will know what I am talking about!):

My comments:

1. It is indeed a tipping point, historical moment for MS!
Even if it is too little too late, especially too late, almost 10 years late, in Internet time.

I do love silverlight though; it is definitely next-gen-ajax.

Because we have Ajax and next-gen Ajax (Silverlight) -- although you may say MS's Ajax is part of ASP, that misses the point: Ajax makes traditional ASP irrelevant. As a result, ASP's MVC is less relevant -- Ajax/Silverlight makes the previously-official ASP irrelevant; it opens a huge opportunity.

2. MS is distancing from morts -- for that, I love MS -- it is innovating.

3. is basically following -- I certainly do not mean to insult people; they are doing creative work now, and will surpass java soon.

Note that I am using as a concept, not a community, as a result, and NHibernate are certainly part of -- conceptually speaking -- but I am not sure whether all people in and nhibernate are in the community of "".

Saturday, March 15, 2008

Expand perspective in two directions: higher level (Siebel/Javascript) and low level (hardware/C/C++)

Expand perspective in two directions: higher level (Siebel/Javascript) and low level (hardware/C/C++) and therefore forget about all fat “processes”

A. I will be in the Siebel space for a while; while this is happening, my plan is to have an integrated perspective of enterprise computing, from the bottom up.

Yes, this means I want to go back to the root of computing: the hardware. I want to pay more attention to handsets, games, and Linux. I want to refresh my Linux C/C++, system administration, and protocols, and then get a little bit of .Net on Linux.

I will keep using .net, because Silverlight is so attractive; but I will pay more attention to Linux-based things, simply because they are open source, and give me an open way to the hardware.

In short, for me, it is time to "merge" C/C++ (shell, perl), Java, C# (VB), and Javascript (perl) together – they are all C family anyway. Note that Siebel (and Ajax) will improve my javascript to a "serious professional" level. Also note that although I am gradually moving farther and farther away from perl and VB, because javascript is a scripting language (for Siebel, javascript is server-side scripting, exactly like perl!) and perl-like regular expressions are now in all modern languages, I can pick up perl anytime. As for VB -- as long as I keep C#, I can certainly re-pick up VB anytime.

By doing this, I will feel more “solid” when I do the programming in C#/Java/Javascript.

B. I will introduce Siebel techniques to spring:

1. OR mapping extensively: this is already in spring-hibernate, but Siebel's way adds more discipline and a systematic taste. This also includes OX mapping – XML-to-object mapping – and web services.

2. OU mapping -- databinding ideas: Siebel's databinding is strict and thorough.

3. Extensive use of a rule engine. Again, this did not really originate from Siebel, but Siebel 8 makes it happen.

4. Events, security, transactions, logging, etc. I will introduce some good ideas and practices as usage patterns for Spring's AOP.

Friday, January 25, 2008

new version of 8 core techniques and rules

I added marks (******) next to rules and unit testing.

Rules are important, and rules are the essence in OR mapping/OU mapping. There are two reasons:

a. Nowadays mappings are easy. OR/OU mappings are "assumed". They are just the "basics" (so, VB/C# guys, learn OR now! Java guys, learn OU now!). As a result, the focus is the "rule engine": the extent of the "rule engine", and how to use the "rule engine" with those mappings. In short, nowadays, when you talk about rules, you already assume that they are based on OR/OU mappings.

b. More importantly, I noticed that users care much less about those "mappings" (a.k.a. "glossary", "domain model") than about "rules". As a result, I feel it is better to use "rules" as the subtitle, instead of "mappings". I am effectively using a more "use-case-oriented" item to represent the "domain model" section -- it is not "fair" to the "domain model" section, but this way is more effective for communication with users. Again, although "rules" has a lot of "use case" characteristics, I put it under the "domain model" section, because it also relates to the domain model very closely; further, I use it to represent the whole section. The key reason I can do this is that modern (i.e. OO-based) rule engine usage always has a very strong requirement for a clear OO model; as a result, in real work, you do tend to mix rules with OO domain models.
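The last point, that OO-based rule engine usage mixes rules with the OO domain model, can be shown with a toy Python sketch: rules as condition/action pairs evaluated against a plain entity. All names here (Order, the tier rule, the approval rule) are invented for illustration.

```python
class Order:
    """A plain OO domain entity the rules run against."""
    def __init__(self, total, customer_tier):
        self.total = total
        self.customer_tier = customer_tier
        self.discount = 0.0
        self.flags = []

# each rule is a (condition, action) pair over the domain object
rules = [
    (lambda o: o.customer_tier == "gold",
     lambda o: setattr(o, "discount", 0.10)),
    (lambda o: o.total > 1000,
     lambda o: o.flags.append("needs approval")),
]

def run_rules(obj, rules):
    """Evaluate every rule's condition; fire the action when it holds."""
    for condition, action in rules:
        if condition(obj):
            action(obj)
    return obj

o = run_rules(Order(total=1500, customer_tier="gold"), rules)
assert o.discount == 0.10 and o.flags == ["needs approval"]
```

Note how the rules are only meaningful because the entity exposes a clear attribute vocabulary; this is the "strong requirement on a clear OO model" the paragraph describes.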

(Note: I merged "logging" together with aop logging, and moved it to aop logging)

1. "OR mapping" (including "UOW", Unit of Work, equiv. to Siebel's BO)

(Note: I moved UI here; I added two new items to it, to expand the concept of OU mapping: "O", "OUmapping")

2. ("O") "entity": rules in entities, rules engine(******), scripting in entities

3. ("U") "Web2 UI": ("ActiveX" and "Ajax")

4. "Data Binding" ("OU mapping")

5. "transaction" (and non-transaction, with "facades" and "workflows") (business service and workflows, tasks)
----------(I merged "unit testing" (******) together with "transaction")

6. "networking (at application level, as "connected applications")"
--i. async and sync EAI (background) "messaging" for integration and need "user push notification" (activities? or an area on every screen???)
--ii. batch EIM
--iii. offline web client
--iv. reporting server

7. "logging (that is runtime-configurable)"

8. "security"("authorization" and "authentication")

Saturday, December 29, 2007

why Android now? 10 million $

Companies and OS's for handhelds

current companies and OS's for handhelds:

Apple, Microsoft, Nokia, RIM, Palm
OS X, Windows Mobile, Symbian, BlackBerry, Palm

post-processes/ontologies: open eyes on technologies, economies, and politics

technologist, technology analyst, economist, and political analyst

After my previous blogs, I found that I began to see many things differently. It is not easy to summarize all those changes of mind and heart, so, I just put those job titles here.

Note that it is not just "processes" and "ontology" anymore; it is much "thicker" now. I now believe the concept of "processes" misses the point. Ya, "processes" are important, but if you know those (8 core) techniques, you surely already know those "processes". "Processes" is simply the experience you gain when you use those "8 core techniques".

As for "ontology", it is extremely important; there is another phrase, "methodological analysis". Here, just like the concept of "ontology" in both computer science and philosophy, "methodological" can be interpreted both as the concept of "process" in software engineering, and as a concept in philosophy that is parallel with "metaphysical" and "epistemological". Combining "ontology" and "methodological analysis", you have all the conceptual tools to tackle any issue in our information age.

I know, all this sounds abstruse and philosophical. That is indeed the exact point I want to make next: these concepts are important, but too abstract, for everyday thinking. To really apply "ontology" and "methodological analysis", we need concrete "examples", hence all those job titles -- we need to open our eyes on technologies, economies, and politics.

Thursday, December 27, 2007

it is time for handhelds and games -- Google's Android now and Wii later

----------excerpt from James Gosling's blog (I hope this is "fair use"; also, take a look at the "Cell phones in Japan" entry: it is amazing, but I cannot cut/paste it here, please go to James Gosling's blog!)

One of the games was a bowling game that you play roughly the same way that you'd use a Nintendo WII: you hold the phone as though it's a bowling ball, and you go through the motions of throwing the ball. You use a button press to release the ball. When this happens, it does the physics. But the phone doesn't have accelerometers to measure how you move the phone. They used one of the most glorious hacks I've seen in years: images are captured by the camera as you swing it in your hand, which are then analysed and correlated and motion vectors are computed from the interframe deltas.
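The hack Gosling describes is essentially block matching. As a toy illustration (my own sketch, not the phone's actual code), here is a 1-D version: find the shift between two rows of pixels that minimizes the mean absolute interframe delta, and treat that shift as the motion vector.

```javascript
// Toy 1-D block matching: estimate how far the scene shifted between
// two frames by testing candidate offsets and keeping the one with the
// smallest mean absolute interframe delta.
function estimateShift(prev, curr, maxShift) {
  let best = 0, bestErr = Infinity;
  for (let s = -maxShift; s <= maxShift; s++) {
    let err = 0, n = 0;
    for (let i = 0; i < prev.length; i++) {
      const j = i + s;
      if (j < 0 || j >= curr.length) continue; // skip out-of-frame pixels
      err += Math.abs(prev[i] - curr[j]);
      n++;
    }
    if (n > 0 && err / n < bestErr) { bestErr = err / n; best = s; }
  }
  return best; // the estimated motion, in pixels per frame
}
```

The real phone of course works on 2-D images; the same search just runs over (dx, dy) pairs instead of a single offset.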

Note: as you will see in the following, this is not M$ vs Java; it is much broader, although the core is the same, as always.

It is time to expand our software development vision to devices and games, for example BlackBerry (and other "handhelds") and the Wii.

I mentioned BlackBerry in my previous blog. In the spirit of the holidays, I should have mentioned games also, e.g. the Wii. Sorry, I do not like the Xbox, and I do not like the PlayStation; I feel "pure computer games" are too boring, I like "physical" games.

My deep belief is that serious computing is enterprise computing; even if it does not look that exciting or "high tech", in reality it is the real high-tech stuff. However, now I believe it is time to deal with handhelds and games. They are "connected", of course: handhelds need enterprise systems, and although it is not yet clear how games can be "incorporated" into the corporate world, there is clearly some effort to integrate the Wii into the web.

As I indicated in my previous blog, the keys are (a) they are a massive market now; (b) the devices are powerful enough that we can leverage PC experiences.

It is interesting to compare the iPhone (a real product now) with the gPhone (conceptware) and Android (Google); you can see the positive and negative sides of Apple and Google.

I noticed that Apple is also in the process of opening the iPhone to 3rd-party software.
Google is leading the way or catching up (depending on your point of view; it is a significant thing regardless of which view you take).

Again, the key is that (this time I put it in one sentence) the Java dream of ubiquitous devices and networked applications in the hands of the masses is finally coming true; there are killer applications (GPS is one of them; I believe the camera/scanner is another, along with barcode and RFID technologies).

What do all those things mean?

All those things make you think -- we need to broaden our perspective.

In the past, I did that with my 8 techniques: the first step was to expand the 8 techniques to "business processes", and then to "ontology". Those are all fine (but I am now putting less and less emphasis on formal processes -- I am more and more convinced that they are simply flexible applications of the 8 techniques, nothing more). However, I did not put the dreams of AI and device networks (drones of the Borg ;-) in the picture.

Saturday, December 22, 2007

time to invest in handheld software development?

Now, handhelds are inexpensive enough to be interesting to the masses.
Also, they are now powerful enough to be close to PC (or Apple) computers, so we can leverage PC/Apple software experiences.

I anticipate that within a few years, everyone will have a PDA with a real keyboard, GPS, camera/scanner, OCR, recorder, and voice recognition.

I will begin to look into handhelds. Basically, they should be just like a PC/Apple of 10 or 15 years ago; the only difference is that they are smaller and portable.

Thursday, November 22, 2007

new 8 core concepts, with or without Siebel

This is a major update! I changed the order a little, to reflect Siebel thinking, and OR/OU mapping/binding thinking.

(Note: I merged "logging" together with aop logging, and moved it to aop logging)
1. "OR mapping" (including "UOW", Unit of Work, equiv. to Siebel's BO)

(Note: I moved UI here; I added two new items to it, to expand the concept of OU mapping: "O", "OUmapping")
2. ("O") "entity": rules in entities, rules engine, scripting in entities
3. ("U") "Web2 UI": ("ActiveX" and "Ajax")
4. "Data Binding" ("OU mapping")

5. "transaction" (and non-transaction, with "facsades" and "workflows") (business service and workflows, tasks)
----------(I merged "unit testing" together with "transaction")

6. "networking (at application level, as "connected applications")"

--i. async (direct/sync is also used) EAI (background) "messaging" for integration, which needs "user push notification" (activities? or an area on every screen???)

--ii. batch EIM

--iii. offline web client

--iv. reporting server

7. "logging (that is runtime-configurable)"

8. "security"("authorization" and "authentication")

Saturday, November 17, 2007

Mashing up integration UI, service UI, and application UI -- the final reason why web is better than smart client and hence ajax is the key

First of all, "application UI" means the application itself!

As I pointed out in my previous posts, webmethods' philosophy is interesting but perhaps too radical: we should put application UIs in the integration platform (e.g. webmethods), instead of the application platform (e.g. M$'s Visual Studio).

I gave it some thought. Actually, this does not need to be an either/or choice. As long as you use the web, it is just a link away. So, I guess webmethods's philosophy is indeed unnecessarily radical. As long as it is the web, who cares.

However, webmethods's philosophy does make it clear that the web is the key; further, in order to make it interactive, you need to use ajax.

This also means "service architecture" and "application architecture" are the same thing -- exactly identical, synonyms.

Getting deeper into Siebel while keeping everything in perspective

I am getting deeper into Siebel.

My focus is EIM ("Enterprise Integration Manager", Siebel's name for batch-based integration, including the launch-eve initial data load) and EAI ("Enterprise Application Integration"; for Siebel, it specifically means non-batch integration).

This focus is good for me, because I tend to be curious (or ambitious -- or whatever) about the whole thing ("architecture", or whatever name you use to refer to the holistic view).

To avoid getting lost in details, I need to set up the context so that the context will tell me the directions.

The "context" must be "native" to Siebel, as a result, "spring" is no good: I cannot mention "spring" to Siebel architects, or any Siebel experts in general. In the context of 5th-level-platform, techniques in 4th-level or 3th-level platform are not received.

However, some abstract concepts can be tolerated, for example, AOP and OR mapping. Also, Siebel experts can certainly tolerate concepts already used in Siebel, for example, UI-O mapping (data binding), rules engines, and workflows.

So, the rule is: do not use "Spring" to talk about Siebel; however, do insist on using abstract concepts to talk about Siebel. I will update those items often, until they are stable -- I believe they are the keys to learning and applying Siebel. They are the base of my contributions to the design and spec of all Siebel activities.

"OR mapping" (including "UOW", Unit of Work, equiv. to Siebel's BO)
"Web2 UI": ("ActiveX" and "Ajax")
"Data Binding" ("OU mapping")
"aop entity rules": rules engine and other rules in entities (BC)
"aop scripting" (synonym for AOP in general, both entity and facade -- scripting is injected by AOP)
"aop security"("authorization" and "authentication")
"aop logging"
"aop transaction" (more precisely, non-transaction, workflows)
"aop network" batch EIM and async EAI (background) "messaging" for integration and need "user push notification" (activities? or an area on every screen???)

The key is that for integration purposes, we need a common conceptual ground -- if we cannot even have common "conceptware", how can we have common software!

Wednesday, October 31, 2007

EDA/ESB is SOA2: Why fine-grained SOA is good and ESB-based UI is good

------SOA2: As I pointed out in my previous blog, the key to SOA is nothing but EDA/ESB/MOM. As a result, we should say that all those SOA hypes are just the "warming-up round, or round 1" of SOA. The real SOA, i.e., SOA2, is EDA/ESB/MOM.

ESB is MOM with a "service" interface instead of a "proprietary" interface (e.g. JMS. Note that JMS is a standard; however, it is too narrow, so it is "proprietary" in the big scheme of things).
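To make "MOM with a service interface" concrete, here is a minimal sketch (my own toy, with illustrative names): the same in-memory publish/subscribe topics a MOM would expose, hidden behind a named service operation so callers never touch the queue API directly.

```javascript
// The "MOM" part: raw publish/subscribe on named topics.
const topics = {}; // topic name -> list of subscriber callbacks

function subscribe(topic, callback) {
  (topics[topic] = topics[topic] || []).push(callback);
}

function publish(topic, message) {
  (topics[topic] || []).forEach(cb => cb(message));
}

// The "service interface" part: callers invoke an operation by name;
// that a message bus carries the call is an implementation detail.
function callService(operation, payload) {
  publish('svc.' + operation, payload);
}
```

Swapping the in-memory `topics` object for JMS, MQSeries, or MSMQ changes nothing for the caller of `callService` -- which is exactly the point.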

------Fine-grained SOA: at the ESB level we get so many things for free, so we should make services fine-grained -- that is, as fine-grained as practically possible. Will that slow things down, like the early days of COM or EJB? Ya, but that is where "practically possible" counts.

------ESB-based UI: Some ESB vendors (e.g. Webmethods) put UI support above the ESB. It is a powerful concept; we should discard "proprietary UI" (M$'s VS or Java IDEs; of course, except the ones with ESB-based UI -- e.g., Webmethods uses Eclipse for its ESB-based UI) and replace them with ESB-based UI.

(a) Advanced web2/Ajax-based controls for free (if you buy the product, of course!).
(b) It makes the look-and-feel more consistent.
(c) It enforces SOA: you have to use SOA2 now.

(d) However, note that a good ESB can support using a proprietary UI and still provide all the ESB benefits. Actually, it is more an issue of the IDE of the proprietary UI: you can put the service in the ESB, and then, in your proprietary UI, call the API from the ESB. As you can see, there is no reason not to use an ESB-based UI!

Sunday, October 28, 2007

SOA, EDA, JMS, ESB, webmethods

The real deal of SOA is not XML, contracts, and such crap. The real deal of SOA is JMS (MQSeries etc.). "Messaging" alone is too broad a term: people say email is messaging, and call all web services messaging too. You can say "async messaging", but JMS can also be used synchronously. Some newer names are EDA (event-driven architecture), ESB (enterprise service bus), and EMB (enterprise messaging bus).

WebMethods is the only company that has been working on this, and ONLY on this; as a result, it is a good source for this vision.

The key, however, comes from JMS practice (and, in turn, MQSeries practice): if you push, then you need to consider push failure, so you need async messaging.

Just in case: I am not trying to sell webmethods. As a matter of fact, above I just pointed out the way to avoid the expensive stuff: as long as you have a simple queue (a database table, a file, or MSMQ) to handle failures, and a mechanism to recover from those failures (either automatic or manual), you are fine. Who needs expensive software packages! -- I study them because they are "free" to me (I mean, the company bought them!), and also, I learn the full-fledged stuff simply to see how I can accomplish the same thing with plain old code, perhaps just 10 lines!!!

Great people think alike and at the same time :-) Here is a link to Vikas's blog:

Thursday, October 18, 2007

more siebel notes later

My previous blogs are just the first round of my Siebel notes.
I will put up more Siebel notes later. As I explained before, I use Spring to see the internals of Siebel and to see how Siebel can work with other systems; and I use Siebel as a source of architecture best practices (like OR mapping, rules engines, etc.).

siebel internals and integration: spring

This post is really short: why do we need to think about Spring when we do Siebel? Because Siebel is "third-party", we do not know its internals. Spring can provide insights.

Spring is the "standard" best-practice framework on both C#/.NET and Java, so it is also the best reference for integration.

stored procedures on oracle -- more words

-----------------how (this should be a very simple system, because we do not use stored procedures that often -- but when we use them, we really need them immediately, so the "how" should be lightweight and doable everywhere! see "when" below)

For Oracle SPs, the key is how to debug or test an SP easily:

1. It is crucial that we develop SPs using both sqlplus and other query tools (e.g. Toad, even the "simplified" Toad). --- More specifically, we need to get used to test-running an SP in "simplified" Toad.

2. It is also crucial to "print out" things, using exceptions and an ad hoc table.

-----------------when

1. They should not be used in routine, ordinary, everyday-practice situations.

2. If you really want them (for consistency's sake, or for people who are used to using SPs as a managerial control tool), only use simple CRUD. Do not use "data logic" as a cover for putting "business logic" in SPs. "Data logic" is only an optimization issue; so, there is no such thing as "data logic".

3. SPs can and should be used to optimize some logic when necessary.

4. Also, for some aspects of business logic that are already well isolated (i.e., not likely to be reused) and pretty low-level, we can call them "data logic" and put them in SPs -- but again, there is no such thing as a widespread data-logic layer!

A pragmatic piecemeal route towards Ajax and silverlight

1. No third-party controls, e.g. no ComponentOne.
--If you really need them, just use a few controls in the limited places where you really need them (e.g. a calendar).

Why? Because they are simply trouble. Up-front trouble, in-development trouble, support trouble, and upgrade trouble. Trouble.

2. Instead, continue to use M$ "built-in" controls.
(a) However, use more "check boxes" to increase user interactivity (so that it becomes obvious that we need ajax!).
(b) Also, begin to use javascript: do not use it all at once; however, use it seriously. Do not just hack it. Before you know it, you will have a lot of javascript, and people will get used to it.

3. Introduce M$ Ajax server controls; however, use them judiciously, not large-scale and systematically. Use them only to get people used to ajax.

4. Use a client-side library (the best are Ext and Yahoo's), but wrap it in server controls yourself.

5. If you can use alpha software in some environments (i.e. not "mission critical"), use Silverlight as the primary technology (hence, javascript is only the glue, not the primary language).
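As a sketch of what "use javascript seriously" (point 2b above) can mean in practice: keep page scripts in one namespaced module of small, reusable functions, instead of ad hoc snippets pasted into each page. The names here are illustrative, not from any real project.

```javascript
// One namespace object for the whole application's page scripts.
var MyApp = MyApp || {};

MyApp.forms = {
  // Normalize user input before it is posted back:
  // trim the ends and collapse runs of whitespace.
  cleanInput: function (value) {
    return String(value).replace(/^\s+|\s+$/g, '').replace(/\s+/g, ' ');
  },

  // Guard against double-submits: returns true only on the first click.
  singleSubmit: function (button) {
    if (button.disabled) return false;
    button.disabled = true;
    return true;
  }
};
```

Because the functions live in one place and take plain values, they can be reused across pages and tested outside the browser -- which is what separates "engineering javascript" from hacked javascript.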

Sunday, September 16, 2007

simplified command pattern (non-emit) AOP: anonymous delegate and generics

In my previous blogs, I mentioned that with C# 2.0 we can finally do things similar to Java's anonymous inner classes for the object-adapter pattern. This simplifies command-pattern AOP.

Basically, it is an easier and better alternative to strict inheritance in the template method pattern.

Generics here are just a helper; they make the "template" more like, well, a template -- you do not really need them, a lot of the time, if your template does not need to be general.

Note that this is still too expensive, so it can only be used for facade-level methods; it cannot replace real (i.e. emit-based) AOP, which can be used even for entity get/set methods.

I know, I used too many "patterns" here, so here is the code -- copied/pasted.


public static EmployeeDetails FindEmployee(int empId)
{
    // Impersonate the current Windows user before calling the web service.
    WindowsIdentity id = HttpContext.Current.User.Identity as WindowsIdentity;
    WindowsImpersonationContext wic = id.Impersonate();
    BrowseOrganizationServiceWse bosw = new BrowseOrganizationServiceWse();
    FindEmployeeResult fer = bosw.FindEmployee(empId);
    EmployeeDetails ed = fer.Item as EmployeeDetails;
    wic.Undo(); // end impersonation
    return ed;
}

---------------------------------- template: the ThreadStart delegate simply returns void

private static void RunMethodImpersonating(ThreadStart method)
{
    WindowsIdentity id = HttpContext.Current.User.Identity as WindowsIdentity;
    WindowsImpersonationContext wic = id.Impersonate();
    try { method(); }          // run the wrapped code while impersonating
    finally { wic.Undo(); }    // always end impersonation
}


public static EmployeeDetails FindEmployee(int empId)
{
    EmployeeDetails ed = null;

    // Anonymous delegate: the body runs inside RunMethodImpersonating.
    RunMethodImpersonating(delegate
    {
        BrowseOrganizationServiceWse bosw = new BrowseOrganizationServiceWse();
        FindEmployeeResult fer = bosw.FindEmployee(empId.ToString());
        ed = fer.Item as EmployeeDetails;
    });

    return ed;
}


// cmd is of IDbCommand (SqlCommand, OracleCommand, OleDbCommand etc.) type.
// This is the command which we want to run against our database.
try
{
    using (IDbConnection conn = ProviderFactory.CreateConnection())
    {
        cmd.Connection = conn;

        // use the cmd object.

    } // "using" will close the connection even in case of exception.
}
catch (Exception e)
{
    // 1. Trace?
    // 2. Rollback transaction?
    // 3. Throw a wrapper exception with some more information?
}


public delegate T CommandHandler<T>(IDbCommand cmd);

/// <summary>Simple command executor "design pattern".</summary>
/// <typeparam name="T">The type to return</typeparam>
/// <param name="cmd">The command</param>
/// <param name="handler">The handler which will receive the open command and handle it (as required)</param>
/// <returns>A generic defined result, according to the handler choice</returns>
public static T ExecuteCommand<T>(IDbCommand cmd, CommandHandler<T> handler) //*1
{
    try
    {
        using (IDbConnection conn = ProviderFactory.CreateConnection()) //*2
        {
            cmd.Connection = conn;

            // Trace the query & parameters.
            DatabaseTracer.WriteToTrace(TraceLevel.Verbose, cmd, "Data Access Layer - Query profiler"); //*3

            return handler(cmd); //*4
        } // "using" will close the connection even in case of exception.
    }
    catch (Exception e)
    {
        // Trace the exception into the same log.
        Tracer.WriteToTrace(TraceLevel.Error, e, "Data Access Layer - Exception"); //*5

        throw WrapException(e); //*6
    }
}

public delegate T ReaderHandler<T>(IDataReader reader);

/// <summary>Execute the db command as a reader and parse it via the given handler.</summary>
/// <typeparam name="T">The type to return after parsing the reader.</typeparam>
/// <param name="cmd">The command to execute</param>
/// <param name="handler">The handler which will parse the reader</param>
/// <returns>A generic defined result, according to the handler choice</returns>
public static T ExecuteReader<T>(IDbCommand cmd, ReaderHandler<T> handler)
{
    return ExecuteCommand<T>(cmd,
        delegate(IDbCommand liveCommand) //*1
        {
            // This is the anonymous delegate handler.
            // REMINDER: The original template sends the live command as parameter.
            IDataReader r = liveCommand.ExecuteReader();
            return handler(r);
        });
}

/// <summary>Retrieve the persons according to the specified command.</summary>
/// <returns>Typed collection of Person.</returns>
public static List<Person> GetPersonsList()
{
    IDbCommand cmd = ProviderFactory.CreateCommand();
    cmd.CommandText = "SELECT Name,Age,Email FROM Persons";
    cmd.CommandType = CommandType.Text;

    return DalServices.ExecuteReader<List<Person>>(cmd,
        delegate(IDataReader r)
        {
            List<Person> persons = new List<Person>();

            while (r.Read())
            {
                // Create a Person object, fill it from the reader, and add it to the "persons" list.
                persons.Add(new Person(r["Name"].ToString(), Convert.ToInt32(r["Age"]), r["Email"].ToString()));
            }
            return persons;
        });
}


/// <summary>Retrieve the persons xml according to the specified command.</summary>
/// <returns>Xml representation of the persons.</returns>
public static string GetPersonsXml()
{
    IDbCommand cmd = ProviderFactory.CreateCommand();
    cmd.CommandText = "SELECT Name,Age,Email FROM Persons";
    cmd.CommandType = CommandType.Text;

    return DalServices.ExecuteReader<string>(cmd,
        delegate(IDataReader r)
        {
            StringBuilder builder = new StringBuilder(500);

            while (r.Read())
            {
                // Create a Person object from the reader and append its xml
                // (assumes a Person.ToXml() helper; the original snippet omitted this line).
                Person person = new Person(r["Name"].ToString(), Convert.ToInt32(r["Age"]), r["Email"].ToString());
                builder.Append(person.ToXml());
            }
            return builder.ToString();
        });
}

/// Execute the db command in "NonQuery mode".

/// The command to parse
/// Affected rows number
public static int ExecuteNonQuery(IDbCommand cmd)
return ExecuteCommand(cmd,
delegate(IDbCommand liveCommand)
return liveCommand.ExecuteNonQuery();

/// Execute the db command in "Scalar mode".

/// The type to return after parsing the reader.
/// The command to execute
/// A generic defined result, according to the handler choice
public static T ExecuteScalar(IDbCommand cmd)
return ExecuteCommand(cmd,
delegate(IDbCommand liveCommand)
return (T)liveCommand.ExecuteScalar();

Saturday, September 01, 2007

Why and How to use server side ajax – even you do not like it – part 2

As my previous blogs show, my basic estimate is that it will take one or two years for Ajax controls to become mature enough that we can safely wrap them on the server side (and still use the javascript API). Before that, it is simply not cost-effective; we will spend a lot of time dealing with bugs, limitations, and cover-ups. You can avoid all that by learning to treat javascript as an engineering language that is required of every developer.

However, it is not at its prime time yet; its prime time will come a few months or half a year after the Orcas release. As a compromise, at least before Orcas, I have to use C1WebGrid etc., instead of client-side ajax controls (the best is Ext, and Yahoo's is also good).

As I pointed out in my previous blogs, the key is to pay attention to the javascript API.

However, the problem is, you cannot find those APIs! Although they are advertised in fancy wording, the reality is that they are not ready yet! Actually, the reality is even uglier: because the engineering side is not really ready yet, the whole thing is in the hands of marketing people, who do their best to spin the weaknesses of the product into features. For example, they turn the fact that there is no client-side javascript API into "you can use ajax automatically" -- you would think that since you can do it automatically, you certainly can do it manually. No, you cannot! The reality is: the product is not ready for that yet!

A good example is C1WebGrid and C1WebDialog. C1WebGrid is not really ajax-ready yet (ya, it has canned ajax features; that means almost nothing nowadays!); C1WebDialog is a little better. However, both are marketed as ajax-ready. If you believe the marketing hype for C1WebGrid, you will really be disappointed, or misled into thinking that "ajax can do only this little". As a result, in such a situation, developers have to lower their standards, use postback mostly, and use callback whenever they hit one by chance and luck. Note that some marketing materials lead you to believe that the update panel does not use postback. That is not true. The update panel uses postback. So, if it is not a client-side API, it uses postback, even if it is not a whole-page postback. As a result, we need to talk about how to do postback or UpdatePanel in a way that limits the damage.

UpdatePanel postback is still bad compared with a real client-side API callback, for two reasons. (a) Postback always invokes the whole postback machinery, so it hurts performance. However, this part is not a big deal for us; its damage is only performance, and as long as it is fast enough for users, we are fine. (b) More importantly, a lot of the time postback has to do things in a twisted way. For example, the "multi-value selection shuttling grid pair": you have two grids, the left (or top) grid and the right (or bottom) grid; the left one is for the available rows, the right one is for the selected rows; you make a selection by moving a row from the left to the right. The selection action is a purely client-side thing, until you click the "submit selections" button. However, postback makes it a server-side operation. Worse, sometimes, to make things perform better, you have to create a workaround in the UI: you put a checkbox on each row, let users select a few rows, then click a "select" button (this select button is not the "submit selections" button -- the reason is that there is paging on the available grid -- which is actually the reason we need to shuttle the selected rows to a new grid). The whole thing gets more and more confusing as you pile on workarounds.
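For contrast, here is the shuttle pair as the pure client-side operation described above (my own sketch with illustrative names; plain arrays stand in for the two grids). Only submit() needs a server round trip.

```javascript
// Client-side shuttle between an "available" grid and a "selected" grid.
function makeShuttle(available) {
  const selected = [];
  return {
    // Move one row from available to selected -- no server involved.
    select(id) {
      const i = available.findIndex(row => row.id === id);
      if (i >= 0) selected.push(available.splice(i, 1)[0]);
    },
    // Move it back.
    deselect(id) {
      const i = selected.findIndex(row => row.id === id);
      if (i >= 0) available.push(selected.splice(i, 1)[0]);
    },
    // The only step that should hit the server: return the final ids
    // for the one "submit selections" postback.
    submit() {
      return selected.map(row => row.id);
    }
  };
}
```

Because the selected list lives on the client, paging the available grid does not lose the selections -- no checkbox-plus-"select"-button workaround needed.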

I believe we have a few options:

(i) Find those workarounds;

(ii) Say no to users;

(iii) Buy more new controls, for example a shuttle control. You will need perhaps a dozen controls. This has two sides:

(1) new controls have more canned ajax (i.e., built-in ajax without javascript);

(2) new controls have more javascript APIs.

Obviously, iii-(2) is our real hope! However, iii-(1) is actually also a good one -- once you have a dozen pretty good controls, your javascripting will be very "regularized" -- if you need to do any javascripting at all! So, the approach is that we will push to buy more new controls. Most of the time, they have to be from the same vendor -- for political reasons; however, sometimes, as long as they are "server-side controls", other vendors are also fine.

In short, here is the summary:

(a) For now, for some older so-called ajax-enabled server-side controls, most of the time we have to use postback/UpdatePanel instead of real callbacks;

(b) We buy the most recent server-side controls, for both purposes: (i) we can begin to use a little bit of the javascript API, and (ii) we do not need to write javascript directly.


Below are notes from my study of the C1WebGrid user manual.

Note: You must create a ComponentOne Account and register your product with a valid serial number to obtain support using some of the above methods.
The ComponentOne Studio for ASP.NET 2.0 installation program will create the directory C:\Program Files\ComponentOne Studio.NET 2.0. This directory contains the following subdirectories:

bin -- Contains copies of all ComponentOne binaries (DLLs, EXEs).
Common -- Contains support and data files that are used by many of the demo
Help -- Contains online documentation for all Studio components.
C1WebGrid -- Contains samples for C1WebGrid.


Select the ComponentOne C1WebGrid assembly from the list on the .NET tab, or browse to find the C1.Web.C1WebGrid.2.dll file and click OK.


an assembly resource file which contains the actual run-time license (a host dll, or for ASP 2.0 the App_Licenses.dll assembly)

a "licenses.licx" file that contains the licensed component strong name and version information (Show All Files on the Project menu) ---- to add the file, add the control to the form (then, can delete the control)


Right-click C1WebGrid and select Show Smart Tag from the context menu. Select Property Builder from the C1WebGrid Tasks menu.


1. Select the C1WebGrid component.

2. Select Properties Window from the View menu on the Visual Studio toolbar.


1. Right-click C1WebGrid and select Auto Format from the context menu. The Auto Format dialog box opens.