Monday, September 18, 2006

Why Oracle .Net Dataset development needs stored procedures

If you use OR mapping, then there is no need to use stored procedures.
Also, if you use SQL Server, then even with datasets, not using stored procedures is not that bad.

However, if you use Oracle, and you use datasets, then you pay extra for not using stored procedures (the upside: you learn a lot about manipulating the dataset mechanism – but it is a pain).

1. A stored procedure adds another level of indirection, and therefore levels the playing field between SQL Server and Oracle. So, if you use datasets, you had better use stored procedures, to get the features that SQL Server enjoys out of the box – for example, an insert with a refresh and with a sequence id (a sketch follows below).
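Here is the sketch just mentioned – a minimal illustration with hypothetical package, procedure, and parameter names, written against the ODP.NET-style API (treat the details as assumptions, not a recipe). The procedure inserts a row, grabs the sequence value, and hands the new id back, so the dataset row can be refreshed without a second query:

    // Hypothetical procedure pkg_orders.insert_order(p_customer IN VARCHAR2, p_id OUT NUMBER);
    // inside, PL/SQL does: INSERT ... VALUES (seq_orders.NEXTVAL, ...) RETURNING id INTO p_id;
    using System.Data;
    using Oracle.DataAccess.Client;   // ODP.NET
    using Oracle.DataAccess.Types;

    public static class OrderGateway
    {
        public static decimal Insert(OracleConnection conn, string customer)
        {
            OracleCommand cmd = new OracleCommand("pkg_orders.insert_order", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("p_customer", OracleDbType.Varchar2).Value = customer;
            OracleParameter id = cmd.Parameters.Add("p_id", OracleDbType.Decimal);
            id.Direction = ParameterDirection.Output;
            cmd.ExecuteNonQuery();
            // The caller copies this back into the dataset row ("insert with a refresh").
            return ((OracleDecimal)id.Value).Value;
        }
    }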

2. Oracle ODP supports arrays as input parameters. Combined with stored procedures, this can easily be used for batch processing (see the second sketch below). Note that because of the extra installation, for two-tier clients, you may choose not to use ODP. Even then, you can pass a string – XML, or plain old comma/bar delimited – for batch processing.
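A minimal sketch of that associative-array binding, again with hypothetical names; the whole batch goes to the stored procedure in one round trip:

    // Hypothetical procedure pkg_orders.delete_many(p_ids IN ids_tab), where
    // ids_tab is a PL/SQL associative array (TABLE OF NUMBER INDEX BY BINARY_INTEGER).
    using System.Data;
    using Oracle.DataAccess.Client;

    public static class BatchGateway
    {
        public static void DeleteMany(OracleConnection conn, int[] ids)
        {
            OracleCommand cmd = new OracleCommand("pkg_orders.delete_many", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            OracleParameter p = cmd.Parameters.Add("p_ids", OracleDbType.Int32);
            p.CollectionType = OracleCollectionType.PLSQLAssociativeArray;
            p.Value = ids;          // the whole batch in one call
            p.Size = ids.Length;
            cmd.ExecuteNonQuery();
        }
    }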

Note that I feel OR mapping is the way to go, so in the future stored procedures will likely be used only in rare cases; however, we live in the “present”, not in the “future”. Let's use stored procedures.

However, let's keep them as thin as possible, i.e. no “real” business logic.

Saturday, September 16, 2006

Quick research on the idea of aspectizing databinding support

I did some quick research on the idea of aspectizing databinding support (see my previous post http://survic.blogspot.com/2006/09/can-we-use-aop-to-mix-in-databining.html#links).

I googled "databinding and nhibernate". (why? because I knew hibernate is pure OO. To make it to support databinding, you either add that via code generation, or, using aop.
I could just google “aspect and databinding” – somehow, I googled nhibernate first. I guess I feel a concrete solution is more searchable than an abstract concept – Later, I did the latter also, same result.


http://chanwit.blogspot.com/


I will put more thoughts in my comments.

Use browser automation as a web offline backup tool

Use browser automation as a web offline backup tool


Pro: Browser automation can be very insensitive to the technology of the web site -- if you can read it, you can save it.

---- To save text content: just “select all”, “copy”, and then “paste” into another file.

---- To save pictures: move along in very short steps and right-click; if it is a picture, save it (then do not save again until we have passed at least one non-picture, to avoid saving the same picture many times).


Con: It is very sensitive to the content/layout; so, it cannot be totally automated (and therefore cannot be abused). You can use web testing automation for it; and just like web testing automation, you first need to “teach” it by recording, and even by some tweaking (especially for loops).

Also, the result is not identical to the pages you want to back up. You only get what you did in the recording or custom tweaking. However, this is usually sufficient for saving materials referenced in your notes.
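For the text-saving case, here is a minimal sketch – my own illustration (hypothetical URL and file name), not any particular recording tool – that uses the WinForms WebBrowser control to do the “select all / copy / paste” step:

    using System;
    using System.IO;
    using System.Windows.Forms;

    class PageTextBackup
    {
        [STAThread]
        static void Main()
        {
            WebBrowser wb = new WebBrowser();
            wb.ScriptErrorsSuppressed = true;
            wb.DocumentCompleted += delegate
            {
                // Save only the visible text, like "select all" + "copy" + "paste".
                File.WriteAllText("backup.txt", wb.Document.Body.InnerText);
                Application.ExitThread();
            };
            wb.Navigate("http://example.com/");   // hypothetical page to back up
            Application.Run();                    // message pump so the event can fire
        }
    }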

---- This is a good example of personalized browser/office automation.

Friday, September 15, 2006

Java’s generics and .Net’s typed dataset

This blog is simple and sweet -- just an insight:

Java’s generics is like .Net’s typed dataset,

or – to make it a little bitter for Java ;-)

.Net’s typed dataset is like Java’s generics.

Sunday, September 10, 2006

Typed Dataset's validation and custom classes' collections

Typed Dataset's validation and custom classes' collections:


A. When using custom classes: the key is the custom collections:

(a) the best is to use IList and use AOP to add databinding support (but I do not even know whether that is possible!)
---- “low tech hack”:
(i) use UI code to trigger a refresh when code changes a value shown in the grid;
(ii) use BindingSource to sort and search (note: for sorting, the comparable goes on the item class itself, not on the collection!)

(b) second choice: use IBindingList (it is built into .net)

(c) third choice: use one or a few custom-made generics that extend IBindingList (a minimal sketch follows after this list)
(i) IBindingList needs search/sort added (override AddNewCore etc.)
(ii) IBindingList is not enough, because the collection’s AddNewCore needs to add a back-pointer to the object it adds. This is needed because ICancelAddNew is new in 2.0; some controls do not know to call it (in CSLA, this is the reason for IEditableCollection).

(d) Use a lot of specific strongly typed collections. The only positive is a designer thing: you cannot drag a generic collection onto a form. Since an open generic collection is not “real” yet, that makes sense; however, why not use the ASP approach – use the collection returned from a façade method – or/and ask the user to fill in the type parameter of the generic collection (that would be a good add-in for Visual Studio). (Low tech hack: to get the designer support, you can simply put the closed generic collection in a dummy class.)
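Here is the sketch promised in (c) – my own illustration, not CSLA's code: a generic collection that extends the built-in BindingList&lt;T&gt; (which implements IBindingList) to add the sorting that BindingList&lt;T&gt; itself leaves out:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;

    public class SortableBindingList<T> : BindingList<T>
    {
        private bool sorted;
        private ListSortDirection direction;
        private PropertyDescriptor sortProperty;

        protected override bool SupportsSortingCore { get { return true; } }
        protected override bool IsSortedCore { get { return sorted; } }
        protected override ListSortDirection SortDirectionCore { get { return direction; } }
        protected override PropertyDescriptor SortPropertyCore { get { return sortProperty; } }

        protected override void ApplySortCore(PropertyDescriptor prop, ListSortDirection dir)
        {
            // Assumes the default constructor, so the backing store is a List<T>,
            // and assumes the bound property values implement IComparable.
            List<T> items = (List<T>)Items;
            items.Sort(delegate(T x, T y)
            {
                int result = Comparer<object>.Default.Compare(prop.GetValue(x), prop.GetValue(y));
                return dir == ListSortDirection.Ascending ? result : -result;
            });
            sortProperty = prop;
            direction = dir;
            sorted = true;
            OnListChanged(new ListChangedEventArgs(ListChangedType.Reset, -1));
        }

        protected override void RemoveSortCore()
        {
            sorted = false;
            sortProperty = null;
        }
    }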



B. When using DataSet, the key is the validation logic:

---- Validation logic: because we do not want to rewrite the generated set/get methods in the dataset, we need to use events. However, the validation must not depend on the UI level. We can use the exact same technique as CSLA to centralize it (or the “original” centralized style – just a big switch – that is fine too). By doing this, we can still do automated unit testing.
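A minimal sketch (hypothetical table and column names) of that event-based, UI-independent validation; because it is a plain class with no UI types, it can be exercised directly from an automated unit test:

    using System.Data;

    public static class OrderValidation
    {
        public static void Attach(DataTable orders)
        {
            orders.ColumnChanging += OnColumnChanging;
        }

        // One centralized handler – the "big switch" style mentioned above.
        private static void OnColumnChanging(object sender, DataColumnChangeEventArgs e)
        {
            switch (e.Column.ColumnName)
            {
                case "Quantity":   // hypothetical int column
                    if ((int)e.ProposedValue <= 0)
                        e.Row.SetColumnError(e.Column, "Quantity must be positive");
                    else
                        e.Row.SetColumnError(e.Column, "");
                    break;
            }
        }
    }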



C. Other things in Databinding

---- we can use a general ICustomTypeDescriptor and attributes to configure the public “columns”

---- In a simple implementation, IEditableObject does not need to be recursive, because a child collection means another grid, and so it will EndEdit anyway.
---- Databinding always goes through PropertyDescriptor (a form of reflection, but with the extra indirection of this “interface” (TypeDescriptor, ListBindingHelper), so that the same code works for both dataset and custom class). So, if we call “AddValueChanged”, the list will be notified. However, most code changes a value via direct code, not reflection, so we must also implement INotifyPropertyChanged.
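A minimal sketch of that last point: raise INotifyPropertyChanged from the setter, so that direct code changes (not just reflection through PropertyDescriptor) still notify the bound list:

    using System.ComponentModel;

    public class Customer : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        private string name;
        public string Name
        {
            get { return name; }
            set
            {
                name = value;
                // Notify the binding infrastructure about the direct code change.
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs("Name"));
            }
        }
    }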

Saturday, September 09, 2006

auto-webservice without the “4-CRUD or 1-Execute” limitation and without strongly typed collections

Vikas has a good (I would say the best) design:
http://www.blogger.com/comment.g?blogID=11393788&postID=115750673939699075

---- It has real webservice interfaces, but the web services are also used as if they were remoting (hence "auto"), without extra constraints on those interfaces, and without special strongly typed collections.


-----------------------------------------Below are short blog comment-dialogs
---Object Manager?
---Yes, I checked your diagram again. CSLA's "Data portal" corresponds to the whole objectManager-WebService-GateWay mechanism.

I wonder why you do not simply adopt CSLA’s dataportal? I am really curious.

Is it the “4-CRUD or 1-Execute” limitation (and all those "criteria, exists" workarounds)? To me, the “4-CRUD or 1-Execute” limitation is very serious, because it tends to twist the domain model.

I like your design: you use code generation to remove the problem of the “4-CRUD or 1-Execute” limitation. “Emit” is also good; it is just a matter of timing, maintenance, and flexibility trade-offs – also, we need to do the code generation anyway (I am too addicted to AOP; code generation is more flexible and down-to-earth).

The simplest way is to remove the "dataportal" and ignore the limitations/workarounds, because most of the time we use either web or two-tier winforms – no "app servers"/web services. We can always add that mechanism when we really need it. However, as you said, if you have it, why not use it, to make everything consistent.


---We would be left with 0 strong typed collections after that.
---wow!!!

-----------
No “4-CRUD or 1-Execute” limitation, no strongly typed collections – congratulations, your design is one of the very few best!!!

Thursday, September 07, 2006

can we use aop to mix in databinding support?

To mix custom classes and datasets elegantly means removing the “framework taste”, i.e., removing those inheritances in the collections.

An idea: can we use AOP to mix in databinding support? By doing this, in the object model, everything is just ordinary collections!

Just an idea – this is not necessary; but it means the Q continuum is not just low-tech messing around. Continuum is cool.

For now though, I will just focus on the mundane stuff. I will come back to this idea later, later.

Monday, September 04, 2006

Practicing XP in its extremes: VB/Ruby tradition plus unit testing

What am I doing – all this “no framework approach” stuff?

I am starting a project that is the beginning of many loosely connected small projects.

It is TDD in its extremes. It seems that everything TDD describes is literally our reality: little official design, consciously anti-framework, on-cubicle customers ...

You may also say it is the traditional VB style and I just put a TDD label on it! – yes, I totally admit it, with only one point: the team is all painfully realizing that we need regression testing, that QA cannot deliver it, that it is not their fault at all, and that it cannot be "fixed" in any foreseeable future – so, we all know that developers must take on the responsibility of providing regression testing. To implement that feature, we, very naturally, have automated unit testing.

I have been theorizing (material for pattern-writing?) about OTTEC. Now, it is time to use the theory, tentatively, of course.

I mention Ruby because, nowadays, in web development it means the same thing that VB represented years ago in winforms development: productivity at the level where development is done the moment the analysis is done; therefore the developers are the analysts – same person, same time.

books for “no framework approach”

books for “no framework approach”:

Data Binding with Windows Forms 2.0: Programming Smart Client Data Applications with .NET http://www.amazon.com/exec/obidos/tg/detail/-/032126892X?v=glance

Further, if you really want to understand dataset, please check out CSLA http://www.lhotka.net/cslanet/default.aspx .
In the past, I used the data binding book to understand CSLA better; now, I am proposing that you read CSLA to understand the data binding book.

You need to think through CSLA's custom objects (without the portal part); then you can really understand and appreciate dataset, and how to use it with simple ("no framework approach") custom objects without custom collections (again, "no framework approach").

“No framework approach” does not mean a “no thinking approach”, or a “mess-creating approach”. Just like TDD, a good “no framework approach” requires strong discipline and deep understanding. For developers, it is actually much more challenging and interesting than the usual framework-oriented approach -- the world is yours; you are fully responsible. I know, it also reads as “not invented here” – again, trade-offs ;-)

-------------related but another topic

I know office programming has not been considered serious programming, especially not enterprise computing. However, VSTO is changing this:

Visual Studio Tools for Office: Using C# with Excel, Word, Outlook, and InfoPath
http://www.amazon.com/exec/obidos/tg/detail/-/0321411757?v=glance

Sunday, September 03, 2006

8 Core Programming Techniques v7

8 Core Programming Techniques v7

List of updated items:

0. Changed "Basic" to "Core": this is more about "C" than "B" ;-)

1. Changed the heading “Higher Level Architecture Reading and Lower Level Skills” to “Higher Level Architecture and Process Reading and Lower Level Skills”, and added “RUP and TDD/XP”.

2. Added content for the “no framework approach”, to reflect reality better in this document/blog – this document/blog has been very much “framework-oriented”, perhaps too much.

==============================8 Core Programming Techniques

Key Background Knowledge:

1. Higher Level Architecture and Process Reading and Lower Level Coding Skills

Those 8 techniques are described in a top-down style – I assume there is only one (basic) “architecture”, and that you know THE architecture. I know it is unusual that we need to talk about THE “architecture” before we talk about the “core” techniques. Actually, it is NOT that “unusual” -- without knowing those architecture diagrams, a programmer is working in the dark. Again, note that there are many diagrams, but they all convey the same basic architecture.

Reference diagrams from Sun (“Core J2EE Patterns”), the EJB community (“EJB Design Patterns”), and MS-MSDN best practices (“application architecture”):

http://java.sun.com/blueprints/corej2eepatterns/

http://www.theserverside.com/tt/books/wiley/EJBDesignPatterns/index.tss

http://msdn.microsoft.com/practices/topics/arch/default.aspx?pull=/library/en-us/dnbda/html/distapp.asp

As for “Process”, please google “software process RUP or UP” and “TDD”.
--------------------------
Note that the above mention of “architecture” and “process” does not mean we need an explicit architecture and an explicit “process” before we can develop software. Further, theorizing from reality, very often we can see the “no framework approach”:
--------------------------
The rule of the approach is simple: no third party tools, no third party frameworks. Further, no in-house “official” big framework, no in-house “official” big “utilities”; small utilities and code reuse are encouraged. Code generation is forbidden also. Note that this includes tools like NUnit. Note that “frameworks” are especially forbidden, because by their very nature, “frameworks” are include-all; they are very powerful, and therefore very harmful.

Let’s be crystal clear: it does not matter whether it is binary, commercial, free, or open source. For example, no third party controls, even those whose source we can buy.

This does not mean we cannot use third party software; for example, we use Source Safe, and add-ins for Visual Studio, as long as they do not affect the code.
--------------------------
“No framework approach” is preferable, usually for the following reasons:

(1) Fast to develop, no upfront learning curve
(2) Easy to upgrade, easy to maintain, by anybody, at anytime
(3) Developers learn a lot in such an environment – I know, this is surprising, but it is true. This is also the core spirit of XP/TDD. Also, this does not mean we do not study frameworks. We do; then we tear them apart, or wrap them, and only use the best one karat out of the tons of stuff.
(4) For maintenance development, you have no choice but to use this approach.
--------------------------
In addition to the above, we also need low level coding skills, i.e., “(OO) design patterns, generics, AO design patterns”.

2. General Reading

Computing changes fast; therefore, experience alone is not enough, and learning through hands-on experience alone is not enough. Attending conferences, user groups, and training classes is necessary. However, the most important means is reading (including listening to and watching videos, of course).

Because reading is so important, and because it is so intensive and lasts so long in a computing career, it is actually not as simple as it sounds – many things need continuous attention: how to take notes, how fast to read, how close the reading is to the daily work, how close the reading is to some hands-on experiments, how to find materials, structured or non-structured (e.g. blogs ;-), how to buy those materials, etc.

-----------------------------------------------
-----------------------------------------------
-----------------------------------------------
---------8 Core Programming Techniques per se
-----------------------------------------------
-----------------------------------------------
-----------------------------------------------


----------------------backend techniques
----------------------The only argument I can make for grouping logging with databases is that, for a long time, I have always put them together. Perhaps it is because the most important logging is database access logging; perhaps it is because logging is somehow similar to database work anyway. Note that the second logging item, in the middle-end, is different from the first logging item here in the backend. The second one focuses on the “aspect” aspect of it (the “client side” of the logging); the first one is on the logging itself (i.e., the “server side” of the logging).

1. Use log4net, in a simplistic style (avoid the fancy stuff)
2. Use MS Ent Lib’s data access block
2'. Use NHibernate, instead of Ent Lib, when we are in “advanced” situations

----A.K.A. Separation between database access and business logic (“Three goals for architecture design”)
----A.K.A. Reliable and flexible SECURITY-AUDITING (it is code-based, i.e. not database-system based) ("three common features for any systems or applications")
----For “no framework approach”:
-------- Logging: we need to wrap log4net anyway. Actually, this is a general “sister approach”: let’s wrap it. We can use just the thin wrapper in the beginning; if later we really need it, we can “violate” the rule and introduce log4net behind the wrapper (a thin-wrapper sketch follows below);
-------- Data access: the earlier version of the application block is just a few files, so this is certainly easy to do;
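Here is the thin-wrapper sketch just promised – my own illustration: callers depend only on this class, and the body can later delegate to log4net without touching any caller:

    using System;
    using System.IO;

    public static class Log
    {
        public static void Info(string message)
        {
            Write("INFO", message);
        }

        public static void Error(string message, Exception ex)
        {
            Write("ERROR", message + " | " + ex);
        }

        private static void Write(string level, string message)
        {
            // Swap this body for log4net calls if/when the rule is "violated".
            File.AppendAllText("app.log",
                string.Format("{0:u} [{1}] {2}{3}", DateTime.Now, level, message, Environment.NewLine));
        }
    }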


----------------------middle-end techniques
3-4-5-6. Use the DynamicProxy in the Castle framework to centralize remoting (3), transaction handling (4), security (5), and logging (6), at the façade level.

---- A.K.A. Separation between cross-cut concerns and business logic.(“Three goals for architecture design”)
---- A.K.A. Reliable and flexible SECURITY-AUTHORIZATION ("three common features for any systems or applications")
---- For “no framework approach”:
-------- Use code regularity and snippets, instead of command/code generation/emit, for all façade-level cross-cut concerns (logging, security, transaction, remoting);


Note 1: This is AOP, and we need to use emit to do it. Note that this removes much of the need for coding-time code generation via tools like CodeSmith. The other side of it is snippets in the IDE. Note that there are other timing choices for code generation, as in asp.net. Coding-time code generation is a very powerful tool for programmers. However, it should not be used as a "fix" for architecture design problems – generated repeating code is still duplicated code. In short, declarative programming is the way to go; it requires code generation; and we need the whole range of timing choices for that, from language built-ins to runtime manipulation.

Note 2: In order to do that, use programmatic IoC (i.e., centralize all “Factories”); but for most projects, do not use XML-style IoC (that is too heavy).

--- to avoid client casting, use a factory method
--- use an abstract factory for switching easily
--- put the class name, method name, and variable name in the factory method
(a minimal interceptor-plus-factory sketch follows below)
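Here is the sketch just referenced – my own illustration with a hypothetical façade interface, written against the DynamicProxy2-style API (the exact namespaces and signatures vary across Castle versions, so treat the details as assumptions). A logging interceptor is centralized at the façade level, and the proxy is created through a programmatic "factory" as in Note 2:

    using System;
    using Castle.DynamicProxy;

    public interface IOrderFacade
    {
        void PlaceOrder(int customerId, int productId);
    }

    public class LoggingInterceptor : IInterceptor
    {
        public void Intercept(IInvocation invocation)
        {
            Console.WriteLine("Entering " + invocation.Method.Name);
            try
            {
                invocation.Proceed();   // the real façade method
            }
            finally
            {
                Console.WriteLine("Leaving " + invocation.Method.Name);
            }
        }
    }

    // The centralized "factory": callers never cast, and swapping the target
    // or the interceptors happens in exactly one place.
    public static class FacadeFactory
    {
        public static IOrderFacade Create(IOrderFacade target)
        {
            ProxyGenerator generator = new ProxyGenerator();
            return generator.CreateInterfaceProxyWithTarget<IOrderFacade>(
                target, new LoggingInterceptor());
        }
    }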


----------------------front-end techniques
----------------------You may ask: why do you put unit testing in the front-end group? Reason: I could put it in the middle-end. However, for a long time in my mind, it has always been in the front-end group, perhaps for two reasons: (a) it is an alternative to the UI code; (b) its very existence depends on the diligence of keeping UI code thin and clean -- remember, I do not believe in unit testing UI code -- that is next-generation stuff, not nowadays' everyday practice.

7. The above security and logging can also be at the property level (i.e. the "entity" level, instead of the "façade" level or the "data access logic" level);
7'. Use the custom-class business rule validation technique in CSLA, but via AOP (see above about AOP, here at the property level).
7''. Use the “custom class databinding” technique in CSLA, but via AOP (see above about AOP, here at the property level).
--- class-to-form: use a custom event
--- form-to-class: onchange; no exceptions in set; use rules

--- tip: keep listview/read-only-grid read-only: this is good for the user anyway;
so, no need for a typed collection (note: sorting etc. are the control's business)
--- tip: for an update's return object: only get the sequence id and update the property;
do not try to replace the whole object
--- tip: use read-only properties to "borrow" fields for many-to-many-like relationships
--- tip: for inheritance or parent-child -- pass the reader, tx, parent etc.


--- "CSLA with dynamic proxy/aop" (and without collections)
(a) not limiting to 4 (CRUD) methods;
(b) so, no need of special workflow objects,
(c) so, no need for special “criteria” and “exists” classes.

(d) At property level, cleaner property/get/set,
because no CanRead/WriteProperty, PropertyHasChanged calls.

(there is more background info in this blog:
http://www.blogger.com/comment.g?blogID=26752431&postID=115078085354288833
)

--- leverage the full power of the “custom class” in the age/context of “SOA”: a C#-to-JavaScript translator for a JavaScript-oriented subset of C#. The good news is that I believe I have found one in the Ajax framework community.

8. Use NUnit, but only on façade methods.
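A minimal sketch (with a hypothetical façade stub included just to make it self-contained) of what “only on façade methods” means in practice – no UI types ever appear in the test:

    using NUnit.Framework;

    // Hypothetical façade stub; a real one would call the stored procedure.
    public class OrderFacade
    {
        private static int nextId = 0;
        public int PlaceOrder(int customerId, int productId)
        {
            return ++nextId;
        }
    }

    [TestFixture]
    public class OrderFacadeTests
    {
        [Test]
        public void PlaceOrder_ReturnsNewOrderId()
        {
            OrderFacade facade = new OrderFacade();
            Assert.IsTrue(facade.PlaceOrder(42, 7) > 0, "expected a sequence-generated id");
        }
    }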

8'. 3-in-1 document writing for the spec, the test script (Fit or Fit-like), and the user manual: unit testing is the pivot of all development activities. However, that is easier said than done. To get it done in a smooth and controlled manner, we need to take a larger perspective, hence this item. For details, please read my related blogs.

---- Note that this does not mean 3-in-1 document writing takes away the pivotal position of unit testing. Unit testing is still the king – from a developer’s biased/tunneled perspective anyway :-)

---- Also note that with this small step, we are combining XP (TDD) with RUP/UP/ISO 9000/SOX, because the key idea of the latter is “document driven”, and “3-in-1 document writing” is “document driven”. I really believe that in the electronic world, when we routinely take notes in Word (OK, or “StarOffice”? – actually I take notes using VIM, so do not flame me for being M$ centric) and emails, there is no justification for those story cards or CRC cards or pieces of napkin anymore – although they sound romantic. In other words, we can expand this further: taking electronic notes is one of the eight core programming techniques. I know it sounds crazy; however, just make some observations yourself:

In meetings, how many people take notes on a paper notebook? I guess it will still take a while before we use a notebook computer, a tablet PC, a smaller PDA, or just the “old” technique of using a piece of paper and then putting the notes into a computer later. The latter practice requires discipline, but it works. Regardless, if you really think about it, contrary to what XP/TDD tells us, those paper notes are not and cannot simply be thrown away; as a result, those paper notebooks are at least one source of unagility.


---- A.K.A. Separation between UI and business logic (“Three goals for architecture design”)
---- A.K.A. REGRESSION TESTING (scripted testing, automated testing) ("three common features for any systems or applications")
----For “no framework approach”:
--------Unit testing: we can use the same idea, but with just a few in-house methods (AssertEqual etc.) – see the sketch after this list.
-------- Databinding/validation-formatting: no collections for custom classes, use dataset instead (however, where no collection is needed, custom classes can be used).
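Here is the in-house sketch just mentioned – my own illustration: a few static assert helpers are really all that is needed to get started without NUnit:

    using System;

    public static class MiniTest
    {
        public static void AssertEqual(object expected, object actual, string label)
        {
            if (object.Equals(expected, actual))
                Console.WriteLine("PASS " + label);
            else
                Console.WriteLine("FAIL {0}: expected {1}, actual {2}", label, expected, actual);
        }

        public static void AssertTrue(bool condition, string label)
        {
            Console.WriteLine((condition ? "PASS " : "FAIL ") + label);
        }
    }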

A No-framework Approach

I know this is very controversial. So, I want to say the following:

First and foremost, I am only a messenger.

Secondly, I am also a theorizer: I try to elaborate the reasoning.

Thirdly, I have been practicing it in some projects. I am not saying I have been doing it very enthusiastically; but I do find it has its merits.

--------------------------
The rules of the approach are simple:

No third party tools, no third party frameworks. Further, no in-house “official” big frameworks, no in-house “official” big “utilities” (small utilities and code reuse are encouraged, of course). Code generation is forbidden also. Note that those "no's" include tools like NUnit (i.e., no NUnit!). Also note that “frameworks” are especially forbidden, because by their very nature, frameworks are very powerful, and therefore very harmful.

Let’s be crystal clear: it does not matter whether it is binary, commercial, free, or open source. For example, no third party controls, even those whose source we can buy.

This does not mean we cannot use third party software; for example, we use Source Safe, and add-ins for Visual Studio, as long as they do not affect the code.

------------------------------

I believe this approach comes from the VB tradition; however, with unit testing, it can also be a variation of TDD (the irony being that NUnit is forbidden also!).

Why is it preferable? For the following reasons:

(1) Fast to develop, no upfront learning curve
(2) Easy to upgrade, easy to maintain, by anybody, at anytime
(3) Developers learn a lot in such an environment – I know, this is surprising, but it is true. This is also the core spirit of XP/TDD. Also, this does not mean we do not study frameworks. We do; we study them, then we tear them apart, or wrap them up, and only use the best one karat out of the many tons of stuff.


Why is this approach still applicable now? – Because .net 2.0 is more mature. For example, we have a much better grid, so there is no pressing need for third party grids.


OK, let’s examine the "eight things for enterprise computing", to make sure:

(1) Logging: we would need to wrap log4net anyway; so, we can start with just the thin wrapper, without log4net behind it. If later we really need it, we can “violate” the rules and introduce log4net behind the wrapper;

(2) Data access: the earlier version of the application block from M$ is just a few files. So, this is certainly easy to do;

(3) – (6) Use regularity and snippets (i.e. a copy/paste approach), instead of command/code generation/emit, for all façade-level cross-cut concerns (logging, security, transaction, remoting);

(7) Unit testing: we can use the same idea as NUnit, but without using the software! We just use a few in-house methods (AssertEqual etc.);

(8) Databinding/validation-formatting: no collections for custom classes, use dataset instead (however, where no collections are needed, custom classes can be used when appropriate). [update: no dataset! So, just a simple direct ADO reader; we can use reflection to simplify it a little – a sketch follows.]
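The reflection sketch mentioned in (8) – my own illustration, with the usual assumption that column names match writable property names and that the column types match the property types:

    using System.Collections.Generic;
    using System.Data;
    using System.Reflection;

    public static class ReaderMapper
    {
        // Map each row of a plain ADO.NET reader onto a new T, matching by name.
        public static List<T> Map<T>(IDataReader reader) where T : new()
        {
            List<T> result = new List<T>();
            while (reader.Read())
            {
                T item = new T();
                for (int i = 0; i < reader.FieldCount; i++)
                {
                    PropertyInfo prop = typeof(T).GetProperty(reader.GetName(i));
                    if (prop != null && prop.CanWrite && !reader.IsDBNull(i))
                        prop.SetValue(item, reader.GetValue(i), null);
                }
                result.Add(item);
            }
            return result;
        }
    }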

Why some people do not like the book "Windows Forms 2.0 Programming"

Why some people do not like the book "Windows Forms 2.0 Programming"

http://www.amazon.com/Windows-Forms-Programming-Microsoft-Development/dp/0321267966/ref=pd_sim_b_4/102-8993671-5704963?ie=UTF8

"Windows Forms 2.0 Programming" is one of my favorite. However, I recently accidentally read the comments in amazon for it; I noticed some negative comments, which prompted me thinking (I did not post on amazon; there are sufficient positive comments already).

The reason is that this book is not a “tutorial” book; it is an insight-summary book for everything MSDN has on winforms. It is in the same category as “CLR via C#”, “Effective C#”, etc., but its “domain” is winforms.

The problem is that nowadays, winforms are perceived as “easy”. So, all books on them are perceived as "easy".

It is not an easy book, contrary to its appearance. It can be complemented with the following -- which is not easy either ;-) -- but combining them makes both easier, or even "easy":

Data Binding with Windows Forms 2.0: Programming Smart Client Data Applications with .NET

http://www.amazon.com/Data-Binding-Windows-Forms-2-0/dp/032126892X/ref=pd_bxgy_b_text_b/102-8993671-5704963?ie=UTF8


Which one first? That is a "chicken or egg" question -- but if you really want to push it, I would say: for holistic learners (i.e. bookworms), Windows Forms 2.0 Programming is the first book.

Saturday, September 02, 2006

Revisit dataset – let’s use both dataset and custom class without custom collections

Revisit dataset – let’s use both dataset and custom class without custom collections

[update: see my later blogs for updates. In short, dataset is bad; this blog only reflected my enthusiasm when I started to use a new feature from M$. This is just yet another example of why we need to test technologies out in a small project, especially when those technologies are from M$. The reason? Because of its nature as the monopoly vendor, M$ tends to release technologies without community discussion; instead, it tends to “create” technologies in secret, and simply push them by marketing – more problematically, in the past, when it created technologies in secret, M$’s focus was always the “mort” market – and that makes technologies from M$ often problematic for serious enterprise computing. Again, the strongly typed dataset is just another example. It is good that I blogged it here; now we have a real example of why I am sometimes suspicious of the marketing hype; however, that suspicion does not prevent us from trying. Again, always try new technologies in small projects; always be hands-on.]


I read Rocky’s post:

http://www.lhotka.net/weblog/CommentView,guid,ad5be814-6063-43e0-b703-932771444b98.aspx

It has been in my mind ever since.

If validation (and property-level authorization) can be done cleanly, then, as long as UI and business logic are cleanly separated at the method level (it does not need to be at the class or dll level – come on, CSLA’s business objects are also a mix of entities and façades), then, what the heck, let’s use datasets.

Dataset (especially the typed dataset) is "simple-minded" OR mapping. I know it is called “table module” http://www.martinfowler.com/eaaCatalog/tableModule.html , but I do not want to use that name here. To me, if it is in-memory, it is an “object” by definition; so, anything that helps access a relational database from in-memory objects is OR mapping, by definition. The difference is only whether it is a simple one or a complex one.

------------------
I guess this is a problem of pattern writing. When you write a pattern, you emphasize the difference: what is the special thing about dataset. In order to do that, you give it a name, implying that it is stable and has some “essence”.

This is harmful when the “thing” is in flux. The real “essence” of dataset is OR mapping made as simple as possible.

By putting dataset in such a perspective, we emphasize that we can use dataset together with custom classes. Also, we are implying that it is worth growing with M$, especially from now on, and studying it in each version.

Then, you may ask: what about DLinq, the “real” OR mapping? The problem is the word “real”. What about NHibernate? Do you believe DLinq will eliminate NHibernate? I doubt it. They are all OR mappings, with different kinds of trade-offs.

This point is extremely important for my own “enlightenment”: when you use a concept like “table module”, be careful; when you study “patterns”, be careful.

I want to repeat this: in VS 2005, the typed dataset is a simple but mature OR mapping technology; it is simpler than, but not that different from, DLinq and NHibernate. The investment in the typed dataset can be transferred to future OR mapping technologies.

This is important, because the major reason for ignoring the typed dataset is the reversed FUD: we do not know the future of dataset; it changes with each version; so, let’s ignore it.

Because of the huge amount of half-cooked technologies from M$ in the past, such an attitude has its merits. The problem is that by it alone we do not know when to change the attitude. We have to keep an open mind, and continuously “poll”, “peek”, and “test-drive” that half-cooked stuff, to see whether it is ready to be treated seriously.

I am reporting now (I know I should have done so at least a year ago) that dataset 2005 should be treated seriously: as a whole, it is not a half-cooked technology any more; you can seriously use it in enterprise computing, because it is a simple but mature OR mapping technology.

Again, it is incorrect to say that dataset is inherently different from a “domain model” and therefore inherently different from OR mapping technologies.

------------------

It is full of compromises; however, it seems that the 2005 version strikes many good compromises.

In short, I will begin to use dataset “positively”, i.e., not just for some hacks, or maintaining “other people’s code”.

Again, I will apply the rules I use in the “pure” object world: parsing-and-validation (and property-level security) and formatting are “business rules”, and must be in the “business layer”.

---- more:

I believe anything in memory is an object (this is indeed a fact, rather than a belief).

I also strongly believe in the so-called “anemic domain model”. Looking back, it is so obvious that the “anemic domain model” is close to the “simple-minded domain model” – the dataset.

In an application, for most parts, we can use dataset; we use a custom entity class when the work is mostly in-memory (yes, you can create a “shorthanded” dataset, but why). Also, as an absolute rule: when we need data binding in an updatable grid, we must avoid custom entity classes – by doing this, we avoid all the complexity of the custom-entity-class approach (e.g., those collections to support data binding).

By the way, CSLA is the best for custom classes in the .net world. However, it has two things that have always pained me: the portal's limitations and the strongly typed collections. We can take away the portal easily (and the limitations it creates); and we can use dataset to avoid the collections. What is left? – Again, the concept that validation/formatting is business logic, not UI logic – yes, this rule is very precious. And its practical meaning is that I can unit test it easily.

OK, that’s all for my convert manifesto ;-)