Software Design Principles & Practices – Testing

Test Driven Development - In test driven development the software is written to a test; if the test fails, the software is either incomplete or incorrect. In TDD the programmer knows what the result of an action is supposed to be (store or retrieve user information, perform a calculation, generate a report, etc.), so they write a test that says the success scenario is x. If they run the test immediately it will fail, since no software has been written yet to pass it. They then write the software to do what they want and run the test again, repeating until the test passes because the produced software meets the requirements of the test. This is useful because at the end the programmer knows the software at least does what was planned from the start.
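A minimal sketch of that cycle, assuming JUnit 4 and an invented Calculator class (the names are illustrative, not from any particular project): the test is written first and fails until the production code beneath it is added.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Step 1: write the test first. It fails because Calculator does not exist yet
    // (or returns the wrong result), which is the expected starting point in TDD.
    public class CalculatorTest {
        @Test
        public void addReturnsSumOfTwoNumbers() {
            Calculator calc = new Calculator();
            assertEquals(5, calc.add(2, 3));
        }
    }

    // Step 2: write just enough production code to make the test pass, then rerun it.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }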

Abstraction principle (don't repeat yourself) - This principle holds that code should only be written once and shared between objects, instead of being repeated to replicate functionality across multiple objects. If code is repeated and something needs to be changed, every place the code is replicated will need to be found, updated, and changed. This makes it very easy to miss a component and fail to apply the change everywhere. It also opens up the possibility for more mistakes, since the more code that needs to be changed, the more likely it is that a mistake will be made and not noticed. Any of this could result in code that doesn't compile properly, or software that doesn't work right, either doing things incorrectly or crashing and not really working at all. When code only exists in one place and is used by multiple components, it only has to be updated once for everything to change.
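A small before-and-after sketch of the idea in Java, using invented billing classes; the names and the 7% tax rate are made up purely for illustration.

    // Repeated logic: the same tax calculation is copied into two classes,
    // so a rate change would have to be made (and verified) in both places.
    class Invoice {
        double totalWithTax(double subtotal) { return subtotal + subtotal * 0.07; }
    }
    class Quote {
        double totalWithTax(double subtotal) { return subtotal + subtotal * 0.07; }
    }

    // DRY version: the calculation lives in one shared place and both classes use it,
    // so a change to the tax rule is made exactly once.
    class TaxCalculator {
        static double withTax(double subtotal) { return subtotal + subtotal * 0.07; }
    }
    class InvoiceDry {
        double totalWithTax(double subtotal) { return TaxCalculator.withTax(subtotal); }
    }
    class QuoteDry {
        double totalWithTax(double subtotal) { return TaxCalculator.withTax(subtotal); }
    }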

Single responsibility principle - Under this principle an object should only be responsible for a single unit of work. If an object has multiple responsibilities they could negatively impact one another, causing the software to fail. This is similar in a way to the abstraction principle, since a single responsibility makes the class more robust: it can be applied wherever its single function would be useful. With multiple responsibilities a class may do too much to be useful in a certain situation, so yet another class would have to be created. Furthermore, if there are two responsibilities and one of them needs to be updated or changed, the change could cause the other task to no longer work correctly, or at all.
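A small illustrative sketch (the reporting classes are invented for this example): the first class has two reasons to change, the refactored pair has one each.

    // Doing too much: one class is responsible for both building the report's content
    // and delivering it, so a change to either concern risks breaking the other.
    class ReportManager {
        String buildReport() { return "quarterly numbers..."; }
        void emailReport(String report) { /* SMTP details here */ }
    }

    // Single responsibility: each class now has exactly one reason to change.
    class ReportBuilder {
        String buildReport() { return "quarterly numbers..."; }
    }
    class ReportSender {
        void send(String report) { /* SMTP details here */ }
    }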

Separation of concerns - Not entirely distinct from SRP and DRY, SoC is where a computer program is broken up into specific features while attempting to minimize any overlap in functionality. This makes it easier for work to be done on individual pieces of the system, helps with re-usability and maintainability, improves understanding of the system, and makes it easier for new features to be added in the future.
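One way this might look in code, with invented user-management classes: each concern (data access, business rules, presentation) sits in its own class with minimal overlap.

    // Concerns kept separate: work on any one of these classes
    // does not ripple into the other two.
    class UserRepository {           // data access concern
        String load(int id) { return "user-" + id; }
    }
    class UserService {              // business-rule concern
        private final UserRepository repo = new UserRepository();
        String displayName(int id) { return repo.load(id).toUpperCase(); }
    }
    class UserView {                 // presentation concern
        void render(String name) { System.out.println("Hello, " + name); }
    }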

 

Software Quality

Planning - One of the most important areas to focus on in order to achieve software quality is in the initial planning. Getting a good idea of the user requirements up front and developing strong use cases will help the team to efficiently create reliable software. Even if they can't get all the specific requirements up front, even having a rough idea of what the completed software will need to accomplish will help ensure it is flexible enough to change or grow to meet the needs of the customer.

Testing - Regular testing throughout the development process should help in eliminating bugs, as well as verifying that the software meets the set requirements. The benefits of regular, on-going testing have a positive impact on the development process in that any problems are discovered and can be fixed early on during the process, instead of needing to do huge amounts of fixes right before delivering the software, and make it easier to present real, working software to the customer throughout the development process for feedback.

Feedback/Customer Involvement - Getting feedback from the customer during development will help improve software quality. Having the customer involved in the process means that the development team can better understand what the customer is expecting, and can make tweaks or changes to individual components early on. As components come together to form the full software package, the customer can get a better idea of what the team is producing and bring up any concerns or changes during the development process so changes can be made. If the customer were not involved until the end, the software would be delivered and it may be too difficult, costly, or time consuming to make changes that would improve the quality of the software at that time.

Define and Measure - The development team should not only strive for quality software, but should have a definition of quality and metrics they can use to measure success. This is often done by creating a weighted scoring system using the five characteristics identified by the Consortium for IT Software Quality (CISQ) - reliability, efficiency, security, maintainability, and size. Reliability, efficiency, and security mostly relate to using good practices for architecture and coding. Maintainability has to do with the ability of a new development team to understand, change, test, and reuse the software/code when responsibilities are transferred from the original developers, and has a lot to do with documentation, programming practices, and the original team's approach to complexity. Size relates to both the actual size of the application (lines of code, file size, databases, etc.) and the functional size (functionality, inputs, outputs, data, etc.).
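A minimal sketch of how such a weighted score could be computed; the weights and 1-10 ratings below are invented for illustration and are not published CISQ values.

    // Hypothetical weighted quality score over the five CISQ characteristics.
    public class QualityScore {
        public static void main(String[] args) {
            String[] characteristics = {"reliability", "efficiency", "security", "maintainability", "size"};
            double[] weights = {0.30, 0.15, 0.25, 0.20, 0.10};   // chosen by the team, must sum to 1.0
            double[] ratings = {8, 6, 7, 5, 9};                  // the team's 1-10 assessments

            double score = 0;
            for (int i = 0; i < weights.length; i++) {
                System.out.printf("%-16s weight %.2f x rating %.0f%n", characteristics[i], weights[i], ratings[i]);
                score += weights[i] * ratings[i];
            }
            System.out.printf("Overall quality score: %.2f out of 10%n", score);
        }
    }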

Verification vs. Validation

Software verification and validation are the two primary components of software quality control. Software verification is verifying that the software is being put together correctly, and that it can successfully and correctly do what it is supposed to do. Software verification happens throughout the software development process as code is tested to ensure it works properly. Software validation, on the other hand, is the process of confirming with the customer that the application meets their expectations. This begins before any code is written, by making sure the needs and expectations of the customer are understood prior to actually creating any software.

If accounting software was being developed, for example, validation would be understanding what the customer needs out of the accounting software and how it is going to be used, where verification is the process where the software is tested to ensure it is performing calculations correctly, storing and retrieving information from the correct places, categorizing things accurately, etc.

Design Process

Why is it important to have your choice of principles, patterns, and architectural decisions identified “PRIOR” to construction?  How does this conflict, or not conflict, with the agile development methodology?

Making strong decisions about principles, patterns, and architecture before beginning construction of software makes the rest of the processes that much easier to complete. Principles and patterns seem to really be the same as good programming (and software design) practices, and the architecture seems to be more about planning a general framework for the rest of the project.

Not doing this could result in a lot of rework if two different components are being worked on by two different people who take different approaches and forget to account for the ways those components need to interact with one another. Alternatively, work could be duplicated across components, and other core principles could also be violated. Finally, without strong decisions up front the final software may not be modular enough to support updates, changes, or additions in the future without serious rework.

I do not believe this conflicts with agile methodologies whatsoever. Lots of up front planning to establish good, strong decisions about these items actually seems pretty par for the course for this methodology. It's almost more important in agile since individual components will be developed one at a time - this means that the team must make sure up front that they all follow the same principles and architecture so everything will fit together in the end.

Interfaces

1. Name 3-4 design principles/practices that an interface helps support.
 
Single responsibility principle - A class should only have one responsibility. If there are multiple classes with different responsibilities, but each class utilizes the same method, then an interface can support this. For example an alarm clock, watch, cell phone, wall clock, and clock on a computer all have different responsibilities, but no matter which one of those you 'grab' it still has to tell time, so "Time" could be an interface that applies to all the subclasses.
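A small sketch of the clock idea, assuming a hypothetical Time interface: each device keeps its own responsibilities, but anything that implements Time can be asked for the time.

    interface Time {
        String currentTime();
    }

    class AlarmClock implements Time {
        public String currentTime() { return "07:00"; }
        void ringAlarm() { /* alarm-specific responsibility */ }
    }

    class CellPhone implements Time {
        public String currentTime() { return "07:01"; }
        void makeCall(String number) { /* phone-specific responsibility */ }
    }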
 
Dependency inversion principle - High-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend upon abstractions. High-level and low-level components need to be separated into different libraries, with the behavior of the high-level component defined by interfaces. The low-level component then depends on the high-level interface to know what to do. A good example would be a car - instead of coming up with all the details on how pressing the accelerator makes the car go, one could say "the car needs to stop, the car needs to go, the car needs to turn"; then each subclass (accelerator, brakes, steering wheel, wheels, engine) can be detailed based on the higher-level interface decisions.
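A rough sketch of the car example in Java; all of the names are invented for illustration. The high-level Car depends only on abstractions, and the concrete parts supply the details.

    // The high-level ideas "the car needs to go" and "the car needs to stop"
    // are captured as abstractions; both the car and the parts depend on them.
    interface GoBehavior   { void go(); }
    interface StopBehavior { void stop(); }

    class Accelerator implements GoBehavior {
        public void go() { System.out.println("increasing throttle"); }
    }
    class Brakes implements StopBehavior {
        public void stop() { System.out.println("applying brake pressure"); }
    }

    class Car {
        private final GoBehavior goPart;
        private final StopBehavior stopPart;
        Car(GoBehavior goPart, StopBehavior stopPart) {   // details are supplied from outside
            this.goPart = goPart;
            this.stopPart = stopPart;
        }
        void drive() { goPart.go(); stopPart.stop(); }
    }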
 
Interface segregation principle - An interface should only contain methods its clients will actually use, and customers/users should never be forced to depend on a method that is not used. Without interfaces this principle would not exist. If an interface is responsible for more than one thing (see single responsibility principle) it should be broken down into more specific, individual interfaces. This will result in fewer issues or complications as a system is modified, updated, refactored, or becomes more complicated over time.
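A brief sketch with invented clock classes: a single "fat" interface would force a wall clock to implement an alarm method it never uses, so the methods are split into smaller interfaces.

    interface TimeSource { String currentTime(); }
    interface Alarmable  { void setAlarm(String time); }

    class WallClock implements TimeSource {
        public String currentTime() { return "12:00"; }
        // no setAlarm() forced onto a class that has no alarm
    }

    class BedsideClock implements TimeSource, Alarmable {
        public String currentTime() { return "12:00"; }
        public void setAlarm(String time) { /* store the alarm time */ }
    }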
 
 
2. Why is it better to "program to the interface" and not "the concrete" class?
 
An interface is essentially an abstract class that does not include any implementations of the underlying functions. As an abstract class, the interface is really a base class from which other classes can be derived. The new derived class (sub-class) will contain the actual implementation of the functions declared in the base class. This newly derived sub-class is a concrete class.
 
If you are programming to the concrete class and something changes, then each of those concrete classes will need to be updated as well. This could result in a lot of work and numerous changes if there are a large number of concrete classes. Furthermore, if the implementation of a method is only slightly different across concrete classes you can create a base class containing the common method, which is then invoked with the more specific/individualized information contained within the subclasses - so "animals" would be an abstract class (interface) and individual animals can be subclasses with 'concrete' traits. Since all animals eat, "eat" can be another interface with the derived sub-classes providing the 'concrete' details of what the animal eats, how it eats, etc.
 
Or another example (and I really hope I'm understanding this correctly): instead of writing code to "add to database" for every class and their details, you can create an "add to database" interface; then as long as a class implements that interface and can be asked to add itself to the database, it can use any criteria specific to the class to figure out exactly HOW to add to the database.
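A minimal sketch of that last example, assuming a hypothetical DatabaseWritable interface; the calling code only knows that an item can be saved, and each class supplies its own details of how.

    interface DatabaseWritable {
        void addToDatabase();
    }

    class Customer implements DatabaseWritable {
        public void addToDatabase() { System.out.println("INSERT INTO customers ..."); }
    }

    class Order implements DatabaseWritable {
        public void addToDatabase() { System.out.println("INSERT INTO orders ..."); }
    }

    class Saver {
        // Programming to the interface: this method never changes
        // when new DatabaseWritable classes are added later.
        void saveAll(java.util.List<DatabaseWritable> items) {
            for (DatabaseWritable item : items) {
                item.addToDatabase();
            }
        }
    }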
 
 
3. What are some other uses for using an interface that may not have been mentioned in the course content?
 
Gives the end user/developer additional flexibility - if, for example, the final product needs to generate some sort of report on a regular basis, but the report itself, its schedule, or other factors might change, then creating an interface that can generate reports, and maybe a separate interface to schedule reports, will be useful when the reports themselves change.
 
An interface can be used as 'notes' to ensure that appropriate functionality is included in more specific classes. If an interface will get/set a particular string name, then it's easier to remember to make sure that string is set in any classes that will implement the interface.
 
The use of interfaces creates what is essentially a plug-and-play architecture making it easier to replace or swap individual components. Any interchangeable components will utilize and implement the same interface so it does not require any extra programming to use them.
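A short sketch of that plug-and-play idea, using an invented ReportGenerator interface: either report implementation can be swapped in without changing the code that runs it.

    interface ReportGenerator {
        String generate();
    }

    class PdfReport implements ReportGenerator {
        public String generate() { return "pdf bytes..."; }
    }
    class CsvReport implements ReportGenerator {
        public String generate() { return "col1,col2..."; }
    }

    class ReportScheduler {
        void runNightly(ReportGenerator generator) {
            // The scheduler neither knows nor cares which concrete report it runs.
            System.out.println(generator.generate());
        }
    }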

Domain Model VS Design Model

Why is the domain model not suitable for or intended to be implemented directly in software?

A domain model provides a visual representation of the classes within the domain, the basic relationships between the domain classes, and any key attributes of the classes. However, since the domain model is used by all parties involved, including the customer, any details related to actual design or implementation are left out. This means that while the model can be used to describe or discuss the behavior of domain classes, no actual behavior is contained within the domain model. Without behavior there is no way to directly implement the domain model in software, since everything in the model is essentially a placeholder to be used for data when implementing the software itself.

 

Can you give a high-level example of the differences between the conceptual classes found in the domain model and the functional classes found in the design/data model?  What are the differences?  Please use a specific domain/design model (not from course content or book).

I've been trying to find examples and a better explanation of what this means to understand it better, but material seems to be rather limited. Hopefully I am understanding this correctly. I'm basing a lot of this on an example I found here: http://www.cs.gordon.edu/courses/cs211/AddressBookExample/

The conceptual classes in the domain model represent the classes and their generalized relationships with each other. The functional classes in the design/data model represent how those relationships actually work, without the actual technical code that facilitates the interactions.

So, if you were building an address book the domain model might say:

  • Address book - relates to entry/people
  • Entry/People - relates to name, address, phone
  • Name - attributes: firstname, lastname
  • Address - attributes: number, street, city, state, zip
  • Phone
  • Database - relates to address book and entry/people
However the design model:
  • Address book - creates new entry or displays existing entry, sorts by name, address, or phone, stores entries in database, retrieves entries from database
  • Entry/people - contains name, address, phone, allows these fields to be created, deleted, or changed.
  • Database - stores entries including specific attributes
The design model is going to spell out how everything works together, how things should be connected, and how they actually interact with each other. It may include more classes than the domain model, because additional classes specific to the technical design of the application may be needed to perform the requested tasks. The actual responsibilities of the classes should also be determined and laid out in this step: the address book object is responsible for keeping track of and displaying people objects, the database is responsible for storing data related to objects, people objects are responsible for displaying the attributes associated with the person, a comparator object is responsible for sorting the address book, etc.
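A rough Java sketch of how those design-model responsibilities might begin to take shape; the class and field names are illustrative only, loosely following the address book example above.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    class Person {
        String firstName, lastName, street, city, state, zip, phone;
    }

    class AddressBook {
        private final List<Person> entries = new ArrayList<>();

        void addEntry(Person p)    { entries.add(p); }
        void removeEntry(Person p) { entries.remove(p); }
        void sortByLastName()      { entries.sort(Comparator.comparing((Person p) -> p.lastName)); }
        List<Person> getEntries()  { return entries; }
    }

    class AddressBookDatabase {
        void store(AddressBook book) { /* persist entries and their attributes */ }
        AddressBook retrieve()       { return new AddressBook(); }
    }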

Research Paper - Introductions

The Introduction to a research paper generally has a similar structure regardless of the topic.  Examine the academically authoritative sources you have in your bibliography.  In the Introduction you should see

  1. A discussion of the topical area of the study
  2. A discussion of the background for the study.  Often other research is cited in this section of the paper.
  3. A thesis statement; a problem statement; the objective of the paper
  4. A discussion of the organization of the rest of the paper, especially how it is structured to meet the goals in item three.

Do you see this structure in the research papers you have examined?  Is it easier to understand those papers where this is done well?  What does this mean regarding how you will write your research paper?  Please use an ACM/IEEE Conference paper for your work.  Post a link to the paper used for your analysis of these questions.

 

This does seem to be the general structure of the introduction for most papers, though how closely they follow it or how specific each of the components is really depends on the paper and the authors. In the paper I reference, for example, the last paragraph before the thesis does technically lay out the structure for the rest of the paper, but it uses different language than the specific paragraphs in the paper do.

A well written introduction gives the reader a guide through the rest of the paper. When writing research papers I may start with a rough introduction to help keep the content of the paper on track. Once the paper is written I think it is worthwhile to go back to the introduction and make changes or re-write it so it better matches the final structure and content of the paper. One thing I need to keep in mind about introductions, after reading those of other papers, is that you don't need to present your entire argument or every specific aspect of it, but simply an overview of the objective of the paper and how it is going to get there.

 

The Dark Side of Agile Software Development- http://delivery.acm.org.dml.regis.edu/10.1145/2390000/2384612/p215-janes.pdf?ip=207.93.211.102&acc=ACTIVE%20SERVICE&CFID=191764978&CFTOKEN=49229468&__acm__=1352912412_167f1df35bc391de0206e0e23a30366a

Use Case Practice

Apparently I did this incorrectly at first; I've left my original below the break, but here is a new one.

 

Background

Twice a month mutual fund trades are executed as part of an automatic investment program (AIP). This generates a number of reports that must be manually reviewed. These reports are automatically printed, resulting in hundreds of pages of documentation which wastes paper, must be stored locally, and eventually must be stored off site to meet regulatory requirements. My objective is to have these reports made available electronically in the IBM Content Manager used at the company to reduce the expense and simplify the current process.

Main Success Scenario

  1. Representative/user opens IBM Content Manager.
  2. User logs into Content Manager.
  3. User selects and opens the appropriate folder related to mutual fund trading reports.
  4. User accepts the default date range, or enters a date range for the requested reports.
  5. User selects the report name from the drop-down list of reports.
  6. User searches for reports matching the name and date range.
  7. Content Manager displays the list of reports meeting the criteria.
  8. User chooses the desired report and opens it.
  9. Report displays on screen.
  10. User selects the Content Manager find function (in-document search).
  11. User enters the desired account number or error code and clicks find.
  12. Display jumps to the section of the document containing the desired information.

Alternate Success Scenario

Account number or error code not in report - pop-up displays "The string '(search criteria)' is not contained within this document."

Requested report does not exist in date range - pop-up displays "No documents meet the search criteria."

 

 

_______________________________________________________________________________

I'm not sure if I'm supposed to explain the use case as it is currently, or describe what I think is wrong/ways this can be improved. I guess I'll do both:

Main Success Scenario

Client requests a change to mutual fund dividend and capital gains instructions (to pay as cash, to reinvest, or split). Representative fills out a service request with information including customer account number, fund symbol, and requested change. Service request generates a system request. Mutual fund representative runs the macro. Macro reviews system requests and submits changes; system requests are marked as resolved. Changes are processed overnight during the batch update.

Alternate Scenarios

Fund not in account - Macro notes system request that fund is not in account, marks as resolved.

Open order for fund - Macro notes that changes cannot be processed with open order, request is postponed to be run on the following day.

Pending dividend/cap gain payment - Macro notes pending payment, request is postponed to be run on the following day.

Fund already in requested status - Macro notes that fund is already up to date, marks as resolved.

Transaction entered/occurs after changes submitted - Macro notes that the request has been processed, but the later transaction voids out the changes during batch processing; the system request is marked as resolved even though the update does not take place.

Potential Improvements

Ability for the client to enter requests themselves online; this requires mainframe-level upgrades/improvements.

Ability for the point of contact representative to enter changes themselves; this requires updates to the system to track changes, and requires additional training for representatives both on how to process the changes and on current system limitations/issues that will cause delays or result in failure to process changes.

System checks requested change against current status, blocking requests and notifying requestor when the fund is not in the account or the fund is already in the requested status.

Improve how changes are processed so they are handled immediately instead of being processed during batch in order to reduce the number of failed requests. Alternatively include notification stating that requested changes will be voided if other transactions take place or are entered.

Use Case Models - Uses Beyond Requirements?

How do you feel about the process of writing use cases?  Are they helpful?  In what way?

I think that the process of writing use cases is extremely important to the final project being successful and useful. While the user story gives a generalized description of a requirement, a use case actually describes the steps required to accomplish the requirement. They are very helpful in that they do not only describe what something should do, but present the individual actions and aspects that need to be addressed so it does what it should.

 

Are use case models a requirements analysis tool? What potential uses are there for Use Cases beyond documenting system requirements?

Use case models are absolutely a requirements analysis tool. The user story might say "open a new account - customer chooses new account option to enter their personal information and create a user ID" but the use case would include clicking a new account link/button, blank form with the following fields..., validates username is not in use, validates password meets requirements, clicking submit, validating that required fields are complete, adding user to database, email customer confirmation of account, showing success page, etc. The use case model helps the developer by ensuring that all the expectations of the customer for this requirement are met.

A use case can be very useful for making sure the software is actually usable, intuitive, and makes sense to the end user. Knowing the specific requirements is one thing, but understanding why those requirements exist and how the tool will be used should impact the design. There may be two things that seem unrelated, so a developer may not consider connecting them, even though the way the software is actually used would require them to be tied together.

For example, on a banking website's Bill Pay, new payments are a whole separate thing from pending transactions. If a developer failed to keep in mind that it's important to be able to see the current balance less any pending outgoing transactions, it could result in a customer having to click between balances, pending, and new payments to see what they need, or even cause problems down the line if customers accidentally overspent their account because they could not easily see their current balance when setting up an outgoing payment.

UML and "Software Modeling"

 

1) In your own words, what does it mean to “model” software? 

Modeling software is simply the process of diagramming what the software needs to do and how it should work. This is very top level and fairly abstract. If you know a piece of software needs to facilitate 10 separate tasks, and each of those tasks requires specific inputs, processing of those inputs, and production of some sort of output, modeling allows all of these tasks and processes to be represented without any actual coding being done. This makes it possible to verify that all necessary processes are included, and to easily add, remove, reorganize, or modify processes without painstakingly fixing actual code.

 

2) What value do you see in using a tool, such as UML, to do software modeling? What purpose does it serve?

UML is really no different from any other flowchart, but the benefit is that UML is a standard that can help ensure modeling is consistent across a project. This also means that an individual working with one development team can move to another team, or even another software development firm, and be caught up more quickly and easily because they already understand the language used to explain the scope and requirements of a project. A very well developed UML model can also, theoretically, be tied into a programming language and used to generate a generalized framework and 'containers' to eliminate or simplify manual coding. It is important not to rely on UML too much, however; just as it may be impossible for a customer to provide the exact specifications and requirements for a software project before the project has even begun, UML may be useful in providing a generalized framework but not robust or dynamic enough (or alternatively too limiting) to keep up with changes at the code level.

 

3) Drawing from a real-life experience, what correlations, to the software modeling process, can you make from a particular non-software planning situation?

I can think of a number of examples.

In college I worked at Blockbuster and they were making some changes to the layout of the store. Before moving all the shelving and movies we diagrammed everything out. This gave us the flexibility to review it and change things before starting instead of increasing the amount of manual labor required to get everything how we wanted it.

Before a movie is made it is storyboarded out. This helps establish the framework for how the movie (or more specifically a scene) should progress, and even gives examples for how the shots should look. It would be very costly to continuously re-shoot to determine the framing for a shot, so this alleviates some of these challenges.

Another very basic example was when I was planning out my home theater system. Knowing what equipment I had and what connections were available I diagrammed the wiring to determine what I needed to purchase and how everything should be hooked up. This was especially helpful when I was installing wiring into the walls so I didn't have to make additional purchases or tear out everything and start over if it didn't meet my requirements.

 

Academic Authority of Information

 

1) How would one define or ascertain that an article is from an academically authoritative source? In other words, what specific properties make an article academically authoritative?

An authoritative source is one that is credible and applicable. Often this will include an expert in a particular field; however, this alone does not make it academically authoritative. There are many blogs and podcasts maintained by respected experts, but these often contain opinion, speculation, and conjecture, limiting their ability to be used as academically authoritative sources. Articles from peer reviewed journals are going to be highly academically authoritative, as they have not only had to meet the (typically) stringent requirements of the journal in order to be published, but have also had to be reviewed and approved by other experts in the field. Additionally, these types of articles generally have extensive citations demonstrating the care and research that has gone into the paper, and they give the reader the ability to review the sources to determine if the conclusions reached in the article are credible.

2) For the benefit of the class in their search for academically authoritative sources of articles, please list 2-3 sources which you feel are academically authoritative.

The easiest way to search for academically authoritative sources of information is to go to the Regis library website and utilize the research tools available there, as this provides access to numerous journals and libraries. Academic Search Premier provides access to nearly 4,700 journals. The ACM Digital Library provides access to ACM journals and articles. Computer Database has articles from 350 computer industry periodicals. The IEEE Magazine Collection provides full text articles from 14 of the computer science periodicals published by the IEEE.

 

Research Paper Discussion

 

1) What is a position paper?

A position paper is used to generate support for an issue. A position on a topic and the rationale for that position are explained. Though the position is an opinion, research and evidence are gathered to provide facts that support it. While the point is to validate the position, it is important to examine the strengths and weaknesses of the position, and explain ways in which it could be strengthened.

2) What is a research paper?

A research paper begins with an idea whose validity is not known and will be determined by the research. A hypothesis is formed and then researched. The research is analyzed and explained, which either confirms or fails to confirm the hypothesis, and that outcome in turn shapes the thesis of the paper.

3) How are they similar?

Both papers begin with the author having an idea to utilize for the paper, and both types of papers require synthesis of research which is then presented to the reader.

4) How do they differ?

With the position paper there is no exploration to discover the conclusion - the conclusion is decided from the beginning. In a research paper the writer makes a supposition as to what the conclusion will be, but the conclusion is not decided until all the research has been analyzed and presented. The research for a position paper is restricted to that research which supports the original position. For a research paper it is the responsibility of the author to research and present all sides of an issue, then use that research to come to a well defended conclusion even if the conclusion is disparate from the original expectation.

5) What is an assertion/thesis statement?

A thesis statement is a brief sentence or two that establishes what the rest of the paper will be about. It should present what the author intends to prove based on the conclusions gathered from the research, and a strong thesis is an argument that could reasonably be disputed.

6) Give an example of a valid assertion.  (You can use the topic you plan on writing about, or make one up.)

Software design projects will experience greater success and better meet client expectations if the software design methodology utilized by the project incorporates incremental and iterative steps.

 

Organizing Knowledge

 

The Software Engineering Body of Knowledge at the SWEBOK website and the UML Resource Page at the Object Management Group’s website contain much information. What strategies might be used to organize one's investigation of these websites?

Examining these websites was actually a little overwhelming due to the sheer quantity of information contained within. My strategy was to look over the home page and skim through the navigation options, reading the sub-headings for each of the main sections. Next I looked for any sort of FAQ or guide to get a good general overview of the site and the information contained therein. After that the next step was to click on any sections that sounded interesting or were unclear. Simply by exploring the sites I was able to get a better handle on what information was contained within each of them. Now that I have a general idea, if I ever need to use either site for specific information I will either use the navigation system or just the search bar to find whatever I am looking for.

 

IEEE Standard for Software Project Management Plans

 

In your own words, How would you describe Agile Software Development?

I believe Agile Software Development has less to do with the software development itself and more to do with the relationship between the customer and the software development team. Agile Software Development means that the software development team attempts to create a base working prototype as quickly as possible and then iterates upon it, checking in with the customer frequently to make sure the project is going to meet the customer's needs and expectations. In doing so they commit not only to regular iterations, but to regular testing, and they construct individual components of the project one at a time. At each phase of the project they are able to present a tested, workable example to the client to ensure they get effective feedback. This process not only means that they do not show up at the end of the project cycle with an end product that does not meet the client's expectations, but gives the team more flexibility to modify and alter aspects of the software if the client's needs or expectations change along the way.

 

Does following a Software PMP make a software development project less Agile?

It really depends how the SPMP is put together, and how it is utilized by the development team. A SPMP that demands all requirements be stated up front to establish a plan with hard, specific deadlines that is then followed to the letter is certainly going to result in a less agile project. However, a SPMP that recognizes the importance of iteration, frequent testing, workable components even if they are rough and early, and regular check-ins with the client to not only demonstrate what has been completed but to ensure the project is living up to the client's expectations will allow a team and project to be very agile. A SPMP is actually very important for agile projects, because it would be very easy for a project to drag on, miss deadlines, and incur more and more costs if expectations or requirements are always changing and there is no higher level plan in place to make sure the project as a whole is on track with time and budgetary constraints. Additionally, even though an agile project and team will produce an end product that better meets the client's needs, it is important for project management to have an overall picture of what the client would like to accomplish so reasonable timelines and cost estimates can be made. This will help ensure the team is on the right track by establishing what needs to be created (the front end and individual components, for example) and the order in which those components should be addressed, based on their relative difficulty and their importance to both the project and the end client.

 

Software "Engineering" vs. "Craftsmanship"

The Software Engineering Body of Knowledge (SWEBOK) adopts an "engineering" approach to software development, obviously.  This stands, at least somewhat, in contrast to Software Craftsmanship.

The word "engineer" implies a cold, scientific, procedural approach to creating something. This is the components, the nuts and bolts, of a project. To talk about engineering software is to talk about developing and putting together the specific lines of code so the software can do the task required - when completed software that has been engineered is perfectly servicable to do what it was designed to do.

'Crafting' implies a certain level of skill and finesse - there is an art to crafting something. Crafting software suggests that it provides a good, intuitive user experience, that the software is pleasant to look at and pleasant to use. When software is crafted, the things a user wants to accomplish are easily accessible and require the minimum number of steps or amount of effort. From the perspective of a user, software that is crafted just feels right.

Software must be both crafted and engineered, especially for more complex software developed by larger teams. The head of development may be a craftsman, someone who has a vision for the project and wants to make sure the final product is everything they hoped it would be. They oversee the engineers who are responsible for putting together specific aspects of the project - these individuals may not care that rounded corners are more aesthetically pleasing, only that their responsibility is to engineer rounded corners. Think of the relationship between Steve Jobs, Jony Ive and Foxconn - Jobs had a vision and wanted to craft each product in a very specific way, Ive had to figure out how to design and build these products as both a craftsman and engineer, while Foxconn has to put the pieces together as instructed and make them work (engineer).

Crafting something takes a lot more time and energy and may not necessarily add substantial benefits over engineering something, depending on the project. Crafting often means some aspect must be handled in a certain, specific way that may not always be feasible from the technical side. Engineering is often about finding a solution, any solution, to a problem, while crafting may be focused on one specific solution. If that specific solution is more difficult to engineer, this can cause conflict between the ideas of engineering and crafting.

IT Value and Risk

Assignment:

In the process of managing IT, organizations must assess and reevaluate the technology path it chooses to follow in delivering value (both to internal and external customers). Inherent in this process is the assumption of risk, and consequently, what happens with unfavorable outcomes. As a reply to this post identify some of the risks, what the impact may be, and how an organization can overcome unfavorable outcomes.

 

While there may be any number of risks for any decision made at an organization, when it comes to managing IT I believe the risks really fall into two categories.

The first category covers making decisions, upgrades, or changes to the system too slowly, or not making decisions at all and being stuck in maintenance mode. In this category IT systems can become tremendously outdated, making it even more difficult to upgrade or switch to a new system in the future. There are also the risks associated with falling behind competitors so that the firm is no longer able to be competitive in terms of technology. This could mean that processing takes longer, clients can't do what they want online, or employees have to find workarounds to complete tasks.

The second category includes making decisions too quickly, creating quick fixes instead of long term plans, and even trying to stay on the cutting edge of technology. Here one risk is implementing a new system that is a poor fit and has to be replaced much sooner than if additional time had been taken when deciding what technology to utilize. The technology may not meet company standards or requirements, might become obsolete too quickly, might not integrate with other existing systems, or might not be expandable to meet the future needs and requirements of the business. This aligns with the risks associated with IT trying to implement quick-fix solutions that only have a minor impact on the business. When this is done the resulting systems and applications may be numerous and require IT to spend more time and resources on maintenance, which detracts from their ability to be flexible, continuously make moderate upgrades, and keep up with the changing requirements of the organization.

Given these two categories of risk it is important for an organization to have strong IT management who can maintain consistent pacing with gradual upgrades and changes while always working to keep up with the needs of the business.

IT Savvy

Assignment:

As a reply to this message, discuss what an IT savvy firm is, the relevance of being IT savvy, and key leadership issues involved in managing one.

 

An IT savvy firm is about more than simply being on the cutting edge of technology. A firm is IT savvy when it sees the value of technology and harnesses it to be a better business, and making that happen is one of the key leadership difficulties in IT management. IT must support the business: it should make business processes easier and data transmission more consistent, and at the end of the day help the firm spend less time and money on everyday business as well as on IT investment decisions and other business decisions. An IT savvy firm may not always be on the cutting edge, but it is watching the cutting edge and making plans for how new technology could improve or be implemented into the business if the technology takes off, or if it is determined the technology will be able to better serve the business. IT management is a balancing act between maintaining current systems, ensuring they are expandable for the future, and examining new technology and deciding whether it should be implemented, while simultaneously working with the business to understand and meet its needs and helping it remain competitive. The better an organization utilizes IT to meet its objectives, stay flexible, and make changes as necessary, the more IT savvy that firm is.

Challenge of IT Management

Assignment:

In the end, information is a resource used to make decisions and effect results in an organization. The thoughtful and considered application of technology to maximize the value of that resource is the challenge of IT Management. Do you agree or disagree? Why?

 

I agree with this assessment to a point, but it seems to attribute only a single challenge to IT management. Certainly it is important for IT management to use technology to maximize the value of information. This might include mining customer data, utilizing information to make decisions about how resources should be allocated, and assisting the firm with decisions about where to invest or how to modify and improve the business. Even being able to quickly and easily access needed data for day to day functions is a responsibility of IT. That being said, IT management cannot focus only on using technology to MAXIMIZE the value of information. Because IT has become such an important component of running an organization, IT management must also simply make sure tools are working and available and that data is being transmitted swiftly. IT management also has to keep up with the technological advances of competitors, or at least make sure the way the company utilizes technology (either internally or with external customers) is modern and up to date. We have seen some arguments that IT no longer gives a competitive advantage, but a lack of solid, modern technology is definitely a competitive disadvantage.

Strategic Information Technology: Case Study 3

Assignment:

 

1. What evidence is the CEO using to suggest that Genex is not using technology competitively? Does this provide a valid argument?

The CEO's primary evidence is the perception that every time he inquires about new technology he is only given reasons it would not be possible to implement. He feels that in order to use technology competitively the company has to get ahead of the curve. He also makes the sub-points that they need to behave as if they are one business, and that the IT organization needs to be more agile and responsive, as it was in the past. I'm hard pressed to say that being told certain new technology cannot be implemented means the organization is not using technology competitively; it is certainly possible for an organization to stay competitive without always using the newest cutting edge technology, and in some cases always trying to implement new technology can actually be detrimental to a firm. The idea of the company needing to get ahead of the curve indicates the CEO wants Genex to be an IT leader, which means making potentially risky decisions to implement new technology before it is proven - this may be reasonable in some cases or some of the time, but it is a poor way to attempt to use technology competitively. Behaving as one business makes sense, especially given the disjointed nature of the IT systems, since they were selected on the basis of individual areas of the company instead of being chosen to help the firm work together and communicate better between business units. This leads directly into the inability of IT to be agile and responsive, and changes to enhance this could certainly help the company.

2. What is your recommendation for a strategy to successfully implement enterprise wide systems (such as SAP) at Genex?

All the disparate systems used by each group need to be inventoried, including a clear, concise description of the processes each system facilitates. With this list in hand it will be possible to ensure that SAP meets the requirements of each business unit; if there is missing functionality, alternatives can be researched, or it can be determined whether there are modules that fit into SAP to add that functionality. It will also be necessary to find and select a methodology by which all the old data can be imported into the new SAP system; obviously anything that can be done in an automated fashion would be preferred, but if there is data that has to be manually transferred it is important to find the fastest, most efficient way to do so.

Reinventing the IT Department

Assignment:

The IT department is no different from an organization, in that it must continually re-evaluate and sometimes reinvent itself to continue to deliver value to the organization. As a reply to this message, discuss how IT must re-evaluate its capabilities, people skill sets and other resources to be in alignment with overall organizational objectives. Include any examples from the marketplace in your discussion.

 

It seems to me that IT is always in a state of change: updating systems to be in compliance with the latest laws or regulations; modifying, updating, or consolidating tools and platforms to make things more compatible, easier to use, or just to reduce the number of overall systems; and keeping up with the competition. In my organization I can think of a few examples from the past couple of years.

Legacy systems are slowly, slowly being switched over to a new system that makes more sense to new reps - the old system runs in DOS windows, the new one is a desktop application. As features are able to be moved over, they are eliminated or restricted on the legacy systems. One application used for digital imaging of paperwork that tied directly into a work queue and request system was just switched over so it is no longer a stand-alone application but built into the mainframe, which should make compatibility with the work queue and request system easier; being part of the mainframe also means that processing is easier and uses fewer resources, and instead of using two or three applications for a single task it is now all rolled into one.

My firm also provides an advanced trading platform for clients; about a year ago a new, updated platform was released that got us back to parity with our competitors - the old version certainly worked, but in terms of appearance and some functionality it was lagging behind what other firms were offering.

Even with these updates and changes nothing is "finished"; it's always a work in progress as the systems infrastructure might change, the business side might need new features that require changes, and software used by customers always needs to be updated in order to stay competitive.