First, know your goal. Start by writing down your major goal: the ultimate thing you'd like to see happen. For example, "I want to make honor roll," "I want to get fit enough to make the cross-country team," and even "I want to play in the Olympics" are all major goals, because each is the final outcome the goal setter wants to see happen (obviously, some goals take longer and require more work than others). It's OK to dream big; that's how people accomplish things. Just remember that the bigger the goal, the more work it takes to get there.
Make it realistic. People often abandon their goals because their expectations are unreasonable. Maybe they expect to get ripped abs in weeks rather than months, or to quit smoking easily after years of lighting up.
Then set specific daily tasks, like eating five servings of fruits and veggies and running a set distance each day. Put these on a calendar or planner so you can check them off. Ask a coach to help you set doable mini-goals, such as gradually adding mileage, and to suggest tasks that improve your performance, like exercises to build strength and stamina, so you'll stay motivated to run farther.
It helps to write down your small goals in the same way you wrote down your big goal. That way you can track what you need to do, check off tasks as you complete them, and enjoy knowing that you're moving toward your big goal.
Writing down daily tasks and mini-goals helps here too. By keeping track of things, you'll quickly recognize when you've slipped up, making it easier to refocus and recommit to your goal. Instead of feeling discouraged, you'll know exactly where you got off track and why.
UMKC Information Services (UMKC IS) limits the bandwidth for Peer-to-Peer (P2P) file-sharing as part of the overall Network Policy. The Network Policy is in place to provide a reliable network for the University community to use in pursuit of the goals and mission of the University.
P2P file-sharing can be defined as a technology enabling users to share communications, processing power, and data files with other users. P2P, if used properly, can prove beneficial to end users. However, using this technology involves numerous risks.
P2P technology began with Napster in 1999 as a method for users to share MP3 files (digital music) over the Internet. P2P uses a system of end-user computers to facilitate the transfer of digital information. P2P implementations fall into two models, Napster and Gnutella, each with many variations. Neither model uses the classic client-server configuration; both use a client-to-client configuration. The significant difference between the two models is that the Napster model maintains a master list of files and users, while the Gnutella model has no such list.
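The difference between the two models can be sketched in code. Below is a minimal sketch, assuming a Napster-style central index: peers announce the files they hold to one index, and searches are answered from that master list, while the actual file transfer then happens directly between the two peers. (In a Gnutella-style network there is no such list; a query would instead be forwarded from peer to peer.) The structure and function names here are illustrative only, not drawn from any real P2P implementation.

```c
#include <stdio.h>
#include <string.h>

/* A Napster-style central index: one master list of (file, peer) pairs.
   Illustrative sketch; a real network adds networking, removal, timeouts. */
#define MAX_ENTRIES 1024

struct entry {
    char file[64];   /* name of the shared file        */
    char peer[32];   /* address of the peer holding it */
};

static struct entry index_list[MAX_ENTRIES];
static int n_entries = 0;

/* A peer announces a file it is willing to share. */
void announce(const char *file, const char *peer)
{
    if (n_entries < MAX_ENTRIES) {
        snprintf(index_list[n_entries].file, sizeof index_list[n_entries].file, "%s", file);
        snprintf(index_list[n_entries].peer, sizeof index_list[n_entries].peer, "%s", peer);
        n_entries++;
    }
}

/* A search is answered entirely from the master list; the download
   itself then happens client-to-client, between the two peers. */
const char *lookup(const char *file)
{
    for (int i = 0; i < n_entries; i++)
        if (strcmp(index_list[i].file, file) == 0)
            return index_list[i].peer;
    return NULL;   /* the Gnutella model has no list to consult here;
                      it would forward the query to neighboring peers */
}

int main(void)
{
    announce("song.mp3", "10.0.0.5");
    const char *peer = lookup("song.mp3");
    printf("%s\n", peer ? peer : "not found");
    return 0;
}
```

The master list is what makes Napster-style searches fast, and also what gives the network a single point of control; the Gnutella model trades that speed for having no central point at all.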
These models are the basis for the P2P networks in use throughout the Internet. The networks give users very quick searches for desired content and extremely fast downloads of that content. The speed with which P2P networks supply files, combined with the anonymity of their users, leads to abuses that damage local area networks, ranging from minor glitches to system-wide failures. The major concern is the point at which P2P file-sharing impedes the University community's use of UMKCnet in pursuit of the goals and mission of the University.
The situation changed drastically in the early 1980s when Digital discontinued the PDP-10 series. Its architecture, elegant and powerful in the 60s, could not extend naturally to the larger address spaces that were becoming feasible in the 80s. This meant that nearly all of the programs composing ITS were obsolete.
A third assumption is that we would have no usable software (or would never have a program to do this or that particular job) if we did not offer a company power over the users of the program. This assumption may have seemed plausible, before the free software movement demonstrated that we can make plenty of useful software without putting chains on it.
If we decline to accept these assumptions, and judge these issues based on ordinary commonsense morality while placing the users first, we arrive at very different conclusions. Computer users should be free to modify programs to fit their needs, and free to share software, because helping other people is the basis of society.
I had already experienced being on the receiving end of a nondisclosure agreement, when someone refused to give me and the MIT AI Lab the source code for the control program for our printer. (The lack of certain features in this program made use of the printer extremely frustrating.) So I could not tell myself that nondisclosure agreements were innocent. I was very angry when he refused to share with us; I could not turn around and do the same thing to everyone else.
An operating system does not mean just a kernel, barely enough to run other programs. In the 1970s, every operating system worthy of the name included command processors, assemblers, compilers, interpreters, debuggers, text editors, mailers, and much more. ITS had them, Multics had them, VMS had them, and Unix had them. The GNU operating system would include them too.
Because of these decisions, and others like them, the GNU system is not the same as the collection of all GNU software. The GNU system includes programs that are not GNU software, programs that were developed by other people and projects for their own purposes, but which we can use because they are free software.
Hoping to avoid the need to write the whole compiler myself, I obtained the source code for the Pastel compiler, which was a multiplatform compiler developed at Lawrence Livermore Lab. It supported, and was written in, an extended version of Pascal, designed to be a system-programming language. I added a C front end, and began porting it to the Motorola 68000 computer. But I had to give that up when I discovered that the compiler needed many megabytes of stack space, and the available 68000 Unix system would only allow 64k.
If a program is free software when it leaves the hands of its author, this does not necessarily mean it will be free software for everyone who has a copy of it. For example, public domain software (software that is not copyrighted) is free software; but anyone can make a proprietary modified version of it. Likewise, many free programs are copyrighted but distributed under simple permissive licenses which allow proprietary modified versions.
The requirement that changes must be free is essential if we want to ensure freedom for every user of the program. The companies that privatized the X Window System usually made some changes to port it to their systems and hardware. These changes were small compared with the great extent of X, but they were not trivial. If making changes were an excuse to deny the users freedom, it would be easy for anyone to take advantage of the excuse.
A related issue concerns combining a free program with nonfree code. Such a combination would inevitably be nonfree; whichever freedoms are lacking for the nonfree part would be lacking for the whole as well. To permit such combinations would open a hole big enough to sink a ship. Therefore, a crucial requirement for copyleft is to plug this hole: anything added to or combined with a copylefted program must be such that the larger combined version is also free and copylefted.
We funded development of these programs because the GNU Project was not just about tools or a development environment. Our goal was a complete operating system, and these programs were needed for that goal.
Selling copies of Emacs demonstrates one kind of free software business. When the FSF took over that business, I needed another way to make a living. I found it in selling services relating to the free software I had developed. This included teaching, for subjects such as how to program GNU Emacs and how to customize GCC, and software development, mostly porting GCC to new platforms.
In addition, we rejected the Unix focus on small memory size, by deciding not to support 16-bit machines (it was clear that 32-bit machines would be the norm by the time the GNU system was finished), and to make no effort to reduce memory usage unless it exceeded a megabyte. In programs for which handling very large files was not crucial, we encouraged programmers to read an entire input file into core, then scan its contents without having to worry about I/O.
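In C, the "read the whole file into core" approach looks roughly like the following. This is a minimal sketch of the general technique, not code from the GNU Project itself, and it assumes the file fits comfortably in memory, which is exactly the assumption 32-bit machines made affordable:

```c
#include <stdio.h>
#include <stdlib.h>

/* Slurp an entire file into memory, then scan it in core, so the
   scanning logic never has to interleave with buffered I/O. */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }

    FILE *fp = fopen(argv[1], "rb");
    if (!fp) { perror("fopen"); return 1; }

    fseek(fp, 0, SEEK_END);          /* find the file size...   */
    long size = ftell(fp);
    rewind(fp);                      /* ...and go back to start */

    char *buf = malloc(size + 1);
    if (!buf) { fclose(fp); return 1; }
    if (fread(buf, 1, (size_t)size, fp) != (size_t)size) {
        perror("fread"); free(buf); fclose(fp); return 1;
    }
    buf[size] = '\0';
    fclose(fp);

    /* Scan the contents entirely in memory: here, count the lines. */
    long lines = 0;
    for (long i = 0; i < size; i++)
        if (buf[i] == '\n')
            lines++;
    printf("%ld lines\n", lines);

    free(buf);
    return 0;
}
```

The scanning loop is the payoff: once the data is in core, it is plain pointer-and-index work, with no read loops, refill logic, or records split across buffer boundaries.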
It is not a matter of principle; there is no principle that says proprietary software products are entitled to include our code. (Why contribute to a project predicated on refusing to share with us?) Using the LGPL for the C library, or for any library, is a matter of strategy.
One system is an exception to this: on the GNU system (and this includes GNU/Linux), the GNU C library is the only C library. So the distribution terms of the GNU C library determine whether it is possible to compile a proprietary program for the GNU system. There is no ethical reason to allow proprietary applications on the GNU system, but strategically it seems that disallowing them would do more to discourage use of the GNU system than to encourage development of free applications. That is why using the Lesser GPL is a good strategy for the C library.
For other libraries, the strategic decision needs to be considered on a case-by-case basis. When a library does a special job that can help write certain kinds of programs, then releasing it under the GPL, limiting it to free programs only, is a way of helping other free software developers, giving them an advantage against proprietary software.