Commit babe1a8
EC2 Default User committed Jan 7, 2020 (0 parents)
Showing 230 changed files with 9,476 additions and 0 deletions.
.ipynb_checkpoints/2_Plagiarism_Feature_Engineering-checkpoint.ipynb
2,362 additions & 0 deletions (large diff not rendered by default)
@@ -0,0 +1,33 @@
# Plagiarism Project, Machine Learning Deployment

This repository contains code and associated files for deploying a plagiarism detector using AWS SageMaker.

## Project Overview

In this project, you will build a plagiarism detector that examines a text file and performs binary classification, labeling that file as either *plagiarized* or *not*, depending on how similar that text file is to a provided source text. Detecting plagiarism is an active area of research; the task is non-trivial, and the differences between paraphrased answers and original work are often not obvious.

This project will be broken down into three main notebooks:

**Notebook 1: Data Exploration**
* Load in the corpus of plagiarism text data.
* Explore the existing data features and the data distribution.
* This first notebook is **not** required in your final project submission.

**Notebook 2: Feature Engineering**

* Clean and pre-process the text data.
* Define features for comparing the similarity of an answer text and a source text, and extract similarity features (one such feature is sketched just after this list).
* Select "good" features by analyzing the correlations between different features.
* Create train/test `.csv` files that hold the relevant features and class labels for train/test data points.
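
For illustration only, here is a minimal sketch of one possible similarity feature, n-gram containment: the fraction of an answer's n-grams that also occur in the source text. This is a hedged example, not the project's required implementation; the function name and the use of scikit-learn's `CountVectorizer` are assumptions made for the sketch.

```python
# Hypothetical sketch of an n-gram containment feature (not the official solution).
from sklearn.feature_extraction.text import CountVectorizer

def containment(answer_text, source_text, n=1):
    """Fraction of the answer's n-grams that also occur in the source text."""
    # Count n-grams over both texts with a shared vocabulary.
    counts = CountVectorizer(analyzer='word', ngram_range=(n, n))
    ngram_counts = counts.fit_transform([answer_text, source_text]).toarray()
    answer_counts, source_counts = ngram_counts[0], ngram_counts[1]
    # Intersection of counts, normalized by the total n-gram count of the answer.
    intersection = sum(min(a, s) for a, s in zip(answer_counts, source_counts))
    return intersection / answer_counts.sum()

print(containment("this is an answer text", "this is a source text", n=1))
```
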

**Notebook 3: Train and Deploy Your Model in SageMaker**

* Upload your train/test feature data to S3.
* Define a binary classification model and a training script.
* Train your model and deploy it using SageMaker (a rough sketch of this workflow follows this list).
* Evaluate your deployed classifier.
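
As a rough illustration of the Notebook 3 workflow, the sketch below uploads feature CSVs to S3, then trains and deploys a scikit-learn model with the SageMaker Python SDK. It assumes the v2-style SDK interface; the entry-point script `train.py`, the `source_dir` and data directory names, the S3 prefix, the instance types, and the example feature values are placeholders, not values taken from this repository.

```python
# Hedged sketch of the train-and-deploy flow (SageMaker Python SDK v2 assumed).
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = sagemaker.get_execution_role()          # IAM role of the notebook instance

# Upload locally saved train.csv / test.csv feature files to S3.
input_data = session.upload_data(path='plagiarism_data', key_prefix='plagiarism-detection')

# A scikit-learn estimator; train.py is a placeholder training script that
# reads train.csv, fits a binary classifier, and saves it with joblib.
estimator = SKLearn(entry_point='train.py',
                    source_dir='source_sklearn',
                    role=role,
                    instance_count=1,
                    instance_type='ml.c4.xlarge',
                    framework_version='0.23-1',
                    py_version='py3')

estimator.fit({'train': input_data})

# Deploy an endpoint, classify one feature vector, then clean up.
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.t2.medium')
print(predictor.predict([[0.39, 0.84, 0.61]]))   # example feature values only
predictor.delete_endpoint()
```
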

---

Please see the [README](https://github.com/udacity/ML_SageMaker_Studies/tree/master/README.md) in the root directory for instructions on setting up a SageMaker notebook and downloading the project files (as well as the other notebooks).
107 binary files not shown.
@@ -0,0 +1,101 @@
File,Task,Category
g0pA_taska.txt,a,non
g0pA_taskb.txt,b,cut
g0pA_taskc.txt,c,light
g0pA_taskd.txt,d,heavy
g0pA_taske.txt,e,non
g0pB_taska.txt,a,non
g0pB_taskb.txt,b,non
g0pB_taskc.txt,c,cut
g0pB_taskd.txt,d,light
g0pB_taske.txt,e,heavy
g0pC_taska.txt,a,heavy
g0pC_taskb.txt,b,non
g0pC_taskc.txt,c,non
g0pC_taskd.txt,d,cut
g0pC_taske.txt,e,light
g0pD_taska.txt,a,cut
g0pD_taskb.txt,b,light
g0pD_taskc.txt,c,heavy
g0pD_taskd.txt,d,non
g0pD_taske.txt,e,non
g0pE_taska.txt,a,light
g0pE_taskb.txt,b,heavy
g0pE_taskc.txt,c,non
g0pE_taskd.txt,d,non
g0pE_taske.txt,e,cut
g1pA_taska.txt,a,non
g1pA_taskb.txt,b,heavy
g1pA_taskc.txt,c,light
g1pA_taskd.txt,d,cut
g1pA_taske.txt,e,non
g1pB_taska.txt,a,non
g1pB_taskb.txt,b,non
g1pB_taskc.txt,c,heavy
g1pB_taskd.txt,d,light
g1pB_taske.txt,e,cut
g1pD_taska.txt,a,light
g1pD_taskb.txt,b,cut
g1pD_taskc.txt,c,non
g1pD_taskd.txt,d,non
g1pD_taske.txt,e,heavy
g2pA_taska.txt,a,non
g2pA_taskb.txt,b,heavy
g2pA_taskc.txt,c,light
g2pA_taskd.txt,d,cut
g2pA_taske.txt,e,non
g2pB_taska.txt,a,non
g2pB_taskb.txt,b,non
g2pB_taskc.txt,c,heavy
g2pB_taskd.txt,d,light
g2pB_taske.txt,e,cut
g2pC_taska.txt,a,cut
g2pC_taskb.txt,b,non
g2pC_taskc.txt,c,non
g2pC_taskd.txt,d,heavy
g2pC_taske.txt,e,light
g2pE_taska.txt,a,heavy
g2pE_taskb.txt,b,light
g2pE_taskc.txt,c,cut
g2pE_taskd.txt,d,non
g2pE_taske.txt,e,non
g3pA_taska.txt,a,non
g3pA_taskb.txt,b,heavy
g3pA_taskc.txt,c,light
g3pA_taskd.txt,d,cut
g3pA_taske.txt,e,non
g3pB_taska.txt,a,non
g3pB_taskb.txt,b,non
g3pB_taskc.txt,c,heavy
g3pB_taskd.txt,d,light
g3pB_taske.txt,e,cut
g3pC_taska.txt,a,cut
g3pC_taskb.txt,b,non
g3pC_taskc.txt,c,non
g3pC_taskd.txt,d,heavy
g3pC_taske.txt,e,light
g4pB_taska.txt,a,non
g4pB_taskb.txt,b,non
g4pB_taskc.txt,c,heavy
g4pB_taskd.txt,d,light
g4pB_taske.txt,e,cut
g4pC_taska.txt,a,cut
g4pC_taskb.txt,b,non
g4pC_taskc.txt,c,non
g4pC_taskd.txt,d,heavy
g4pC_taske.txt,e,light
g4pD_taska.txt,a,light
g4pD_taskb.txt,b,cut
g4pD_taskc.txt,c,non
g4pD_taskd.txt,d,non
g4pD_taske.txt,e,heavy
g4pE_taska.txt,a,heavy
g4pE_taskb.txt,b,light
g4pE_taskc.txt,c,cut
g4pE_taskd.txt,d,non
g4pE_taske.txt,e,non
orig_taska.txt,a,orig
orig_taskb.txt,b,orig
orig_taskc.txt,c,orig
orig_taskd.txt,d,orig
orig_taske.txt,e,orig
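
This CSV lists each answer file, its task (a through e), and its plagiarism category (non, cut, light, heavy, or orig for the source texts). As a hedged illustration of how such a listing could be turned into class labels (the filename `file_information.csv` and the 0/1/-1 label scheme are assumptions made here, not requirements stated in this commit):

```python
# Hypothetical sketch: load the file listing and derive binary plagiarism labels.
import pandas as pd

df = pd.read_csv('file_information.csv')   # assumed name for the CSV shown above

# 'cut', 'light', and 'heavy' are degrees of plagiarism; 'non' is an answer
# written without plagiarism; 'orig' marks the provided source texts themselves.
label_map = {'non': 0, 'heavy': 1, 'light': 1, 'cut': 1, 'orig': -1}
df['Class'] = df['Category'].map(label_map)

print(df['Class'].value_counts())
```
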
@@ -0,0 +1,22 @@
Inheritance is a basic concept of Object-Oriented Programming where
the basic idea is to create new classes that add extra detail to
existing classes. This is done by allowing the new classes to reuse
the methods and variables of the existing classes and new methods and
classes are added to specialise the new class. Inheritance models the
“is-kind-of” relationship between entities (or objects), for example,
postgraduates and undergraduates are both kinds of student. This kind
of relationship can be visualised as a tree structure, where ‘student’
would be the more general root node and both ‘postgraduate’ and
‘undergraduate’ would be more specialised extensions of the ‘student’
node (or the child nodes). In this relationship ‘student’ would be
known as the superclass or parent class whereas, ‘postgraduate’ would
be known as the subclass or child class because the ‘postgraduate’
class extends the ‘student’ class.

Inheritance can occur on several layers, where if visualised would
display a larger tree structure. For example, we could further extend
the ‘postgraduate’ node by adding two extra extended classes to it
called, ‘MSc Student’ and ‘PhD Student’ as both these types of student
are kinds of postgraduate student. This would mean that both the ‘MSc
Student’ and ‘PhD Student’ classes would inherit methods and variables
from both the ‘postgraduate’ and ‘student classes’.
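
To make the tree structure described above concrete, here is a minimal Python sketch added for illustration; the class and attribute names are invented for the example and are not part of the corpus file or the project code.

```python
# Illustrative only: the 'is-kind-of' hierarchy described in the text above.
class Student:                      # superclass / parent class (root node)
    def __init__(self, name):
        self.name = name

class Undergraduate(Student):       # subclass: an undergraduate is a kind of student
    pass

class Postgraduate(Student):        # subclass: a postgraduate is a kind of student
    pass

class PhDStudent(Postgraduate):     # deeper layer: a PhD student is a kind of postgraduate
    pass

phd = PhDStudent("Ada")
print(phd.name)                     # 'name' is inherited from Student via Postgraduate
print(isinstance(phd, Student))     # True: a PhD student is still a student
```
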
@@ -0,0 +1,5 @@
PageRank is a link analysis algorithm used by the Google Internet search engine that assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. Google assigns a numeric weighting from 0-10 for each webpage on the Internet; this PageRank denotes a site’s importance in the eyes of Google.

The PageRank is derived from a theoretical probability value on a logarithmic scale like the Richter Scale. The PageRank of a particular page is roughly based upon the quantity of inbound links as well as the PageRank of the pages providing the links. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is also called the PageRank of E and denoted by PR(E).

It is known that other factors, e.g. relevance of search words on the page and actual visits to the page reported by the Google toolbar, also influence the PageRank. Other link-based ranking algorithms for Web pages include the HITS algorithm invented by Jon Kleinberg (used by Teoma and now Ask.com), the IBM CLEVER project, and the TrustRank algorithm.
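
The ranking idea described here can be sketched in a few lines of Python. This is a generic power-iteration illustration over a made-up four-page link graph, using the commonly cited damping factor of 0.85; it is not code from this repository.

```python
# Toy PageRank via power iteration (illustrative, not from this repository).
links = {                      # page -> pages it links to (a made-up graph)
    'A': ['B', 'C'],
    'B': ['C'],
    'C': ['A'],
    'D': ['C'],
}
damping = 0.85                 # probability the random surfer follows a link
pages = list(links)
pr = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):            # iterate until the ranks settle
    new_pr = {}
    for p in pages:
        # Sum of PR(q) / outdegree(q) over all pages q that link to p.
        inbound = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
        new_pr[p] = (1 - damping) / len(pages) + damping * inbound
    pr = new_pr

print(sorted(pr.items(), key=lambda kv: -kv[1]))   # 'C' should rank highest
```
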
@@ -0,0 +1,7 @@
The vector space model (also called the term vector model) is an algebraic model used to represent text documents, as well as any objects in general, as vectors of identifiers. It is used in information retrieval and was first used in the SMART Information Retrieval System.

A document is represented as a vector and each dimension corresponds to a separate term. If a term appears in the document then its value in the vector is non-zero. Many different ways of calculating these values, also known as (term) weights, have been developed. One of the best known methods is called tf-idf weighting.

The definition of term depends on the application but generally terms are single words, keywords, or longer phrases. If the words are chosen to be the terms, the dimensionality of the vector is the number of words in the vocabulary, which is the number of distinct words occurring in the corpus.

The vector space model has several disadvantages. Firstly, long documents are represented badly because they have poor similarity values. Secondly, search keywords must accurately match document terms and substrings of words might result in a "false-positive match". Thirdly, documents with similar context but different term vocabulary will not be associated, resulting in a "false-negative match". Finally, the order in which the terms appear in the document is lost in the vector space representation.
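
As a brief illustration of the model described above, documents can be turned into tf-idf vectors and compared by cosine similarity. The use of scikit-learn here is a convenience assumed for the sketch, not something prescribed by this repository.

```python
# Illustrative vector space model: tf-idf vectors plus cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the quick brown fox jumps over the lazy dog",
    "a quick brown dog jumps over a sleeping fox",
    "completely unrelated text about dynamic programming",
]

vectorizer = TfidfVectorizer()          # one dimension per distinct term
tfidf = vectorizer.fit_transform(docs)  # rows are document vectors

# Cosine similarity between every pair of document vectors.
print(cosine_similarity(tfidf).round(2))
```
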
@@ -0,0 +1,21 @@
Bayes’ theorem was named after Rev Thomas Bayes and is a method used
in probability theory. This theorem aims to relate the conditional and
marginal probabilities of two random events occurring, and given
various observations is frequently used to compute subsequent
probabilities. Bayes’ theorem is also often known as Bayes’ law.

An example of where Bayes’ theorem may be used is in the following
extract: “Suppose there exists a school with forty percent females and
sixty percent males as students. The female students can only wear
skirts or trousers in equal numbers whereas all the male students can
only wear trousers. An observer randomly sees a student from a
distance and all he can see is that this student is wearing
trousers. What is the probability this student is female?”

There is a debate amongst frequentists and Bayesians about how Bayes’
theorem plays a major role around the beginnings of statistical
mathematics. Frequentist and Bayesian explanations do not agree about
the ways in which probabilities should be assigned. This is primarily
because Bayesians assign probabilities in terms of beliefs whereas
frequentists assign probabilities to random events according to the
frequencies of them occurring.
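
The school example quoted above works out as follows (a worked illustration added here, not part of the corpus file): with P(female) = 0.4, P(trousers | female) = 0.5, and P(trousers | male) = 1, Bayes' theorem gives P(female | trousers) = 0.5 * 0.4 / (0.5 * 0.4 + 1.0 * 0.6) = 0.25.

```python
# Worked example for the quoted problem (illustrative addition).
p_female = 0.4
p_trousers_given_female = 0.5          # skirts and trousers in equal numbers
p_trousers_given_male = 1.0            # all male students wear trousers
p_trousers = (p_trousers_given_female * p_female
              + p_trousers_given_male * (1 - p_female))
p_female_given_trousers = p_trousers_given_female * p_female / p_trousers
print(p_female_given_trousers)         # 0.25
```
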
@@ -0,0 +1,15 @@
Dynamic Programming is an algorithm design technique used for optimisation problems, such as minimising or maximising. Like divide and conquer, Dynamic Programming solves problems by combining solutions to sub-problems. However, unlike divide and conquer, sub-problems are not always independent as sub-problems may share sub-sub-problems but solution to one sub-problem may not affect the solutions to other sub-problems of the same problem.

There are four steps in Dynamic Programming:

1. Characterise structure of an optimal solution.

2. Define value of optimal solution recursively.

3. Compute optimal solution values either top-down with caching or bottom-up in a table.

4. Construct an optimal solution from computed values.

An example of the type of problem for which Dynamic Programming may be used is: given two sequences, X=(x1,...,xm) and Y=(y1,...,yn) find a common subsequence whose length is maximum.

Dynamic Programming reduces computation by solving sub-problems in a bottom-up fashion and by storing solution to a sub-problem the first time it is solved. Also, looking up the solution when a sub-problem is encountered again helps reduce computation. However, the key in Dynamic Programming is to determine the structure of optimal solutions.
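
The longest-common-subsequence problem mentioned above is a classic dynamic programming example; a minimal bottom-up sketch in Python (added here for illustration, not taken from the corpus file or the project notebooks) looks like this:

```python
# Bottom-up dynamic programming table for longest common subsequence length.
def lcs_length(x, y):
    m, n = len(x), len(y)
    # table[i][j] holds the LCS length of x[:i] and y[:j]
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1      # extend a common subsequence
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))   # 4 (e.g. "BCBA")
```
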
@@ -0,0 +1,33 @@
Inheritance is a basic concept in object oriented programming. It models the reuse of existing class code in new classes – the “is a kind of” relationship.

For example, a house is a kind of building; similarly, an office block is a kind of building. Both house and office block will inherit certain characteristics from buildings, but also have their own personal characteristics – a house may have a number of occupants, whereas an office block will have a number of offices. However, these personal characteristics don't apply to all types of buildings.

In this example, the building would be considered the superclass – it contains general characteristics for other objects to inherit – and the house and office block are both subclasses – they are specific types and specialise the characteristics of the superclass.

Java allows object inheritance. When one class inherits from another class, all the public variables and methods are available to the subclass.

public class Shape {
    private Color colour;

    public void setColour(Color newColour){
        colour = newColour;
    }
}

public class Circle extends Shape {
    private int radius;

    public void setRadius(int newRadius){
        radius = newRadius;
    }
}

In this example, the Circle class is a subclass of the Shape class. The Shape class provides a public setColour method, which will be available to the Circle class and other subclasses of Shape. However, the private variable colour (as defined in the Shape class) will not be available for direct manipulation by the Circle class because it is not inherited. The Circle class specialises the Shape class, which means that setRadius is available to the Circle class and all subclasses of Circle, but it isn't available to the superclass Shape.
@@ -0,0 +1,26 @@
PageRank (PR) refers to both the concept and the Google system used
for ranking the importance of pages on the web. The “PageRank” of a
site refers to its importance or value on the web in relation to the
rest of the sites that have been “PageRank”ed.

The algorithm basically works like a popularity contest – if your site
is linked to by popular websites, then your site is considered more
popular. However, the PR doesn't just apply to the website as a whole
– different pages within a website get given different PRs dependent
on a number of factors:

* Inbound links (backlinks) – how many pages (other than the ones on your website) link to this particular page

* Outbound links (forward links) – how many external pages the particular page links to

* Dangling links – how many pages with no external links are linked to from a particular page

* Deep links – how many links that are not the home page are linked to from a particular page

PR tries to emulate a “random surfer”. The algorithm includes a
dampening factor, which is the probability that a random surfer will
get bored and go and visit a new page - by default, this is 0.85. A
variation on this is the “intentional surfer”, where the importance of
a page is based on the actual visits to sites by users. This method is
used in the Google Toolbar, which reports back actual site visits to
Google.
@@ -0,0 +1,14 @@
Vector space model is an algebraic model for representing text documents (and in general, any objects) as vectors of identifiers, such as, for example, index terms. Its first use was in the SMART Information Retrieval System. It is used in information filtering, information retrieval, indexing and relevancy rankings.

A document is represented as a vector, and each dimension corresponds to a separate term. If a term occurs in the document, its value in the vector is non-zero. Several different ways of computing these values, also known as (term) weights, have been developed. The definition of term depends on the application. Typically terms are single words, keywords, or longer phrases. If the words are chosen to be the terms, the dimensionality of the vector is the number of words in the vocabulary (the number of distinct words occurring in the corpus).

One of the best known schemes is tf-idf weighting, proposed by Salton, Wong and Yang. In the classic vector space model, the term specific weights in the document vectors are products of local and global parameters.

Relevancy rankings of documents in a keyword search can be calculated, using the assumptions of document similarities theory, by comparing the deviation of angles between each document vector and the original query vector where the query is represented as the same kind of vector as the documents.
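
For reference (an addition here, not part of the corpus file), the standard form of such a local-times-global weight is the tf-idf product, where term frequency is the local parameter and inverse document frequency is the global one:

```latex
% tf-idf weight of term t in document d over a corpus of N documents
w_{t,d} = \mathrm{tf}_{t,d} \times \log \frac{N}{\mathrm{df}_t}
```
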

The vector space model has the following limitations:

* Search keywords must precisely match document terms; word substrings might result in a "false positive match";
* Semantic sensitivity; documents with similar context but different term vocabulary won't be associated, resulting in a "false negative match";
* The order in which the terms appear in the document is lost in the vector space representation;
* Long documents are poorly represented because they have poor similarity values (a small scalar product and a large dimensionality).
@@ -0,0 +1,26 @@
Bayes' theorem relates the conditional and marginal probabilities of
two random events. For example, a person may be seen to have certain
medical symptoms; Bayes' theorem can then be used to compute the
probability that, given that observation, the proposed diagnosis is
the right one.

Bayes' theorem forms a relationship between the probabilities of
events A and B. Intuitively, Bayes' theorem in this form describes the
way in which one's recognition of 'A' is updated by having observed
'B'.

P(A | B) = P(B | A) P(A) / P(B)

P(A|B) is the conditional probability of A given B. It is derived from or depends upon the specified value of B, therefore it is also known as the posterior probability.

P(B|A) is the conditional probability of B given A.

P(A) is the prior probability of A. It doesn't take into account any information about B, so it is "prior".

P(B) is the prior or marginal probability of B, and acts to normalise the probability.

To derive the theorem, we begin with the definition of conditional
probability. By combining and re-arranging these two equations for A
and B, we get the lemma called the product rule for
probabilities. Provided that P(B) is not zero, dividing both sides
by P(B) gives Bayes' theorem.
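
Written out (an illustration added here, not part of the corpus file), the derivation sketched in the last paragraph is:

```latex
% Definition of conditional probability, applied both ways:
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(B \mid A) = \frac{P(A \cap B)}{P(A)}
% Product rule: both expressions give the same joint probability
P(A \cap B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)
% Divide by P(B) (assumed non-zero) to obtain Bayes' theorem
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
```
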
@@ -0,0 +1,11 @@
Dynamic programming is a method for solving mathematical programming problems that exhibit the properties of overlapping subproblems and optimal substructure. This is a much quicker method than other more naive methods. The word "programming" in "dynamic programming" relates to optimization, which is commonly referred to as mathematical programming. Richard Bellman originally coined the term in the 1940s to describe a method for solving problems where one needs to find the best decisions one after another, and by 1953, he refined his method to the current modern meaning.

Optimal substructure means that by splitting the problem into optimal solutions of subproblems, these can then be used to find the optimal solutions of the overall problem. One example is the computing of the shortest path to a goal from a vertex in a graph. First, compute the shortest path to the goal from all adjacent vertices. Then, using this, the best overall path can be found, thereby demonstrating the dynamic programming principle. This general three-step process can be used to solve a problem:

1. Break up the problem into different smaller subproblems.

2. Recursively use this three-step process to compute the optimal path in the subproblem.

3. Construct an optimal solution, using the computed optimal subproblems, for the original problem.

This process continues recursively, working over the subproblems by dividing them into sub-subproblems and so forth, until a simple case is reached (one that is easily solvable).
@@ -0,0 +1,7 @@
Inheritance in object oriented programming is where a new class is formed using classes which have already been defined. These classes have some of the behavior and attributes which were existent in the classes that it inherited from. The purpose of inheritance in object oriented programming is to reuse existing code with little or no modification.

Inheritance allows classes to be categorized, similar to the way humans categorize. It also provides a way to generalize due to the "is a" relationship between classes. For example a "cow" is a generalization of "animal"; similarly so are "pigs" and "cheetahs". Defining classes in this way allows us to define attributes and behaviours which are common to all animals in one class, so cheetahs would naturally inherit properties common to all animals.

The advantage of inheritance is that classes which would otherwise have a lot of similar code can instead share the same code, thus reducing the complexity of the program. Inheritance, therefore, can also be referred to as polymorphism, which is where many pieces of code are controlled by shared control code.

Inheritance can be accomplished by overriding methods in its ancestor, or by adding new methods.