Question

Please help write the functions (context for the problem in the images):

write_X_to_map(filename, row, col): Takes as inputs a string corresponding to a filename, and two non-negative integers representing a row and a column index. Reads in the map at the given filename, inserts an 'X' at the given row and column position, then saves the map to a new file with 'new_' prepended to the given filename. You can assume the filename given to the function as an argument refers to a file that exists.
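Here is a minimal sketch of the requested function, assuming the map is a plain text file with one row per line and one character per column, that "inserts an X" means overwriting the character already at that position, and that row and col fall within the bounds of the map (none of this is stated beyond the prompt above):

```python
def write_X_to_map(filename, row, col):
    # Read the map in as a list of row strings, dropping trailing newlines.
    with open(filename, 'r') as f:
        rows = [line.rstrip('\n') for line in f]

    # Strings are immutable, so rebuild the target row with 'X' replacing
    # whatever character was at the given column. (If the assignment means
    # insertion rather than replacement, drop the "+ 1".)
    rows[row] = rows[row][:col] + 'X' + rows[row][col + 1:]

    # Save the modified map to a new file with 'new_' prepended.
    with open('new_' + filename, 'w') as f:
        f.write('\n'.join(rows) + '\n')
```

For example, write_X_to_map('map.txt', 1, 2) would write the modified map to new_map.txt. Prepending 'new_' to the whole string assumes a bare filename in the working directory; a filename containing a directory path would need the prefix applied to the name component only.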

such deceptions, we must also acquire the knowledge behind this type of algorithm.
Background
In the 1948 landmark paper 'A Mathematical Theory of Communication', Claude Shannon founded the
field of information theory and revolutionized the telecommunications industry, laying the groundwork
for today's Information Age. In the paper, Shannon proposed using a Markov chain to create a statistical
model of the sequences of letters in a piece of English text. Markov chains are now widely used in speech
recognition, handwriting recognition, information retrieval, data compression, and spam filtering. They
also have many scientific computing applications, including the GeneMark algorithm for gene prediction,
the Metropolis algorithm for measuring thermodynamic properties, and Google's PageRank algorithm
for Web search. For this assignment question, we consider a variant of Markov chains to generate stylized
pseudo-random text.
Shannon approximated the statistical structure of a piece of text using a simple mathematical model
known as a Markov model. A Markov model of order 0 predicts that each letter in the alphabet will
occur with a fixed probability. For instance, it might predict that each letter occurs 1/26 of the time,
that is, entirely at random. Or, we might base its prediction on a particular piece of text, counting the
number of occurrences of each letter in that text, and using those ratios as our probabilities.
For example, if the input text is 'gagggagaggcgagaaa', the Markov model of order 0 predicts that, in
future, 'a' will occur with probability 7/17, 'c' will occur with probability 1/17, and 'g' will occur
with probability 9/17, because these are the fractions of times each letter occurs in the input text.
If we were to then use these predictions in order to generate a new piece of text, we might obtain the
following:
g agg cg ag a ag aga aga a a gag agaga a ag ag a ag ...
Note how, in this generated piece of text, there are very few c's (since we predict they will occur with
probability 1/17), whereas there are many more a's and g's.
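As a concrete illustration, here is a short Python sketch (not part of the assignment text) that computes these order-0 probabilities from the example string and samples new text from them:

```python
from collections import Counter
import random

text = 'gagggagaggcgagaaa'

# Count each letter; its relative frequency is the order-0 probability.
counts = Counter(text)
probs = {ch: n / len(text) for ch, n in counts.items()}
# probs -> {'g': 9/17, 'a': 7/17, 'c': 1/17}

# Sample letters independently with those probabilities to make new text.
letters = list(probs)
weights = [probs[ch] for ch in letters]
generated = ''.join(random.choices(letters, weights=weights, k=40))
print(generated)
```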
A Markov model of order 0 assumes that each letter is chosen independently. That is, each letter occurs
with the given probabilities, no matter what letter came before it. This independence makes things
simple but does not reflect how real language behaves. In English, for example, there is a very high
correlation between successive characters in a word or sentence: 'w' is more likely to be followed by
'e' than by 'u', while 'q' is more likely to be followed by 'u' than by 'e'.
We obtain a more refined model by allowing the probability of choosing each successive letter to depend
on the preceding letter or letters. A Markov model of order k predicts that each letter occurs with
a fixed probability, but that probability can depend on the previous k consecutive characters. Let a
k-gram mean any string of k characters. Then for example, if the text has 100 occurrences of 'th', with
60 occurrences of 'the', 25 occurrences of 'thi', 10 occurrences of 'tha', and 5 occurrences of 'tho',
the Markov model of order 2 predicts that the next letter following the 2-gram 'th' will be 'e' with
probability 3/5, 'i' with probability 1/4, 'a' with probability 1/10, and 'o' with probability 1/20.
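A natural way to represent such a model in Python is a dictionary mapping each k-gram to a counter of the letters that follow it. The sketch below is one such representation (the name build_model is illustrative, not prescribed by the assignment):

```python
from collections import defaultdict, Counter

def build_model(text, k):
    """Map each k-gram in text to a Counter of the letters that follow it."""
    model = defaultdict(Counter)
    for i in range(len(text) - k):
        model[text[i:i + k]][text[i + k]] += 1
    return model

# For the hypothetical counts above, model['th'] would end up as
# Counter({'e': 60, 'i': 25, 'a': 10, 'o': 5}), so the predicted
# probability of 'e' after 'th' is 60/100 = 3/5, of 'i' is 25/100 = 1/4, etc.
```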
Once we have such a model, we can then use it to generate text. That is, we can start it off with
some particular k characters, and then ask it what it predicts will come next. We can repeat asking for
its predictions until we have a large corpus of generated text. The generated text will, by definition,
resemble the text that was used to create the model. We can base our model on any kind of text (fiction,
poetry, news articles, song lyrics, plays, etc.), and the text generated from that model will have similar
characteristics.
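Continuing the sketch, generation can work exactly as described: keep the last k characters produced so far, look up their counter, and sample the next letter from it. Again, generate is an illustrative name, and the commented usage assumes the build_model sketch above:

```python
import random

def generate(model, seed, k, length):
    """Start from a k-character seed and repeatedly sample the next letter
    from the counts stored for the current k-gram (the last k characters)."""
    out = seed
    for _ in range(length):
        counter = model[out[-k:]]
        # Assumes every k-gram we reach was seen in the training text;
        # a fuller solution might wrap the text around to guarantee that.
        letters, weights = zip(*counter.items())
        out += random.choices(letters, weights=weights)[0]
    return out

# Example usage (with build_model from the previous sketch):
# model = build_model('gagggagaggcgagaaa', 2)
# print(generate(model, 'ga', 2, 30))
```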