Typo fixes (unicode-org#3993)
waywardmonkeys committed Sep 4, 2023
1 parent 3ac6ffe commit 60613d0
Showing 28 changed files with 37 additions and 37 deletions.
4 changes: 2 additions & 2 deletions CHANGELOG.md
@@ -141,7 +141,7 @@ Note: A subset of crates received patch releases in the 1.2 stream.
- Support configurable grouping separators in CompactDecimalFormatter (#3045)
- `icu_displaynames`: 0.8.0 -> 0.10.0
- Add ScriptDisplayNames (#3317)
-   - Add LangaugeDisplayNames with support for variants (#3058, #3113)
+   - Add LanguageDisplayNames with support for variants (#3058, #3113)
- Add stronger typing (#3190)
- `icu_harfbuzz`: New experimental port: Harfbuzz integration for ICU4X (v0.1.0)
- `icu_relativetime`: 0.1.0 -> 0.1.1
@@ -221,7 +221,7 @@ Note: A subset of crates received patch releases in the 1.2 stream.
* icu_segmenter: enforce `clippy::indexing_slicing`. (#2325)
* Use `GraphemeClusterSegmenter` in `DictionarySegmenter` and `LstmSegmenter` (#2716)
* Rename `*BreakSegmenter` to `*Segmenter` (#2707)
- * Remove unnecessary langauge check for East Asian languagne (SA property) (#2705)
+ * Remove unnecessary language check for East Asian language (SA property) (#2705)
* internal and doc improvements

* `icu_timezone`
6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -89,7 +89,7 @@ The other is the review cycle.
#### Draft Phase

If the pull request is simple and short lived, it can be initialized with review request.
- If the pull request is more complex and is being developed over time, it may be benefitial to start it in a `Draft` state.
+ If the pull request is more complex and is being developed over time, it may be beneficial to start it in a `Draft` state.
This allows other contributors to monitor the progress and volunteer feedback while annotating that the pull request is not yet ready for review.

If a pull request is particularly large in scope and not release-ready, consider either (1) reducing the scope of the pull request, (2) moving work to the `experimental/` directory, or (3) hiding the work behind the `"experimental"` feature flag. See the section above, "Release Readiness", for more details.
@@ -175,10 +175,10 @@ In such cases, *mentorship model* should be used where a more senior engineer ta
When the PR author creates a new PR, they should consider three sources of reviewers and informed stakeholders:

* Owners and peers of the component they work with
- * People involved in the preceeding conversation
+ * People involved in the preceding conversation
* Recognized experts in the domain the PR operates in

- The goal of the PR author is to find the subset of stakeholders that represent those three groups well. Depending on the scope and priority of the PR, the reviewers group size can be adjusted, with small PRs being sufficent for review by just one stakeholder, and larger PRs, or first-of-a-kind using a larger pool of reviewers.
+ The goal of the PR author is to find the subset of stakeholders that represent those three groups well. Depending on the scope and priority of the PR, the reviewers group size can be adjusted, with small PRs being sufficient for review by just one stakeholder, and larger PRs, or first-of-a-kind using a larger pool of reviewers.

### PR author and reviewers workflow

2 changes: 1 addition & 1 deletion components/calendar/src/any_calendar.rs
@@ -546,7 +546,7 @@ impl Calendar for AnyCalendar {
Self::IslamicUmmAlQura(_) => "AnyCalendar (Islamic, Umm al-Qura)",
Self::Iso(_) => "AnyCalendar (Iso)",
Self::Japanese(_) => "AnyCalendar (Japanese)",
- Self::JapaneseExtended(_) => "AnyCalendar (Japanese, histocial era data)",
+ Self::JapaneseExtended(_) => "AnyCalendar (Japanese, historical era data)",
Self::Persian(_) => "AnyCalendar (Persian)",
Self::Roc(_) => "AnyCalendar (Roc)",
}
6 changes: 3 additions & 3 deletions components/calendar/src/chinese.rs
@@ -221,7 +221,7 @@ impl Calendar for Chinese {
/// The calendar-specific month code represented by `date`;
/// since the Chinese calendar has leap months, an "L" is appended to the month code for
/// leap months. For example, in a year where an intercalary month is added after the second
- /// month, the month codes for ordinal monts 1, 2, 3, 4, 5 would be "M01", "M02", "M02L", "M03", "M04".
+ /// month, the month codes for ordinal months 1, 2, 3, 4, 5 would be "M01", "M02", "M02L", "M03", "M04".
fn month(&self, date: &Self::DateInner) -> types::FormattableMonth {
let ordinal = date.0 .0.month;
let leap_month_option = date.0 .1.get_leap_month();
@@ -449,12 +449,12 @@ impl Chinese {
Iso::iso_from_fixed(result_fixed)
}

- /// Get a FormattableYear from an integer Chinese year; optionall, a `ChineseBasedYearInfo`
+ /// Get a FormattableYear from an integer Chinese year; optionally, a `ChineseBasedYearInfo`
/// can be passed in for faster results.
///
/// `era` is always `Era(tinystr!(16, "chinese"))`
/// `number` is the year since the inception of the Chinese calendar (see [`Chinese`])
- /// `cyclic` is an option with the current year in the sexigesimal cycle (see [`Chinese`])
+ /// `cyclic` is an option with the current year in the sexagesimal cycle (see [`Chinese`])
/// `related_iso` is the ISO year in which the given Chinese year begins (see [`Chinese`])
fn format_chinese_year(
year: i32,
4 changes: 2 additions & 2 deletions components/calendar/src/islamic.rs
@@ -589,13 +589,13 @@ impl IslamicUmmAlQura {
let age = date.to_f64_date() - last_new_moon;
// Explanation of why the value 3.0 is chosen: https://github.com/unicode-org/icu4x/pull/3673/files#r1267460916
let tau = if age <= 3.0 && !Self::adjusted_saudi_criterion(date) {
- // Checks if the criterion is not yet visibile on the evening of date
+ // Checks if the criterion is not yet visible on the evening of date
last_new_moon - 30.0 // Goes back a month
} else {
last_new_moon
};

- next(RataDie::new(tau as i64), Self::adjusted_saudi_criterion) // Loop that increments the day and checks if the criterion is now visibile
+ next(RataDie::new(tau as i64), Self::adjusted_saudi_criterion) // Loop that increments the day and checks if the criterion is now visible
}

// Lisp code reference: https://github.com/EdReingold/calendar-code2/blob/main/calendar.l#L6996
2 changes: 1 addition & 1 deletion components/calendar/src/iso.rs
@@ -411,7 +411,7 @@ impl Iso {

// Lisp code reference: https://github.com/EdReingold/calendar-code2/blob/1ee51ecfaae6f856b0d7de3e36e9042100b4f424/calendar.l#L1191-L1217
fn iso_year_from_fixed(date: RataDie) -> i64 {
- // Shouldn't overflow because it's not possbile to construct extreme values of RataDie
+ // Shouldn't overflow because it's not possible to construct extreme values of RataDie
let date = date - EPOCH;

// 400 year cycles have 146097 days
2 changes: 1 addition & 1 deletion components/calendar/src/japanese.rs
@@ -206,7 +206,7 @@ impl JapaneseExtended {
}))
}

- pub(crate) const DEBUG_NAME: &str = "Japanese (histocial era data)";
+ pub(crate) const DEBUG_NAME: &str = "Japanese (historical era data)";
}

impl Calendar for Japanese {
2 changes: 1 addition & 1 deletion components/calendar/src/persian.rs
@@ -546,7 +546,7 @@ mod tests {
},
];

- // Persian New Year occuring in March of Gregorian year (g_year) to fixed date
+ // Persian New Year occurring in March of Gregorian year (g_year) to fixed date
fn nowruz(g_year: i32) -> RataDie {
let iso_from_fixed: Date<Iso> =
Iso::iso_from_fixed(RataDie::new(FIXED_PERSIAN_EPOCH.to_i64_date()));
4 changes: 2 additions & 2 deletions components/calendar/src/roc.rs
@@ -460,7 +460,7 @@ mod test {
assert_eq!(
i.cmp(&j),
iso_i.cmp(&iso_j),
"ISO directionality inconcistent with directionality for i: {i}, j: {j}"
"ISO directionality inconsistent with directionality for i: {i}, j: {j}"
);
assert_eq!(
i.cmp(&j),
@@ -485,7 +485,7 @@ mod test {
assert_eq!(
i.cmp(&j),
iso_i.cmp(&iso_j),
"ISO directionality inconcistent with directionality for i: {i}, j: {j}"
"ISO directionality inconsistent with directionality for i: {i}, j: {j}"
);
assert_eq!(
i.cmp(&j),
2 changes: 1 addition & 1 deletion components/calendar/src/types.rs
@@ -368,7 +368,7 @@ dt_unit!(
61,
"An ISO-8601 second component, for use with ISO calendars.
- Must be within inclusive bounds `[0, 61]`. `60` accomodates for leap seconds.
+ Must be within inclusive bounds `[0, 61]`. `60` accommodates for leap seconds.
The value could also be equal to 60 or 61, to indicate the end of a leap second,
with the writing `23:59:61.000000000Z` or `23:59:60.000000000Z`. These examples,
@@ -7,7 +7,7 @@
//! It depends on [`CodePointInversionList`] to efficiently represent Unicode code points, while
//! it also maintains a list of strings in the set.
//!
- //! It is an implementation of the the existing [ICU4C UnicodeSet API](https://unicode-org.github.io/icu-docs/apidoc/released/icu4c/classicu_1_1UnicodeSet.html).
+ //! It is an implementation of the existing [ICU4C UnicodeSet API](https://unicode-org.github.io/icu-docs/apidoc/released/icu4c/classicu_1_1UnicodeSet.html).

use crate::codepointinvlist::{
CodePointInversionList, CodePointInversionListBuilder, CodePointInversionListError,
2 changes: 1 addition & 1 deletion components/datetime/src/skeleton/helpers.rs
@@ -87,7 +87,7 @@ pub enum BestSkeleton<T> {
NoMatch,
}

- /// This function swaps out the the time zone name field for the appropriate one. Skeleton matching
+ /// This function swaps out the time zone name field for the appropriate one. Skeleton matching
/// only needs to find a single "v" field, and then the time zone name can expand from there.
fn naively_apply_time_zone_name(
pattern: &mut runtime::Pattern,
2 changes: 1 addition & 1 deletion components/datetime/src/skeleton/reference.rs
@@ -16,7 +16,7 @@ use smallvec::SmallVec;
///
/// A [`Skeleton`] is a [`Vec`]`<Field>`, but with the invariant that it is sorted according to the canonical
/// sort order. This order is sorted according to the most significant `Field` to the least significant.
- /// For example, a field with a `Minute` symbol would preceed a field with a `Second` symbol.
+ /// For example, a field with a `Minute` symbol would precede a field with a `Second` symbol.
/// This order is documented as the order of fields as presented in the
/// [UTS 35 Date Field Symbol Table](https://unicode.org/reports/tr35/tr35-dates.html#Date_Field_Symbol_Table)
///
2 changes: 1 addition & 1 deletion components/locid/src/langid.rs
@@ -254,7 +254,7 @@ impl LanguageIdentifier {
/// Compare this `LanguageIdentifier` with a potentially unnormalized BCP-47 string.
///
/// The return value is equivalent to what would happen if you first parsed the
- /// BCP-47 string to a `LanguageIdentifier` and then performed a structucal comparison.
+ /// BCP-47 string to a `LanguageIdentifier` and then performed a structural comparison.
///
/// # Examples
///
2 changes: 1 addition & 1 deletion components/locid/src/locale.rs
@@ -252,7 +252,7 @@ impl Locale {
/// Compare this `Locale` with a potentially unnormalized BCP-47 string.
///
/// The return value is equivalent to what would happen if you first parsed the
- /// BCP-47 string to a `Locale` and then performed a structucal comparison.
+ /// BCP-47 string to a `Locale` and then performed a structural comparison.
///
/// # Examples
///
2 changes: 1 addition & 1 deletion components/locid/src/subtags/language.rs
@@ -5,7 +5,7 @@
impl_tinystr_subtag!(
/// A language subtag (examples: `"en"`, `"csb"`, `"zh"`, `"und"`, etc.)
///
- /// [`Language`] represents a Unicode base language code conformat to the
+ /// [`Language`] represents a Unicode base language code conformant to the
/// [`unicode_language_id`] field of the Language and Locale Identifier.
///
/// # Examples
2 changes: 1 addition & 1 deletion components/locid/src/subtags/region.rs
@@ -5,7 +5,7 @@
impl_tinystr_subtag!(
/// A region subtag (examples: `"US"`, `"CN"`, `"AR"` etc.)
///
- /// [`Region`] represents a Unicode base language code conformat to the
+ /// [`Region`] represents a Unicode base language code conformant to the
/// [`unicode_region_id`] field of the Language and Locale Identifier.
///
/// # Examples
2 changes: 1 addition & 1 deletion components/locid/src/subtags/script.rs
@@ -5,7 +5,7 @@
impl_tinystr_subtag!(
/// A script subtag (examples: `"Latn"`, `"Arab"`, etc.)
///
- /// [`Script`] represents a Unicode base language code conformat to the
+ /// [`Script`] represents a Unicode base language code conformant to the
/// [`unicode_script_id`] field of the Language and Locale Identifier.
///
/// # Examples
2 changes: 1 addition & 1 deletion components/locid/src/subtags/variant.rs
@@ -5,7 +5,7 @@
impl_tinystr_subtag!(
/// A variant subtag (examples: `"macos"`, `"posix"`, `"1996"` etc.)
///
- /// [`Variant`] represents a Unicode base language code conformat to the
+ /// [`Variant`] represents a Unicode base language code conformant to the
/// [`unicode_variant_id`] field of the Language and Locale Identifier.
///
/// # Examples
6 changes: 3 additions & 3 deletions components/normalizer/src/lib.rs
@@ -1429,7 +1429,7 @@ macro_rules! decomposing_normalize_to {
} else {
return Ok(());
};
- // Allowing indexed slicing, because the a failure would be a code bug and
+ // Allowing indexed slicing, because a failure would be a code bug and
// not a data issue.
#[allow(clippy::indexing_slicing)]
if $undecomposed_starter.starter_and_decomposes_to_self() {
@@ -1802,7 +1802,7 @@ impl DecomposingNormalizer {
/// that no character in Unicode exhibits in NFD, NFKD, NFC, or NFKC: Case folding turns
/// U+0345 from a reordered character into a non-reordered character before reordering happens.
/// Therefore, the output of this normalization may differ for different inputs that are
- /// canonically equivant with each other if they differ by how U+0345 is ordered relative
+ /// canonically equivalent with each other if they differ by how U+0345 is ordered relative
/// to other reorderable characters.
///
/// Public for testing only.
@@ -2271,7 +2271,7 @@ impl ComposingNormalizer {
/// that no character in Unicode exhibits in NFD, NFKD, NFC, or NFKC: Case folding turns
/// U+0345 from a reordered character into a non-reordered character before reordering happens.
/// Therefore, the output of this normalization may differ for different inputs that are
- /// canonically equivant with each other if they differ by how U+0345 is ordered relative
+ /// canonically equivalents with each other if they differ by how U+0345 is ordered relative
/// to other reorderable characters.
///
/// NOTE: This method remains experimental until suitability of this feature as part of
2 changes: 1 addition & 1 deletion components/plurals/src/rules/reference/resolver.rs
@@ -17,7 +17,7 @@ use crate::rules::reference::ast;
///
/// let operands = PluralOperands::from(5_usize);
/// let condition =
- /// parse_condition(b"i = 4..6").expect("Failde to parse a rule.");
+ /// parse_condition(b"i = 4..6").expect("Failed to parse a rule.");
///
/// assert!(test_condition(&condition, &operands));
/// ```
2 changes: 1 addition & 1 deletion docs/design/data_pipeline.md
@@ -18,7 +18,7 @@ The following terms are used throughout this document.
- **Key:** An identifier corresponding to a specific hunk.
- **Request Variables:** Metadata that is sent along with a key when requesting data from a data provider.
- **Response Variables:** Metadata that is sent along with a hunk when a data provider responds to a request.
- - **Schema Version:** A version of the schema, tied to a hunk and abstracted away from the format version and data version. For example, data may be reorganied within the JSON file between schema versions.
+ - **Schema Version:** A version of the schema, tied to a hunk and abstracted away from the format version and data version. For example, data may be reorganized within the JSON file between schema versions.
- **Schema:** The structure of locale data, abstracted away from the hunk types. Data is stored in a particular format according to the schema.
- **Type:** The structure of a hunk. The type may be, for example, a number, string, list of numbers or strings, or another data type discussed in detail later in the document.

2 changes: 1 addition & 1 deletion docs/design/data_safety.md
@@ -10,7 +10,7 @@ Given the following goals…
1. The ICU4X core library should not panic internally.
1. Both baked and dynamically-loaded data are core features, and we seek to minimize the tradeoffs between them.

- …and the the following evidence…
+ …and the following evidence…

1. It is rare to be 100% confident about the safety of your data.[^1]
1. Data loading and validation is known to be a performance bottleneck in prior-art libraries such as ICU4C.
2 changes: 1 addition & 1 deletion docs/design/properties_code_point_trie.md
@@ -141,7 +141,7 @@ Building a `CodePointTrie` is expensive because several optimizations are applie
In ICU, when trie data is built for Unicode properties, it is done in a compile-time step and stored statically, which therefore does not affect runtime performance.
One example of an optimization is called compaction, in which subarrays which have identical contents can be collapsed and treated as being identical without affecting the trie lookup algorithm's result.
However, the algorithm to detect redundant blocks is inherently a pairwise comparison, and thus O(n^2).
- The code to handle this and other optimziations is non-trivially complex.
+ The code to handle this and other optimizations is non-trivially complex.

Therefore, ICU4X implements the reader code for a trie, but it does not attempt to similarly port the ICU code for building a trie.
However, as a convenience, some code may exist in ICU4X which uses a wrapper over the WASM binary to which the ICU4C trie builder code is compiled.
2 changes: 1 addition & 1 deletion docs/process/charter.md
@@ -60,7 +60,7 @@ A viable subset of ICU4X will be targeting the [no_std] support and in the futur

The ICU4X sub-committee (ICU4X-SC), a sub-committee of the ICU technical committee (ICU-TC), will be composed of members of the Unicode Consortium. The ICU4X-SC will make architectural decisions consistent with this charter, and guide the development and maintenance of the project.

- ICU4X will have an independent code base from ICU, and will operate independently of the ICU-TC. It will need no support from the the core staff of the Unicode Consortium except an occasional announcement.
+ ICU4X will have an independent code base from ICU, and will operate independently of the ICU-TC. It will need no support from the core staff of the Unicode Consortium except an occasional announcement.

### Is ICU4X going to replace ICU?

2 changes: 1 addition & 1 deletion utils/calendrical_calculations/src/astronomy.rs
@@ -1865,7 +1865,7 @@ impl Astronomical {
}
}

- /// Aberration at the the time given in Julian centuries.
+ /// Aberration at the time given in Julian centuries.
/// See: https://sceweb.sce.uhcl.edu/helm/WEB-Positional%20Astronomy/Tutorial/Aberration/Aberration.html
///
/// Based on functions from _Calendrical Calculations_ by Reingold & Dershowitz.
2 changes: 1 addition & 1 deletion utils/litemap/src/map.rs
@@ -663,7 +663,7 @@
}
}

- /// Attemps to insert a unique entry into the map.
+ /// Attempts to insert a unique entry into the map.
///
/// If `key` is not already in the map, invokes the closure to compute `value`, inserts
/// the pair into the map, and returns a reference to the value. The closure is passed
2 changes: 1 addition & 1 deletion utils/zerovec/src/varzerovec/components.rs
@@ -155,7 +155,7 @@ impl<'a, T: VarULE + ?Sized, F: VarZeroVecFormat> VarZeroVecComponents<'a, T, F>
/// Construct a new VarZeroVecComponents, checking invariants about the overall buffer size:
///
/// - There must be either zero or at least four bytes (if four, this is the "length" parsed as a usize)
- /// - There must be at least `4*length + 4` bytes total, to form the the array `indices` of indices
+ /// - There must be at least `4*length + 4` bytes total, to form the array `indices` of indices
/// - `indices[i]..indices[i+1]` must index into a valid section of
/// `things`, such that it parses to a `T::VarULE`
/// - `indices[len - 1]..things.len()` must index into a valid section of