Search results for: Kazakh speech dataset
1276 Constructing a Semi-Supervised Model for Network Intrusion Detection
Authors: Tigabu Dagne Akal
Abstract:
While advances in computer and communications technology have made networks ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks, or intrusions, start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases process model, which starts with the selection of datasets. The dataset used in this study was obtained from the Massachusetts Institute of Technology Lincoln Laboratory. After collection, the data were pre-processed. The major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating labelled and unlabelled datasets, dimensionality reduction, size reduction, and data transformation such as discretization. A total of 21,533 intrusion records were used for training the models, and a separate set of 3,397 records was used as a testing set to validate the performance of the selected model. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and its default parameter values showed the best classification accuracy, with a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset for classifying new instances into the normal, DOS, U2R, R2L and probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested towards an applicable system in this area.
Keywords: intrusion detection, data mining, computer science
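As a rough illustration of the modelling step described above, the sketch below trains a decision tree and a Naïve Bayes classifier with 10-fold cross-validation, with and without feature selection. It is only a sketch under stated assumptions: scikit-learn's CART-style DecisionTreeClassifier and GaussianNB stand in for Weka's J48 and Naïve Bayes, and a random matrix stands in for the pre-processed Lincoln Laboratory records, which are not included here.

```python
# Hedged sketch: scikit-learn stand-ins for J48 (C4.5) and Naive Bayes on placeholder data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(21533, 41))             # placeholder for the pre-processed features
y = rng.integers(0, 5, size=21533)           # classes: normal, DOS, U2R, R2L, probe

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "tree (all features)": DecisionTreeClassifier(random_state=0),
    "naive bayes (all features)": GaussianNB(),
    "tree (feature selection)": make_pipeline(SelectKBest(f_classif, k=20),
                                              DecisionTreeClassifier(random_state=0)),
    "naive bayes (feature selection)": make_pipeline(SelectKBest(f_classif, k=20),
                                                     GaussianNB()),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.3f}")
```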
Procedia PDF Downloads 296
1275 Saudi Twitter Corpus for Sentiment Analysis
Authors: Adel Assiri, Ahmed Emam, Hmood Al-Dossari
Abstract:
Sentiment analysis (SA) has received growing attention in Arabic language research. However, few studies have applied SA directly to Arabic, due to the lack of publicly available datasets for this language. This paper partially bridges this gap by focusing on one of the Arabic dialects, the Saudi dialect. It presents an annotated dataset of 4,700 entries for Saudi dialect sentiment analysis (K = 0.807). Our next step is to extend this corpus and to create a large-scale lexicon for the Saudi dialect from it.
Keywords: Arabic, sentiment analysis, Twitter, annotation
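Assuming that K here denotes Cohen's kappa for inter-annotator agreement, which is the usual reading for an annotated corpus, it can be computed as in the short sketch below; the two annotators' labels are invented for illustration.

```python
# Hedged sketch: Cohen's kappa between two hypothetical annotators' sentiment labels.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["pos", "neg", "neu", "pos", "neg", "pos", "neu", "neg"]
annotator_b = ["pos", "neg", "neu", "pos", "pos", "pos", "neu", "neg"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa = {kappa:.3f}")
```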
Procedia PDF Downloads 630
1274 Engendered Noises: The Gender Politics of Sensorial Pleasure in Neoliberal Korean Food Commercials
Authors: Eunyup Yeom
Abstract:
The roles of male and female in the context of cuisine have developed into stereotypes throughout history. However, with Korea's fast advancement in politics, technology, society and social standards, gender stereotypes have become blurred. This is not to say that such stereotypes no longer exist, for they remain present in media and advertisements, embedding 'idealistic' ideas into the unconscious minds of viewers. Many media outlets, especially commercials, portray males expressing pleasure in the food they are advertising through audible qualities generally considered 'rude' and 'unmannered' in Korean society, whereas females express such pleasures only verbally. This stereotype is displayed bluntly in instant noodle (ramen) commercials. This research explores the cultural significance of a type of audible gesture found in Korean speech, termed the Fricative Voice Gesture (FVG). There are two forms of FVGs: the reactive and the prosodic. The reactive FVG is a legitimate form of expression, while the prosodic FVG works as a speech intensifier. So, in order to understand this stereotype of who is authorized to express sensorial pleasure as a reactive FVG as opposed to a prosodic FVG, information was extracted from interviews, and numerous ramen/instant noodle commercials and their appearances in other media were dissected. The commercials were closely analyzed in all aspects: dialogue, featured content, background music, the actors and actresses selling the product, body language, and voice gestures. To understand the exact impact these commercials have on the audience, each commercial was viewed with an interviewee. The main informants in this research were Korean students residing in South Korea, and all three interviewees were able to attend interview and commercial viewing sessions via Skype. Overall, this research focuses on and supports Harkness's statement that the reactive FVG is a recognizable index of the privileging of males in Korean cultural norms and, in parallel, that food commercials still conform to male ideals and fantasies.
Keywords: advertisement, food politics, fricative voice gestures, gender politics
Procedia PDF Downloads 226
1273 BERT-Based Chinese Coreference Resolution
Authors: Li Xiaoge, Wang Chaodong
Abstract:
We introduce the first Chinese Coreference Resolution Model based on BERT (CCRM-BERT) and show that it significantly outperforms all previous work. The key idea is to incorporate features of the mention, such as part of speech, span width, and the distance between spans, and the influence of each feature on the model is analyzed. The model computes mention embeddings that combine BERT with these features. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the Chinese OntoNotes benchmark.
Keywords: BERT, coreference resolution, deep learning, natural language processing
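The sketch below illustrates, in very reduced form, how BERT span representations can be combined with hand-crafted span-width and span-distance features before scoring a mention pair. It is not the authors' model: the bert-base-chinese checkpoint, the bucket sizes, hidden dimensions, example sentence, and mention indices are all assumptions made for illustration, and the scorer is untrained.

```python
# Hedged sketch (not the authors' code): BERT span embeddings + width/distance features.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

width_emb = nn.Embedding(10, 20)     # bucketed span width
dist_emb = nn.Embedding(10, 20)      # bucketed distance between spans
scorer = nn.Sequential(nn.Linear(2 * (768 + 20) + 20, 150), nn.ReLU(), nn.Linear(150, 1))

def span_repr(hidden, start, end):
    # Mean-pool BERT token states over the span and attach a width feature.
    width = min(end - start, 9)
    pooled = hidden[0, start:end + 1].mean(dim=0)
    return torch.cat([pooled, width_emb(torch.tensor(width))])

text = "小明去了北京，他很开心。"      # "Xiao Ming went to Beijing; he was happy."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**inputs).last_hidden_state

m1 = span_repr(hidden, 1, 2)          # hypothetical mention "小明"
m2 = span_repr(hidden, 8, 8)          # hypothetical mention "他"
dist = dist_emb(torch.tensor(6))      # bucketed gap between the two spans
score = scorer(torch.cat([m1, m2, dist]))
print("coreference score:", score.item())
```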
Procedia PDF Downloads 216
1272 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome
Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler
Abstract:
Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact that the presence or value of a clinical condition has on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman's rho = 0.74), which helped validate our explanation.
Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model
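The abstract does not spell out how the impact score is computed, so the sketch below shows one plausible reading: the average change in a classifier's predicted logit when a binary clinical condition is switched off, compared against logistic-regression log odds ratios with Spearman's rho. The synthetic data, the MLP architecture, and the toggle-to-zero convention are all assumptions made for illustration, not the authors' implementation.

```python
# Hedged sketch: one possible "impact score" vs. logistic-regression log odds ratios.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(5000, 12)).astype(float)   # binary condition indicators
true_w = rng.normal(size=12)
y = (rng.random(5000) < 1 / (1 + np.exp(-(X @ true_w - 2)))).astype(int)

dnn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1).fit(X, y)
lr = LogisticRegression(max_iter=1000).fit(X, y)

def impact_scores(model, X):
    def logit(p):
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return np.log(p / (1 - p))
    base = logit(model.predict_proba(X)[:, 1])
    scores = []
    for j in range(X.shape[1]):
        X_off = X.copy()
        X_off[:, j] = 0.0                      # remove the condition
        off = logit(model.predict_proba(X_off)[:, 1])
        scores.append(np.mean(base - off))    # average shift in predicted logit
    return np.array(scores)

rho, _ = spearmanr(impact_scores(dnn, X), lr.coef_[0])
print(f"Spearman's rho between impact scores and LR log odds: {rho:.2f}")
```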
Procedia PDF Downloads 153
1271 MEIOSIS: Museum Specimens Shed Light in Biodiversity Shrinkage
Authors: Zografou Konstantina, Anagnostellis Konstantinos, Brokaki Marina, Kaltsouni Eleftheria, Dimaki Maria, Kati Vassiliki
Abstract:
Body size is crucial to ecology, influencing everything from individual reproductive success to the dynamics of communities and ecosystems. Understanding how temperature affects variation in body size is vital for both theoretical and practical purposes, as changes in size can modify trophic interactions by altering predator-prey size ratios and changing the distribution and transfer of biomass, which ultimately impacts food web stability and ecosystem functioning. Notably, a decrease in body size is frequently mentioned as the third "universal" response to climate warming, alongside shifts in distribution and changes in phenology. This trend is backed by ecological theories like the temperature-size rule (TSR) and Bergmann's rule, which have been observed in numerous species, indicating that many species are likely to shrink in size as temperatures rise. However, the thermal responses related to body size are still contradictory, and further exploration is needed. To tackle this challenge, we developed the MEIOSIS project, aimed at providing valuable insights into the relationship between the body size of species, species' traits, environmental factors, and their response to climate change. We combined a digitized butterfly collection from the Swiss Federal Institute of Technology in Zürich with our newly digitized butterfly collection from the Goulandris Natural History Museum in Greece to analyse trends over time. For a total of 23,868 images, the length of the right forewing was measured using ImageJ software. Each forewing was measured from the point at which the wing meets the thorax to the apex of the wing. The forewing length of museum specimens has been shown to correlate strongly with wing surface area and has been used in prior studies as a proxy for overall body size. Temperature data corresponding to the years of collection were also incorporated into the datasets. A second dataset was generated when a custom computer vision tool was implemented for automated morphological measurement of samples in the digitized Zürich collection. Using this second dataset, we corrected the manual ImageJ measurements, and a final dataset containing 31,922 samples was used for analysis. Setting time as a smooth term, species identity as a random factor, and the length of the right wing (a proxy for body size) as the response variable, we ran a global model covering a maximum period of 110 years (1900-2010). We then investigated functional variability between different terrestrial biomes in a second model. Both models confirmed our initial hypothesis, showing a decreasing trend in body size over the years. We expect that this first output can serve as baseline data for the next challenge, i.e., to identify the ecological traits that influence species' temperature-size responses, enabling us to predict the direction and intensity of a species' reaction to rising temperatures more accurately.
Keywords: butterflies, shrinking body size, museum specimens, climate change
Procedia PDF Downloads 10
1270 Standard Essential Patents for Artificial Intelligence Hardware and the Implications For Intellectual Property Rights
Authors: Wendy de Gomez
Abstract:
Standardization is a critical element in the ability of a society to reduce uncertainty, subjectivity, misrepresentation, and interpretation while simultaneously contributing to innovation. Technological standardization codifies specific operationalization through legal instruments that provide rules of development, expectation, and use. In the current emerging technology landscape, Artificial Intelligence (AI) hardware, as a general-purpose technology, has seen incredible growth, as evidenced by AI technology patents filed between 2012 and 2018 in the United States Patent and Trademark Office (USPTO) AI dataset. However, as outlined in the 2023 United States Government National Standards Strategy for Critical and Emerging Technology, the codification of emerging technologies such as AI through standardization has not kept pace with their actual proliferation. This gap has the potential to cause significantly divergent downstream outcomes for AI in both the short and long term. This original empirical research provides an overview of AI standardization efforts in different geographies and gives a background to standardization law. It quantifies the longitudinal trend of AI hardware patents in the USPTO AI dataset. It seeks evidence of existing Standard Essential Patents (SEPs) among these AI hardware patents through a text analysis of the statement of patent history and the field of the invention in Patent Vector, and it examines their determination as Standard Essential Patents and their inclusion in existing AI technology standards across the four main AI standards bodies: the European Telecommunications Standards Institute (ETSI); the International Telecommunication Union Telecommunication Standardization Sector (ITU-T); the Institute of Electrical and Electronics Engineers (IEEE); and the International Organization for Standardization (ISO). Once the analysis is complete, the paper discusses both the theoretical and operational implications of FRAND licensing agreements for the owners of these Standard Essential Patents in the United States court and administrative systems. It concludes with an evaluation of how Standard Setting Organizations (SSOs) can work with SEP owners more effectively through various intellectual property mechanisms such as patent pools.
Keywords: patents, artificial intelligence, standards, FRAND agreements
Procedia PDF Downloads 88
1269 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving
Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian
Abstract:
In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has certain drawbacks: the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that different and complex road scenarios can be modeled in a simulation without having to deploy in the real world. The autonomous agent learns to drive by finding the optimal policy, and the learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides a strong foundation for testing our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: the reward is greater, and the acceleration, steering angle, and braking are more stable than with the other algorithms, which means that the agent learns to drive in a better and more efficient way. Additionally, we have compiled a dataset from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a base for further, more complex road scenarios. Furthermore, it can be extended to the field of computer vision, using images to find the best policy.
Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning
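A minimal sketch of this kind of comparison is shown below, assuming the Stable-Baselines3 implementations of PPO and DDPG and Gymnasium's Pendulum-v1 environment as a continuous-control stand-in for TORCS; per-step observations, actions and rewards are logged to CSV in the spirit of the dataset described above. Environment name, timestep budget and file names are illustrative choices, not the study's setup.

```python
# Hedged sketch: compare PPO and DDPG on a continuous-control stand-in for TORCS.
import csv
import gymnasium as gym
from stable_baselines3 import PPO, DDPG

env = gym.make("Pendulum-v1")
results = {}
for name, algo in [("PPO", PPO), ("DDPG", DDPG)]:
    model = algo("MlpPolicy", env, verbose=0, seed=0)
    model.learn(total_timesteps=20_000)

    # Roll out the trained agent and log (observation, action, reward) per step.
    obs, _ = env.reset(seed=0)
    total, rows = 0.0, []
    for _ in range(200):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        rows.append(list(obs) + list(action) + [float(reward)])
        total += float(reward)
        if terminated or truncated:
            break
    results[name] = total
    with open(f"{name.lower()}_rollout.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)

print(results)   # a higher episode reward suggests a better-learned policy
```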
Procedia PDF Downloads 148
1268 Business Intelligence Dashboard Solutions for Improving Decision Making Process: A Focus on Prostate Cancer
Authors: Mona Isazad Mashinchi, Davood Roshan Sangachin, Francis J. Sullivan, Dietrich Rebholz-Schuhmann
Abstract:
Background: Decision-making processes are nowadays driven by data, data analytics, and Business Intelligence (BI). BI as a software platform can provide a wide variety of capabilities such as organizational memory, information integration, insight creation, and presentation capabilities. Visualizing data through dashboards is one of the BI solutions (for a variety of areas) which helps managers in the decision-making process by exposing the most informative information at a glance. In the healthcare domain to date, dashboard presentations are more frequently used to track performance-related metrics and less frequently used to monitor the quality parameters which relate directly to patient outcomes. Providing effective and timely care for patients and improving health outcomes are highly dependent on presenting and visualizing data and information. Objective: In this research, the focus is on the presentation capabilities of BI to design a dashboard for prostate cancer (PC) data that allows better decision-making for the patients, the hospital, and the healthcare system in relation to a cancer dataset. The aim is to present a retrospective PC dataset in a dashboard interface to give a better understanding of the data in the categories (risk factors, treatment approaches, disease control, and side effects) which matter most to patients as well as other stakeholders. By presenting the outcomes in the dashboard, we address one of the major targets of a value-based health care (VBHC) delivery model, which is measuring value and presenting outcomes to different actors in the healthcare industry (such as patients and doctors) for better decision-making. Method: For visualizing the stored data to users, three interactive dashboards based on the PC dataset were developed (using Tableau software) to provide better views of the risk factors, treatment approaches, and side effects. Results: Many benefits derived from the interactive graphs and tables in the dashboards, which helped users easily visualize and identify patients at risk, better understand the relationship between a patient's status after treatment and their initial status before treatment, and make better decisions about treatments with fewer side effects given the patient's status. Conclusions: Building a well-designed and informative dashboard depends on three important factors: the users, the goals, and the data types. Dashboard hierarchies, drilling, and graphical features can guide doctors to navigate through information more effectively. The features of the interactive PC dashboard not only let doctors ask specific questions and filter the results based on key performance indicators (KPIs) such as Gleason grade and the patient's age and status, but may also help patients to better understand different treatment outcomes, such as side effects over time, and to take an active role in their treatment decisions. Currently, we are extending the results to a real-time interactive dashboard in which users (whether patients or doctors) can easily explore the data by choosing preferred attributes to make better near-real-time decisions.
Keywords: business intelligence, dashboard, decision making, healthcare, prostate cancer, value-based healthcare
Procedia PDF Downloads 141
1267 Wearable Antenna for Diagnosis of Parkinson’s Disease Using a Deep Learning Pipeline on Accelerated Hardware
Authors: Subham Ghosh, Banani Basu, Marami Das
Abstract:
Background: The development of compact, low-power antenna sensors has resulted in hardware restructuring, allowing for wireless ubiquitous sensing. Antenna sensors can create wireless body-area networks (WBAN) by linking various wireless nodes across the human body. WBAN and IoT applications, such as remote health and fitness monitoring and rehabilitation, are becoming increasingly important. In particular, Parkinson's disease (PD), a common neurodegenerative disorder, presents clinical features that can easily be misdiagnosed. As a disease affecting mobility, it may greatly benefit from the antenna's near-field approach, with a variety of activities that can use WBAN and IoT technologies to increase diagnostic accuracy and improve patient monitoring. Methodology: This study investigates the feasibility of leveraging a single patch antenna, mounted with cloth on the dorsal side of the wrist, to differentiate actual Parkinson's disease (PD) from false PD using a small hardware platform. The semi-flexible antenna operates in the 2.4 GHz ISM band and collects reflection coefficient (Γ) data from patients performing five exercises designed to distinguish PD from other disorders such as essential tremor (ET) or physiological disorders caused by anxiety or stress. The obtained data are normalized and converted into 2-D representations using the Gabor wavelet transform (GWT). Data augmentation is then used to expand the dataset size. A lightweight deep-learning (DL) model is developed to run on the GPU-enabled NVIDIA Jetson Nano platform; it processes the 2-D images for feature extraction and classification. Findings: The DL model was trained and tested on both the original and augmented datasets, the latter doubling the dataset size. To ensure robustness, 5-fold stratified cross-validation (5-FSCV) was used. The proposed framework, utilizing a DL model with 1.356 million parameters on the NVIDIA Jetson Nano, achieved an accuracy of 88.64%, an F1-score of 88.54%, and a recall of 90.46%, with a latency of 33 seconds per epoch.
Keywords: antenna, deep learning, GPU hardware, Parkinson's disease
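The sketch below walks through the same pipeline shape on synthetic data: a 1-D reflection-coefficient trace is turned into a 2-D time-frequency image and classified by a small CNN with 5-fold stratified cross-validation. It is only an approximation under stated assumptions: PyWavelets' complex Morlet CWT ("cmor1.5-1.0") stands in for the Gabor wavelet transform, the traces and labels are random placeholders, and the CNN is far smaller than the 1.356-million-parameter model described.

```python
# Hedged sketch: CWT scalogram (stand-in for GWT) + small CNN + 5-fold stratified CV.
import numpy as np
import pywt
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold

def to_scalogram(signal, n_scales=32):
    # 2-D time-frequency image from a 1-D trace.
    coef, _ = pywt.cwt(signal, scales=np.arange(1, n_scales + 1), wavelet="cmor1.5-1.0")
    return np.abs(coef).astype(np.float32)

rng = np.random.default_rng(0)
signals = rng.normal(size=(40, 128))          # placeholder Γ traces (40 recordings)
labels = rng.integers(0, 2, size=40)          # 1 = PD, 0 = non-PD (placeholder)
images = np.stack([to_scalogram(s) for s in signals])[:, None]   # (N, 1, 32, 128)

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, 2))
    def forward(self, x):
        return self.net(x)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr, te) in enumerate(skf.split(images, labels)):
    model = SmallCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_tr, y_tr = torch.tensor(images[tr]), torch.tensor(labels[tr])
    for _ in range(5):                        # a few epochs for illustration only
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x_tr), y_tr)
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = model(torch.tensor(images[te])).argmax(dim=1)
    acc = (pred == torch.tensor(labels[te])).float().mean().item()
    print(f"fold {fold}: accuracy = {acc:.2f}")
```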
Procedia PDF Downloads 7
1266 Role of Gender in Apparel Stores' Consumer Review: A Sentiment Analysis
Authors: Sarif Ullah Patwary, Matthew Heinrich, Brandon Payne
Abstract:
The ubiquity of Web 2.0 platforms, in the form of wikis, social media (e.g., Facebook, Twitter) and online review portals (e.g., Yelp), helps shape today's apparel consumers' purchasing decisions. Online reviews play an important role in consumers' apparel purchase decisions, and each consumer review carries a sentiment (positive, negative or neutral) towards the product. Commercially, apparel brands and retailers analyze the sentiment of this massive amount of consumer review data to update their inventory and bring new products to market. The purpose of this study is to analyze consumer reviews of selected apparel stores in order to understand: 1) the difference in sentiment expressed through men's and women's text reviews, 2) the difference in sentiment expressed through men's and women's star-based reviews, and 3) the difference in sentiment between star-based reviews and text-based reviews. A total of 9,363 reviews (1,713 by men and 7,650 by women) were collected using the Yelp Dataset Challenge. Sentiment analysis of the collected reviews was carried out in two dimensions: star-based reviews and text-based reviews. Sentiment towards apparel stores expressed through star-based reviews was deemed: 1) positive for 4 or 5 stars, 2) negative for 1 or 2 stars, and 3) neutral for 3 stars. Sentiment analysis of text-based reviews was carried out using the Bing Liu dictionary, and the analysis was conducted in IPython 5.0. The sentiment analysis results revealed that the percentages of positive text reviews by men (80%) and women (80%) were identical. Women reviewers (12%) provided more neutral (e.g., 3 out of 5 stars) star reviews than men (6%). Star-based reviews were more negative than text-based reviews; in other words, while 80% of men and women wrote positive reviews for the stores, less than 70% ended up giving 4 or 5 stars in those reviews. One of the key takeaways of the study is that star reviews convey a slightly more negative sentiment than the text of the same reviews. Therefore, in order to understand sentiment towards apparel products, one might need to combine both the star and text aspects of consumer reviews. This study used a specific dataset consisting of selected apparel stores from particular geographical locations (withheld for privacy reasons). Future studies need to include more data from more stores and locations to generalize the findings of the study.
Keywords: apparel, consumer review, sentiment analysis, gender
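The sketch below illustrates the two scoring dimensions described above: mapping star ratings to sentiment classes and scoring review text against an opinion lexicon. The handful of words shown stands in for the Bing Liu lexicon (which contains several thousand entries), and the sample review is invented.

```python
# Hedged sketch: star-based vs. lexicon-based sentiment for one hypothetical review.
positive_words = {"great", "love", "comfortable", "stylish", "friendly"}
negative_words = {"bad", "rude", "overpriced", "poor", "slow"}

def star_sentiment(stars):
    if stars >= 4:
        return "positive"
    if stars <= 2:
        return "negative"
    return "neutral"

def text_sentiment(text):
    tokens = text.lower().split()
    score = sum(t in positive_words for t in tokens) - sum(t in negative_words for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

review = {"stars": 3, "text": "Great clothes and friendly staff but a bit overpriced"}
print(star_sentiment(review["stars"]), text_sentiment(review["text"]))
# The text can read positive while the star rating is only neutral,
# which is the kind of divergence the study reports.
```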
Procedia PDF Downloads 164
1265 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria
Authors: O. O. Aiyelokun, O. A. Agbede
Abstract:
Numerical modelling of groundwater flow can be susceptible to calibration errors due to the lack of adequate ground-based hydro-meteorological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization, and climate change; hence, if models are to be adopted as decision support tools for the sustainable management of groundwater, they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data and a lack of adequate ground-based hydro-meteorological stations, adopting satellite-based data for constructing distributed models is crucial. This study seeks to evaluate the suitability of satellite-based data as a substitute for ground-based data in computing boundary conditions, by determining whether ground-based and satellite-based meteorological data fit well in the Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was first obtained in daily form and converted to monthly form for a period of 432 months (January 1979 to June 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with the CFSR data using goodness-of-fit (GOF) statistics. The study revealed that, based on the mean absolute error (MAE), coefficient of correlation (r), and coefficient of determination (R²), all meteorological variables except wind speed fit well. It was further revealed that maximum and minimum temperature, relative humidity, and rainfall had high values of the index of agreement (d) and the ratio of standard deviations (rSD), implying that the CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration. The study concluded that satellite-based data such as the CFSR should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where the majority of river basins are partially gauged and characterized by long gaps in hydro-meteorological records.
Keywords: boundary condition, goodness of fit, groundwater, satellite-based data
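A minimal sketch of the goodness-of-fit comparison is given below, computing MAE, r, R², the index of agreement d, and the ratio of standard deviations rSD for a pair of monthly series. The twelve rainfall values are invented, and the d formula follows the common Willmott definition, which may differ in detail from the one used in the study.

```python
# Hedged sketch: goodness-of-fit statistics for station vs. CFSR monthly series.
import numpy as np

def gof(observed, simulated):
    o, s = np.asarray(observed, float), np.asarray(simulated, float)
    mae = np.mean(np.abs(s - o))
    r = np.corrcoef(o, s)[0, 1]
    r2 = r ** 2
    # Willmott's index of agreement.
    d = 1 - np.sum((s - o) ** 2) / np.sum((np.abs(s - o.mean()) + np.abs(o - o.mean())) ** 2)
    rsd = s.std() / o.std()
    return {"MAE": mae, "r": r, "R2": r2, "d": d, "rSD": rsd}

# Hypothetical monthly rainfall (mm) at a ground station vs. the matching CFSR values.
station = [120, 85, 60, 30, 10, 5, 3, 8, 25, 70, 110, 140]
cfsr    = [115, 90, 55, 35, 12, 6, 4, 10, 22, 65, 105, 150]
print(gof(station, cfsr))
```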
Procedia PDF Downloads 130
1264 A Consideration of Dialectal and Stylistic Shifts in Literary Translation
Authors: Pushpinder Syal
Abstract:
Literary writing carries the stamp of the current language of its time. In translating such texts, it becomes a challenge to capture such reflections, which may be evident at several levels: the dialectal use of language by characters in stories, alterations in syntax as tools of writers' individual stylistic choices, the insertion of quasi-proverbial and gnomic utterances, and even the pragmatics of narrative discourse. Discourse strategies may differ between earlier and later texts, reflecting changing relationships between narrators and readers in changed cultural and social contexts. This paper considers these features through an approach that combines historicity with description, contextualizing language change within a discourse framework. The study undertook the translation of a collection of Punjabi literature spanning 100 years, and the historicity of language was observed to play a role. While intended for contemporary readers, the translation of literature over the span of a century poses the dual challenge of possessing accessibility and immediacy while adhering to the 'old world' styles of communicating and narrating. The linguistic changes may be observed most obviously in differences of diction and word formation, with evidence of more hybridized and borrowed forms in modern and contemporary writings compared with the older writings. The latter not only contain vestiges of proverbs and folk sayings but are also closer to oral speech styles. These will be presented and analysed in the form of a chronological listing, and by these means the social process of translation from orality to written text can be traced in the above-mentioned works. More subtle, underlying shifts can be seen through the analysis of speech acts and implicatures in the same literature, in which the social relationships underlying language use are evident as discourse systems of belief and understanding. They present distinct shifts in worldview as seen at different points in time. However, some continuities of language and style are also clearly visible, and these aid the translator in putting together a set of thematic links which identify the literature of a region and community, and constitute essential outcomes in the effort to preserve its distinctive nature.
Keywords: cultural change, dialect, historicity, stylistic variation
Procedia PDF Downloads 130
1263 [Keynote Speech]: Competitive Evaluation of Power Plants in Energy Policy
Authors: Beril Tuğrul
Abstract:
Electrical energy is the most important form of energy, and electrical power plants have the highest impact factor in energy policy. This study concerns the evaluation of various power plants, including fossil-fuel, nuclear, and renewable-energy-based plants. The power plants are evaluated with regard to the overall impacts considered when establishing them. Both positive and negative impacts of power plant operation are compared in view of different arguments, and the impact factor for each argument is then calculated using variation linear extrapolation. With this study, power plants are assessed from different points of view and clarified objectively.
Procedia PDF Downloads 524
1262 Subject Teachers' Perception of the Changing Role of Language in the Curriculum of Secondary Education
Authors: Moldir Makenova
Abstract:
Alongside the implementation of trilingual education in schools, the Ministry of Education and Science of the Republic of Kazakhstan revised the school curriculum in 2013 to include a Content and Language Integrated Learning (CLIL) approach. In this regard, some transition issues have arisen, such as unprepared teachers, limited awareness of the CLIL approach, and a shortage of teaching resources. Some teachers view CLIL as a challenge because it combines content and language, and this often creates anxiety among teachers who are knowledgeable about their subject areas in Kazakh or Russian but have difficulty delivering the subject content in English. Thus, with this new teaching approach, teachers must decide what role language plays and how language works in the CLIL classroom. This study aimed to explore how teachers experience the changing role of language in the curriculum, what challenges they face in relation to CLIL implementation, and how their language proficiency influences their teaching practices. A qualitative comparative case study was conducted in an X Lyceum and a mainstream school piloting CLIL. Data were collected via semi-structured interviews, classroom observations, and document analysis. Eight content teachers from these two schools were chosen as the target group of this study. Subject teachers, rather than language teachers, were chosen in order to grasp how language-related issues in the new curriculum are interpreted by educators who do not necessarily identify themselves as language experts at the outset. The findings showed that mainstream teachers prioritize content over language because, as content teachers, knowledge of the content is more essential for them than the language. In contrast, most X Lyceum teachers balance language and content and also showed a preference for supporting an 'English language only' policy among grade 10-11 students. Moreover, due to their limited English proficiency, mainstream teachers highlighted the necessity of CLIL training and further collaboration with language teachers. This study will be beneficial for teachers and policy-makers in solving the issues mentioned above related to the implementation of CLIL. Larger-scale research conducted in the future would further inform its successful deployment country-wide.
Keywords: role of language, trilingual education, updated curriculum, teacher practices
Procedia PDF Downloads 71
1261 Deep Learning-Based Liver 3D Slicer for Image-Guided Therapy: Segmentation and Needle Aspiration
Authors: Ahmedou Moulaye Idriss, Tfeil Yahya, Tamas Ungi, Gabor Fichtinger
Abstract:
Image-guided therapy (IGT) plays a crucial role in minimally invasive procedures for liver interventions. Accurate segmentation of the liver and precise needle placement are essential for successful interventions such as needle aspiration. In this study, we propose a deep learning-based liver 3D Slicer designed to enhance segmentation accuracy and facilitate needle aspiration procedures. The developed 3D Slicer leverages state-of-the-art convolutional neural networks (CNNs) for automatic liver segmentation in medical images. The CNN model is trained on a diverse dataset of liver images obtained from various imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI). The trained model demonstrates robust performance in accurately delineating liver boundaries, even in cases with anatomical variations and pathological conditions. Furthermore, the 3D Slicer integrates advanced image registration techniques to ensure accurate alignment of preoperative images with real-time interventional imaging. This alignment enhances the precision of needle placement during aspiration procedures, minimizing the risk of complications and improving overall intervention outcomes. To validate the efficacy of the proposed deep learning-based 3D Slicer, a comprehensive evaluation is conducted using a dataset of clinical cases. Quantitative metrics, including the Dice similarity coefficient and the Hausdorff distance, are employed to assess the accuracy of liver segmentation. Additionally, the performance of the 3D Slicer in guiding needle aspiration procedures is evaluated through simulated and clinical interventions. Preliminary results demonstrate the effectiveness of the developed 3D Slicer in achieving accurate liver segmentation and guiding needle aspiration procedures with high precision. The integration of deep learning techniques into the IGT workflow shows great promise for enhancing the efficiency and safety of liver interventions, ultimately contributing to improved patient outcomes.
Keywords: deep learning, liver segmentation, 3D Slicer, image-guided therapy, needle aspiration
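For reference, the two segmentation metrics named above can be computed as in the sketch below. Small synthetic 2-D masks stand in for the clinical CT/MRI volumes, and scipy's directed Hausdorff distance is symmetrized by taking the maximum of both directions; this is not the authors' evaluation code.

```python
# Hedged sketch: Dice similarity coefficient and Hausdorff distance for two binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    pts_a = np.argwhere(a)             # pixel/voxel coordinates of each mask
    pts_b = np.argwhere(b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

pred = np.zeros((64, 64), dtype=np.uint8)
truth = np.zeros((64, 64), dtype=np.uint8)
pred[20:45, 18:40] = 1                  # predicted liver region (placeholder)
truth[22:46, 20:42] = 1                 # ground-truth liver region (placeholder)

print(f"Dice = {dice(pred, truth):.3f}, Hausdorff = {hausdorff(pred, truth):.2f} px")
```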
Procedia PDF Downloads 48
1260 [Keynote Speech]: Conceptual Design of a Short Take-Off and Landing (STOL) Light Sport Aircraft
Authors: Zamri Omar, Alifi Zainal Abidin
Abstract:
Although flying machines have made tremendous technological advances since the first successful flight of a heavier-than-air aircraft, their benefits to the greater community are still undervalued. One of the reasons for this is the relatively high cost of flying a typical light aircraft. A smaller and lighter plane, widely known as a Light Sport Aircraft (LSA), has the potential to attract more people to participate actively in numerous flying activities, whether recreational, for business trips, or for other personal purposes. In this paper, we propose a new LSA design together with some of the simple yet important analyses required at the aircraft conceptual design stage.
Keywords: light sport aircraft, conceptual design, aircraft layout, aircraft
Procedia PDF Downloads 346
1259 Comparison of Existing Predictor and Development of Computational Method for S-Palmitoylation Site Identification in Arabidopsis Thaliana
Authors: Ayesha Sanjana Kawser Parsha
Abstract:
S-acylation is an irreversible bond in which cysteine residues are linked to the fatty acids palmitate (74%) or stearate (22%), at either the COOH or NH2 terminus, via a thioester linkage. There are several experimental methods that can be used to identify S-palmitoylation sites; however, since they require a lot of time, computational methods are becoming increasingly necessary. There are not many predictors, however, that can locate S-palmitoylation sites in Arabidopsis thaliana with sufficient accuracy, and this research is motivated by the importance of building a better prediction tool. To identify the type of machine learning algorithm that predicts these sites most accurately for the experimental dataset, several prediction tools were examined in this research, including GPS-Palm 6.0, pCysMod, GPS-Lipid 1.0, CSS-Palm 4.0, and NBA-Palm. These analyses were conducted by constructing receiver operating characteristic (ROC) plots and computing the area under the curve (AUC) score. An AI-driven, deep learning-based prediction tool was then developed utilizing this analysis and three types of sequence-based input features: amino acid composition, binary encoding profiles, and autocorrelation features. The model was developed using five layers, two activation functions, and associated parameters and hyperparameters. The model was built with various combinations of features and, after training and validation, performed best when all features were present, using the experimental dataset with 8- and 10-fold cross-validation. When tested on unseen, new data, such as the GPS-Palm 6.0 plant data and the pCysMod mouse data, the model performed well, with an AUC score near 1. By comparing the 10-fold cross-validation AUC score of the new model with the established tools' AUC scores on their respective training sets, it can be demonstrated that this model outperforms the prior tools in predicting S-palmitoylation sites in the experimental dataset. The objective of this study is to develop a prediction tool for Arabidopsis thaliana that is more accurate than current tools, as measured by the AUC score. Both plant food production and immunological treatment targets can be managed by utilizing this method to forecast S-palmitoylation sites.
Keywords: S-palmitoylation, ROC plot, area under the curve, cross-validation score
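Two of the sequence encodings named above, amino acid composition (AAC) and a binary (one-hot) profile, are sketched below for a peptide window centred on a candidate cysteine, together with an AUC computation of the kind used to rank predictors. The 15-mer window, the labels, and the scores are hypothetical; the exact window length and encoding details in the study may differ.

```python
# Hedged sketch: AAC and binary encoding of a cysteine-centred window, plus AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(window):
    counts = np.array([window.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / len(window)

def binary_encoding(window):
    onehot = np.zeros((len(window), len(AMINO_ACIDS)))
    for i, residue in enumerate(window):
        onehot[i, AMINO_ACIDS.index(residue)] = 1.0
    return onehot.ravel()

window = "GLSKACVTRELMNPQ"                      # hypothetical 15-mer around a cysteine
features = np.concatenate([aac(window), binary_encoding(window)])
print(features.shape)                            # 20 + 15*20 = 320 features

# Comparing predictors by AUC, as done for GPS-Palm, CSS-Palm, pCysMod, etc.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]                # hypothetical S-palmitoylation labels
y_score = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3]
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
```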
Procedia PDF Downloads 77
1258 Network Traffic Classification Scheme for Internet Network Based on Application Categorization for IPv6
Authors: Yaser Miaji, Mohammed Aloryani
Abstract:
The rise of applications in everyday use, such as videoconferencing, online gaming, and voice communication, creates a pressing need for novel mechanisms and policies to serve this steep growth in applications and users' needs. This diversity in web traffic calls for classification and prioritization, since some traffic merits more attention, with less delay and loss, than other traffic. This research is intended to reinforce such a mechanism by analysing application performance under the proposed mechanism. The mechanism used is quite direct and analytical, and it is implemented by modifying the queue limit in the algorithm.
Keywords: traffic classification, IPv6, internet, application categorization
Procedia PDF Downloads 565
1257 Evaluation of the Impact of Functional Communication Training on Behaviors of Concern for Students at a Non-Maintained Special School
Authors: Kate Duggan
Abstract:
Introduction: Functional Communication Training (FCT) is an approach which aims to reduce behaviours of concern by teaching more effective ways to communicate. It requires identification of the function of the behaviour of concern, through gathering information from key stakeholders and completing observations of the individual's behaviour, including the antecedents to, and consequences of, the behaviour. Appropriate communicative alternatives are then identified and taught to the individual using systematic instruction techniques. Behaviours of concern demonstrated by individuals with autism spectrum conditions (ASC) frequently have a communication function. When contributing to positive behaviour support plans, speech and language therapists and other professionals working with individuals with ASC need to identify alternative communicative behaviours which are equally reinforcing as the existing behaviours of concern. Successful implementation of FCT is dependent on an effective 'response match': the new way of communicating must be as effective as the behaviour previously used and require the same amount of effort, or less, from the individual. It must also be understood by the communication partners the individual encounters and be appropriate to their communicative contexts. Method: Four case studies within a non-maintained special school environment were described and analysed. A response match framework was used to identify the effectiveness of functional communication training delivered by the student's speech and language therapist, teacher, and learning support assistants. The success of the systematic instruction techniques used to develop new communicative behaviours was evaluated using the CODES framework. Findings: Functional communication training can be used as part of a positive behaviour support approach for students within this setting. All case studies reviewed demonstrated 'response success', in that the desired response was gained from the new communicative behaviour. Barriers to the successful embedding of new communicative behaviours were encountered. In some instances, the new communicative behaviour could not be consistently understood across all communication partners, which reduced 'response recognisability'. There was also evidence of increased physical or cognitive difficulty in employing the new communicative behaviour, which reduced 'response effectivity'. Successful use of thinning schedules of reinforcement taught students to tolerate a delay to reinforcement once the new communicative behaviour was learned.
Keywords: augmentative and alternative communication, autism spectrum conditions, behaviours of concern, functional communication training
Procedia PDF Downloads 117
1256 Developing Primary Care Datasets for a National Asthma Audit
Authors: Rachael Andrews, Viktoria McMillan, Shuaib Nasser, Christopher M. Roberts
Abstract:
Background and objective: The National Review of Asthma Deaths (NRAD) found that asthma management and care were inadequate in 26% of the cases reviewed. The major shortfalls identified were adherence to national guidelines and standards and, particularly, the organisation of care, including supervision and monitoring in primary care, with 70% of cases reviewed having at least one avoidable factor in this area. 5.4 million people in the UK are diagnosed with and actively treated for asthma, and approximately 60,000 are admitted to hospital with acute exacerbations each year. The majority of people with asthma receive management and treatment solely in primary care. This has therefore created concern that many people within the UK are receiving sub-optimal asthma care, resulting in unnecessary morbidity and risk of adverse outcomes. NRAD concluded that a national asthma audit programme should be established to measure and improve the processes, organisation, and outcomes of asthma care. Objective: To develop a primary care dataset enabling extraction of information from GP practices in Wales and providing robust data from which results and lessons could be drawn to drive service development and improvement. Methods: A multidisciplinary group of experts, including general practitioners, primary care organisation representatives, and asthma patients, was formed and used as a source of governance and guidance. A review of asthma literature, guidance, and standards took place and was used to identify areas of asthma care which, if improved, would lead to better patient outcomes. A modified Delphi methodology was used to gain consensus from the expert group on which of the identified areas were to be prioritised, and an asthma patient and carer focus group was held to seek views and feedback on the areas of asthma care that were important to them. The areas of asthma care identified by both groups were mapped to asthma guidelines and standards to inform and develop primary and secondary care datasets covering both adult and paediatric care. Dataset development consisted of expert review and a targeted consultation process in order to seek broad stakeholder views and feedback. Results: The areas of asthma care identified as requiring prioritisation by the National Asthma Audit were: (i) prescribing, (ii) asthma diagnosis, (iii) asthma reviews, (iv) Personalised Asthma Action Plans (PAAPs), and (v) primary care follow-up after discharge from hospital. Methodologies and primary care queries were developed to cover each of the areas of poor and variable asthma care identified, and the queries were designed to extract information directly from electronic patient records. Conclusion: This paper describes the methodological approach followed to develop primary care datasets for a National Asthma Audit. It sets out the principles behind the establishment of a National Asthma Audit programme in response to a national asthma mortality review and describes the development activities undertaken. Key process elements included: (i) mapping identified areas of poor and variable asthma care to national guidelines and standards, (ii) early engagement of experts, including clinicians and patients, in the process, and (iii) targeted consultation on the queries to provide further insight into measures that were collectable, reproducible, and relevant.
Keywords: asthma, primary care, general practice, dataset development
Procedia PDF Downloads 175
1255 DEEPMOTILE: Motility Analysis of Human Spermatozoa Using Deep Learning in Sri Lankan Population
Authors: Chamika Chiran Perera, Dananjaya Perera, Chirath Dasanayake, Banuka Athuraliya
Abstract:
Male infertility is a major problem worldwide, and it is a neglected and sensitive health issue in Sri Lanka. It can be assessed by analyzing human semen samples, and sperm motility is one of many factors used to evaluate a male's fertility potential. In Sri Lanka, this analysis is performed manually. Manual methods are time-consuming and observer-dependent, although they can be reliable in expert hands. Machine learning and deep learning technologies are currently being investigated to automate spermatozoa motility analysis, but existing automatic methods are unreliable: they tend to produce false positives and false detections. Current automatic methods employ different techniques, and some of them are very expensive. Due to geographical variation in spermatozoa characteristics, current automatic methods are also not reliable for motility analysis in Sri Lanka. The proposed system, DeepMotile, explores a method to analyze human spermatozoa motility automatically and to present it to andrology laboratories in order to overcome these issues. DeepMotile is a novel deep learning method for analyzing spermatozoa motility parameters in the Sri Lankan population. To implement the approach, Sri Lankan patient data were collected anonymously as a dataset, and glass slides were used as a low-cost technique for analyzing semen samples. The problem was framed as microscopic object detection and tracking. YOLOv5 was customized and used as the object detector, achieving 94% mAP (mean average precision), 86% precision, and 90% recall on the gathered dataset. StrongSORT was used as the object tracker and was validated with andrology experts due to the unavailability of annotated ground-truth data. Furthermore, this research has identified many potential directions for further investigation, and andrology experts can use this system to analyze motility parameters with realistic accuracy.
Keywords: computer vision, deep learning, convolutional neural networks, multi-target tracking, microscopic object detection and tracking, male infertility detection, motility analysis of human spermatozoa
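Once a detector and tracker of the kind described above produce per-frame centroids for an individual spermatozoon, motility parameters can be derived from the track. The sketch below computes three common CASA-style parameters; the definitions follow standard conventions and may differ from the authors' implementation, and the track, frame rate, and pixel scale are invented.

```python
# Hedged sketch: CASA-style motility parameters from one hypothetical tracked centroid path.
import numpy as np

def motility_parameters(track_xy, fps, um_per_px):
    pts = np.asarray(track_xy, dtype=float) * um_per_px
    duration = (len(pts) - 1) / fps
    step_lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    vcl = step_lengths.sum() / duration                    # curvilinear velocity (µm/s)
    vsl = np.linalg.norm(pts[-1] - pts[0]) / duration      # straight-line velocity (µm/s)
    lin = vsl / vcl if vcl > 0 else 0.0                    # linearity
    return {"VCL": vcl, "VSL": vsl, "LIN": lin}

# Hypothetical track: centroids (px) over 10 frames from the detector + tracker.
track = [(100, 100), (104, 102), (109, 101), (113, 105), (118, 104),
         (122, 108), (127, 107), (131, 111), (136, 110), (140, 114)]
print(motility_parameters(track, fps=30, um_per_px=0.5))
```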
Procedia PDF Downloads 106
1254 Exploring Family and Preschool Early Interactive Literacy Practices in Jordan
Authors: Rana Alkhamra
Abstract:
Background: A child's earliest experiences with books and stories during the first years of life are strongly linked with the development of early language and literacy skills. Interacting in routine learning activities, such as shared book reading, storytelling, and teaching the letters of the alphabet, lays a critical foundation for early learning, language growth, and emergent literacy. Aim: The current study explores early interactive literacy practices in families and preschools (nursery and kindergarten) in Jordan. It highlights the importance of early interactive literacy activities for children's language and literacy growth and development. Methods: This is a cross-sectional study that surveyed 243 Jordanian families. The survey investigated routine literacy practices, chiefly shared book reading, at home and at preschool; children's speech and language development; and family demographics. Results: Around 92.5% of the families read books and stories to their children, most commonly 1-2 times weekly or monthly (75%); only 19.6% read books on a daily basis. Many families reported preferring storytelling (97%). Although families acknowledged the importance of early literacy activities for language, reading and writing, cognitive, and academic development, 45% asked for education and training on specific ways and ideas to help their young children develop language and literacy skills. About 69% of the families reported reading books and stories to their children for 15 minutes a day, while 71.2% indicated that their children watch television for 3 to more than 6 hours a day. At preschool, only 52.8% of the teachers were reported to read books and stories. Factors such as parental education, monthly income, and living inside (33.6%) or outside (66.4%) the capital city of Amman significantly (p < 0.05) affected children's early interactive literacy activities, whether at home or at preschool. Conclusion: Early language and literacy skills depend largely on the opportunities and experiences provided to children at home and in the preschool environment. Family literacy programmes can play an important role in bridging the gap in early literacy experiences for families that need help. Speech therapists can also work in collaboration with families and educators to ensure that young children have high-quality and sufficient opportunities to participate in early literacy activities both at home and in preschool environments.
Keywords: literacy, interactive activities, language, practices, family, preschool, Jordan
Procedia PDF Downloads 449
1253 PhD Students' Challenges with Impact-Factor in Kazakhstan
Authors: Duishon Shamatov
Abstract:
This presentation is about Kazakhstan's PhD students' experiences with the impact-factor publication requirement. Since the break-up of the USSR, Kazakhstan has been attempting to improve its higher education system at undergraduate and graduate levels. In March 2010, Kazakhstan joined the Bologna Process and entered the European space of higher education. To align with the European system of higher education, a three-level preparation of specialists (undergraduate, master's, and PhD) was adopted to replace the Soviet system. The changes were aimed at promoting high-quality higher education that meets the demands of the labour market and the growing needs of the country's industrial-innovative development, and at meeting international standards. The shift to the European system has brought many benefits, but there are also some serious challenges. One of those challenges relates to the requirement for PhD candidates to publish in national and international journals: a PhD candidate should have 7 publications in total, of which one has to be in an international impact-factor journal. A qualitative study was conducted to explore PhD students' views of their experiences with impact-factor publications. With the help of purposeful sampling, 30 PhD students from seven universities across Kazakhstan were selected for individual and focus group interviews. The key findings of the study are as follows. While Kazakh PhD students have no difficulty publishing in local journals, they face great challenges in attempting to publish in impact-factor journals, for a range of reasons. These include, but are not limited to, a lack of research and publication skills, weaker knowledge of academic English, unfamiliarity with peer-review publication processes and expectations, and the very short time available to get published under their PhD programme requirements. This situation is pushing some of these young scholars to explore alternative ways of getting published in impact-factor journals, seeking to publish by any means and often at any cost (even paying large sums of money for a publication). This, in turn, creates a myth in scholarly circles in Kazakhstan that to get published in impact-factor journals one must necessarily pay a great deal of money. This paper offers some policy recommendations on how to improve the preparation of future PhD candidates in Kazakhstan.
Keywords: Bologna process, impact-factor publications, post-graduate education, Kazakhstan
Procedia PDF Downloads 379
1252 The Influence of Modernity and Globalization upon Language: The Korean Language between Confucianism and Americanization
Authors: Raluca-Ioana Antonescu
Abstract:
The field of this research stands at the intersection of linguistics and sociology, and the research problem is the importance of language in the modernization process and in a globalized society. The research objective is to show that language is a stimulant of modernity while also defining the tradition and culture of a specific society. In order to examine the linguistic changes in the Korean language due to modernity and globalization, the paper tries to answer one main question, namely what changes the Korean language underwent in moving from a traditional version of Korean towards one influenced by modernity, and two secondary questions: how the specialized literature explores the relations between globalization (and modernity) and culture (with a focus on language), and what influences the Korean language. In answering the research questions, the paper's main premise is that, due to modernity and globalization, the Korean language changed its discourse construction. It has two secondary hypotheses: first, that the literature has explored the relations between culture and modernity less in terms of language and discourse construction than in terms of identity issues and commodification problems; and second, that the Korean language is influenced by traditional values (such as Confucianism) while also receiving the influence of globalization (especially from English). In terms of methodology, the paper analyzes the two main influences upon the Korean language, traditionalism (defined as the influence of Confucianism) and modernism (the influence of other countries' languages and cultures), and how the Korean language was constructed and modified by these two elements. The paper analyzes at which levels (grammatical, lexical, etc.) traditionalism helped construct the Korean language, and what changes modernism brought at each level. As for the results of this research, the influence of modernism changed the Korean language both lexically and grammatically. Over 60 years, the increase in English influence has been astonishing, and this paper shows the main changes the Korean language underwent, such as loanwords (Konglish), but also the reduction of speech levels and the easing of register variation. The grammatical influence of modernity and globalization can therefore be seen in the reduction of speech levels and register variation, while lexical change comes especially with the influence of English, with about 10% of the Korean vocabulary considered to be loanwords. The paper also presents the interrelation between traditionalism and modernity, with the example of Konglish, but not only that: we can also consider Korean greetings, which Koreans translate when they speak other languages, bringing their cultural characteristics into English discourse construction. This makes Koreans global, since they speak an international language, but still local, since they cannot completely shed their culture.
Keywords: Confucianism, globalization, language and linguistic change, modernism, traditionalism
Procedia PDF Downloads 2031251 Comparative Analysis of Feature Extraction and Classification Techniques
Authors: R. L. Ujjwal, Abhishek Jain
Abstract:
In the field of computer vision, most facial variations, such as identity, expression, emotion and gender, have been extensively studied, whereas automatic age estimation has rarely been explored. As a human ages, the features of the face change. This paper provides a new comparative study of different algorithms for feature extraction (hybrid features using the Haar cascade and HOG features) and classification (KNN and SVM) on a training dataset. By using these algorithms, we try to identify the best-performing classification algorithm. The same comparison is carried out for the feature extraction part, where features are extracted using the Haar cascade and HOG. This work is done in the context of an age-group classification model.Keywords: computer vision, age group, face detection
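As a rough illustration of the hybrid pipeline the abstract describes, the Python sketch below detects a face with OpenCV's Haar cascade, computes a HOG descriptor over the face crop, and fits SVM and KNN classifiers with scikit-learn. The window sizes, the three age-group labels, and the randomly generated training matrix are assumptions standing in for the study's actual data and parameters.

```python
# Minimal sketch of a Haar cascade + HOG + SVM/KNN age-group pipeline.
# Labels, window sizes, and the placeholder training matrix are hypothetical.
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Haar cascade face detector shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# HOG descriptor over a fixed-size face crop: win, block, stride, cell, bins.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def face_hog_features(gray_image):
    """Detect the first face with the Haar cascade and return its HOG vector."""
    faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    crop = cv2.resize(gray_image[y:y + h, x:x + w], (64, 64))
    return hog.compute(crop).flatten()

# In practice X would be built by applying face_hog_features to labelled face
# images; random values and three fictitious age-group labels are used here
# only so the snippet runs end to end.
X = np.random.rand(100, hog.getDescriptorSize())
y = np.random.randint(0, 3, 100)  # 0 = child, 1 = adult, 2 = senior (assumed)

svm = SVC(kernel="linear").fit(X, y)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("SVM training accuracy:", svm.score(X, y))
print("KNN training accuracy:", knn.score(X, y))
```

In a real comparison, the two classifiers would of course be scored on a held-out test split rather than on the training matrix shown here.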
Procedia PDF Downloads 3681250 A Modest Proposal for Deep-Sixing Propositions in the Philosophy of Language
Authors: Patrick Duffley
Abstract:
Hanks (2021) identifies three Frege-inspired commitments concerning propositions that are widely shared across the philosophy of language: (1) propositions are the primary, inherent bearers of representational properties and truth-conditions; (2) propositions are neutral representations possessing a ‘content’ that is devoid of ‘force’; (3) propositions can be entertained or expressed without being asserted. Hanks then argues that the postulate of neutral content must be abandoned, and the primary bearers of truth-evaluable representation must be identified as the token acts of assertoric predication that people perform when they are thinking or speaking about the world. Propositions are ‘types of acts of predication, which derive their representational features from their tokens.’ Their role is that of ‘classificatory devices that we use for the purposes of identifying and individuating mental states and speech acts,’ so that ‘to say that Russell believes that Mont Blanc is over 4000 meters high is to classify Russell’s mental state under a certain type, and thereby distinguish that mental state from others that Russell might possess.’ It is argued in this paper that there is no need to classify an utterance of 'Russell believes that Mont Blanc is over 4000 meters high' as a token of some higher-order utterance-type in order to identify what Russell believes; the meanings of the words themselves and the syntactico-semantic relations between them are sufficient. In our view, what Hanks has accomplished, in effect, is to build a convincing argument for dispensing with propositions completely in the philosophy of language. By divesting propositions of the role of being the primary bearers of representational properties and truth-conditions and fittingly transferring this role to the token acts of predication that people perform when they are thinking or speaking about the world, he has situated truth in its proper place and obviated any need for abstractions like propositions to explain how language can express things that are true. This leaves propositions with the extremely modest role of classifying mental states and speech acts for the purposes of identifying and individuating them. It is demonstrated here, however, that there is no need whatsoever to posit such abstract entities to explain how people identify and individuate such states/acts. We therefore make the modest proposal that the term ‘proposition’ be stricken from the vocabulary of philosophers of language.Keywords: propositions, truth-conditions, predication, Frege, truth-bearers
Procedia PDF Downloads 661249 Predicting Success and Failure in Drug Development Using Text Analysis
Authors: Zhi Hao Chow, Cian Mulligan, Jack Walsh, Antonio Garzon Vico, Dimitar Krastev
Abstract:
Drug development is resource-intensive, time-consuming, and increasingly expensive with each developmental stage. The success rates of drug development are also relatively low, and the resources committed are wasted with each failed candidate. As such, a reliable method of predicting the success of drug development is in demand. The hypothesis was that some failed drug candidates are pushed through developmental pipelines on the basis of false confidence and may possess common linguistic features identifiable through sentiment analysis. Here, the concept of using text analysis to discover such features in research publications and investor reports as predictors of success was explored. RStudio was used to perform text mining and lexicon-based sentiment analysis to identify affective phrases and determine their frequency in each document, and SPSS was then used to determine the relationship between the defined variables and the accuracy of predicting outcomes. A total of 161 publications were collected and categorised into 4 groups: (i) cancer treatment, (ii) neurodegenerative disease treatment, (iii) vaccines, and (iv) others (containing all other drugs that do not fit into the 3 categories). Text analysis was then performed on each document using 2 separate lexicons (BING and AFINN) in R within each drug category to determine the frequency of positive or negative phrases in each document. Relative positivity and negativity values were then calculated by dividing the frequency of phrases by the word count of each document. Regression analysis was then performed with the SPSS statistical software on each dataset (values obtained using the BING or AFINN lexicon during text analysis), using a random selection of 61 documents to construct a model. The remaining documents were then used to determine the predictive power of the models. The model constructed from BING predicted the outcome of drug performance in clinical trials with an overall accuracy of 65.3%. The AFINN model had a lower accuracy at predicting outcomes than the BING model, at 62.5%, but was not effective at predicting the failure of drugs in clinical trials. Overall, the study did not show significant efficacy of the models at predicting the outcomes of drugs in development. Many improvements may need to be made to later iterations of the model to sufficiently increase the accuracy.Keywords: data analysis, drug development, sentiment analysis, text-mining
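The authors worked in R with the BING and AFINN lexicons and ran their regressions in SPSS; the Python sketch below only illustrates the general lexicon-based approach. The tiny positive/negative word lists, the toy documents, and the logistic-regression stand-in for the SPSS regression are all assumptions made for the example, not the study's materials.

```python
# Illustrative lexicon-based sentiment scoring plus a simple outcome model.
# Word lists and documents are hypothetical stand-ins for BING/AFINN and the
# 161 collected publications.
import re
from sklearn.linear_model import LogisticRegression

POSITIVE = {"promising", "significant", "robust", "effective", "breakthrough"}
NEGATIVE = {"adverse", "failure", "toxicity", "inconclusive", "withdrawn"}

def relative_scores(text):
    """Return (positive, negative) phrase frequency divided by the word count."""
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    n = max(len(words), 1)
    return pos / n, neg / n

# Each tuple: (publication text, 1 if the drug later succeeded in trials else 0).
documents = [
    ("The candidate showed promising and significant efficacy.", 1),
    ("Results were inconclusive and adverse events led to withdrawal.", 0),
    ("A robust, effective response was observed in most patients.", 1),
    ("Toxicity concerns and trial failure were reported.", 0),
]

X = [relative_scores(text) for text, _ in documents]
y = [label for _, label in documents]

# Fit on the scores and predict the outcome of an unseen document.
model = LogisticRegression().fit(X, y)
print(model.predict([relative_scores("An effective and robust candidate.")]))
```

On real data the model would be fitted on a training split (the study used 61 documents) and evaluated on the remainder, mirroring the 65.3% and 62.5% figures reported above.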
Procedia PDF Downloads 1571248 Cross-Tier Collaboration between Preservice and Inservice Language Teachers in Designing Online Video-Based Pragmatic Assessment
Authors: Mei-Hui Liu
Abstract:
This paper reports the progression of language teachers’ learning to assess students’ speech act performance via online videos in a cross-tier professional growth community. This yearlong research project collected multiple data sources from several stakeholders, including 12 preservice and 4 inservice English as a foreign language (EFL) teachers, 4 English professionals, and 82 high school students. Data sources included surveys, (focus group) interviews, online reflection journals, online video-based assessment items/scores, and artifacts related to teacher professional learning. The major findings demonstrated the effectiveness of the proposed learning module for language teacher development in pragmatic assessment, as well as its impact on the student learning experience. All these teachers appreciated this professional learning experience, which enhanced their knowledge of assessing students’ pragmalinguistic and sociopragmatic performance in an English speech act (i.e., making refusals). They learned how to design online video-based assessment items by attending to specific linguistic structures, semantic formulas, and sociocultural issues. They further became aware of how to sharpen their pragmatic instructional skills in the near future after putting theories into online assessment and related classroom practices. Additionally, data analysis revealed students’ achievement in and satisfaction with the designed online assessment. Yet, during the professional learning process, most participating teachers encountered challenges in reaching a consensus on selecting appropriate video clips from available sources to present the sociocultural values in English-speaking refusal contexts. Another challenge was constructing test items that could reveal the influence of interlanguage transfer on students’ pragmatic performance in various conversational scenarios. With pedagogical implications and research suggestions, this study adds to the growing body of research on integrating preservice and inservice EFL teacher education in pragmatic assessment and related instruction. Acknowledgment: This research project is sponsored by the Ministry of Science and Technology in the Republic of China under the grant number of MOST 106-2410-H-029-038.Keywords: cross-tier professional development, inservice EFL teachers, pragmatic assessment, preservice EFL teachers, student learning experience
Procedia PDF Downloads 2591247 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy
Authors: M. Regina Carreira-Lopez
Abstract:
Lightweight ontologies seek to make concrete the union relationships between a parent node and a secondary node, also called a "child node". This logical relation (L) can be formally defined as a triple ontological relation LO = ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes so as to form a rooted tree ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model aims to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena to a concrete dataset of textual corpora, using the RMySQL package. Mondoc profiles archival utilities implementing SQL code and allows data export to XML schemas, achieving semantic and faceted analysis of speech by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also evidences that co-associations between (t) and its corresponding synonyms and antonyms (synsets) are also inverse. The results from grouping facets or polysemic words with synsets in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) indicate how to proceed with the semantic indexing of hypernymy phenomena for subject-heading lists and for authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and appears to achieve a more selective search of e-documents (classification ontology). It can also foster the production of on-line catalogs of semantic authorities, or concepts, through XML schemas, because its applications could be used for implementing data models after a prior adaptation of the base ontology to structured meta-languages such as OWL or RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a facet-based semantic indexing approach. It enables information retrieval, as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LCFL, BKF⟩, where LN is a set of entities that connect with other nodes to form a rooted tree ⟨LN, LE⟩, LT specifies a set of terms, and LCF acts as a finite set of concepts, encoded in a formal language, L. Mondoc resolves only partial problems of linguistic ambiguity (in the case of synonymy and antonymy); neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target meta-languages oriented to structured documents in XML.Keywords: hypernymy, information retrieval, lightweight ontology, resonance
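To make the KWIC extraction and inverse-frequency idea more tangible, here is a small Python sketch; the abstract's own implementation relies on RMySQL and SQL over archival corpora, so the three-document corpus, the search term, and the helper names below are purely illustrative assumptions.

```python
# Toy illustration of keyword-in-context (KWIC) extraction and inverse
# document frequency weighting; corpus and term are invented for the example.
import math
import re

corpus = {
    "doc1": "Migrants crossed the Atlantic after the declaration of independence.",
    "doc2": "The Atlantic migrations increased before the civil war ended.",
    "doc3": "Trade routes shaped coastal cities during the same period.",
}

def kwic(term, text, window=3):
    """Return keyword-in-context snippets: `window` words on each side of `term`."""
    words = re.findall(r"\w+", text.lower())
    hits = []
    for i, w in enumerate(words):
        if w == term:
            hits.append(" ".join(words[max(0, i - window):i + window + 1]))
    return hits

def inverse_document_frequency(term, docs):
    """Weight a term by the inverse of the number of documents containing it."""
    df = sum(term in re.findall(r"\w+", text.lower()) for text in docs.values())
    return math.log(len(docs) / df) if df else 0.0

term = "atlantic"
for name, text in corpus.items():
    print(name, kwic(term, text))
print("idf:", inverse_document_frequency(term, corpus))
```

A term that recurs in most documents of the set receives a low weight, while a term concentrated in few documents is weighted more heavily, which is the intuition behind using inverse co-occurrence frequency to rank documentary relevance.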
Procedia PDF Downloads 125