Search results for: path recognition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2878

208 Honneth, Feenberg, and the Redemption of Critical Theory of Technology

Authors: David Schafer

Abstract:

Critical Theory is in sore need of a workable account of technology. It had one in the writings of Herbert Marcuse, or so it seemed until Jürgen Habermas mounted a critique in 'Technology and Science as Ideology' (Habermas, 1970) that decisively put it to rest. Ever since, Marcuse’s work has been regarded as outdated – a 'philosophy of consciousness' no longer seriously tenable. But with Marcuse’s view has gone the important insight that technology is no norm-free system (as Habermas portrays it) but can be laden with social bias. Andrew Feenberg is among the few serious scholars who have perceived this problem in post-Habermasian critical theory, and he has sought to revive a basically Marcusean account of technology. On his view, while the so-called ‘technical elements’ that physically make up technologies are neutral with regard to social interests, there is a sense in which we may speak of a normative grammar or ‘technical code’ built into technology that can be socially biased in favor of certain groups over others (Feenberg, 2002). According to Feenberg, perspectives on technology are reified when they consider technologies only in terms of their technical elements, to the neglect of their technical codes. Nevertheless, Feenberg’s account fails to explain what is normatively problematic with such reified views of technology. His plausible claim that they represent false perspectives on technology does not, by itself, explain how such views may be oppressive, even though Feenberg surely intends that stronger level of normative theorizing. Perceiving this deficit in his own account of reification, he tries to adopt Habermas’s version of systems theory to ground his own critical theory of technology (Feenberg, 1999). But this is a curious move in light of Feenberg’s own legitimate critiques of Habermas’s portrayals of technology as reified or ‘norm-free.’ This paper argues that a better foundation may be found in Axel Honneth’s recent text, Freedom’s Right (Honneth, 2014). 
Though Honneth there says little explicitly about technology, he offers an implicit account of reification formulated in opposition to Habermas’s systems-theoretic approach. On this ‘normative functionalist’ account of reification, social spheres are reified when participants prioritize individualist ideals of freedom (moral and legal freedom) to the neglect of an intersubjective form of freedom-through-recognition that Honneth calls ‘social freedom.’ Such misprioritization is ultimately problematic because it is unsustainable: individual freedom is philosophically and institutionally dependent upon social freedom. The main difficulty in adopting Honneth’s social theory for the purposes of a theory of technology, however, is that the notion of social freedom is predicable only of social institutions, whereas it appears difficult to conceive of technology as an institution. Nevertheless, in light of Feenberg’s work, the idea that technology includes within itself a normative grammar (technical code) takes on much plausibility. To the extent that this normative grammar may be understood by the category of social freedom, Honneth’s dialectical account of the relationship between individual and social forms of freedom provides a more solid basis from which to ground the normative claims of Feenberg’s sociological account of technology than Habermas’s systems theory.

Keywords: Habermas, Honneth, technology, Feenberg

Procedia PDF Downloads 197
207 Revolutionizing Healthcare Communication: The Transformative Role of Natural Language Processing and Artificial Intelligence

Authors: Halimat M. Ajose-Adeogun, Zaynab A. Bello

Abstract:

Artificial Intelligence (AI) and Natural Language Processing (NLP) have transformed computer language comprehension, allowing computers to comprehend spoken and written language with human-like cognition. NLP, a multidisciplinary area that combines rule-based linguistics, machine learning, and deep learning, enables computers to analyze and comprehend human language. NLP applications in medicine range from tackling issues in electronic health records (EHR) and psychiatry to improving diagnostic precision in orthopedic surgery and optimizing clinical procedures with novel technologies like chatbots. The technology shows promise in a variety of medical sectors, including quicker access to medical records, faster decision-making for healthcare personnel, diagnosing dysplasia in Barrett's esophagus, and improving the quality of radiology reports. However, successful adoption requires training for healthcare workers, fostering a deep understanding of NLP components, and highlighting the significance of validation before actual application. Despite prevailing challenges, continuous multidisciplinary research and collaboration are critical for overcoming restrictions and paving the way for the revolutionary integration of NLP into medical practice. This integration has the potential to improve patient care, research outcomes, and administrative efficiency. The research methodology includes using NLP techniques for Sentiment Analysis and Emotion Recognition, such as evaluating text or audio data to determine the sentiment and emotional nuances communicated by users, which is essential for designing a responsive and sympathetic chatbot. Furthermore, the project includes the adoption of a Personalized Intervention strategy, in which chatbots are designed to personalize responses by merging NLP algorithms with specific user profiles, treatment history, and emotional states. 
The synergy between NLP and personalized medicine principles is critical for tailoring chatbot interactions to each user's demands and conditions, hence increasing the efficacy of mental health care. A detailed survey corroborated this synergy, revealing a remarkable 20% increase in patient satisfaction levels and a 30% reduction in workloads for healthcare practitioners. The survey, which focused on health outcomes and was administered to both patients and healthcare professionals, highlights the improved efficiency and favorable influence on the broader healthcare ecosystem.
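The sentiment-analysis step described in this abstract can be illustrated with a minimal sketch. The study's actual models are not specified; the lexicon, thresholds, and function names below are hypothetical, chosen only to show how a sentiment score might route a chatbot's response style.

```python
# Illustrative lexicon-based sentiment scorer -- a sketch only; the study
# presumably uses trained NLP models, not a hand-built word list.
POSITIVE = {"good", "calm", "better", "hopeful", "relieved"}
NEGATIVE = {"bad", "anxious", "worse", "hopeless", "sad"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; negative values suggest distress."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def triage(text: str) -> str:
    """Choose a chatbot reply style from the sentiment score."""
    s = sentiment_score(text)
    if s < -0.5:
        return "empathetic"   # user sounds distressed
    if s > 0.5:
        return "encouraging"  # user sounds positive
    return "neutral"
```

A personalized-intervention layer, as the abstract describes, would combine such a score with the user's profile and treatment history before choosing a response.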

Keywords: natural language processing, artificial intelligence, healthcare communication, electronic health records, patient care

Procedia PDF Downloads 76
206 Enhanced Anti-Inflammatory and Antioxidant Activities of Perna canaliculus Oil Extract and Low Molecular Weight Fucoidan from Undaria pinnatifida

Authors: Belgheis Ebrahimi, Jun Lu

Abstract:

In recent years, there has been a growing recognition of the potential of marine-based functional foods and combination therapies in promoting a healthy lifestyle and exploring their effectiveness in preventing or treating diseases. The combination of marine bioactive compounds or extracts offers synergistic or enhancement effects through various mechanisms, including multi-target actions, improved bioavailability, enhanced bioactivity, and mitigation of potential adverse effects. Both the green-lipped mussel (GLM) and fucoidan derived from brown seaweed are rich in bioactivities. The two, mussel and fucoidan, have not previously been formulated together. This study aims to combine GLM oil from Perna canaliculus with low molecular weight fucoidan (LMWF) extracted from Undaria pinnatifida to investigate the unique mixture’s anti-inflammatory and antioxidant properties. The cytotoxicity of the individual compounds and their combinations was assessed using the MTT assay in THP-1 and RAW264.7 cell lines. The anti-inflammatory activity of mussel-fucoidan was evaluated by treating LPS-stimulated human monocyte and macrophage (THP-1) cells. Subsequently, the inflammatory cytokines released into the supernatant of these cell lines were quantified via ELISA. Antioxidant activity was determined using the DPPH free radical scavenging assay. The DPPH assay demonstrated that the radical scavenging activity of the combinations, particularly at concentrations exceeding 1 mg/ml, showed a significantly higher percentage of inhibition when compared to the individual components. This suggests an enhancement effect when the two compounds are combined, leading to increased antioxidant activity. In terms of immunomodulatory activity, the individual compounds exhibited distinct behaviors. GLM oil displayed a higher ability to suppress the cytokine TNF-α compared to LMWF. Interestingly, the LMWF fraction, when used individually, did not demonstrate TNF-α suppression. 
However, when combined with GLM, the TNF-α suppression (anti-inflammatory) activity of the combination was better than that of GLM or LMWF alone. This observation underscores the potential for enhancement interactions between the two components in terms of anti-inflammatory properties. This study revealed that each individual compound, LMWF and GLM, possesses unique and notable bioactivity. The combination of these two compounds results in an enhancement effect, where the bioactivity of each is enhanced, creating a superior combination. This suggests that the combination of LMWF and GLM has the potential to offer a more potent and multifaceted therapeutic effect, particularly in the context of antioxidant and anti-inflammatory activities. These findings hold promise for the development of novel therapeutic interventions or supplements that harness these enhancement effects.
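The DPPH results above are reported as percentage inhibition. The abstract does not state the formula, but the conventional DPPH calculation compares sample absorbance against a control; the sketch below assumes that convention, and the absorbance readings are hypothetical.

```python
def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent DPPH radical scavenging, by the conventional formula:
    inhibition% = (A_control - A_sample) / A_control * 100,
    where A is absorbance (typically read at 517 nm)."""
    if a_control <= 0:
        raise ValueError("control absorbance must be positive")
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical readings: the combined extract quenches more radical
# (lower remaining absorbance) than either component alone would.
print(dpph_inhibition(1.0, 0.25))  # → 75.0
```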

Keywords: combination, enhancement effect, Perna canaliculus, Undaria pinnatifida

Procedia PDF Downloads 81
205 Improved Elastoplastic Bounding Surface Model for the Mathematical Modeling of Geomaterials

Authors: Andres Nieto-Leal, Victor N. Kaliakin, Tania P. Molina

Abstract:

The nature of most engineering materials is quite complex. It is, therefore, difficult to devise a general mathematical model that will cover all possible ranges and types of excitation and behavior of a given material. As a result, the development of mathematical models is based upon simplifying assumptions regarding material behavior. Such simplifications result in some material idealization; for example, one of the simplest material idealizations is to assume that the material behaves elastically. However, soils are nonhomogeneous, anisotropic, path-dependent materials that exhibit nonlinear stress-strain relationships, changes in volume under shear, dilatancy, as well as time-, rate- and temperature-dependent behavior. Over the years, many constitutive models, possessing different levels of sophistication, have been developed to simulate the behavior of geomaterials, particularly cohesive soils. Early in the development of constitutive models, it became evident that elastic or standard elastoplastic formulations, employing purely isotropic hardening and predicated on the existence of a yield surface surrounding a purely elastic domain, were incapable of realistically simulating the behavior of geomaterials. Accordingly, more sophisticated constitutive models have been developed, for example, bounding surface elastoplasticity. The essence of the bounding surface concept is the hypothesis that plastic deformations can occur for stress states either within or on the bounding surface. Thus, unlike classical yield surface elastoplasticity, the plastic states are not restricted only to those lying on a surface. Elastoplastic bounding surface models have been improved over time; however, there is still a need to improve their capabilities in simulating the response of anisotropically consolidated cohesive soils, especially the response in extension tests. 
Thus, in this work, an improved constitutive model that can more accurately predict diverse stress-strain phenomena exhibited by cohesive soils was developed. In particular, it incorporates an improved rotational hardening rule that better simulates the response of cohesive soils in extension. The generalized definition of the bounding surface model provides a convenient and elegant framework for unifying various previous versions of the model for anisotropically consolidated cohesive soils. The Generalized Bounding Surface Model for cohesive soils is a fully three-dimensional, time-dependent model that accounts for both inherent and stress-induced anisotropy, employing a non-associative flow rule. The model’s numerical implementation in a computer code followed an adaptive multistep integration scheme in conjunction with local iteration and radial return. The one-step trapezoidal rule was used to obtain the stiffness matrix that defines the relationship between the stress increment and the strain increment. The model was tested through extensive comparisons of model simulations to experimental data and has been shown to give quite good simulations. The new model successfully simulates the response of different cohesive soils, for example, Cardiff Kaolin, Spestone Kaolin, and Lower Cromer Till. The simulated undrained stress paths, stress-strain response, and excess pore pressures are in very good agreement with the experimental values, especially in extension.
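The radial return mentioned in the abstract can be illustrated in a drastically simplified form. The sketch below is classical one-dimensional elastic-perfectly-plastic return mapping, not the paper's bounding surface formulation or its adaptive multistep scheme; the modulus and yield stress values are hypothetical.

```python
def return_map_1d(sigma_n: float, d_eps: float, E: float, sigma_y: float):
    """One strain increment of 1D elastic-perfectly-plastic return mapping.
    Returns (updated stress, plastic strain increment).  This is a drastic
    simplification of the paper's scheme, shown only to convey the
    predictor-corrector idea behind radial return."""
    sigma_trial = sigma_n + E * d_eps          # elastic predictor
    f = abs(sigma_trial) - sigma_y             # yield function check
    if f <= 0.0:
        return sigma_trial, 0.0                # purely elastic step
    sign = 1.0 if sigma_trial > 0 else -1.0
    d_eps_p = sign * f / E                     # plastic corrector
    return sign * sigma_y, d_eps_p             # stress returned to the surface

# Hypothetical values: E = 200 MPa, yield stress = 1 MPa.
s, dep = return_map_1d(0.0, 0.01, 200.0, 1.0)
print(s, dep)  # → 1.0 0.005
```

In the bounding surface framework, by contrast, plastic strain can also accumulate for stress states inside the surface, which this classical sketch does not capture.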

Keywords: bounding surface elastoplasticity, cohesive soils, constitutive model, modeling of geomaterials

Procedia PDF Downloads 315
204 Attachment Theory and Quality of Life: Grief Education and Training

Authors: Jane E. Hill

Abstract:

Quality of life is an important consideration for many people, and everyone will experience some type of loss within his or her lifetime. A person can experience loss due to break-up, separation, divorce, estrangement, or death. An individual may experience loss of a job, loss of capacity, or loss caused by human-caused or natural disasters. An individual’s response to such a loss is unique to them, and not everyone will seek services to assist them with their grief. Counseling can promote positive outcomes for clients who are grieving by addressing the client’s personal loss and helping the client process their grief. However, a lack of understanding on the part of counselors of how people grieve may result in negative client outcomes such as poor health, psychological distress, or an increased risk of depression. Education and training in grief counseling can improve counselors’ problem recognition and skills in treatment planning. The purpose of this study was to examine whether Council for Accreditation of Counseling and Related Educational Programs (CACREP) master’s degree counseling students view themselves as having been adequately trained in grief theories and skills. Many people deal with grief issues that prevent them from having joy or purpose in their lives and leave them unable to engage in positive opportunities or relationships. This study examined CACREP-accredited master’s counseling students’ self-reported competency, training, and education in providing grief counseling. The implications for positive social change arising from the research may be to incorporate and promote education and training in grief theories and skills in a majority of counseling programs and to provide motivation to incorporate professional standards for grief training and practice in the mental health counseling field. The theoretical foundation used was modern grief theory based on John Bowlby’s work on Attachment Theory. 
The overall research question was how competent master’s-level counselors consider themselves to be with regard to the education and training they received in grief theories and counseling skills in their CACREP-accredited studies. The author used a non-experimental, one-shot comparative quantitative survey design. Cicchetti’s Grief Counseling Competency Scale (GCCS) was administered to CACREP master’s-level counseling students enrolled in their practicum or internship experience, which resulted in 153 participants. A MANCOVA found significant relationships between coursework taken and (a) perceived assessment skills (p = .029), (b) perceived treatment skills (p = .025), and (c) perceived conceptual skills and knowledge (p = .003). Results of this study provide insight for CACREP master’s-level counseling programs to explore and discuss the inclusion of education and training in grief theories and skills in curriculum coursework.

Keywords: counselor education and training, grief education and training, grief and loss, quality of life

Procedia PDF Downloads 191
203 Energy Strategies for Long-Term Development in Kenya

Authors: Joseph Ndegwa

Abstract:

Changes are required if energy systems are to foster long-term growth. The main problems are increasing access to inexpensive, dependable, and sufficient energy supply while addressing environmental implications at all levels. Policies can help to promote sustainable development by providing adequate and inexpensive energy sources to underserved regions, such as liquid and gaseous fuels for cooking and electricity for household and commercial usage; promoting energy efficiency; increasing utilization of new renewables; and spreading and implementing additional innovative energy technologies. Markets can achieve many of these goals with the correct policies, pricing, and regulations. However, if markets do not work or fail to preserve key public benefits, tailored government policies, programs, and regulations can achieve policy goals. The main strategies for promoting sustainable energy systems are simple, but they need a broader recognition of the difficulties we confront, as well as a firmer commitment to specific measures. These measures include making markets operate better by minimizing pricing distortions, boosting competition, and removing obstacles to energy efficiency; complementing the reform of the energy industry with policies that promote sustainable energy; increasing investments in renewable energy; increasing the rate of technical innovation at each level of the energy innovation chain; fostering technical leadership in underdeveloped nations by transferring technology and enhancing institutional and human capabilities; and promoting greater international collaboration. Governments, international organizations, multilateral financial institutions, and civil society—including local communities, business and industry, non-governmental organizations (NGOs), and consumers—all have critical enabling roles to play in the problem of sustainable energy. 
Partnerships based on integrated and cooperative approaches and drawing on real-world experience will be necessary. Setting the required framework conditions and ensuring that public institutions collaborate effectively and efficiently with the rest of society are common themes across all industries and geographical areas in achieving sustainable development. Energy is a powerful tool for sustainable development, but significant policy adjustments within the larger enabling framework will be necessary to refocus its influence in order to achieve that aim. If such changes do not take place during the next several decades and are not started soon enough, many of the options currently accessible will be lost, or the price of their ultimate realization (where viable) will grow significantly. In any case, failure to act would seriously impair the capacity of future generations to satisfy their demands.

Keywords: sustainable development, reliable, price, policy

Procedia PDF Downloads 65
202 Restructuration of the Concept of Empire in the Social Consciousness of Modern Americans

Authors: Maxim Kravchenko

Abstract:

The paper looks into the structure and contents of the concept of empire in the social consciousness of modern Americans. To construct a model of this socially and politically relevant concept, we conducted an experiment with respondents born and living in the USA. Empire is seen as a historic notion describing such entities as the British empire, the Russian empire, the Ottoman empire, and others. It seems that the democratic regime adopted by most countries worldwide is incompatible with the imperial status of a country. Yet there are countries that tend to dominate in the contemporary world, and though they are not routinely referred to as empires, in many respects they are reminiscent of historical empires. Thus, the central hypothesis of the study is that the concept of empire is cultivated in some states through the intermediary of the mass media, though it undergoes a certain transformation to meet the expectations of a democratic society. The transformation implies that certain components which were historically embedded in its structure are drawn to the margins of the hierarchical structure of the concept, whereas other components tend to become central to the concept. This process can be referred to as restructuration of the concept of empire. To verify this hypothesis, we conducted a study that falls into two stages. First, we looked into the definitions of empire featured in dictionaries; the dominant conceptual components of empire are: importance, territory/lands, recognition, independence, authority/power, supreme/absolute. However, the analysis of 100 articles from American newspapers chosen at random revealed that authors rarely use the word 'empire' in its basic meaning (7%). More often, 'empire' is used when speaking about countries that no longer exist or about some corporations (like Apple or Google). At the second stage of the study, we conducted an associative experiment with citizens of the USA aged 19 to 45. 
The purpose of the experiment was to find out the dominant components of the concept of empire and to construct the model of the transformed concept. The experiment stipulated that respondents should give the first association, which crosses their mind, on reading such stimulus phrases as “strong military”, “strong economy” and others. The list of stimuli features various words and phrases associated with empire including the words representing the dominant components of the concept of empire. Then the associations provided by the respondents were classified into thematic clusters. For instance, the associations to the stimulus “strong military” were compartmentalized into three groups: 1) a country with strong military forces (North Korea, the USA, Russia, China); 2) negative impression of strong military (war, anarchy, conflict); 3) positive impression of strong military (peace, safety, responsibility). The experiment findings suggest that the concept of empire is currently undergoing a transformation which brings about a number of changes. Among them predominance of positively assessed components of the concept; emergence of two poles in the structure of the concept, that is “hero” vs. “enemy”; marginalization of any negatively assessed components.

Keywords: associative experiment, conceptual components, empire, restructuration of the concept

Procedia PDF Downloads 314
201 Acquisition of Murcian Lexicon and Morphology by L2 Spanish Immigrants: The Role of Social Networks

Authors: Andrea Hernandez Hurtado

Abstract:

Research on social networks (SNs) -- the interactions individuals share with others -- has shed important light on differential use of variable linguistic forms, both in L1s and L2s. Nevertheless, the acquisition of nonstandard L2 Spanish in the Region of Murcia, Spain, and how learners interact with other speakers while sojourning there, have received little attention. Murcian Spanish (MuSp) was widely influenced by Panocho, a divergent evolution of Hispanic Latin, and differs from the more standard Peninsular Spanish (StSp) in phonology, morphology, and lexicon. For instance, speakers from this area will most likely palatalize diminutive endings, producing animalico [ˌa.ni.ma.ˈli.ko] instead of animalito [ˌa.ni.ma.ˈli.to] ‘little animal’. Because L1 speakers of the area produce and prefer salient regional lexicon and morphology (particularly the palatalized diminutive -ico) in their speech, the current research focuses on how international residents in the Region of Murcia use Spanish: (1) whether or not they acquire (perceptively and/or productively) any of the salient regional features of MuSp, and (2) how their SNs explain such acquisition. This study triangulates across three tasks -- recognition, production, and preference -- addressing both lexicon and morphology, with each task specifically created for the investigation of MuSp features. Among other variables, the effects of L1, residence, and identity are considered. As this is ongoing dissertation research, data are currently being gathered through an online questionnaire. So far, 7 participants from multiple nationalities have completed the survey, although a minimum of 25 are expected to be included in the coming months. Preliminary results revealed that MuSp lexicon and morphology were successfully recognized by participants (p<.001). 
In terms of regional lexicon production (10.0%) and preference (47.5%), although participants showed higher percentages of StSp, results showed that international residents become aware of stigmatized lexicon and may incorporate it into their language use. Similarly, palatalized diminutives (production 14.2%, preference 19.0%) were present in their responses. The Social Network Analysis provided information about participants’ relationships with their interactants, as well as among them. Results indicated that, generally, when residents were more immersed in the culture (i.e., had more Murcian alters) they produced and preferred more regional features. This project contributes to the knowledge of language variation acquisition in L2 speakers, focusing on a stigmatized Spanish dialect and exploring how stigmatized varieties may affect L2 development. Results will show how L2 Spanish speakers’ language is affected by their stay in Murcia. This, in turn, will shed light on the role of SNs in language acquisition, the acquisition of understudied and marginalized varieties, and the role of immersion on language acquisition. As the first systematic account on the acquisition of L2 Spanish lexicon and morphology in the Region of Murcia, it lays important groundwork for further research on the connection between SNs and the acquisition of regional variants, applicable to Murcia and beyond.
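The link the abstract draws between having more Murcian alters and producing more regional features can be illustrated with a simple ego-network composition measure. The study's actual SNA metrics are not specified; the function, data, and field names below are hypothetical.

```python
def murcian_alter_share(alters: list[dict]) -> float:
    """Fraction of a participant's alters who are Murcian -- a basic
    ego-network composition measure (illustrative only; the study's
    actual social network analysis is not detailed in the abstract)."""
    if not alters:
        return 0.0
    return sum(a["origin"] == "Murcia" for a in alters) / len(alters)

# Hypothetical ego network for one international resident.
net = [{"name": "A", "origin": "Murcia"},
       {"name": "B", "origin": "Murcia"},
       {"name": "C", "origin": "UK"},
       {"name": "D", "origin": "France"}]
print(murcian_alter_share(net))  # → 0.5
```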

Keywords: international residents, L2 Spanish, lexicon, morphology, nonstandard language acquisition, social networks

Procedia PDF Downloads 77
200 An Eco-Systemic Typology of Fashion Resale Business Models in Denmark

Authors: Mette Dalgaard Nielsen

Abstract:

The paper provides an eco-systemic typology of fashion resale business models in Denmark while pointing to possibilities to learn from it at a time when a fundamental break with the dominant linear fashion paradigm has become inevitable. As we transgress planetary boundaries and can no longer continue the unsustainable path of over-exploiting the Earth’s resources, the global fashion industry faces a tremendous need for change. One of the preferred answers to the fashion industry’s sustainability crises lies in the circular economy, which aims to maximize the utilization of resources by keeping garments in use for longer. Thus, in the context of fashion, resale business models that allow pre-owned garments to change hands with the purpose of being reused in continuous cycles are considered to be among the most efficient forms of circularity. Methodologies: The paper is based on empirical data from an ongoing project and a series of qualitative pilot studies conducted on the Danish resale market over a 2-year period from Fall 2021 to Fall 2023. The methodological framework comprises (n)ethnography and fieldwork in selected resale environments, as well as semi-structured interviews and a workshop with eight business partners from the Danish fashion and textiles industry. By focusing on the real-world circulation of pre-owned garments, which is enabled by the identified resale business models, the research lets go of simplistic hypotheses to the benefit of dynamic, vibrant and non-linear processes. As such, the paper contributes to the emerging research field of circular economy and fashion, which is in critical need of moving from non-verified concepts and theories to empirical evidence. Findings: Based on the empirical data and anchored in the business partners, the paper analyses and presents five distinct resale business models with different product, service and design characteristics. 
These are 1) branded resale, 2) trade-in resale, 3) peer-2-peer resale, 4) resale boutiques and consignment shops and 5) resale shelf/square meter stores and flea markets. Together, the five business models represent a plurality of resale-promoting business model design elements that have been found to contribute to the circulation of pre-owned garments in various ways for different garments, users and businesses in Denmark. Hence, the provided typology points to the necessity of prioritizing several rather than single resale business model designs, services and initiatives for the resale market to help reconfigure the linear fashion model and create a circular-ish future. Conclusions: The article represents a twofold research ambition by 1) presenting an original, up-to-date eco-systemic typology of resale business models in Denmark and 2) using the typology and its eco-systemic traits as a tool to understand different business model design elements and possibilities to help fashion grow out of its linear growth model. By basing the typology on eco-systemic mechanisms and actual exemplars of resale business models, it becomes possible to envision the contours of a genuine alternative to business as usual that ultimately helps bend the linear fashion model towards circularity.

Keywords: circular business models, circular economy, fashion, resale, strategic design, sustainability

Procedia PDF Downloads 59
199 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation

Authors: Ali Ashtiani, Hamid Shirazi

Abstract:

This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community, and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation and reconstruction plans for airport pavements. Therefore, crack density has the potential to become an important indicator of pavement condition if the type, severity and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor, and performing surveys may disrupt normal operations. Given this variability, manual surveys have shown inconsistencies in distress classification and measurement, which can potentially impact the planning for pavement maintenance, rehabilitation and reconstruction and the associated funding strategies. Over the past 20 years, a substantial effort has been devoted to reducing human intervention, and the error associated with it, by moving toward automated distress collection methods. Automated methods refer to systems that identify, classify and quantify pavement distresses through processes that require no or very minimal human intervention. This principally involves the use of digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for measurement and classification of pavement cracks captured in digital images is a challenge to developing a reliable automated system for distress assessment. 
Variations in the types and severity of distresses, different pavement surface textures and colors, and the presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves the collection, interpretation, and processing of surface images to identify the type, quantity and severity of the surface distresses. The outputs can be used to quantitatively calculate the crack density. The systems for automated distress surveys using digital images reviewed in this paper can assist the airport industry in the development of a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index. This index can be used as a means of assessing pavement condition and predicting pavement performance. It can be used by airport owners to determine the type of pavement maintenance and rehabilitation in a more consistent way.
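As a hedged illustration of the index described above, crack density can be expressed as total mapped crack length per unit pavement surface area; the minimal sketch below assumes this simple definition (an actual protocol would also weight cracks by type and severity):

```python
def crack_density(crack_lengths_m, survey_area_m2):
    """Crack density as total mapped crack length per unit pavement area (m/m^2)."""
    return sum(crack_lengths_m) / survey_area_m2

# Example: three cracks mapped on a 100 m x 5 m pavement section
density = crack_density([12.4, 7.9, 3.2], 500.0)  # 23.5 m of cracking over 500 m^2
```

The resulting index can then be tracked over successive surveys to monitor deterioration.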

Keywords: airport pavement management, crack density, pavement evaluation, pavement management

Procedia PDF Downloads 185
198 Multi-Objective Optimization of Assembly Manufacturing Factory Setups

Authors: Andreas Lind, Aitor Iriondo Pascual, Dan Hogberg, Lars Hanson

Abstract:

Factory setup lifecycles are most often described and prepared in CAD environments; the preparation is based on experience and inputs from several cross-disciplinary processes. Early in the factory setup preparation, a so-called block layout is created. The intention is to describe a high-level view of the intended factory setup and to claim area reservations and allocations. Factory areas are then blocked, i.e., targeted to be used for specific intended resources and processes, later redefined with detailed factory setup layouts. Each detailed layout is based on the block layout and inputs from cross-disciplinary preparation processes, such as manufacturing sequence, productivity, workers’ workplace requirements, and resource setup preparation. However, this activity is often not carried out with all variables considered simultaneously, which might entail a risk of sub-optimizing the detailed layout based on manual decisions. Therefore, this work aims to realize a digital method for assembly manufacturing layout planning where productivity, area utilization, and ergonomics can be considered simultaneously in a cross-disciplinary manner. The purpose of the digital method is to support engineers in finding optimized designs of detailed layouts for assembly manufacturing factories, thereby facilitating better decisions regarding setups of future factories. Input datasets are company-specific descriptions of required dimensions for specific area reservations, such as defined dimensions of a worker’s workplace, material façades, aisles, and the sequence to realize the product assembly manufacturing process. To test and iteratively develop the digital method, a demonstrator has been developed with an adaptation of existing software that simulates and proposes optimized designs of detailed layouts. 
Since the method is to consider productivity, ergonomics, area utilization, and constraints from the automatically generated block layout, a multi-objective optimization approach is utilized. In the demonstrator, the input data are sent to the simulation software Industrial Path Solutions (IPS). Based on the input and Lua scripts, the IPS software generates a block layout in compliance with the company’s defined dimensions of area reservations. Communication is then established between IPS and the software EPP (Ergonomics in Productivity Platform), including intended resource descriptions, the assembly manufacturing process, and manikin (digital human) resources. Using multi-objective optimization approaches, the EPP software then calculates layout proposals that are iteratively sent back to IPS, where they are simulated and rendered, following the rules and regulations defined in the block layout as well as productivity and ergonomics constraints and objectives. The software demonstrator is promising. It can handle several parameters to optimize the detailed layout simultaneously and can put forward several proposals. It can optimize multiple parameters or weight the parameters to fine-tune the optimal result of the detailed layout. The intention of the demonstrator is to make the preparation between cross-disciplinary silos transparent and to achieve a common preparation of the assembly manufacturing factory setup, thereby facilitating better decisions.
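The weighting of parameters mentioned above can be sketched with a simple weighted-sum scalarization; this is only an illustrative stand-in for the IPS/EPP optimization, and all objective names and values are hypothetical:

```python
def weighted_score(layout, weights):
    """Weighted-sum scalarization; each objective is normalized to [0, 1], lower is better."""
    return sum(weights[k] * layout[k] for k in weights)

# Candidate detailed layouts scored on three (hypothetical) normalized objectives
layouts = [
    {"cycle_time": 0.40, "ergo_risk": 0.20, "area_used": 0.70},
    {"cycle_time": 0.55, "ergo_risk": 0.10, "area_used": 0.50},
]
weights = {"cycle_time": 0.5, "ergo_risk": 0.3, "area_used": 0.2}
best = min(layouts, key=lambda l: weighted_score(l, weights))
```

Adjusting the weights corresponds to fine-tuning the trade-off between productivity, ergonomics, and area utilization.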

Keywords: factory setup, multi-objective, optimization, simulation

Procedia PDF Downloads 150
197 Analysis of Interparticle Interactions in High Waxy-Heavy Clay Fine Sands for Sand Control Optimization

Authors: Gerald Gwamba

Abstract:

Formation sand production in oil wells is one of the oldest and greatest concerns for the oil and gas industry. The production of sand particles may vary from very small, limited amounts to far more elevated levels, which have the potential to plug the pore spaces near the perforations or to block production at surface facilities. Therefore, timely and reliable investigation of the conditions leading to the onset of sanding, and quantification of sanding during production, is imperative. The challenges of sand production are even greater when producing from waxy and heavy wells with clay fine sands (WHFC). Existing research argues that waxy and heavy hydrocarbons exhibit far differing characteristics, with waxy crude oils more paraffinic and heavy crude oils more asphaltenic. Moreover, the combined effect of WHFC conditions presents more complexity in production than either individual effect, as the effects consolidate into a combined opposing force. Research on a combined high-WHFC system could therefore better represent this combined effect and is more comparable to field conditions; a one-sided view of either individual effect on sanding has been argued to be somewhat misrepresentative of actual field conditions, since all factors act together. In recognition of the limited customized research on sand production under the combined WHFC effect, our research seeks to apply the Design of Experiments (DOE) methodology, based on the latest literature, to analyze the relationship between various interparticle factors in relation to selected sand control methods. Our research aims to develop a better understanding of how the combined effect of interparticle factors, including strength, cementation, particle size and production rate, among others, could assist in the design of an optimal sand control system for WHFC well conditions.
In this regard, we seek to answer the following research question: How does the combined effect of interparticle factors affect the optimization of sand control systems for WHFC wells? Results from experimental data collection will inform a better-justified sand control design for WHFC wells. In doing so, we hope to contribute to earlier debates arguing that sand production could potentially enable well self-permeability enhancement through the establishment of new flow channels created by the loosening and detachment of sand grains. We hope that our research will contribute to future sand control designs capable of adapting to flexible production adjustments in controlled sand management. This paper presents results that are part of ongoing research towards the author's PhD project on the optimization of sand control systems for WHFC wells.
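A two-level full factorial design, one common DOE layout, can be sketched as follows; the factor names follow the interparticle factors listed above, while the levels are hypothetical placeholders:

```python
from itertools import product

# Two-level full factorial design over three interparticle factors (levels hypothetical)
factors = {
    "cementation": ["low", "high"],
    "particle_size": ["fine", "coarse"],
    "production_rate": ["low", "high"],
}
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
# 2^3 = 8 experimental runs, one per combination of factor levels
```

Each run then corresponds to one laboratory test condition whose sanding response is measured.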

Keywords: waxy-heavy oils, clay-fine sands, sand control optimization, interparticle factors, design of experiments

Procedia PDF Downloads 131
196 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical-based passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. The results of the study are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time. Three different duration timers, namely 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic the shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. In this study, machine learning algorithms were applied to determine whether a person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare their performance in user authentication: namely, Decision Trees, Linear Discriminant Analysis, Naive Bayes Classifier, Support Vector Machines (SVMs) with a Gaussian radial basis kernel function, and K-Nearest Neighbor. Gesture-based password features vary from one entry to the next, making it difficult to distinguish between a creator and an intruder for authentication.
For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using combinations of data from the four sessions: Classifiers A, B, and C were trained and tested using data from the password creation session together with the password replication sessions with timers of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using the five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using the five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with a Gaussian radial basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
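As a hedged sketch of how such a classifier separates genuine users from imposters, the fragment below implements a plain k-nearest-neighbor vote over the four normalized features (score, length, speed, size); the feature vectors are synthetic illustrations, not data from the study:

```python
import math

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest labeled feature vectors."""
    dists = sorted((math.dist(features, query), label) for features, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Features per entry: (score, length, speed, size), normalized to [0, 1]; synthetic values
train = [
    ((0.90, 0.80, 0.30, 0.70), "genuine"),
    ((0.85, 0.75, 0.35, 0.65), "genuine"),
    ((0.40, 0.50, 0.90, 0.30), "imposter"),
    ((0.35, 0.55, 0.85, 0.25), "imposter"),
]
label = knn_predict(train, (0.88, 0.78, 0.32, 0.68))
```

An entry whose behavioral features sit close to the creator's own entries is voted "genuine"; an entry replicated after shoulder-surfing tends to drift in speed and size and is voted "imposter".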

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 107
195 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of utilization. In all these fields, the amount of collected data is increasing quickly, but with the increase of the data, computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are software implementations only and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, the hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input. The output of the algorithm is the superreduct, which is the reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table.
The algorithm described above has two disadvantages: i) it generates the superreduct instead of the reduct, and ii) the additional first stage may be unnecessary if the core is empty. But for systems focused on fast computation of the reduct, the first disadvantage is not a key problem. The core calculation can be achieved with a combinational logic block, and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC. The execution times of the reduct calculation in hardware and in software were compared. The results show an increase in the speed of data processing.
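The two-stage procedure can be sketched in software terms as follows; this is a minimal sketch of the greedy scheme described above, not the FPGA implementation, and the toy decision table is hypothetical. Here attribute frequency is counted over the remaining discernibility-matrix entries:

```python
from itertools import combinations

def discernibility_pairs(table, decision):
    """Discernibility-matrix entries: for each pair of objects with different
    decisions, the set of condition attributes that discern them."""
    pairs = []
    for i, j in combinations(range(len(table)), 2):
        if decision[i] != decision[j]:
            pairs.append({a for a in range(len(table[i])) if table[i][a] != table[j][a]})
    return pairs

def greedy_superreduct(table, decision):
    pairs = discernibility_pairs(table, decision)
    # Stage 1: the core = attributes that are the sole discerner of some pair
    chosen = {next(iter(p)) for p in pairs if len(p) == 1}
    uncovered = [p for p in pairs if not (p & chosen)]
    # Stage 2: greedily enrich with the attribute covering most remaining entries
    while uncovered:
        counts = {}
        for p in uncovered:
            for a in p:
                counts[a] = counts.get(a, 0) + 1
        best = max(counts, key=counts.get)
        chosen.add(best)
        uncovered = [p for p in uncovered if best not in p]
    return chosen

# Toy decision table: rows = objects, columns = condition attributes
table = [(0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 0, 1)]
decision = [0, 0, 1, 1]
superreduct = greedy_superreduct(table, decision)
```

The returned attribute set discerns every pair of objects with different decisions, which is the property the hardware blocks verify in a single step.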

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 219
194 Healing (in) Relationship: The Theory and Practice of Inner-Outer Peacebuilding in North-Western India

Authors: Josie Gardner

Abstract:

The overall intention of this research is to reimagine peacebuilding in both theory and practical application in light of the shortcomings and unsustainability of the current peacebuilding paradigm. These limitations are identified here as an overly rational-material approach to peacebuilding that neglects the inner dimension of peace in favour of a fragmented rather than holistic model, and that espouses a conflict- and violence-centric approach to peacebuilding. In counter, this presentation investigates the dynamics of inner and outer peace as a holistic, complex system towards ‘inner-outer’ peacebuilding. This paper draws from primary research in the protracted conflict context of north-western India (Jammu, Kashmir & Ladakh) as a case study. This presentation has two central aims. First, to introduce the process of inner (psycho-spiritual) peacebuilding, which has thus far been neglected by mainstream and orthodox literature. Second, to examine why inner peacebuilding is essential for realising sustainable peace on a broader scale as outer (socio-political) peace, and to better understand how the inner and outer dynamics of peace relate to and affect one another. To these ends, Josephine (the researcher/author/presenter) partnered with Yakjah Reconciliation and Development Network to implement a series of action-oriented workshops and retreats centred around healing, reconciliation, leadership, and personal development for the dual purpose of collaboratively generating data, theory, and insights, as well as providing the youth leaders with an experiential, transformative experience. The research team created and used a novel methodological approach called Mapping Ritual Ecologies, which draws from Participatory Action Research and Digital Ethnography to form a collaborative research model with a group of 20 youth co-researchers who are emerging youth peace leaders in Kashmir, Jammu, and Ladakh.
This research found significant intra- and inter-personal shifts towards an experience of inner peace through inner peacebuilding activities. Moreover, this process of inner peacebuilding affected the participants' families and communities through interpersonal healing and peace leadership in an inside-out process of change. These activities have generated rich insights and supported emerging theories about the dynamics between inner and outer peace, power, justice, and collective healing. This presentation argues that the largely neglected dimension of inner (psycho-spiritual) peacebuilding is imperative for broader socio-political (outer) change. Changing structures of oppression, injustice, and violence (i.e., structures of separation) requires individual, interpersonal, and collective healing. While this presentation primarily examines and advocates for inside-out peacebuilding and social justice, it will also touch upon the effect of systems of separation on the inner condition and human experience. This research reimagines peacebuilding as a holistic inner-outer approach. This offers an alternative path forward that weaves together self-actualisation and social justice. While contextualised within north-western India with a small case study population, the findings speak also to other conflict contexts as well as our global peacebuilding and social justice milieu.

Keywords: holistic, inner peacebuilding, psycho-spiritual, systems, youth

Procedia PDF Downloads 120
193 CSR Communication Strategies: Stakeholder and Institutional Theories Perspective

Authors: Stephanie Gracelyn Rahaman, Chew Yin Teng, Manjit Singh Sandhu

Abstract:

Corporate scandals have made stakeholders apprehensive of large companies and led them to expect greater transparency in CSR matters. However, companies find it challenging to strategically communicate CSR to the intended stakeholders and in the process may fall short of maximizing their CSR efforts. Given that stakeholders have the ability to reward good companies, or to take legal action against or boycott corporate brands that do not act socially responsibly, companies must create a shared understanding of their CSR activities. As a result, communication has become a strategy for many companies to demonstrate CSR engagement and to minimize stakeholder skepticism. The main objective of this research is to examine the types of CSR communication strategies and the predictors that guide them. Employing Morsing & Schultz’s guide on CSR communication strategies, the study integrates stakeholder and institutional theory to develop a conceptual framework. The conceptual framework hypothesized that stakeholder (instrumental and normative) and institutional (regulatory environment, nature of business, mimetic intention, CSR focus and corporate objectives) dimensions would drive CSR communication strategies. Preliminary findings from semi-structured interviews in Malaysia are consistent with the conceptual model in that stakeholder and institutional expectations guide CSR communication strategies. Findings show that most companies use two-way communication strategies. Companies that identified employees, the public or customers as key stakeholders have started to embrace social media to stay in sync with new trends of communication. This is especially true for Generation Y, which is their priority. Some companies creatively use multiple communication channels because they recognize that different stakeholders favor different communication channels.
Therefore, it appears that companies use two-way communication strategies to complement the perceived limitations of one-way communication strategies, as some companies prefer a more interactive platform to strategically engage stakeholders in CSR communication. In addition to stakeholders, institutional expectations also play a vital role in influencing CSR communication. Due to industry peer pressure and corporate objectives (attracting international investors and customers), companies may be more driven to excel in social performance. For these reasons, companies tend to go beyond the basic mandatory requirements, excel in CSR activities and become known as companies that champion CSR. In conclusion, companies use more two-way than one-way communication, and they use a combination of one- and two-way communication to target different stakeholders, as a result of stakeholder and institutional dimensions. Finally, in order to find out whether the conceptual framework actually fits the Malaysian context, companies’ responses regarding expected organizational outcomes from communicating CSR were gathered from the interview transcripts. Thereafter, findings are presented to show some of the key organizational outcomes (visibility and brand recognition, portraying a responsible image, attracting prospective employees, positive word-of-mouth, etc.) that companies in Malaysia expect from CSR communication. Based on these findings, the conceptual framework has been refined to include the newly identified organizational outcomes.

Keywords: CSR communication, CSR communication strategies, stakeholder theory, institutional theory, conceptual framework, Malaysia

Procedia PDF Downloads 289
192 Rheolaser: Light Scattering Characterization of Viscoelastic Properties of Hair Cosmetics That Are Related to Performance and Stability of the Respective Colloidal Soft Materials

Authors: Heitor Oliveira, Gabriele De-Waal, Juergen Schmenger, Lynsey Godfrey, Tibor Kovacs

Abstract:

Rheolaser MASTER™ makes use of the multiple scattering of light, caused by scattering objects in a continuous medium (such as droplets and particles in colloids), to characterize the viscoelasticity of soft materials. It offers an alternative to conventional rheometers for characterizing the viscoelasticity of products such as hair cosmetics. Up to six measurements at controlled temperature can be carried out simultaneously (10-15 min), and the method requires only minor sample preparation work. In contrast to conventional rheometer-based methods, no mechanical stress is applied to the material during the measurements. Therefore, the properties of the exact same sample can be monitored over time, as in aging and stability studies. We determined the elastic index (EI) of water/emulsion mixtures (1 ≤ fat alcohols (FA) ≤ 5 wt%) and emulsion/gel-network mixtures (8 ≤ FA ≤ 17 wt%) and compared it with the elastic/storage modulus (G’) of the respective samples, measured using a conventional TA rheometer with flat-plate geometry. As expected, it was found that log(EI) vs log(G’) presents a linear behavior. Moreover, log(EI) increased in a linear fashion with solids level over the entire range of compositions (1 ≤ FA ≤ 17 wt%), while rheometer measurements were limited to samples down to a 4 wt% solids level; a concentric cylinder geometry would alternatively be required for more dilute samples (FA < 4 wt%), and rheometer results from different sample-holder geometries are not comparable. Plots of the Rheolaser output parameter solid-liquid balance (SLB) vs EI were suitable for monitoring product aging processes. These data could quantitatively describe observations such as the formation of lumps over aging time. Moreover, this method allowed us to identify that different specifications of a key raw material (RM < 0.4 wt%) in the respective gel-network (GN) product have minor impact on product viscoelastic properties and are not consumer-perceivable after a short aging time.
Broadening an RM spec range typically has a positive impact on cost savings. In addition, the photon path length (λ*), which according to Mie theory is proportional to droplet size and inversely proportional to the volume fraction of scattering objects, together with the EI, was suitable for characterizing product destabilization processes (e.g., coalescence and creaming) and for predicting product stability about eight times faster than our standard methods. Using these parameters, we could successfully identify formulation and process parameters that resulted in unstable products. In conclusion, Rheolaser allows quick and reliable characterization of the viscoelastic properties of hair cosmetics that are related to their performance and stability. It operates over a broad range of product compositions and has applications spanning from the formulation of our hair cosmetics to fast release criteria in our production sites. Finally, this powerful tool has a positive impact on R&D development time (faster delivery of new products to the market) and consequently on cost savings.
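The reported linear relation between log(EI) and log(G') can be checked with an ordinary least-squares fit in log-log space; the sketch below uses synthetic data following EI = 10·G' exactly (the actual EI and G' values from the study are not reproduced here):

```python
import math

def fit_loglog(ei, g_prime):
    """Least-squares slope and intercept of log10(EI) vs log10(G')."""
    xs = [math.log10(g) for g in g_prime]
    ys = [math.log10(e) for e in ei]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Synthetic data obeying EI = 10 * G'^1 exactly, so the fitted slope is 1.0
slope, intercept = fit_loglog([10, 100, 1000], [1, 10, 100])
```

A slope close to 1 with a good fit would support using EI as a drop-in proxy for G' across the composition range.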

Keywords: colloids, hair cosmetics, light scattering, performance and stability, soft materials, viscoelastic properties

Procedia PDF Downloads 172
191 A Study of the Challenges in Adoption of Renewable Energy in Nigeria

Authors: Farouq Sule Garo, Yahaya Yusuf

Abstract:

The purpose of this study is to investigate why there is a general lack of successful adoption of sustainable energy in Nigeria. This is particularly important given the current global campaign for net-zero emissions. The 26th United Nations Conference of the Parties (COP26), held in 2021, was hosted by the UK in Glasgow, where, amongst other things, countries including Nigeria agreed to a zero-emissions pact. There is, therefore, an obligation on the part of Nigeria to transition from a fossil fuel-based economy to a sustainable net-zero-emissions economy. The adoption of renewable energy is fundamental to achieving this ambitious target if decarbonisation of economic activities is to become a reality. Nigeria has an abundance of renewable energy sources, yet uptake has been poor, and where attempts have been made to develop and harness renewable energy resources, there has been limited success. It is not entirely clear why this is the case. When analysts allude to corruption as the reason for the failure to adopt renewable energy or implement projects successfully, it is arguable that corruption alone cannot explain the situation. Therefore, there is a need for a thorough investigation into the underlying issues surrounding the poor uptake of renewable energy in Nigeria. This pilot study, drawing upon stakeholder theory, adopts a multi-stakeholder perspective to investigate the influence and impacts of economic, political, technological and social factors on the adoption of renewable energy in Nigeria. The research will also investigate how these factors shape (or fail to shape) strategies for achieving successful adoption of renewable energy in the country. A qualitative research methodology has been adopted, given that the research requires in-depth studies in specific settings rather than a general population survey. A number of interviews will be conducted, each allowing thorough probing of sources.
The six interviews conducted so far primarily focused on the economic dimensions of the challenges in adopting renewable energy. The six participants in these initial interviews were all connected to the Katsina Wind Farm Project, which was conceived and built with a view to diversifying Nigeria's energy mix and capitalising on the vast wind energy resources in the northern region. The findings from the six interviews provide insights into how economic factors impact the wind farm project. Some key drivers have been identified, including strong governmental support and the recognition of the need for energy diversification. These drivers have played crucial roles in initiating and advancing the Katsina Wind Farm Project. In addition, the initial analysis has highlighted various challenges encountered during the project's implementation, including financial, regulatory, and environmental aspects. These challenges provide valuable lessons that can inform strategies to mitigate risks and improve future wind energy projects.

Keywords: challenges in adoption of renewable energy, economic factors, net-zero emission, political factors

Procedia PDF Downloads 39
190 A Model to Assess Sustainability Using Multi-Criteria Analysis and Geographic Information Systems: A Case Study

Authors: Antonio Boggia, Luisa Paolotti, Gianluca Massei, Lucia Rocchi, Elaine Pace, Maria Attard

Abstract:

The aim of this paper is to present a methodology and a computer model for sustainability assessment based on the integration of Multi-criteria Decision Analysis (MCDA) with a Geographic Information System (GIS). It presents the results of a study on the implementation of a model for measuring sustainability, intended to inform policy actions for the improvement of sustainability at the territory level. The aim is to rank areas in order to understand the specific technical and/or financial support that is required to develop sustainable growth. Assessing sustainable development is a multidimensional problem: economic, social and environmental aspects have to be taken into account at the same time. The tool for a multidimensional representation is a proper set of indicators. The set of indicators must be integrated into a model, that is, an assessment methodology, to be used for measuring sustainability. The model, developed by the Environmental Laboratory of the University of Perugia, is called GeoUmbriaSUIT. It is a calculation procedure developed as a plugin for the open-source GIS software QuantumGIS. The multi-criteria method used within GeoUmbriaSUIT is the TOPSIS algorithm (Technique for Order Preference by Similarity to Ideal Solution), which defines a ranking based on the distance from the worst point and the closeness to an ideal point, for each of the criteria used. For the sustainability assessment procedure, GeoUmbriaSUIT uses a geographic vector file where the graphic data represent the study area and the single evaluation units within it (the alternatives, e.g. the regions of a country, or the municipalities of a region), while the alphanumeric data (attribute table) describe the environmental, economic and social aspects related to the evaluation units by means of a set of indicators (criteria).
The algorithm available in the plugin allows the indicators representing the three dimensions of sustainability to be treated individually and three different indices to be computed: an environmental index, an economic index and a social index. The graphic output of the model allows for an integrated assessment of the three dimensions, avoiding aggregation. The separate indices and graphic output make GeoUmbriaSUIT a readable and transparent tool, since it does not produce an aggregate index of sustainability, which is often cryptic and difficult to interpret, as the final result of the calculations. In addition, it is possible to develop a “back analysis” able to explain the positions obtained by the alternatives in the ranking, based on the criteria used. The case study presented is an assessment of the level of sustainability in the six regions of Malta, an island state in the middle of the Mediterranean Sea and the southernmost member of the European Union. The results show that the integration of MCDA and GIS is an adequate approach for sustainability assessment. In particular, the implemented model is able to provide easy-to-understand results. This is a very important condition for a sound decision support tool, since most of the time decision makers are not experts and need understandable output. In addition, the evaluation path is traceable and transparent.
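The TOPSIS step can be sketched as follows; this is a generic textbook implementation (vector normalization, weighting, and closeness to the ideal point), not the GeoUmbriaSUIT code itself, and the example decision matrix is hypothetical:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives: matrix rows = alternatives, columns = criteria.

    benefit[j] is True when a higher value on criterion j is better.
    Returns one closeness score in [0, 1] per alternative (higher = better)."""
    ncols = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)  # distance to the ideal point
        d_neg = math.dist(row, worst)  # distance to the worst point
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# A dominating alternative receives score 1.0, a dominated one 0.0
scores = topsis([[2, 2], [1, 1]], weights=[0.5, 0.5], benefit=[True, True])
```

Running the same routine separately on the environmental, economic and social indicator blocks yields the three per-dimension indices described above.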

Keywords: GIS, multi-criteria analysis, sustainability assessment, sustainable development

Procedia PDF Downloads 289
189 A Computational Approach to Screen Antagonist’s Molecule against Mycobacterium tuberculosis Lipoprotein LprG (Rv1411c)

Authors: Syed Asif Hassan, Tabrej Khan

Abstract:

Tuberculosis (TB), caused by the bacillus Mycobacterium tuberculosis (Mtb), continues to take a disturbing toll on human life and healthcare facilities worldwide. The global burden of TB remains enormous. The alarming rise of multi-drug resistant strains of Mycobacterium tuberculosis calls for an increase in research efforts towards the development of new target-specific therapeutics against diverse strains of M. tuberculosis. Therefore, the discovery of new molecular scaffolds targeting new drug sites should be a priority in a workable plan for fighting resistance in Mycobacterium tuberculosis (Mtb). The Mtb non-acylated lipoprotein LprG (Rv1411c) has a Toll-like receptor 2 (TLR2) agonist action that depends on its association with triacylated glycolipids, which bind specifically to the hydrophobic pocket of the Mtb LprG lipoprotein. The detection of a glycolipid carrier function has important implications for the role of LprG in mycobacterial physiology and virulence. Therefore, considering the pivotal role of glycolipids in mycobacterial physiology and host-pathogen interactions, designing competitive antagonist (chemotherapeutic) ligands that competitively bind to the glycolipid binding domain of the LprG lipoprotein should lead to inhibition of tuberculosis infection in humans. In this study, a unified approach was implemented, involving a ligand-based virtual screening protocol with the USRCAT (Ultrafast Shape Recognition with CREDO Atom Types) software and molecular docking studies using AutoDock Vina 1.1.2 with the X-ray crystal structure of the Mtb LprG protein. The docking results were further confirmed by DSX (DrugScore eXtended), a robust program for evaluating the binding energy of ligands bound to the ligand binding domain of the Mtb LprG lipoprotein; a ligand with higher predicted affinity yields a more negative score. 
Based on the USRCAT results, Lipinski’s values and the molecular docking results, [(2R)-2,3-di(hexadecanoyl oxy)propyl][(2S,3S,5S,6R)-3,4,5-trihydroxy-2,6-bis[[(2R,3S,4S,5R,6S)-3,4,5-trihydroxy-6 (hydroxymethyl)tetrahydropyran-2-yl]oxy]cyclohexyl] phosphate (XPX) was confirmed as a promising drug-like lead compound (antagonist) binding specifically to the hydrophobic domain of the LprG protein with an affinity greater than that of PIM2 (an agonist of the LprG protein), with a free binding energy of -9.98e+006 kcal/mol and a binding affinity of -132 kcal/mol, respectively. Further in vitro assays of this compound are required to establish its potency in inhibiting the molecular evasion mechanism of Mtb within infected host macrophages. These results will certainly be helpful in future anti-TB drug discovery efforts against multidrug-resistant tuberculosis (MDR-TB).
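The Lipinski filtering step mentioned above can be sketched as follows. This is a generic rule-of-five check, not the authors' screening code; the descriptor values (molecular weight, logP, hydrogen-bond donors and acceptors) are assumed to be precomputed, e.g. by a cheminformatics toolkit, and the example values are invented.

```python
def passes_lipinski(mw, logp, h_donors, h_acceptors):
    """Lipinski's rule of five: flag compounds likely to be orally bioavailable.
    By convention a compound 'passes' if it violates at most one rule."""
    violations = sum([
        mw > 500,          # molecular weight should be <= 500 Da
        logp > 5,          # octanol-water partition coefficient <= 5
        h_donors > 5,      # no more than 5 hydrogen-bond donors
        h_acceptors > 10,  # no more than 10 hydrogen-bond acceptors
    ])
    return violations <= 1

# hypothetical descriptor values for two candidate ligands
print(passes_lipinski(mw=350.0, logp=2.1, h_donors=2, h_acceptors=5))
print(passes_lipinski(mw=900.0, logp=7.5, h_donors=8, h_acceptors=14))
```

In a virtual screening pipeline such a predicate is typically applied to the candidate pool before the more expensive docking stage.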

Keywords: antagonist, agonist, binding affinity, chemotherapeutics, drug-like, multi drug resistance tuberculosis (MDR-TB), RV1411c protein, toll-like receptor (TLR2)

Procedia PDF Downloads 271
188 ‘Green Gait’ – The Growing Relevance of Podiatric Medicine amid Climate Change

Authors: Angela Evans, Gabriel Gijon-Nogueron, Alfonso Martinez-Nova

Abstract:

Background: The health sector, whose mission is protecting health, also contributes to the climate crisis, the greatest health threat of the 21st century. The carbon footprint of healthcare exceeds 5% of emissions globally, surpassing 7% in the USA and Australia. Global recognition has led to the Paris Agreement, the United Nations Sustainable Development Goals, and the World Health Organization's Climate Change Action Plan. It is agreed that the majority of health impacts stem from energy and resource consumption, as well as the production of greenhouse gases and deforestation. Many professional medical associations and healthcare providers advocate for their members to take the lead in environmental sustainability. Objectives: To promote and expand ‘Green Podiatry’ via the three pillars of Exercise, Evidence and Everyday changes, and to highlight the benefits of physical activity and exercise for both human health and planetary health. Walking and running are beneficial for health, provide low-carbon transport, and have evidence-based health benefits. Podiatrists are key healthcare professionals in the physical activity space and can influence and guide their patients to increase physical activity and avert the many non-communicable diseases that are decimating public health, e.g. diabetes, arthritis, depression, cancer and obesity. Methods: Publications, conference presentations, and pilot projects pertinent to ‘Green Podiatry’ have been activated since 2021, and a survey of podiatrists’ knowledge and awareness has been undertaken. The survey assessed attitudes towards environmental sustainability in the work environment. The questions addressed commuting habits, hours of physical exercise per week, and attitudes in the clinic, such as prescribing unnecessary treatments or emphasizing sports as a primary treatment. Results: Teaching and learning modules have been developed for podiatric medicine students and graduates globally. These will be made available. 
A pilot foot orthoses recycling project has been undertaken and will be reported, in addition to established footwear recycling. The preliminary survey found that almost 90% of respondents had no knowledge of green podiatry or footwear recycling. Only 30% prescribe sports/exercise as the primary treatment for patients, and 45% do not prescribe unnecessary treatments. Conclusions: Podiatrists are well positioned to lead in the crucial area of healthcare and its climate change implications. Sufficient education of podiatrists is essential for the profession to beneficially promote health and physical activity, to the benefit of all people and all communities.

Keywords: climate change, gait, green, healthcare, sustainability

Procedia PDF Downloads 90
187 Typology of Fake News Dissemination Strategies in Social Networks in Social Events

Authors: Mohadese Oghbaee, Borna Firouzi

Abstract:

The emergence of the Internet, and more specifically the formation of social media, has provided the ground for paying attention to new types of content dissemination. In recent years, social media users have shared information, communicated with others, and exchanged opinions on social events in this space. Much of the information published in this space is suspicious and produced with the intention of deceiving others. Such content is often called “fake news”. By mixing with correct information and misleading public opinion, fake news has the ability to endanger the security of countries and deprive audiences of the basic right of free access to real information. Competing governments, opposition elements, profit-seeking individuals and even rival organizations, aware of this capacity, act to distort and overturn the facts in the virtual space of target countries and communities on a large scale and to steer public opinion towards their goals. This process of extensive de-truthing of societies’ information space has created a wave of harm and worry all over the world. These concerns have opened a new path of research for the timely containment and reduction of the destructive effects of fake news on public opinion. In addition, the expansion of this phenomenon has the potential to create serious and important problems for societies, and its impact on events such as the 2016 American elections, Brexit, the 2017 French elections and the 2019 Indian elections has caused concern and led to the adoption of countermeasures. A simple look at the growth trend of research in “Scopus” shows a steady increase in research with the keyword “false information”, which peaked in 2020 at 524 items, whereas in 2015 only 30 scientific-research publications appeared in this field. 
Considering that one of the capabilities of social media is to provide a context for the dissemination of news and information, both true and false, this article investigates a classification of strategies for spreading fake news in social networks during social events. To achieve this goal, the thematic analysis research method was chosen. First, an extensive library study of global sources was conducted. Then, in-depth interviews were conducted with 18 well-known specialists and experts in the field of news and media in Iran, selected by purposeful sampling. By analysing the data using thematic analysis, a set of strategies was obtained. The strategies identified so far (the research is in progress) include: unrealistically strengthening/weakening the speed and content of the event, stimulating psycho-media movements, targeting emotionally susceptible audiences such as women, teenagers and young people, reinforcing public hatred, framing reactions to events as legitimate/illegitimate, incitement to physical conflict, oversimplification of violent protests, and targeted publication of images and interviews.

Keywords: fake news, social network, social events, thematic analysis

Procedia PDF Downloads 63
186 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics

Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez

Abstract:

In recent years, Mexico’s population has seen a rise in negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, generating an important impact on a person’s physical, mental and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person’s emotional state, can be applied in an effort to decrease these negative effects. Using different non-invasive sensors such as EEG, luminosity and face recognition, we gather information on the subject’s current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed in order to filter only the specific groups of information that relate to the subject’s emotions and the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level and noise. Finally, a neural-network-based control algorithm is fed the data obtained in order to close the feedback loop and automate the modification of the environment variables and the audiovisual content shown, so that these changes can positively alter the subject’s emotional state. During the research, it was found that the light color was directly related to the type of impact generated by the audiovisual content on the subject’s emotional state. Red illumination increased the impact of violent images, while green illumination along with relaxing images decreased the subject’s levels of anxiety. Specific differences between men and women were found as to which type of images generated a greater impact in each gender. 
The population sample was mainly constituted by college students, whose data analysis showed a decreased sensitivity to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us a better insight into the possibilities of emotional domotics and the applications that can be created for the improvement of performance in people’s lives. The objective of this research is to create a positive impact through the application of technology to everyday activities; nonetheless, an ethical problem arises, since the same techniques could also be applied to control a person’s emotions and shift their decision making.
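The feedback step described above, mapping the detected emotional state and content type to environment changes, can be illustrated with a simple rule-based sketch. This is a hypothetical simplification encoding only the reported red/green lighting findings, not the authors' neural-network controller; the threshold, function name and variable names are invented.

```python
def adjust_environment(anxiety_level, violent_content):
    """Toy control rule reflecting the reported findings: green light with
    relaxing content lowered anxiety, while red light amplified the impact
    of violent imagery. anxiety_level is assumed normalised to [0, 1]."""
    if anxiety_level > 0.7:  # hypothetical anxiety threshold
        # calm the subject: green illumination plus relaxing content
        return {"rgb": (0, 255, 0), "content": "relaxing"}
    if violent_content:
        # avoid red illumination, which amplified the impact of violence
        return {"rgb": (255, 255, 255), "content": "neutral"}
    return {"rgb": (255, 255, 255), "content": "unchanged"}

print(adjust_environment(anxiety_level=0.9, violent_content=False))
```

In the actual system the left-hand side of each rule would be replaced by the trained network's output probabilities rather than fixed thresholds.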

Keywords: data analysis, emotional domotics, performance improvement, neural network

Procedia PDF Downloads 140
185 Cultural Identity and Self-Censorship in Social Media: A Qualitative Case Study

Authors: Nastaran Khoshsabk

Abstract:

The evolution of communication through the Internet has shaped and reshaped the self-presentation of social media users. Online communities both connect people and give voice to the voiceless, allowing them to present themselves nationally and globally. People all around the world experience censorship in different aspects of their lives. Censorship can be externally imposed because of political situations, or it can be self-imposed. Social media users choose the content they want to share and decide on the online audiences with whom they want to share it. Most social media networks, such as Facebook, enable their users to be selective about the shared content and its availability to other people. However, sometimes instead of targeting a specific audience, users censor themselves and decide not to share various forms of information. These decisions are of particular importance in countries such as Iran, where the Internet is not an arena of free self-presentation and people are encouraged to stay away from political participation and from acting against Islamic values. Facebook and some other social media tools are blocked in countries such as Iran. This project investigates the importance of social media in the lives of Iranians to explore how they present themselves and construct their digital selves. The notion of cultural identity is applied in this research to explore the educational and informative role of social media in the identity formation and cultural representation of Facebook users. This study explores the self-censorship of Iranian adult Facebook users through their online self-representation and communication on the Internet. The data in this qualitative multiple case study have been collected through individual synchronous online interviews with the researcher’s Facebook friends and through the analysis of the participants’ Facebook profiles and activities over a period of six months. 
The data are analysed with an emphasis on the identity formation of participants through the recognition of the underlying themes. The exploration of the online interviews is based on participants’ personal accounts of self-censorship and cultural understanding through using social media. The derived codes and themes have been categorised considering censorship and the place of culture in the representation of self. Participants were asked to explain their views about censorship and conservatism in using social media. They reported their thoughts about deciding which content to share on Facebook and which to self-censor, and their reasons behind these decisions. In particular, the codes and themes highlight the role of censorship in the representation of an idealised self: the ‘actual self’ turned out to be hidden by individuals for different reasons, such as its influence on their social status, academic achievements and job opportunities. It is hoped that this research will have implications for education contexts in countries that are experiencing social media filtering, by offering an increased understanding of the importance of online communities, which can provide an educational environment to talk and learn about social taboos and about constructing adult identity in virtual environments through cultural self-presentation.

Keywords: cultural identity, identity formation, online communities, self-censorship

Procedia PDF Downloads 237
184 Improving Recovery Reuse and Irrigation Scheme Efficiency – North Gaza Emergency Sewage Treatment Project as Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

The Gaza Strip (365 km2 and 1.8 million inhabitants), part of Palestine, is a semi-arid zone that relies solely on the Coastal Aquifer. The Coastal Aquifer is the only source of water, with only 5-10% of it suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority’s strategy is to find a non-conventional water resource in treated wastewater, both to cover agricultural requirements and to serve the population. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and infiltration basins, IBs), phase B (a new WWTP) and phase C (a Recovery and Reuse Scheme, RRS, to capture the spreading plume). Currently, only phase A is functioning. Nearly 23 Mm3 of partially treated wastewater have been infiltrated into the aquifer. Phases B and C have witnessed many delays, and this forced a reassessment of the original RRS design. An Environmental Management Plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, in order to measure the efficiency of the SAT (soil aquifer treatment) system and the spread of the contamination plume in relation to the efficiency of the proposed RRS, along with the proposed locations of the 27 recovery wells that form part of the RRS. The results from the monitored wells were assessed against PWA baseline data and fed into a groundwater model that simulates the plume, in order to propose the most suitable response to the delays. The redesign mainly manipulated the pumping rates of the wells, the proposed locations and the operating schedules (including well groupings). The proposed scenarios were simulated using Visual MODFLOW V4.2. The results of the monitored wells were assessed based on the location of the monitoring wells relative to the proposed recovery well locations (200 m, 500 m and 750 m away from the IBs). 
Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L), together with a decrease in chloride (from 1500 to below 900 mg/L), was found during the monitoring period, indicating expansion of the plume to this distance. At this rate, and given the time required to construct the recovery scheme, the RRS would fail to capture the plume if the original design were kept. Based on that, many simulations were conducted, leading to three main scenarios. The scenarios manipulated the starting dates, the pumping rates and the locations of the recovery wells. The plume expansion and path lines extracted from the model were monitored to determine how to prevent expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining the RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates. This scenario proposed adding two additional recovery wells at a location beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.
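The timing argument above, whether the plume reaches the recovery line before the RRS is built, can be made concrete with a back-of-the-envelope advective travel-time estimate. The hydraulic parameter values below are hypothetical illustration values, not the site data used in the Visual MODFLOW model.

```python
def advective_travel_time(K, gradient, porosity, distance):
    """Estimate plume travel time under 1-D advection.

    Darcy seepage velocity: v = K * i / n  (m/day)
    Travel time:            t = distance / v  (days)
    """
    v = K * gradient / porosity
    return distance / v

# hypothetical coastal-aquifer values: K = 20 m/day, gradient = 0.005,
# effective porosity = 0.3, distance to the first recovery line = 500 m
t_days = advective_travel_time(K=20.0, gradient=0.005, porosity=0.3, distance=500.0)
print(round(t_days / 365, 1), "years for the front to travel 500 m")
```

Comparing such an estimate with the expected construction schedule is the simplest way to see why delayed recovery wells, or wells placed too close to the basins, can miss the plume entirely.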

Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 313
183 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle

Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores

Abstract:

This work introduces the use of EMG (electromyography) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate drone interfacing that goes beyond direct manual control. The MyoWare muscle sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm and the bicep. The raw voltages from each sensor were collected by an Arduino Uno, and a data processing algorithm was developed with the purpose of interpreting the voltage signals produced when flexing, resting, and moving the arm. Each sensor collected eight values over a two-second period for the duration of one minute per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, resulting in control of the drone's motion with left and right movements. This paper further investigated adding up to three sensors to differentiate between hand gestures controlling the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were: a resting position, a thumbs up, a hand swipe right motion, and a flexing position. The MATLAB software was utilized to collect, process, and analyze the signals from the sensors, and its machine learning tools were used to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. The neuromuscular information was then used to train an artificial neural network with one hidden layer of 10 neurons to categorize the four targets, one for each hand gesture. Once the machine learning training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class. 
Based on the resultant probability, once an output was greater than or equal to 80% for a specific target class, the drone would perform the expected motion. Afterward, each movement command was sent from the computer to the drone through a Wi-Fi network connection. These procedures have been successfully tested and integrated into trial flights, where the drone has responded successfully in real time to predefined command inputs produced by the machine learning algorithm through the MyoWare sensor interface. The full paper will describe in detail the database of hand gestures, the details of the ANN architecture, and the confusion matrix results.
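The input-vector construction described above (mean, root mean square and standard deviation per two-second window, per sensor channel) can be sketched as follows. The abstract's pipeline used MATLAB; this is an equivalent Python/NumPy illustration, and the sample voltages are invented.

```python
import numpy as np

def emg_features(window):
    """Features computed per two-second window: mean, RMS, standard deviation."""
    w = np.asarray(window, dtype=float)
    rms = np.sqrt(np.mean(w ** 2))
    return np.array([w.mean(), rms, w.std()])

def feature_vector(channels):
    """Concatenate the per-channel features into one ANN input vector."""
    return np.concatenate([emg_features(c) for c in channels])

# three sensor channels, eight voltage samples per two-second window
window = [[0.10, 0.40, 0.35, 0.20, 0.15, 0.30, 0.25, 0.10]] * 3
x = feature_vector(window)  # 9-element input (3 channels x 3 features)
```

With three channels the network therefore sees a 9-dimensional input, which the one-hidden-layer ANN maps to the four gesture-class probabilities.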

Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino

Procedia PDF Downloads 174
182 Planning Fore Stress II: Study on Resiliency of New Architectural Patterns in Urban Scale

Authors: Amir Shouri, Fereshteh Tabe

Abstract:

Thoughtful and sequential design strategies for master planning and urban infrastructure will play a major role in reducing the damage cities suffer from natural disasters, war, and social or population-related conflicts. Defensive strategies have been revised throughout the history of mankind after damage from natural disasters, experiences of war, and terrorist attacks on cities: lessons have been learnt from earthquakes, from the casualties of the two World Wars of the 20th century, and from terrorist activities of all times. Particularly after Hurricane Sandy in New York in 2012 and the September 11th attack on New York’s World Trade Centre (WTC) in the 21st century, there have been a series of serious collaborations between law-making authorities, urban planners and architects, and defence-related organizations to, firstly, prepare for and/or prevent such events and, secondly, reduce the human loss and economic damage to a minimum. This study will work on developing a planning model for New York City in which its citizens suffer minimum impacts during times of threat, with minimum economic damage to the city after the stress has passed. The main discussion in this proposal will focus on pre-hazard, hazard-time and post-hazard transformative policies and strategies that will reduce life casualties and ease economic recovery in post-hazard conditions. This proposal is going to scrutinize the idea that one of the key solutions on this path might be focusing on all the overlapping possibilities on the architectural platforms of three fundamental infrastructures (transportation, power-related sources and defensive capabilities) within a dynamic, transformative framework that provides maximum safety, a high level of flexibility and the fastest action-reaction opportunities in stressful periods of time. “Planning Fore Stress” is going to be carried out in an analytical, qualitative and quantitative framework, studying cases from all over the world. 
Technology, organic design, materiality, urban forms, city politics and sustainability will be discussed in different cases at an international scale: from the modern strategies of Copenhagen for living in harmony with nature to the traditional approaches of old Indonesian urban planning patterns, from the “Iron Dome” of Israel to the “Tunnels” in Gaza, from the “ultra-high-performance quartz-infused concrete” of Iran to the peaceful and nature-friendly strategies of Switzerland, and from “urban geopolitics” in cities, war and terrorism to the “design of sustainable cities” worldwide. All will be studied with references and a detailed look at the analysis of each case, in order to propose the most resourceful, practical and realistic solutions to questions on “new city divisions”, “new city planning and social activities” and “new strategic architecture for safe cities”. This study is a developed version of a proposal that was announced as a winner at MoMA in 2013 in a call for ideas for the Rockaways after Hurricane Sandy.

Keywords: urban scale, city safety, natural disaster, war and terrorism, city divisions, architecture for safe cities

Procedia PDF Downloads 484
181 Coming Closer to Communities of Practice through Situated Learning: The Case Study of Polish-English, English-Polish Undergraduate BA Level Language for Specific Purposes of Translation Class

Authors: Marta Lisowska

Abstract:

The growing trend of market specialization imposes upon translators the need for a proficient working knowledge of specialist discourse. The notion of specialization ranges from a broad general category to a highly specialized narrow field. Specialised discourse is used in channels of communication based upon distinctive features typical of communities of practice, whose coexistence is codified and hermetically locked against outsiders. Consequently, any translator deprived of professional discourse competence and social skills is incapable of providing a competent translation product from the source language into the target language. In this paper, we report on research that explores pedagogical practices aiming to bridge the gap between professionals and trainee specialist translators, while accounting for the reality of the world of the professional communities entered by undergraduates on two levels: the text-based generic level and the social one. Drawing on a functional social-constructivist approach, seen here as situated learning, this paper reports on the case of an English-Polish, Polish-English undergraduate BA-level LSP of law translation class run in line with a blended simulated classroom-based and reality-based (apprenticeship) approach. This blended method serves the purpose of introducing the young trainees to the professional world. The research provides new insights into how LSP translation undergraduates become legitimized through discursive and social participation and engagement. The undergraduates, situated peripherally at the outset, experience their own transformation towards becoming members of these professional groups. Through subjective evaluation, the trainees take a stance on this dual-mode class and on the development of their skills. 
Comparing and contrasting their own work done in line with the two models of translation teaching, authentic and near-authentic, the undergraduates answered research questions devised in a questionnaire survey. The responses take us closer to how students feel about the development of their LSP translation competence. The major findings show how the trainees perceive the benefits and hardships of their functional translation class. In terms of skills, they identified communication as the one most enhanced; they highly valued being ‘exposed’ to a variety of texts (cf. multiliteracies), team work, learning how to schedule work, the boost to their IT skills, and learning how to work individually. Another finding indicates that students struggled most with the specialized language and with co-working with other students. This short-term research captures the moment when the undergraduate LSP translation trainees entered the path of transformation, i.e. gained consciousness of ‘how it is’ to be a participant-translator in real-life communities of practice, gaining a pragmatic command of the social and linguistic skills understood here as discursive competence (text > genre > discourse > professional practice). The undergraduates need to be aware of the work they have to do and the challenges they are to face before arriving at the expert level of professional translation competence.

Keywords: communities of practice in LSP translation teaching, learning LSP translation as situated experience, peripheral participation, professional discourse for LSP translation teaching, professional translation competence

Procedia PDF Downloads 95
180 Predicting Susceptibility to Coronary Artery Disease using Single Nucleotide Polymorphisms with a Large-Scale Data Extraction from PubMed and Validation in an Asian Population Subset

Authors: K. H. Reeta, Bhavana Prasher, Mitali Mukerji, Dhwani Dholakia, Sangeeta Khanna, Archana Vats, Shivam Pandey, Sandeep Seth, Subir Kumar Maulik

Abstract:

Introduction: Research has demonstrated a connection between coronary artery disease (CAD) and genetics. We performed deep literature mining, using both bioinformatics and manual efforts, to identify polymorphisms conferring susceptibility to coronary artery disease, and the study further sought to validate these findings in an Asian population. Methodology: In the first phase, we used an automated pipeline which organizes and presents structured information on SNPs, populations and diseases. The information was obtained by applying Natural Language Processing (NLP) techniques to approximately 28 million PubMed abstracts. To accomplish this, we utilized Python scripts to extract and curate disease-related data, filter out false positives, and categorize the diseases into 24 hierarchical groups using Named Entity Recognition (NER) algorithms. From this extensive search, a total of 466 unique PubMed Identifiers (PMIDs) and 694 Single Nucleotide Polymorphisms (SNPs) related to coronary artery disease (CAD) were identified. To refine the selection, a thorough manual examination of all the studies was carried out: SNPs that demonstrated susceptibility to CAD and exhibited a positive odds ratio (OR) were selected, giving a final pool of 324 SNPs. The next phase involved validating the identified SNPs in DNA samples of 96 CAD patients and 37 healthy controls from an Indian population using a Global Screening Array. Results: Of the 324 SNPs, only 108 were detected in the samples; of these, 4 showed a significant difference in minor allele frequency between cases and controls. These were rs187238 of the IL-18 gene, rs731236 of the VDR gene, rs11556218 of the IL16 gene and rs5882 of the CETP gene. Prior research has associated these SNPs with mechanisms such as endothelial damage, vitamin D receptor (VDR) polymorphism susceptibility, and reduction of HDL-cholesterol levels, ultimately leading to the development of CAD. 
Among these, only rs731236 had previously been studied in an Indian population, and only in the context of diabetes and vitamin D deficiency. For the first time, these SNPs are reported to be associated with CAD in an Indian population. Conclusion: This pool of 324 SNPs is a unique resource that can help to uncover risk associations in CAD. Here, we validated it in an Indian population; further validation in different populations may offer valuable insights and contribute to the development of a screening tool, enabling the implementation of primary prevention strategies targeted at the vulnerable population.
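The first stage of the text-mining pipeline described above, pulling candidate SNP identifiers out of abstract text before NER-based disease and population tagging, can be sketched with a simple pattern match on dbSNP "rs" numbers. This is an illustrative fragment, not the authors' actual Python pipeline, and the example sentence is invented.

```python
import re

# dbSNP reference SNP identifiers: "rs" followed by digits
RS_ID = re.compile(r"\brs\d{3,}\b")

def extract_snps(abstract_text):
    """Return the unique, sorted SNP identifiers mentioned in an abstract."""
    return sorted(set(RS_ID.findall(abstract_text)))

text = ("We genotyped rs731236 of VDR and rs187238 of IL-18; "
        "rs731236 was replicated in the validation cohort.")
print(extract_snps(text))  # unique, sorted rs identifiers
```

In the full pipeline each extracted identifier would then be linked back to its PMID and to the disease and population entities tagged in the same abstract.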

Keywords: coronary artery disease, single nucleotide polymorphism, susceptible SNP, bioinformatics

Procedia PDF Downloads 76
179 Examining the Effect of Online English Lessons on Nursery School Children

Authors: Hidehiro Endo, Taizo Shigemichi

Abstract:

Introduction & Objectives: In 2008, the revised course of study for elementary schools was published by MEXT, and from the beginning of the 2011-2012 academic year, foreign language activities (English lessons) became mandatory for 5th and 6th graders in Japanese elementary schools. Foreign language activities are currently offered once a week for approximately 50 minutes by elementary school teachers, assistant language teachers who are native speakers of English, volunteers, and others, with the purpose of helping children become accustomed to functional English. However, the new policy has exposed a myriad of issues in conducting foreign language activities, since the majority of current elementary school teachers have neither English teaching experience nor English proficiency. Nevertheless, MEXT currently envisages converting foreign language activities into English as a formal subject in Japanese elementary schools (for 5th and 6th graders) from 2020, with the purpose of reforming English education in Japan. According to this new proposal, foreign language activities will also become mandatory for 3rd and 4th graders from 2020. Consequently, better access to English learning opportunities has become a primary concern even in early childhood education. Thus, in this project, we explore some nursery schools’ attempts to provide toddlers with online English lessons via Skype. The main purpose of this project is to look closely at the roles online English lessons play in guiding nursery school children to enjoy learning the English language and to acquire English communication skills. Research Methods: Setting: The main research site is a nursery school located in the northern part of Japan. The nursery school has been offering a 20-minute online English lesson via Skype twice a week to 7 toddlers since September 2015. The online English teacher is a man who lives in the Philippines.
Fieldwork & Data: We have just begun collecting data by attending the Skype English lessons. Direct observations are the principal component of the fieldwork. By closely observing how the toddlers respond to what the teacher does via Skype, we examine which components stimulate the toddlers to pay attention to the English lessons. Preliminary Findings & Expected Outcomes: Although both data collection and analysis are ongoing, we found that the online English teacher remembers each toddler's first name and addresses them by it via Skype, a technique that is crucial in motivating the toddlers to participate actively in the lessons. In addition, when the teacher asks the toddlers the English name of a plastic object, such as grapes, the toddlers tend to respond in Japanese. Accordingly, the effective use of Japanese in teaching English to nursery school children needs to be further examined. The anticipated results of this project are an increased recognition of the significance of creating English language learning opportunities for nursery school children and a significant contribution to the field of early childhood education.

Keywords: teaching children, English education, early childhood education, nursery school

Procedia PDF Downloads 329