Search results for: implicit features
415 Functional Surfaces and Edges for Cutting and Forming Tools Created Using Directed Energy Deposition
Authors: Michal Brazda, Miroslav Urbanek, Martina Koukolikova
Abstract:
This work focuses on the development of functional surfaces and edges for cutting and forming tools created through Directed Energy Deposition (DED) technology. In the context of growing challenges in modern engineering, additive technologies, especially DED, present an innovative approach to manufacturing tools for forming and cutting. One of the key features of DED is its ability to precisely and efficiently deposit fully dense metals from powder feedstock, enabling the creation of complex geometries and optimized designs. It is gradually becoming an increasingly attractive choice for tool production due to its ability to achieve high precision while simultaneously minimizing waste and material costs. Tools created using DED technology gain significant durability through the utilization of high-performance materials such as nickel alloys and tool steels. For high-temperature applications, Nimonic 80A alloy is applied, while for cold applications, M2 tool steel is used. The addition of ceramic materials, such as tungsten carbide, can significantly increase the tool's resistance. The introduction of functionally graded materials is a significant contribution, opening up new possibilities for gradual changes in the mechanical properties of the tool and optimizing its performance in different sections according to specific requirements. This work provides an overview of individual applications and their utilization in industry. Microstructural analyses have been conducted, providing detailed insights into the structure of individual components alongside examinations of the mechanical properties and tool life. These analyses offer a deeper understanding of the efficiency and reliability of the created tools, which is a key element for successful development in the field of cutting and forming tools. The production of functional surfaces and edges using DED technology can result in financial savings, as the entire tool does not have to be manufactured from expensive special alloys. The tool can be made from common steel, onto which a functional surface of special materials can be applied. Additionally, DED allows for tool repairs after wear, eliminating the need to produce a new part, contributing to overall cost reduction and a smaller environmental footprint. Overall, the combination of DED technology, functionally graded materials, and verified technologies sets a new standard for the innovative and efficient development of cutting and forming tools in the modern industrial environment. Keywords: additive manufacturing, directed energy deposition, DED, laser, cutting tools, forming tools, steel, nickel alloy
414 Innovating Electronics Engineering for Smart Materials Marketing
Authors: Muhammad Awais Kiani
Abstract:
The field of electronics engineering plays a vital role in the marketing of smart materials. Smart materials are innovative, adaptive materials that can respond to external stimuli, such as temperature, light, or pressure, in order to enhance performance or functionality. As the demand for smart materials continues to grow, it is crucial to understand how electronics engineering can contribute to their marketing strategies. This abstract presents an overview of the role of electronics engineering in the marketing of smart materials. It explores the various ways in which electronics engineering enables the development and integration of smart features within materials, enhancing their marketability. Firstly, electronics engineering facilitates the design and development of sensing and actuating systems for smart materials. These systems enable the detection and response to external stimuli, providing valuable data and feedback to users. By integrating sensors and actuators into materials, their functionality and performance can be significantly enhanced, making them more appealing to potential customers. Secondly, electronics engineering enables the creation of smart materials with wireless communication capabilities. By incorporating wireless technologies such as Bluetooth or Wi-Fi, smart materials can seamlessly interact with other devices, providing real-time data and enabling remote control and monitoring. This connectivity enhances the marketability of smart materials by offering convenience, efficiency, and improved user experience. Furthermore, electronics engineering plays a crucial role in power management for smart materials. Implementing energy-efficient systems and power harvesting techniques ensures that smart materials can operate autonomously for extended periods. This aspect not only increases their market appeal but also reduces the need for constant maintenance or battery replacements, thus enhancing customer satisfaction. Lastly, electronics engineering contributes to the marketing of smart materials through innovative user interfaces and intuitive control mechanisms. By designing user-friendly interfaces and integrating advanced control systems, smart materials become more accessible to a broader range of users. Clear and intuitive controls enhance the user experience and encourage wider adoption of smart materials in various industries. In conclusion, electronics engineering significantly influences the marketing of smart materials by enabling the design of sensing and actuating systems, wireless connectivity, efficient power management, and user-friendly interfaces. The integration of electronics engineering principles enhances the functionality, performance, and marketability of smart materials, making them more adaptable to the growing demand for innovative and connected materials in diverse industries. Keywords: electronics engineering, smart materials, marketing, power management
413 Enhancing Knowledge and Teaching Skills of Grade Two Teachers who Work with Children at Risk of Dyslexia
Authors: Rangika Perera, Shyamani Hettiarachchi, Fran Hagstrom
Abstract:
Dyslexia is the most common reading-related difficulty among the school-aged population, and currently 5-10% of this population in Sri Lanka show features of dyslexia. As there is an insufficient number of speech and language pathologists in the country, and few speech and language pathologists work in government mainstream school settings, children who are at risk of dyslexia are not receiving enough quality early intervention services to develop their reading skills. As teachers are the key professionals directly working with these children, using them as the primary facilitators to improve reading skills will be the most effective approach. This study aimed to identify the efficacy of two and a half days of intensive training provided to fifteen mainstream government school teachers of grade two classes. The goal of the training was to enhance their knowledge of dyslexia and provide whole-classroom skills training that could be used to support the development of the students’ reading competencies. A closed-ended multiple choice questionnaire was given to these teachers pre- and post-training to measure teachers’ knowledge of dyslexia, the areas in which these children needed additional support, and the best strategies to facilitate reading competencies. The data revealed that the teachers’ knowledge in all areas was significantly poorer prior to the training and that there was a clear improvement in all areas after the training. The gain in target areas of teaching skills selected to improve the reading skills of children was evaluated through peer feedback. Teachers were assigned to three groups and expected to model how they were going to introduce the skills in recommended areas using researcher-developed, validated and reliability-tested materials and the strategies introduced during the training within the given tasks. Peers and the primary investigator rated teachers’ performances and gave feedback on organizational skills, presentation of materials, clarity of instruction, and appropriateness of vocabulary. After modifying their skills according to the feedback they received, the teachers were expected to revise and re-present the same tasks to the group the following day. Their skills were re-evaluated by the peers and primary investigator using the same rubrics to measure the improvement. The findings revealed a significant improvement in their teaching skills. The data analysis of both knowledge and skills gains of the teachers was carried out using quantitative descriptive data analysis. The overall findings of the study yielded promising results that support intensive training as a method for improving teachers’ knowledge and teaching skills for use in a whole-class intervention setting with children who are at risk of dyslexia. Keywords: dyslexia, knowledge, teaching skills, training program
412 Experimental Uniaxial Tensile Characterization of One-Dimensional Nickel Nanowires
Authors: Ram Mohan, Mahendran Samykano, Shyam Aravamudhan
Abstract:
Metallic nanowires with sub-micron and hundreds-of-nanometers diameters have a diversity of applications in nano/micro-electromechanical systems (NEMS/MEMS). Characterizing the mechanical properties of such sub-micron and nano-scale metallic nanowires is tedious and requires sophisticated, careful experimentation performed within high-powered microscopy systems (scanning electron microscope (SEM), atomic force microscope (AFM)). Nanoscale devices are also needed for placing the nanowires and loading them under the intended conditions; obtaining load–deflection data during deformation within the high-powered microscopy environment poses significant challenges. Even picking the grown nanowires and placing them correctly within a nanoscale loading device is not an easy task. Mechanical characterizations through experimental methods for such nanowires are still very limited. Various techniques at different levels of fidelity, resolution, and induced errors have been attempted by materials science and nanomaterials researchers. The methods for determining the load and deflection within the nanoscale devices also pose a significant problem. The state of the art is thus still in its infancy. All these factors are reflected in the wide differences in the characterization curves and the reported properties in the current literature. In this paper, we discuss and present our experimental method, results, and discussion of uniaxial tensile loading and the development of the subsequent stress–strain characteristic curves for nickel nanowires. Nickel nanowires in the diameter range of 220–270 nm were obtained in our laboratory via electrodeposition, a solution-based, template method followed in our present work for growing 1-D nickel nanowires. Process variables such as the presence and intensity of a magnetic field and the electrical current density during the electrodeposition process were found to influence the morphological and physical characteristics, including the crystal orientation and size, of the grown nanowires. To further understand the correlation and influence of the electrodeposition process variables and the associated structural features of our grown nickel nanowires on their mechanical properties, careful experiments within a scanning electron microscope (SEM) were conducted. Details of the uniaxial tensile characterization, testing methodology, nanoscale testing device, load–deflection characteristics, microscopy images of failure progression, and the subsequent stress–strain curves are discussed and presented. Keywords: uniaxial tensile characterization, nanowires, electrodeposition, stress-strain, nickel
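For orientation, the stress–strain curves referred to above follow from the measured load–deflection data through the standard engineering definitions below; the worked numbers (a 250 nm diameter, a 10 µN load) are illustrative values chosen within the reported 220–270 nm range, not data from the study.

```latex
\sigma = \frac{F}{A_0} = \frac{4F}{\pi d^{2}}, \qquad \varepsilon = \frac{\delta}{L_0}
% Example with assumed values: F = 10^{-5}\,\mathrm{N},\ d = 2.5\times10^{-7}\,\mathrm{m}
% \Rightarrow \sigma = \frac{4\times10^{-5}}{\pi\,(2.5\times10^{-7})^{2}} \approx 2.0\times10^{8}\,\mathrm{Pa} \approx 204\,\mathrm{MPa}
```

Here A₀ is the initial cross-sectional area, d the wire diameter, δ the measured elongation, and L₀ the gauge length.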
411 Catalytic Pyrolysis of Sewage Sludge for Upgrading Bio-Oil Quality Using Sludge-Based Activated Char as an Alternative to HZSM5
Abstract:
Due to concerns about the depletion of fossil fuel sources and the deteriorating environment, the investigation of renewable energy production will play a crucial role in alleviating dependency on mineral fuels. One particular area of interest is the generation of bio-oil through sewage sludge (SS) pyrolysis. SS can be a potential candidate in contrast to other types of biomass due to its availability and low cost. However, the presence of high-molecular-weight hydrocarbons and oxygenated compounds in SS bio-oil hinders some of its fuel applications. In this context, catalytic pyrolysis is another attainable route to upgrade bio-oil quality. Among the different catalysts (i.e., zeolites) studied for SS pyrolysis, activated chars (AC) are eco-friendly alternatives. The beneficial features of AC derived from SS comprise a comparatively large surface area, porosity, enriched surface functional groups, and the presence of a high amount of metal species that can improve catalytic activity. Hence, in this study a sludge-based AC catalyst was fabricated in a single-step pyrolysis reaction with NaOH as the activation agent and was compared with HZSM5 zeolite. The thermal decomposition and kinetics were investigated via thermogravimetric analysis (TGA) to guide and control the pyrolysis and catalytic pyrolysis and to design the pyrolysis setup. The results indicated that the pyrolysis and catalytic pyrolysis comprise four distinct stages, with the main decomposition reaction occurring in the range of 200-600°C. The Coats-Redfern method was applied to the 2nd and 3rd devolatilization stages to estimate the reaction order and activation energy (E) from the mass loss data. The average activation energy (Em) values for the reaction orders n = 1, 2, and 3 were in the range of 6.67-20.37 kJ for SS, 1.51-6.87 kJ for HZSM5, and 2.29-9.17 kJ for AC, respectively. According to these results, both AC and HZSM5 were able to improve the reaction rate of SS pyrolysis by reducing the Em value. Moreover, to generate bio-oil and examine the effect of the catalysts on its quality, a fixed-bed pyrolysis system was designed and implemented. The composition analysis of the produced bio-oil was carried out via gas chromatography/mass spectrometry (GC/MS). The selected SS-to-catalyst ratios were 1:1, 2:1, and 4:1. The optimum ratio in terms of cracking the long-chain hydrocarbons and removing oxygen-containing compounds was 1:1 for both catalysts. The bio-oils upgraded with AC and HZSM5 were in the total range of C4-C17, with around 72% in the range of C4-C9. The bio-oil from pyrolysis of SS contained 49.27% oxygenated compounds, a fraction that dropped to 13.02% and 7.3% in the presence of AC and HZSM5, respectively. Meanwhile, the generation of benzene, toluene, and xylene (BTX) compounds was significantly improved in the catalytic process. Furthermore, the fabricated AC catalyst was characterized by BET, SEM-EDX, FT-IR, and TGA techniques. Overall, this research demonstrated that AC is an efficient catalyst for the pyrolysis of SS and can be used as a cost-competitive alternative to HZSM5. Keywords: catalytic pyrolysis, sewage sludge, activated char, HZSM5, bio-oil
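The abstract applies the Coats-Redfern method without reproducing it; for reference, the standard integral form (a textbook formulation, not copied from the paper) from which E is read off as the slope of a straight-line fit against 1/T is:

```latex
\ln\!\left[\frac{1-(1-\alpha)^{\,1-n}}{T^{2}\,(1-n)}\right]
  = \ln\!\left[\frac{AR}{\beta E}\left(1-\frac{2RT}{E}\right)\right]-\frac{E}{RT},
  \qquad n \neq 1
```

with the left-hand side replaced by ln[−ln(1−α)/T²] when n = 1; here α is the conversion (mass-loss fraction), β the heating rate, A the pre-exponential factor, and R the gas constant, so plotting the left-hand side versus 1/T yields E from the slope −E/R.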
410 Tribological Behaviour of the Degradation Process of Additive Manufactured Stainless Steel 316L
Authors: Yunhan Zhang, Xiaopeng Li, Zhongxiao Peng
Abstract:
Additive manufacturing (AM) possesses several key characteristics, including high design freedom, an energy-efficient manufacturing process, reduced material waste, high resolution of finished products, and excellent performance of finished products. These advantages have garnered widespread attention and fueled rapid development in recent decades. AM has significantly broadened the spectrum of available materials in the manufacturing industry and is gradually replacing some traditionally manufactured parts. Similar to components produced via traditional methods, products manufactured through AM are susceptible to degradation caused by wear during their service life. Given the prevalence of 316L stainless steel (SS) parts and the limited research on the tribological behavior of 316L SS samples or products fabricated using AM technology, this study aims to investigate the degradation process and wear mechanisms of 316L SS disks fabricated using AM technology. The wear mechanisms and tribological performance of these AM-manufactured samples are compared with commercial 316L SS samples made using conventional methods. Additionally, methods to enhance the tribological performance of additive-manufactured SS samples are explored. Four disk samples with a diameter of 75 mm and a thickness of 10 mm are prepared. Two of them (Group A) are prepared from a purchased SS bar using a milling method. The other two disks (Group B), with the same dimensions, are made of gas-atomized 316L stainless steel powder (size range: 15-45 µm) purchased from Carpenter Additive and produced using Laser Powder Bed Fusion (LPBF). Pin-on-disk tests are conducted on these disks, which have similar surface roughness and hardness levels. Multiple tests are carried out under various operating conditions, including varying loads and/or speeds, and the friction coefficients are measured during these tests. In addition, the evolution of the surface degradation processes is monitored by creating moulds of the wear tracks and quantitatively analyzing the surface morphologies of the mould images. This analysis involves quantifying the depth and width of the wear tracks and analyzing the wear debris generated during the wear processes. The wear mechanisms and wear performance of these two groups of SS samples are compared. The effects of load and speed on the friction coefficient and wear rate are investigated. The ultimate goal is to gain a better understanding of the surface degradation of additive-manufactured SS samples. This knowledge is crucial for enhancing their anti-wear performance and extending their service life. Keywords: degradation process, additive manufacturing, stainless steel, surface features
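The abstract quantifies wear through track depth and width; a conventional way to collapse such measurements into one comparable number for pin-on-disk tests is the specific wear rate below. This is the standard tribology convention, offered as an assumption since the abstract does not name the metric it reports.

```latex
k = \frac{V}{F_{N}\,s}\;\left[\mathrm{mm^{3}\,N^{-1}\,m^{-1}}\right],
\qquad V \approx A_{\mathrm{track}} \cdot 2\pi r
```

where V is the wear volume (track cross-sectional area A_track times the track circumference at radius r), F_N the applied normal load, and s the total sliding distance; under equal load and speed, a lower k indicates better anti-wear performance.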
409 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks), whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents). Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
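As a rough illustration of the partitioning scheme described above, the sketch below splits agents into mutually exclusive rank-local subsets and replaces per-agent cross-rank messages with one aggregate exchange per step, mirroring the "local branches" device. It is a minimal mpi4py toy model under invented dynamics and invented names, not the authors' implementation.

```python
# Run with e.g.: mpirun -n 4 python abem_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_AGENTS = 1_000_000                        # toy population
local_n = N_AGENTS // size + (1 if rank < N_AGENTS % size else 0)
rng = np.random.default_rng(rank)
wealth = rng.gamma(2.0, 500.0, local_n)     # one state variable per agent

for step in range(10):
    # rank-local interactions (the partitioned employer-employee graph)
    wealth *= rng.normal(1.0, 0.01, local_n)

    # cross-rank interaction: each rank contributes one aggregate instead
    # of per-agent messages, which is what keeps communication cost low
    local_demand = np.array([wealth.sum()])
    all_demand = np.empty(size)
    comm.Allgather(local_demand, all_demand)
    price_signal = all_demand.sum() / N_AGENTS
    wealth += 0.001 * (price_signal - wealth.mean())

if rank == 0:
    print(f"{N_AGENTS} agents simulated on {size} ranks")
```

Overlapping such exchanges with computation (e.g., via non-blocking collectives like Iallgather) is what the abstract refers to as hiding communication behind computation.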
408 Ecofriendly Synthesis of Au-Ag@AgCl Nanocomposites and Their Catalytic Activity on Multicomponent Domino Annulation-Aromatization for Quinoline Synthesis
Authors: Kanti Sapkota, Do Hyun Lee, Sung Soo Han
Abstract:
Nanocomposites have been widely used in fields such as electronics and catalysis, and in chemical, biological, biomedical and optical applications. They display broad biomedical properties, including antidiabetic, anticancer, antioxidant, antimicrobial and antibacterial activities. Moreover, nanomaterials have been used for wastewater treatment. Bimetallic hybrid nanocomposites, in particular, exhibit unique features compared to their monometallic components. Hybrid nanomaterials not only afford the multifunctionality endowed by their constituents but can also show synergistic properties. In addition, these hybrid nanomaterials have noteworthy catalytic and optical properties. Notably, Au−Ag based nanoparticles can be employed in sensing and catalysis due to their characteristic composition-tunable plasmonic properties. Due to their importance and usefulness, various efforts have been made toward their preparation. Generally, chemical methods have been described to synthesize such bimetallic nanocomposites. In such chemical syntheses, harmful and hazardous chemicals cause environmental contamination and increase toxicity levels. Therefore, ecologically benevolent processes for the synthesis of nanomaterials are highly desirable to diminish such environmental and safety concerns. In this regard, we here disclose a simple, cost-effective, external-additive-free and eco-friendly method for the synthesis of Au-Ag@AgCl nanocomposites using Nephrolepis cordifolia root extract. Au-Ag@AgCl NCs were obtained by the simultaneous reduction of cationic Ag and Au into AgCl in the presence of the plant extract. Particle sizes of 10 to 50 nm were observed, with an average diameter of 30 nm. The synthesized nanocomposite was characterized by various modern techniques. For example, UV−visible spectroscopy was used to determine the optical activity of the synthesized NCs, and Fourier transform infrared (FT-IR) spectroscopy was employed to investigate the functional groups in the biomolecules that acted as both reducing and capping agents during the formation of the nanocomposites. Similarly, powder X-ray diffraction (XRD), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), thermogravimetric analysis (TGA) and energy-dispersive X-ray (EDX) spectroscopy were used to determine the crystallinity, size, oxidation states, thermal stability and weight loss of the synthesized nanocomposites. As a synthetic application, the synthesized nanocomposite exhibited excellent catalytic activity for the multicomponent synthesis of biologically interesting quinoline molecules via a domino annulation-aromatization reaction of aniline, arylaldehyde, and phenylacetylene derivatives. Interestingly, the nanocatalyst was efficiently recycled five times without substantial loss of catalytic properties. Keywords: nanoparticles, catalysis, multicomponent, quinoline
407 HRCT of the Chest and the Role of Artificial Intelligence in the Evaluation of Patients with COVID-19
Authors: Parisa Mansour
Abstract:
Introduction: Early diagnosis of coronavirus disease (COVID-19) is extremely important to isolate and treat patients in time, thus preventing the spread of the disease, improving prognosis, and reducing mortality. High-resolution computed tomography (HRCT) chest imaging and artificial intelligence (AI)-based analysis of HRCT chest images can play a central role in the management of patients with COVID-19. Objective: To investigate different chest HRCT findings at different stages of COVID-19 pneumonia and to evaluate the potential role of artificial intelligence in the quantitative assessment of lung parenchymal involvement in COVID-19 pneumonia. Materials and Methods: This retrospective observational study was conducted between May 1, 2020 and August 13, 2020. The study included 2169 patients with COVID-19 who underwent chest HRCT. HRCT images showed the presence and distribution of lesions such as ground glass opacity (GGO), consolidation, and special patterns such as septal thickening, inverted halo, etc. Chest HRCT findings were recorded at different stages of the disease (early: <5 days, intermediate: 6-10 days, and late stage: >10 days). A CT severity score (CTSS) was calculated based on the extent of lung involvement on HRCT, which was then correlated with clinical disease severity. An artificial-intelligence "CT pneumonia analysis" algorithm was used to quantify the extent of pulmonary involvement by calculating the percentage of opacity (PO) and the percentage of high opacity (PHO). Depending on the type of variables, statistical significance tests such as chi-square, analysis of variance (ANOVA) and post hoc tests were applied when appropriate. Results: Radiological findings were observed on chest HRCT in 1438 patients. A typical pattern of COVID-19 pneumonia, i.e., bilateral peripheral GGO with or without consolidation, was observed in 846 patients. About 294 asymptomatic patients were radiologically positive. Chest HRCT in the early stages of the disease mostly showed GGO. The late stage was indicated by features such as reticulation, thickening, and the presence of fibrous bands. Approximately 91.3% of cases with a CTSS ≤7 were asymptomatic or clinically mild, while 81.2% of cases with a score ≥15 were clinically severe. Mean PO and PHO (30.1 ± 28.0 and 8.4 ± 10.4, respectively) were significantly higher in the clinically severe categories. Conclusion: Because COVID-19 pneumonia progresses rapidly, radiologists and physicians should become familiar with typical chest CT findings in order to treat patients early, ultimately improving prognosis and reducing mortality. Artificial intelligence can be a valuable tool in the management of patients with COVID-19. Keywords: chest, HRCT, covid-19, artificial intelligence, chest HRCT
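To make the two AI metrics concrete, the sketch below shows how a percentage of opacity (PO) and percentage of high opacity (PHO) can be computed from a segmented CT volume. The Hounsfield-unit thresholds, function name, and toy data are assumptions for illustration only, not the commercial algorithm used in the study.

```python
import numpy as np

def opacity_percentages(ct_hu, lung_mask, opacity_thr=-700.0, high_thr=-200.0):
    """Return (PO, PHO): percent of lung voxels denser than each threshold."""
    lung = ct_hu[lung_mask]
    po = 100.0 * np.mean(lung > opacity_thr)   # any abnormal opacity
    pho = 100.0 * np.mean(lung > high_thr)     # dense ("high") opacity
    return po, pho

# toy volume: aerated lung at -850 HU with one ground-glass patch at -500 HU
ct = np.full((64, 64, 64), -850.0)
ct[20:30, 20:30, 20:30] = -500.0
lung_mask = np.ones(ct.shape, dtype=bool)
print(opacity_percentages(ct, lung_mask))      # PO > 0, PHO = 0 for this toy
```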
406 Corpus Linguistics as a Tool for Translation Studies Analysis: A Bilingual Parallel Corpus of Students’ Translations
Authors: Juan-Pedro Rica-Peromingo
Abstract:
Nowadays, corpus linguistics has become a key research methodology for Translation Studies, which broadens the scope of cross-linguistic studies. In the case of the study presented here, the approach used focuses on learners with little or no experience in order to study, at an early stage, general mistakes and errors and the correct or incorrect use of translation strategies, and to improve the translational competence of the students. Led by Sylviane Granger and Marie-Aude Lefer of the Centre for English Corpus Linguistics of the University of Louvain, the MUST corpus (MUltilingual Student Translation Corpus) is an international project which brings together partners from European and worldwide universities and connects Learner Corpus Research (LCR) and Translation Studies (TS). It aims to build a corpus of translations carried out by students, including both direct (L2 > L1) and indirect (L1 > L2) translations, from a great variety of text types, genres, and registers in a wide variety of languages: audiovisual translations (including dubbing and subtitling for the hearing population and for the deaf population), scientific, humanistic, literary, economic and legal translation texts. This paper focuses on the work carried out by the Spanish team from the Complutense University (UCMA), which is part of the MUST project, and it describes the specific features of the corpus built by its members. All the texts used by UCMA are either direct or indirect translations between English and Spanish. Students’ profiles comprise translation trainees, foreign language students with a major in English, engineers studying EFL and MA students, all of them with different English levels (from B1 to C1); for some of the students, this is their first experience with translation. The MUST corpus is searchable via Hypal4MUST, a web-based interface developed by Adam Obrusnik from Masaryk University (Czech Republic), which includes a translation-oriented annotation system (TAS). A distinctive feature of the interface is that it allows source texts and target texts to be aligned, so that we are able to observe and compare in detail both language structures and the translation strategies used by students. The initial data obtained point out the kinds of difficulties encountered by the students and reveal the most frequent strategies implemented by the learners according to their level of English, their translation experience and the text genres. We have also found common errors in the graduate and postgraduate university students’ translations: transfer errors, lexical errors, grammatical errors, text-specific translation errors, and culture-related errors have been identified. Analyzing all these parameters will provide more material for better solutions to improve the quality of teaching and the translations produced by the students. Keywords: corpus studies, students’ corpus, the MUST corpus, translation studies
405 The Effects of Qigong Exercise Intervention on the Cognitive Function in Aging Adults
Authors: D. Y. Fong, C. Y. Kuo, Y. T. Chiang, W. C. Lin
Abstract:
Objectives: Qigong is an ancient Chinese practice in pursuit of a healthier body and a more peaceful mindset. It emphasizes the restoration of vital energy (Qi) in body, mind, and spirit. The practice is a combination of gentle movements and mild breathing which helps practitioners reach a condition of tranquility. On account of the features of Qigong, we first use a cross-sectional methodology to compare the differences among varied levels of Qigong practitioners on cognitive function with event-related potentials (ERP) and electroencephalography (EEG). Second, we use a longitudinal methodology to explore the pretest-posttest effects on the Qigong trainees on ERP and EEG. The current study adopts the Attentional Network Test (ANT) to examine the participants’ cognitive function; aging-related research has demonstrated a declining trend in cognition in older adults, and exercise might ameliorate the deterioration. Qigong exercise integrates physical posture (muscle strength), breathing technique (aerobic ability) and focused intention (attention), so researchers hypothesize it might improve cognitive function in aging adults. Method: Sixty participants were involved in this study, including 20 young adults (21.65±2.41 y) with normal physical activity (YA), 20 Qigong experts (60.69±12.42 y) with over 7 years of Qigong practice experience (QE), and 20 normal and healthy adults (52.90±12.37 y) with no Qigong practice experience as the experimental group (EG). The EG participants took Qigong classes 2 times a week, 2 hours per session, for 24 weeks with the purpose of examining the effect of the Qigong intervention on cognitive function. ANT tasks (alerting network, orienting network, and executive control) were adopted to evaluate participants’ cognitive function via the ERP P300 component and P300 amplitude topography. Results: Behavioral data: 1. The reaction time (RT) of YA was faster than the other two groups, and EG was faster than QE in the cue and flanker conditions of the ANT task. 2. The RT at posttest was faster than at pretest in EG in the cue and flanker conditions. 3. No difference was found among the three groups on the orienting, alerting, and executive control networks. ERP data: 1. P300 amplitude in QE was larger than in EG at the Fz electrode in the orienting, alerting, and executive control networks. 2. P300 amplitude in EG was larger at pretest than at posttest on the orienting network. 3. P300 latency revealed no difference among the three groups in the three networks. Conclusion: Taken together, these findings provide neuro-electrical evidence that older adults involved in Qigong practice may develop an overall compensatory mechanism and also benefit behavioral performance. Keywords: Qigong, cognitive function, aging, event-related potential (ERP)
404 Chatbots and the Future of Globalization: Implications of Businesses and Consumers
Authors: Shoury Gupta
Abstract:
Chatbots are a rapidly growing technological trend that has revolutionized the way businesses interact with their customers. With the advancements in artificial intelligence, chatbots can now mimic human-like conversations and provide instant and efficient responses to customer inquiries. In this research paper, we aim to explore the implications of chatbots on the future of globalization for both businesses and consumers. The paper begins by providing an overview of the current state of chatbots in the global market and their growth potential in the future. The focus is on how chatbots have become a valuable tool for businesses looking to expand their global reach, especially in areas with high population density and language barriers. With chatbots, businesses can engage with customers in different languages and provide 24/7 customer service support, creating a more accessible and convenient customer experience. The paper then examines the impact of chatbots on cross-cultural communication and how they can help bridge communication gaps between businesses and consumers from different cultural backgrounds. Chatbots can potentially facilitate cross-cultural communication by offering real-time translations, voice recognition, and other innovative features that can help users communicate effectively across different languages and cultures. By providing more accessible and inclusive communication channels, chatbots can help businesses reach new markets and expand their customer base, making them more competitive in the global market. However, the paper also acknowledges that there are potential drawbacks associated with chatbots. For instance, chatbots may not be able to address complex customer inquiries that require human input. Additionally, chatbots may perpetuate biases if they are programmed with certain stereotypes or assumptions about different cultures. These drawbacks may have significant implications for businesses and consumers alike. To explore the implications of chatbots on the future of globalization in greater detail, the paper provides a thorough review of existing literature and case studies. The review covers topics such as the benefits of chatbots for businesses and consumers, the potential drawbacks of chatbots, and how businesses can mitigate any risks associated with chatbot use. The paper also discusses the ethical considerations associated with chatbot use, such as privacy concerns and the need to ensure that chatbots do not discriminate against certain groups of people. The ethical implications of chatbots are particularly important given the potential for chatbots to be used in sensitive areas such as healthcare and financial services. Overall, this research paper provides a comprehensive analysis of chatbots and their implications for the future of globalization. By exploring both the potential benefits and drawbacks of chatbot use, the paper aims to provide insights into how businesses and consumers can leverage this technology to achieve greater global reach and improve cross-cultural communication. Ultimately, the paper concludes that chatbots have the potential to be a powerful tool for businesses looking to expand their global footprint and improve their customer experience, but that care must be taken to mitigate any risks associated with their use. Keywords: chatbots, conversational AI, globalization, businesses
403 Thinking for Writing: Evidence of Language Transfer in Chinese ESL Learners’ Written Narratives
Abstract:
English as a second language (ESL) learners are often observed to transfer traits of their first languages (L1) and habits of using their L1s to their use of English (second language, L2), a phenomenon termed language transfer. In addition to the transfer of linguistic features (e.g., grammar, vocabulary, etc.), which are relatively easy to observe and quantify, many cross-cultural theorists have emphasized a subtler and more fundamental transfer existing at a higher conceptual level, referred to as conceptual transfer. Although a growing body of literature in linguistics has demonstrated evidence of L1 transfer in various discourse genres, very limited studies address the underlying conceptual transfer that happens alongside the language transfer, especially in extended spontaneous discourse such as personal narrative. To address this issue, this study situates itself in the context of Chinese ESL learners’ written narratives, examines evidence of L1 conceptual transfer in comparison with native English speakers’ narratives, and provides discussion from the perspective of conceptual transfer. It is hypothesized that Chinese ESL learners’ English narrative strategies are heavily influenced by the strategies that they use in Chinese as a result of conceptual transfer. Understanding language transfer cognitively is of great significance in the realm of SLA, as it helps address challenges that ESL learners around the world are facing, allows native English speakers to develop a better understanding of how and why learners’ English is different, and also sheds light on ESL pedagogy by providing linguistic and cultural expectations in native English-speaking countries. To achieve these goals, 40 college students were recruited (20 Chinese ESL learners and 20 native English speakers) in the United States, and their written narratives on the prompt 'The most frightening experience' were collected for quantitative discourse analysis. 40 written narratives (20 in Chinese and 20 in English) were collected from the Chinese ESL learners, and 20 written narratives were collected from the native English speakers. All written narratives were coded according to a coding scheme developed by the authors prior to data collection. Statistical descriptive analyses were conducted, and the preliminary results revealed that the native English speakers included more narrative elements, such as events and explicit evaluation, than the Chinese ESL students’ English and Chinese writing; the English group also utilized more evaluative devices (i.e., physical state expressions, indirectly reported speech, delineation) than the Chinese ESL students’ English and Chinese writing. It was also observed that the Chinese ESL students included more orientation elements (i.e., the introduction of time/place, the introduction of characters) in their Chinese and English writing than the native English-speaking participants. The findings suggest that a similar narrative strategy was observed in the Chinese ESL learners’ Chinese narratives and English narratives, which is considered evidence of conceptual transfer from Chinese (L1) to English (L2). The results also indicate that distinct narrative strategies were used by Chinese ESL learners and native English speakers as a result of cross-cultural differences. Keywords: Chinese ESL learners, language transfer, thinking-for-speaking, written narratives
402 Diurnal Circle of Rainfall and Convective Properties over West and Central Africa
Authors: Balogun R. Ayodeji, Adefisan E. Adesanya, Adeyewa Z. Debo, E. C. Okogbue
Abstract:
The need to investigate diurnal weather cycles in West Africa stems from the fact that complex interactions often result from diurnal weather patterns. This study investigates the diurnal cycles of wind, rainfall and convective properties using six-hour-interval data from ERA-Interim and the Tropical Rainfall Measuring Mission (TRMM). The seven distinct zones used in this work, classified as rainforest (west-coast, dry, Nigeria-Cameroon), savannah (Nigeria, and Central Africa and South Sudan (CASS)), Sudano-Sahel, and Sahel, were clearly indicated by the rainfall pattern in each zone. Results showed that the land-ocean warming contrast was strongly sensitive to the seasonal cycle: it was very weak during March-May (MAM) but clearly expressed during June-September (JJAS). Dipoles of wind convergence/divergence and wet/dry precipitation between the CASS and Nigeria savannah zones were identified in the morning and evening hours of MAM, whereas a distinct night-day anomaly in the same CASS location was found to be consistent during the JJAS season. The diurnal variation of convective properties showed that stratiform precipitation, inferred from the extremely low flashcount climatology, was more dominant during morning hours in both MAM and JJAS than at other times of day. On the other hand, the diurnal variation of system sizes showed that small system sizes were most dominant during the daytime in both MAM and JJAS, whereas larger system sizes were frequent during the evening, night, and morning hours. The locations of flashcount and system sizes agreed with the earlier results that morning and daytime hours are dominated by stratiform precipitation and small system sizes, respectively. Most results clearly showed that the eastern parts of the Sudano and Sahel zones were consistently dry, as rainfall and precipitation features there were predominantly few. System sizes greater than or equal to 800 km² were found along the western axis of the Sudano and Sahel zones, whereas the eastern axis, particularly in the Sahel zone, had minimal occurrences of small/large system sizes. From the results on the locations of extreme systems, a flashcount greater than 275 in one single system was never observed during the morning (6Z) hours, whereas the evening (18Z) hours had the most frequent cases (at least 8) of a flashcount exceeding 275 in one single system. The results presented show the importance of diurnal variation for understanding precipitation, flashcount and system-size patterns at diurnal scales, and for understanding land-ocean contrast, precipitation, and wind field anomalies at diurnal scales. Keywords: convective properties, diurnal circle, flashcount, system sizes
401 Production of Ferroboron by SHS-Metallurgy from Iron-Containing Rolled Production Wastes for Alloying of Cast Iron
Authors: G. Zakharov, Z. Aslamazashvili, M. Chikhradze, D. Kvaskhvadze, N. Khidasheli, S. Gvazava
Abstract:
Traditional technologies for processing iron-containing industrial waste, including steel-rolling waste, are associated with significant energy costs, long process durations, and the need for complex and expensive equipment. Waste generated during industrial processing negatively affects the environment, but at the same time it is a valuable raw material and can be used to produce new marketable products. The study of the effectiveness of self-propagating high-temperature synthesis (SHS) methods, which are characterized by the simplicity of the necessary equipment, the purity of the final product, and high processing speed, is therefore of wide scientific and practical interest. This work presents technological aspects of the production of ferroboron by the method of SHS metallurgy from iron-containing rolled-production wastes for the alloying of cast iron, together with results on the effect of the alloying element on the degree of boron assimilation by liquid cast iron. The combustion features of the Fe-B system have been investigated, and the main parameters for controlling the phase composition of the synthesis products have been experimentally established. The effect of overloads on the formation of cast ligatures and on the structure-formation mechanisms of the SHS products was studied. It has been shown that an increase in the content of hematite Fe₂O₃ in iron-containing waste leads to an increase in the content of the FeB phase and, accordingly, the amount of boron in the ligature. The boron content in the ligature is within 3-14%, and the phase composition of the obtained ligatures consists of Fe₂B and FeB phases. Depending on the initial composition of the wastes, the yield of the end product reaches 91-94%, and the extraction of boron is 70-88%. The combustion of highly exothermic mixtures thus allows a wide range of boron-containing ligatures to be obtained from industrial wastes. In view of the relatively low melting point of the obtained SHS ligature, positive dynamics of boron absorption by liquid iron were established. According to the data obtained, the degree of absorption of the ligature when alloying gray cast iron at 1450°C is 80-85%. When combined with the treatment of liquid cast iron with magnesium, followed by alloying with the developed ligature, boron losses are reduced by 5-7%. Moreover, a uniform distribution of the boron micro-additives in the volume of the treated liquid metal is ensured. Acknowledgment: This work was supported by the Shota Rustaveli Georgian National Science Foundation of Georgia (SRGNSFG) under the GENIE project (grant number № CARYS-19-802). Keywords: self-propagating high-temperature synthesis, cast iron, industrial waste, ductile iron, structure formation
400 High-Pressure Polymorphism of 4,4-Bipyridine Hydrobromide
Authors: Michalina Aniola, Andrzej Katrusiak
Abstract:
4,4-Bipyridine is an important compound often used in chemical practice and more recently frequently applied in designing new metal-organic frameworks (MOFs). Here we present a systematic high-pressure study of its hydrobromide salt. 4,4-Bipyridine hydrobromide monohydrate, 44biPyHBrH₂O, is orthorhombic at ambient pressure, space group P212121 (phase a). Its hydrostatic compression shows that it is stable to at least 1.32 GPa. However, recrystallization above 0.55 GPa reveals a new hidden b-phase (monoclinic, P21/c). Moreover, when 44biPyHBrH₂O is heated to high temperature, chemical reactions of this compound in methanol solution can be observed. High-pressure experiments were performed using a Merrill-Bassett diamond-anvil cell (DAC), modified by mounting the anvils directly on the steel supports, and X-ray diffraction measurements were carried out on KUMA and Excalibur diffractometers equipped with an EOS CCD detector. At elevated pressure, the crystal of 44biPyHBrH₂O exhibits several striking and unexpected features. No signs of instability of phase a were detected to 1.32 GPa, while phase b becomes stable above 0.55 GPa, as evidenced by its recrystallizations. Phases a and b of 44biPyHBrH₂O are partly isostructural: their unit-cell dimensions and the arrangement of ions and water molecules are similar. In phase b the HOH-Br⁻ chains double the frequency of their zigzag motifs compared to phase a, and the 44biPyH⁺ cations change their conformation. As in all monosalts of 44biPy determined so far, in phase a the pyridine rings are twisted by about 30 degrees about the C4-C4 bond, while in phase b they assume an energy-unfavorable planar conformation. Another unusual feature of 44biPyHBrH₂O is that all unit-cell parameters become longer on the transition from phase a to phase b. Thus the volume drop on the transition to high-pressure phase b depends entirely on the shear strain of the lattice. Higher temperature triggers chemical reactions of 44biPyHBrH₂O with methanol. For the sample precipitated from saturated methanol solution at 0.1 GPa, where a temperature of 423 K was required to dissolve the whole sample, subsequent slow recrystallization under isochoric conditions resulted in the disalt 4,4-bipyridinium dibromide. For the 44biPyHBrH₂O sample sealed in the DAC at 0.35 GPa, then dissolved under isochoric conditions at 473 K and recrystallized by slow controlled cooling, a reaction of N,N-dimethylation took place. It is characteristic that in both high-pressure reactions of 44biPyHBrH₂O the unsolvated disalt products were formed and the free base 44biPy and H₂O remained in solution. The observed reactions indicate that high pressure destabilizes ambient-pressure salts and favors new products. Further studies on pressure-induced reactions are being carried out in order to better understand the structural preferences induced by pressure. Keywords: conformation, high-pressure, negative area compressibility, polymorphism
399 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems
Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber
Abstract:
Understanding and modelling real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient enough even for large networks of up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm to achieve invertibility with a minimum set of measured states. This greedy algorithm is very fast and is also guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement. Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement
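The node-disjoint path count mentioned above can be obtained with a standard max-flow construction: splitting every node into an (in, out) pair with unit capacity makes the maximum flow equal the number of node-disjoint input-output paths (Menger's theorem). The sketch below is a generic reimplementation of that idea with an invented example graph, not the authors' algorithm.

```python
import networkx as nx

def num_node_disjoint_paths(G, inputs, outputs):
    """Maximum number of node-disjoint directed paths from inputs to outputs."""
    H = nx.DiGraph()
    for v in G.nodes:
        H.add_edge((v, "in"), (v, "out"), capacity=1)  # each node used once
    for u, v in G.edges:
        H.add_edge((u, "out"), (v, "in"))              # uncapacitated link
    for s in inputs:
        H.add_edge("SRC", (s, "in"), capacity=1)       # one path per input
    for t in outputs:
        H.add_edge((t, "out"), "SNK", capacity=1)      # one path per output
    return nx.maximum_flow_value(H, "SRC", "SNK")

G = nx.DiGraph([(1, 3), (2, 3), (3, 4), (3, 5), (1, 5)])
print(num_node_disjoint_paths(G, {1, 2}, {4, 5}))      # -> 2
```

If the count equals the number of inputs, the network passes this structural (necessary) test for invertibility of the input-output map.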
398 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and to 0.378, respectively, after three iterations. The cost (difference between the predicted and the actual outcome) also decreased, from 1.446 to 0.842 and to 0.718, over the first, second and third iterations, respectively. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an area under the ROC curve of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, an accuracy of 0.969, a sensitivity of 0.966, and a specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, thereby making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding Region Identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes. Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
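The model family described above is ordinary logistic regression trained by maximizing the log-likelihood; a minimal numpy sketch is given below. The synthetic features, learning rate, and fixed 0.5 threshold are assumptions for illustration (the study tunes the threshold dynamically); the generating weight vector reuses the six reported coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))            # six features per region (synthetic)
w_true = np.array([0.043, 0.519, 0.715, 0.878, 1.157, 2.575])
y = (rng.uniform(size=1000) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(6)
learning_rate = 0.5
for epoch in range(390):                  # the study reports stopping at epoch 390
    p = sigmoid(X @ w)
    grad = X.T @ (y - p) / len(y)         # gradient of the mean log-likelihood
    w += learning_rate * grad             # gradient ascent on the likelihood

threshold = 0.5                           # stand-in for the dynamic threshold
pred = (sigmoid(X @ w) >= threshold).astype(float)
print("training accuracy:", (pred == y).mean())
```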
397 Marketization of Higher Education in the UK and Its Impacts on Teaching Practitioners
Authors: Hossein Rezaie
Abstract:
Academic institutions, especially universities, have been known as cradles of learning and of teaching great thinkers while creating the type of knowledge that is supposed to be bereft of utilitarian motives. Nonetheless, it seems that such intellectual centers have entered into competition with each other for attracting the attention of potential clients. The traditional values of (higher) education, such as nurturing criticality and fostering intellectuality in students, have been replaced with strategic planning, quality assurance, performance assessment, and academic audits. Not being immune from the whims and wishes of marketization, the system of higher education in the UK has been recalibrated by policy makers to address the demand and supply of student education, academic research and other university activities on the basis of monetary factors. As an immediate example in this vein, the Russell Group in the UK, which comprises 24 leading UK research universities, has explicitly expressed its policy on its official website as follows: 'Russell Group universities are global businesses competing for staff, students and funding with the best in the world'. Furthermore, certain attempts have been made to corporatize the system of HE, manifested in the remodeling of university governing bodies on corporate lines and in the development of measurement scales for indicating the performance of teaching practitioners. Nevertheless, it seems that such structural changes in policies toward the system of HE have a bearing on the practices of practitioners and educators as well as on the identity of students, who are the customers of educational services. The effects of marketization have been examined mainly in terms of students' perceptions and motivation, institutional policies and university management. However, the teaching-practitioner side seems to be an under-studied area with regard to any changes in its expectations, satisfaction and perception of professional identity in the aftermath of introducing market-wise values into the HE of the UK. As a result, this research aims to investigate the possible outcomes of market-driven values on the practitioner side of HE in the UK and finally seeks to address the following research questions: 1- How is the change in the mission of HE in the UK reflected in institutional documents? 1-A- How is the change of mission represented in job adverts? 1-B- How is the change of mission represented in university prospectuses? 2- How are teaching practitioners represented regarding their roles and obligations in the prospectuses and job ads published by UK HE institutions? In order to address these questions, the researcher will analyze 30 prospectuses and job ads published by Russell Group universities, taking Critical Discourse Analysis as his point of departure and using the analytical methods of genre analysis and Systemic Functional Linguistics to probe into the generic features and the representation of participants, in this case teaching practitioners, in the selected corpus. Keywords: higher education, job advertisements, marketization of higher education, prospectuses
Procedia PDF Downloads 247
396 Reading and Writing of Biscriptal Children with and Without Reading Difficulties in Two Alphabetic Scripts
Authors: Baran Johansson
Abstract:
This PhD dissertation aimed to explore children’s writing and reading in L1 (Persian) and L2 (Swedish). It adds new perspectives to reading and writing studies of bilingual biscriptal children with and without reading and writing difficulties (RWD). The study used standardised tests to examine linguistic and cognitive skills related to word reading and writing fluency in both languages. Furthermore, all participants produced two texts (one descriptive and one narrative) in each language. The writing processes and written products of these children were explored using logging methodologies (Eye and Pen) in both languages. This study also investigated how two bilingual children with RWD presented themselves through writing across their languages. To my knowledge, studies utilizing standardised tests and logging tools to investigate bilingual children’s word reading and writing fluency across two different alphabetic scripts are scarce. There have been few studies analysing how bilingual children construct meaning in their writing, and none have focused on children who write in two different alphabetic scripts or those with RWD. Therefore, some aspects of the systemic functional linguistics (SFL) perspective were employed to examine how two participants with RWD created meaning in their written texts in each language. The results revealed that children with and without RWD had higher writing fluency on all measures (e.g. text length, writing speed) in their L2 compared to their L1. Word reading abilities in both languages were found to influence their writing fluency. The findings also showed that bilingual children without reading difficulties performed 1 standard deviation below the mean when reading words in Persian, whereas their reading performance in Swedish aligned with the expected age norms, suggesting greater efficiency in reading Swedish than in Persian. Furthermore, the results showed that the level of orthographic depth, the consistency between graphemes and phonemes, and orthographic features can probably explain these differences across languages. The analysis of meaning-making indicated that the participants with RWD exhibited varying levels of difficulty, which influenced their knowledge and usage of writing across languages. For example, the participant with poor word recognition (PWR) presented himself similarly across genres, irrespective of the language in which he wrote; he employed the listing technique similarly across his L1 and L2. However, the participant with mixed reading difficulties (MRD) had difficulties with both transcription and text production: he produced spelling errors and paused frequently in both languages, and he also struggled with word retrieval and with producing coherent texts, consistent with studies of monolingual children with poor comprehension or with developmental language disorder. The results suggest that the mother tongue instruction provided to the participants has not been sufficient for them to become balanced biscriptal readers and writers in both languages. Therefore, increasing the number of hours dedicated to mother tongue instruction and motivating the children to participate in these classes could be potential strategies to address this issue.
Keywords: reading, writing, reading and writing difficulties, bilingual children, biscriptal
Procedia PDF Downloads 70
395 Comparison of Gait Variability in Individuals with Trans-Tibial and Trans-Femoral Lower Limb Loss: A Pilot Study
Authors: Hilal Keklicek, Fatih Erbahceci, Elif Kirdi, Ali Yalcin, Semra Topuz, Ozlem Ulger, Gul Sener
Abstract:
Objectives and Goals: Stride-to-stride fluctuation in gait, known as gait variability, is a determinant of qualified locomotion. Gait variability is an important predictor of fall risk and is useful for monitoring the effects of therapeutic interventions and rehabilitation. The aim of the study was to compare gait variability between individuals with trans-tibial and trans-femoral lower limb loss. Methods: Ten individuals with traumatic unilateral trans-femoral limb loss (TF), 12 individuals with traumatic trans-tibial lower limb loss (TT) and 12 healthy individuals (HI) participated in the study. Gait characteristics, including mean step length, step length variability, ambulation index and time on each foot, were evaluated on a treadmill. Participants walked at their preferred speed for six minutes; data from the 4th to the 6th minute were selected for statistical analysis to eliminate the learning effect. Results: According to the Kruskal-Wallis test, there were differences between the groups in intact-limb step length variation, time on each foot, ambulation index and mean age (p < .05). Pairwise analyses showed differences between TT and TF in residual-limb variation (p = .041), time on the intact foot (p = .024), time on the prosthetic foot (p = .024) and ambulation index (p = .003), in favor of the TT group. There were differences between TT and HI in intact-limb variation (p = .002), time on the intact foot (p < .001), time on the prosthetic foot (p < .001) and ambulation index (p < .001), in favor of the HI group, and between TF and HI in intact-limb variation (p = .001), time on the intact foot (p = .01) and ambulation index (p < .001), also in favor of the HI group. The groups also differed in mean age, the HI group being younger (p < .05). The groups were similar in step length (p > .05), and individuals with lower limb loss were similar in duration of prosthesis use (p > .05). Conclusions: This pilot study provided basic data about gait stability in individuals with traumatic lower limb loss. The results showed that, to evaluate gait differences between amputation levels, longer-duration gait analysis methods may yield more valuable information. On the other hand, the similarity in step length may have resulted from effective prosthesis use or effective gait rehabilitation, as all participants with lower limb loss had already been trained. The differences between TT and HI, and between TF and HI, may have resulted from age-related features; therefore, an age-matched HI population is recommended for future studies. Increasing the number of participants and comparing age-matched groups are also recommended to generalize these results.
Keywords: lower limb loss, amputee, gait variability, gait analyses
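For readers wishing to reproduce this style of analysis, the sketch below runs the same omnibus Kruskal-Wallis test with Mann-Whitney U pairwise follow-ups. The step-length-variability values are hypothetical stand-ins, not the study's data.

```python
from scipy import stats

# Hypothetical per-group measurements (e.g. intact-limb step length
# variability, %), three groups as in the study design.
tt = [2.1, 2.4, 2.0, 2.6, 2.3, 2.2, 2.5, 2.4, 2.1, 2.3, 2.2, 2.6]
tf = [3.0, 3.4, 2.9, 3.6, 3.1, 3.3, 3.5, 3.2, 3.0, 3.4]
hi = [1.5, 1.7, 1.4, 1.6, 1.8, 1.5, 1.6, 1.7, 1.4, 1.6, 1.5, 1.7]

h, p = stats.kruskal(tt, tf, hi)  # omnibus test across the three groups
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

if p < 0.05:  # pairwise follow-up with Mann-Whitney U tests
    for name, a, b in [("TT vs TF", tt, tf), ("TT vs HI", tt, hi), ("TF vs HI", tf, hi)]:
        u, pu = stats.mannwhitneyu(a, b)
        print(f"{name}: U = {u:.1f}, p = {pu:.4f}")
```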
Procedia PDF Downloads 280
394 Cognitive Linguistic Features Underlying Spelling Development in a Second Language: A Case Study of L2 Spellers in South Africa
Authors: A. Van Staden, A. Tolmie, E. Vorster
Abstract:
Research confirms the multifaceted nature of spelling development and underscores the importance of both the cognitive and the linguistic skills that affect sound spelling development, such as working and long-term memory, phonological and orthographic awareness, mental orthographic images, semantic knowledge and morphological awareness. This has clear implications for the many South African English second language (L2) spellers who attempt to become proficient spellers. Since English has an opaque orthography, with irregular spelling patterns and insufficient sound/grapheme correspondences, L2 spellers can neither rely nor draw on the phonological awareness skills of their first language (for example Sesotho and many other African languages) to assist them in spelling the majority of English words. Epistemologically, this research is informed by social constructivism. In addition, the researchers hypothesized that the principles of the Overlapping Waves Theory were an appropriate lens through which to investigate whether L2 spellers could significantly improve their spelling skills via an alternative route to spelling development, namely the orthographic route, and more specifically via the application of visual imagery. Post-test results confirmed the results of previous research that argues for the interactive nature of different cognitive and linguistic systems, such as working memory and its subsystems and long-term memory, as learners were systematically guided to store visual orthographic images of words in their long-term lexicons. Moreover, the results showed that L2 spellers in the experimental group (n = 9) significantly outperformed L2 spellers in the control group (n = 9), whose intervention involved phonological awareness (and coding), including the teaching of spelling rules. Consequently, L2 learners in the experimental group significantly improved on all the post-test measures included in this investigation, namely the four sub-tests of short-term memory as well as two spelling measures (i.e. diagnostic and standardized measures). Against this background, the findings of this study look promising and have shown that, within a social-constructivist learning environment, learners can be systematically guided to apply higher-order thinking processes, such as visual imagery, to successfully store and retrieve mental images of spelling words from their output lexicons. Moreover, the results of the present study could play an important role in directing research into this under-researched aspect of L2 literacy development within the South African education context.
Keywords: English second language spellers, phonological and orthographic coding, social constructivism, visual imagery as spelling strategy
Procedia PDF Downloads 359
393 Variation of Biologically Active Compounds and Antioxidancy in the Process of Blueberry Storage
Authors: Meri Khakhutaishvili, Indira Djaparidze, Maia Vanidze, Aleko Kalandia
Abstract:
Cultivation of blueberry in Georgia started in the 21st century. More than 20 species of blueberry from all over the world are cultivated in this region. The species are mostly planted on acidic soil previously occupied by tea plantations, and many of the plantations have good yields. It is known that moving a plant to a new soil or climate affects its chemical composition; however, even though these plants were brought from other countries, no research had been conducted to fully examine the blueberry fruit cultivated in Georgia. Shota Rustaveli National Science Foundation Grant FR/335/10-160/14 gave us an opportunity to continue our previous work and conduct research on several berries, including the chemical composition of stored blueberry. The experiments were held in the ‘West Georgia Regional Chromatography Center’ (Grant AP/96/13) of our university, which is equipped with modern instruments including HPLC with UV-Vis and RI detectors, HPLC with conductivity detector, and UPLC-MS. Biochemical analyses were conducted using different physico-chemical and instrumental methods. Separation, identification and quantitative analysis were conducted using UPLC-MS (Waters Acquity QDa detector) and HPLC (Waters Breeze 1525, UV-Vis 2489 detector), together with pH meters (Mettler Toledo), a refractometer (Misco), a spectrometer with cuvette changer (Mettler Toledo UV5A) and C18 solid-phase extraction (SPE) cartridges (Waters Sep-Pak C18, 500 mg). Chemicals included the stable radical 2,2-diphenyl-1-picrylhydrazyl (DPPH; Aldrich, Germany) and acetonitrile, methanol and acetic acid (Merck, Germany), as well as AlCl3 and Folin-Ciocalteu reagent; gallic acid and quercetin served as standards. Carbohydrate HPLC-RI analysis used an acetonitrile-water (80:20) system, while UPLC-MS analysis used solvent A (water + 1% acetic acid) and solvent B (methanol + 1% acetic acid). It was concluded that the amount of sugars was in the range of 5-9%, mostly glucose and fructose, and the amount of organic acids was 0.2-1.2%, most of which was malic and citric acid. Anthocyanins were also present in the samples (200-550 mg/100 g); we were able to identify up to 15 different compounds, most of which were derivatives of delphinidin and cyanidin. All species showed a high antioxidant level (DPPH). Rapidly freezing the samples and then keeping them under specific conditions allowed us to store them for 12 months.
Keywords: antioxidants, bioactive, blueberry, storage
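The DPPH antioxidant measurement mentioned above reduces to a simple absorbance calculation, percent inhibition = (A_control − A_sample)/A_control × 100. A minimal sketch, assuming hypothetical absorbance readings (typically taken at 517 nm):

```python
def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent radical-scavenging activity from DPPH absorbances.
    a_control: absorbance of DPPH solution without extract.
    a_sample:  absorbance after reaction with the berry extract."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical readings, not measured values from the study.
print(f"{dpph_inhibition(0.85, 0.21):.1f} % inhibition")  # -> 75.3 % inhibition
```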
Procedia PDF Downloads 212
392 Relationship between Institutional Perspective and Safety Performance: A Case on Ready-Made Garments Manufacturing Industry
Authors: Fahad Ibrahim, Raphaël Akamavi
Abstract:
Bangladesh has encountered several industrial disasters (e.g. fire and building collapse tragedies) leading to the loss of valuable human lives. Despite the efforts of various institutions to improve the safety situation, industry compliance and safety behaviour have not yet improved. Hence, one question remains: to what extent are institutional elements effective enough to make any difference in improving safety behaviours? Thus, this study explores the relationship between the institutional perspective and safety performance. Structural equation modelling results, using survey data from 256 RMG workers in 128 garment manufacturing factories in Bangladesh, show that institutional facets strongly influence management safety commitment, inducing workers’ participation in safety activities and reducing workplace accident rates. The study also found that, by upholding industrial standards and inspecting safety situations, institutional facets significantly and directly affect workers’ participation in safety activities and the rate of workplace accidents. Additionally, workers’ involvement in safety practices significantly predicts the safety environment of the workplace. Subsequently, our findings demonstrate that the institutional culture, norms and regulations enacted play an important role in altering management commitment to setting up a safer workplace environment. As a result, when workers perceive their management as having a high level of commitment to safety, they are inspired to be more involved in safety practices, which significantly alters the workplace safety situation and lessens injury experiences. Given that institutions have a strong influence on management commitment, legislative members should endorse, regulate and strictly monitor workplace safety laws to be exercised by factory owners. Further, management should take the initiative in adopting OHS features and conceiving strategic directions (e.g. setting up safety committees, risk assessments and innovative training) to promote a positive safety climate and provide a safe workplace environment. Arguably, an inclusive public-private partnership is recommended for ensuring a better and safer workplace for RMG workers. However, as our data were collected under a cross-sectional design, the respondents’ perceptions might change over time; hence, a longitudinal study is recommended. Finally, further research is needed to determine the impact of improvement mechanisms on workplace safety performance, such as how workplace design, safety training programs and institutional enforcement policies protect the well-being of workers.
Keywords: institutional perspective, management commitment, safety participation, work injury, safety performance, occupational health and safety
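To make the mediation logic concrete, here is a minimal ordinary-least-squares sketch of the indirect path (institutional facets → management commitment → safety participation) on synthetic data. It is a deliberate simplification for illustration, not the study's full structural equation model, and all variables and weights are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
inst = rng.normal(size=n)                             # institutional facets (latent proxy)
commit = 0.6 * inst + rng.normal(scale=0.5, size=n)   # management safety commitment
particip = 0.5 * commit + 0.1 * inst + rng.normal(scale=0.5, size=n)  # safety participation

def ols(y, *xs):
    """Least-squares fit; returns [intercept, slope_1, slope_2, ...]."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(commit, inst)[1]             # path a: institution -> commitment
b = ols(particip, inst, commit)[2]   # path b: commitment -> participation (controlling for inst)
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
```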
Procedia PDF Downloads 206
391 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft in an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed to push back an aircraft and turn its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of aircraft taking off is optimized and has to be respected. QPPTW involves searching for the quickest path by expanding the search in all directions, similarly to Dijkstra’s algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have therefore further modified the QPPTW algorithm to use a heuristic approach to guide the search. The new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization. All the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. In experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) routed aircraft much more quickly, being especially fast in routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
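A stripped-down sketch of time-based A* over edges carrying availability windows appears below. The data model (per-edge free intervals) and the remaining-time heuristic are simplifications assumed for illustration; the published QPPTW algorithm manages per-aircraft reservations rather than static windows.

```python
import heapq

def astar_time_windows(graph, h, start, goal, t0=0.0):
    """graph: node -> list of (neighbor, traversal_time, windows), where
    windows is a list of (open, close) intervals when the edge is free.
    h: node -> optimistic (admissible) estimate of remaining time to goal."""
    frontier = [(t0 + h(start), t0, start, [start])]
    best = {}  # earliest known arrival time per node
    while frontier:
        f, t, node, path = heapq.heappop(frontier)
        if node == goal:
            return t, path
        if best.get(node, float("inf")) <= t:
            continue
        best[node] = t
        for nxt, dt, windows in graph[node]:
            for open_t, close_t in windows:
                depart = max(t, open_t)       # wait for the window if needed
                if depart + dt <= close_t:    # traversal fits inside the window
                    arrive = depart + dt
                    heapq.heappush(frontier, (arrive + h(nxt), arrive, nxt, path + [nxt]))
                    break
    return None

# Tiny hypothetical taxiway graph: the junction-runway segment is
# reserved by other traffic until t = 120 s.
taxiway = {
    "gate": [("junction", 30.0, [(0.0, 1e9)])],
    "junction": [("runway", 60.0, [(120.0, 300.0)])],
    "runway": [],
}
remaining = {"gate": 80.0, "junction": 50.0, "runway": 0.0}
print(astar_time_windows(taxiway, remaining.get, "gate", "runway"))
# -> (180.0, ['gate', 'junction', 'runway']): the aircraft holds for the window
```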
Procedia PDF Downloads 231
390 Quality Improvement of the Sand Moulding Process in Foundries Using Six Sigma Technique
Authors: Cindy Sithole, Didier Nyembwe, Peter Olubambi
Abstract:
The sand casting process involves pattern making, mould making, metal pouring and shake-out, and every step in the sand moulding process is critical for the production of good-quality castings. However, the waste generated during the sand moulding operation and the lack of quality are matters that contribute to performance inefficiencies and a lack of competitiveness in South African foundries. Defects produced by the sand moulding process are only visible in the final product (the casting), which results in an increased amount of scrap, reduced sales and increased costs in the foundry. The purpose of this research is to propose a Six Sigma (DMAIC: Define, Measure, Analyze, Improve, Control) intervention in sand moulding foundries and to reduce the variation caused by deficiencies in the sand moulding process in South African foundries. Its objective is to create sustainability and enhance productivity in the South African foundry industry. Six Sigma is a data-driven method of process improvement that aims to eliminate variation in business processes using statistical control methods. Six Sigma focuses on business performance improvement through quality initiatives using Ishikawa’s seven basic tools of quality. The objectives of Six Sigma are to eliminate the factors that affect productivity, profit and the meeting of customers’ demands, and it has become one of the most important techniques for attaining competitive advantage. Competitive advantage for sand casting foundries in South Africa means improved plant maintenance processes, improved product quality and proper utilization of resources, especially scarce resources. Defects such as sand inclusions, flashes and sand burn-on were identified, using the Six Sigma technique, as resulting from inefficiencies in the sand moulding process; the causes were found to be wrong mould design due to the pattern used and poor ramming of the moulding sand in the foundry. Six Sigma tools such as the voice of the customer, the fishbone diagram, the voice of the process and process mapping were used to define the problem in the foundry and to outline the critical-to-quality elements. The SIPOC (Supplier, Input, Process, Output, Customer) diagram was also employed to ensure that the material and process parameters were achieved, ensuring quality improvement in the foundry. The capability of the sand moulding process was measured to understand current performance and enable improvement. The expected results of this research are reduced sand moulding process variation, increased productivity and competitive advantage.
Keywords: defects, foundries, quality improvement, sand moulding, six sigma (DMAIC)
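In the Measure phase, process capability is commonly summarized with the indices Cp = (USL − LSL)/6σ and Cpk = min(USL − μ, μ − LSL)/3σ. A minimal sketch, assuming hypothetical green-compression-strength readings and specification limits (not values from the study):

```python
import numpy as np

# Synthetic moulding-sand readings (e.g. green compression strength, N/cm^2).
readings = np.array([14.8, 15.1, 15.3, 14.9, 15.0, 15.4, 14.7, 15.2,
                     15.0, 14.9, 15.1, 15.3])
lsl, usl = 14.0, 16.0  # assumed lower/upper specification limits

mu, sigma = readings.mean(), readings.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)                  # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)     # actual capability (off-center penalized)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")        # Cpk >= 1.33 is a common "capable" target
```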
Procedia PDF Downloads 195
389 The Application of Raman Spectroscopy in Olive Oil Analysis
Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli
Abstract:
Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases and macular degeneration in humans. EVOO composition in terms of such constituents is generally the result of a complex combination of genetic, agronomical and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in their biochemical composition needs to be investigated. Using fruits from four different cultivars, similarly grown and harvested, it was demonstrated that Raman spectroscopy combined with chemometric analysis is able to discriminate the cultivars, also as a function of harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to cultivar and maturation stage, was obtained. Moreover, using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that predicted the relative amounts of the main fatty acids and the main carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters such as light exposure, distance from the sea, temperature and amount of precipitation could strongly influence the EVOO composition of both major and minor compounds. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of the environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed using a dual approach combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOOs based on their regional geographical origin was obtained. Raman spectra were obtained with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Stable isotope content ratios were analyzed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that Raman spectroscopy is a valuable and useful technique for the analysis of EVOO: in combination with statistical analysis, it makes possible the assessment of specific samples’ content and allows oils to be classified according to their geographical and varietal origin.
Keywords: authentication, chemometrics, olive oil, raman spectroscopy
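A sketch of the chemometric pipeline (principal component analysis followed by linear discriminant analysis) is shown below on synthetic "spectra"; real EVOO Raman spectra and region labels would replace the placeholder arrays.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic data: rows = oil samples, columns = Raman shift channels.
rng = np.random.default_rng(2)
X = rng.normal(size=(70, 500))
y = np.repeat(np.arange(7), 10)   # 7 hypothetical regions, 10 samples each
X += y[:, None] * 0.05            # inject a weak class-dependent signal

# PCA compresses the spectra before LDA separates the region classes.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```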
Procedia PDF Downloads 332
388 Performance Analysis of Double Gate FinFET at Sub-10NM Node
Authors: Suruchi Saini, Hitender Kumar Tyagi
Abstract:
With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. As devices are scaled down, several short-channel effects occur. To minimize these scaling limitations, several device architectures have been developed in the semiconductor industry, and FinFET is one of the most promising structures. The double-gate 2D Fin field-effect transistor has the benefit of suppressing short-channel effects (SCE) and functions well at sub-14 nm technology nodes. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field-effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field-effect transistor are determined at the 10 nm technology node, and the performance parameters are extracted in terms of threshold voltage, transconductance, leakage current and current on-off ratio. In this paper, device performance is analyzed for different structure parameters. The Id-Vg curve is a robust tool for understanding field-effect transistors and holds significant importance in transistor modeling, circuit design, performance optimization and quality control in electronic devices and integrated circuits. The FinFET structure is optimized to increase the current on-off ratio and transconductance, and the impact of different channel widths and source and drain lengths on the Id-Vg curve and the transconductance is examined. Device performance is affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes. For every set of simulations, the device characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor to consider; it is therefore crucial to minimize the off-state current to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized with a channel width of 3 nm for a gate length of 10 nm, while source and drain length have no significant effect on the current on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. It is also concluded that a transconductance of 340 S/m is achieved with a fin width of 3 nm at a gate length of 10 nm, and of 2380 S/m with a source and drain extension length of 5 nm.
Keywords: current on-off ratio, FinFET, short-channel effects, transconductance
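Transconductance is the derivative of drain current with respect to gate voltage, g_m = dI_d/dV_g. The sketch below extracts it numerically from a synthetic Id-Vg sweep; simulated or measured FinFET data would replace the toy transfer curve.

```python
import numpy as np

# Synthetic Id-Vg sweep: a softplus-shaped transfer curve as a stand-in
# for MuGFET output (not data from the study).
vg = np.linspace(0.0, 0.7, 71)                      # gate voltage, V
id_ = 1e-6 * np.log1p(np.exp((vg - 0.25) / 0.04))   # drain current, A

gm = np.gradient(id_, vg)                           # g_m = dId/dVg, in S
print(f"peak gm = {gm.max():.3e} S at Vg = {vg[gm.argmax()]:.2f} V")
```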
Procedia PDF Downloads 61
387 Blood Microbiome in Different Metabolic Types of Obesity
Authors: Irina M. Kolesnikova, Andrey M. Gaponov, Sergey A. Roumiantsev, Tatiana V. Grigoryeva, Dilyara R. Khusnutdinova, Dilyara R. Kamaldinova, Alexander V. Shestopalov
Abstract:
Background. Obese patients have unequal risks of metabolic disorders. It is customary to distinguish between metabolically healthy obesity (MHO) and metabolically unhealthy obesity (MUHO); MUHO patients have a high risk of metabolic disorders, insulin resistance, and diabetes mellitus. Among other things, the gut microbiota also contributes to the development of metabolic disorders in obesity, and obesity is accompanied by significant changes in the gut microbial community. In turn, bacterial translocation from the intestine is the basis for blood microbiome formation. The aim was to study the features of the blood microbiome in patients with various metabolic types of obesity. Patients, materials, methods. The study included 116 healthy donors and 101 obese patients. Depending on the metabolic type of obesity, the obese patients were divided into subgroups with MHO (n=36) and MUHO (n=53). Quantitative and qualitative assessment of the blood microbiome was based on metagenomic analysis: blood samples were used to isolate DNA and sequence the variable V3-V4 region of the 16S rRNA gene. Alpha diversity indices (Simpson index, Shannon index, Chao1 index, phylogenetic diversity, and the number of observed operational taxonomic units) were calculated. Moreover, we compared taxa (phyla, classes, orders, and families) between patient groups in terms of isolation frequency and each taxon’s share of the total bacterial DNA pool. Results. In patients with MHO, the alpha diversity characteristics of the blood microbiome were similar to those of healthy donors; MUHO, however, was associated with an increase in all diversity indices. The main phyla of the blood microbiome were Bacteroidetes, Firmicutes, Proteobacteria, and Actinobacteria; Cyanobacteria, TM7, Thermi, Verrucomicrobia, Chloroflexi, Acidobacteria, Planctomycetes, Gemmatimonadetes, and Tenericutes were less significant phyla. The phyla Acidobacteria, TM7, and Verrucomicrobia were more often isolated in blood samples of patients with MUHO than of healthy donors. Obese patients showed a decrease in some taxa (Bacilli, Caulobacteraceae, Barnesiellaceae, Rikenellaceae, Williamsiaceae); these changes appear to be related to the increased diversity of the blood microbiome observed in obesity. An increase in Lachnospiraceae, Succinivibrionaceae, Prevotellaceae, and S24-7 was noted for MUHO patients, which is apparently explained by an increase in intestinal permeability. Conclusion. The blood microbiome differs between obese patients and healthy donors at the class, order, and family levels, and the nature of the changes is determined by the metabolic type of obesity. MUHO was linked to increased diversity of the blood microbiome, which appears to be due to increased microbial translocation from the intestine and from non-intestinal sources.
Keywords: blood microbiome, blood bacterial DNA, obesity, metabolically healthy obesity, metabolically unhealthy obesity
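The alpha diversity indices named above are straightforward to compute from a vector of OTU counts. A minimal sketch with toy counts follows (Shannon, Gini-Simpson and Chao1; phylogenetic diversity is omitted since it additionally requires a phylogenetic tree):

```python
import numpy as np

# Toy OTU counts for one sample; real 16S rRNA (V3-V4) data would
# yield one such vector per blood sample.
counts = np.array([120, 80, 40, 20, 10, 5, 2, 1, 1, 1])
p = counts / counts.sum()

shannon = -np.sum(p * np.log(p))   # Shannon index H'
simpson = 1.0 - np.sum(p ** 2)     # Gini-Simpson index
f1 = np.sum(counts == 1)           # singleton OTUs
f2 = np.sum(counts == 2)           # doubleton OTUs
# Chao1 richness estimate: observed OTUs plus a singleton/doubleton correction.
chao1 = len(counts) + f1 ** 2 / (2 * f2) if f2 else len(counts) + f1 * (f1 - 1) / 2
print(f"Shannon = {shannon:.2f}, Simpson = {simpson:.2f}, Chao1 = {chao1:.1f}")
```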
Procedia PDF Downloads 164
386 Provisional Settlements and Urban Resilience: The Transformation of Refugee Camps into Cities
Authors: Hind Alshoubaki
Abstract:
The world is now confronting a widespread urban phenomenon: refugee camps, which have mostly been established in ‘rushing mode,’ aiming to afford refugees temporary settlements that provide minimum levels of safety, security and protection from harsh weather conditions within a very short time period. In fact, those emergency settlements are transforming into permanent ones, since time is a decisive factor in terms of construction and camps’ age. Camps play an essential role in transforming their temporary character into a permanent one, generating deep modifications to the city’s territorial structure, shaping a new identity and creating a contentious change in the city’s form and history. To achieve a better understanding of the transformation of refugee camps, this study is based on a mixed-methods approach: the qualitative approach explores different refugee camps and analyzes their transformation process in terms of population density and changes to the city’s territorial structure and urban features, while the quantitative approach employs a statistical regression analysis of refugees’ satisfaction within the Zaatari camp in order to predict the camp’s future transformation. Clearly, refugees’ perceptions of their current conditions affect their satisfaction, which plays an essential role in transforming emergency settlements into permanent cities over time. The analysis covers five main themes: the access and readiness of schools, the dispersion of clinics and shopping centers, the camp infrastructure, the construction materials, and the street networks. The statistical analysis showed that Syrian refugees were not satisfied with their current conditions inside the Zaatari refugee camp and that they had started implementing changes according to their needs, desires and aspirations, because they are conscious that their stay in this settlement will be prolonged. The case study analyses also showed that neglecting the fact that construction takes time leads to settlements being created below minimum standards, deteriorating into ‘slums’ that increase crime rates, suicide, drug use and disease and deeply affect cities’ urban tissues. For this reason, recognizing the ‘temporary-eternal’ character of those settlements is fundamental: refugee camps should be considered from the beginning as definite permanent cities. This is the key factor in minimizing the trauma of displacement for both the refugees and the hosting countries.
Keywords: refugee, refugee camp, temporary, Zaatari
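As a sketch of the regression step, the example below fits overall satisfaction against scores on the five themes. The variable names mirror the themes, but all data and weights are synthetic placeholders, not the survey's results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
themes = ["schools", "clinics_shops", "infrastructure", "materials", "streets"]

# Synthetic 1-5 Likert-type theme scores for 300 hypothetical respondents,
# with assumed (not estimated) weights generating an overall satisfaction score.
X = rng.uniform(1, 5, size=(300, 5))
w_true = np.array([0.3, 0.2, 0.25, 0.15, 0.1])
y = X @ w_true + rng.normal(scale=0.3, size=300)

model = LinearRegression().fit(X, y)
for name, coef in zip(themes, model.coef_):
    print(f"{name}: {coef:+.2f}")
print(f"R^2 = {model.score(X, y):.2f}")
```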
Procedia PDF Downloads 133