Search results for: computer operating principle
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5567

1847 Influence of Mandrel’s Surface on the Properties of Joints Produced by Magnetic Pulse Welding

Authors: Ines Oliveira, Ana Reis

Abstract:

Magnetic Pulse Welding (MPW) is a cold solid-state welding process, accomplished by the electromagnetically driven, high-speed and low-angle impact between two metallic surfaces. It has the same working principle as Explosive Welding (EXW), i.e., it is based on the collision of two parts at high impact speed, in this case propelled by electromagnetic force. Under proper conditions, i.e., flyer velocity and collision point angle, a permanent metallurgical bond can be achieved between widely dissimilar metals. MPW has been considered a promising alternative to conventional welding processes and advantageous compared to other impact processes. Nevertheless, current MPW applications are mostly academic. Despite the existing knowledge, the lack of consensus regarding several aspects of the process calls for further investigation. As a result, the mechanical resistance, morphology and structure of the weld interface in MPW of the Al/Cu dissimilar pair were investigated. The effects of process parameters, namely gap, standoff distance and energy, were studied. It was shown that welding only takes place if the process parameters are within an optimal range. Additionally, the formation of intermetallic phases cannot be completely avoided in the weld of the Al/Cu dissimilar pair by MPW. Depending on the process parameters, the intermetallic compounds can appear as a continuous layer or as small pockets. The thickness and composition of the intermetallic layer depend on the processing parameters. Different intermetallic phases can be identified, meaning that different temperature-time regimes can occur during the process. It was also found that lower pulse energies are preferred. The relationship between energy increase and melting is possibly related to multiple sources of heating. Higher values of pulse energy are associated with higher induced currents in the part, meaning that more Joule heating will be generated.
In addition, more energy means higher flyer velocity: the air in the gap between the parts to be welded is expelled, and this aerodynamic drag (fluid friction), proportional to the square of the velocity, further contributes to the generation of heat. As the kinetic energy also increases with the square of the velocity, the dissipation of this energy through plastic work and jet generation will likewise contribute to an increase in temperature. To reduce intermetallic phases, porosity, and melt pockets, pulse energy should be minimized. Bond formation is affected not only by the gap, standoff distance, and energy but also by the mandrel's surface condition. No clear correlation was identified between surface roughness/scratch orientation and joint strength. Nevertheless, the aspect of the interface (thickness of the intermetallic layer, porosity, presence of macro/microcracks) is clearly affected by the surface topology. Welding was not established on oil-contaminated surfaces, meaning that the jet action is not enough to completely clean the surface.
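The velocity-squared dependence invoked above can be made concrete with a few lines of arithmetic (a sketch only; the flyer mass and velocities below are hypothetical, not values from the study):

```python
# Illustrative sketch: hypothetical flyer parameters, not data from the study.
def kinetic_energy(mass_kg, velocity_m_s):
    """Kinetic energy E = 0.5 * m * v^2 (joules)."""
    return 0.5 * mass_kg * velocity_m_s ** 2

m = 0.05               # kg, hypothetical flyer mass
v1, v2 = 300.0, 600.0  # m/s, hypothetical impact velocities

# Both kinetic energy and aerodynamic drag force scale with v^2,
# so doubling the flyer velocity quadruples both.
ratio = kinetic_energy(m, v2) / kinetic_energy(m, v1)
print(ratio)  # 4.0
```

This is why minimizing pulse energy, which lowers flyer velocity, disproportionately reduces the heat sources responsible for melt pockets.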

Keywords: bonding mechanisms, impact welding, intermetallic compounds, magnetic pulse welding, wave formation

Procedia PDF Downloads 205
1846 Fabrication and Characterisation of Additive Manufactured Ti-6Al-4V Parts by Laser Powder Bed Fusion Technique

Authors: Norica Godja, Andreas Schindel, Luka Payrits, Zsolt Pasztor, Bálint Hegedüs, Petr Homola, Jan Horňas, Jiří Běhal, Roman Ruzek, Martin Holzleitner, Sascha Senck

Abstract:

In order to reduce fuel consumption and CO₂ emissions in the aviation sector, innovative solutions are being sought to reduce the weight of aircraft, including additive manufacturing (AM). Of particular importance are the excellent mechanical properties that are required for aircraft structures. Ti6Al4V alloys, with their high mechanical properties in relation to weight, can reduce the weight of aircraft structures compared to structures made of steel and aluminium. Currently, conventional processes such as casting and CNC machining are used to obtain the desired structures, resulting in high raw material removal, which in turn leads to higher costs and impacts the environment. Additive manufacturing (AM) offers advantages in terms of weight, lead time, design, and functionality and enables the realisation of alternative geometric shapes with high mechanical properties. However, there are currently technological shortcomings that have led to AM not being approved for structural components with high safety requirements. An assessment of damage tolerance for AM parts is required, and quality control needs to be improved. Pores and other defects cannot be completely avoided at present, but they should be kept to a minimum during manufacture. The mechanical properties of the manufactured parts can be further improved by various treatments. The influence of different treatment methods (heat treatment, CNC milling, electropolishing, chemical polishing) and operating parameters were investigated by scanning electron microscopy with energy dispersive X-ray spectroscopy (SEM/EDX), X-ray diffraction (XRD), electron backscatter diffraction (EBSD) and measurements with a focused ion beam (FIB), taking into account surface roughness, possible anomalies in the chemical composition of the surface and possible cracks. The results of the characterisation of the constructed and treated samples are discussed and presented in this paper. 
These results were generated within the framework of the 3TANIUM project, which is financed by the EU under contract number 101007830.

Keywords: Ti6Al4V alloys, laser powder bed fusion, damage tolerance, heat treatment, electropolishing, potential cracking

Procedia PDF Downloads 78
1845 Research on the Environmental Assessment Index of Brownfield Redevelopment in Taiwan: A Case Study on Formosa Chemicals and Fibre Corporation, Changhua Branch

Authors: Min-Chih Yang, Shih-Jen Feng, Bo-Tsang Li

Abstract:

The concept of 'Brownfield' has been developed for nearly 35 years since it was put forward in the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of the USA in 1980 to solve the problem of soil contamination on old industrial lands, and many countries have since put forward relevant policies and research. But the related concept in Taiwan, a country that has been industrializing for 60 years, is still in its infancy. This leads to the slow development of Brownfield-related research and policy in Taiwan, so building a foundation for Brownfield development must rely on the related experience and research of other countries. There are four aspects of Brownfield: 1. Contaminated Land; 2. Derelict Land; 3. Vacant Land; 4. Previously Developed Land. This study focuses on and deeply investigates vacant land and contaminated land. The subject of this study is Formosa Chemicals & Fibre Corporation, Changhua branch, in Taiwan. It operated for nearly 50 years and contributed much to the local economy. But under the influence of the toxic waste and sewage drained regularly or occasionally from the factory, the environment has been seriously damaged. There are three factors of pollution: 1. the environmental toxicant carbon disulfide, released during production processes, and volatile gases that are hard to monitor; 2. waste and exhaust gas leakage caused by outdated equipment; 3. wastewater discharge that has seriously damaged the ecological environment of the Dadu river estuary. Because of these impacts, the factory has now been closed and relocated, opening an opportunity for the contaminated land to be redeveloped.
We therefore collect information about Brownfield management experience and policies in different countries as background for investigating current Taiwanese Brownfield redevelopment issues and for building an environmental assessment framework. We hope to set environmental assessment indexes for Formosa Chemicals & Fibre Corporation, Changhua branch, according to this framework. By investigating the theory and the environmental pollution factors, we will carry out in-depth analysis and an expert questionnaire to set those indexes and provide a Taiwanese sample for Brownfield redevelopment and remediation in the future.

Keywords: brownfield, industrial land, redevelopment, assessment index

Procedia PDF Downloads 396
1844 Impact of Electric Vehicles on Energy Consumption and Environment

Authors: Amela Ajanovic, Reinhard Haas

Abstract:

Electric vehicles (EVs) are considered an important means to cope with current environmental problems in transport. However, their high capital costs and limited driving ranges pose major barriers to broader market penetration. The core objective of this paper is to investigate the future market prospects of various types of EVs from an economic and ecological point of view. Our method of approach is based on the calculation of the total cost of ownership of EVs in comparison to conventional cars, and on a life-cycle approach to assess environmental benignity. The most crucial parameters in this context are km driven per year, depreciation time of the car and interest rate. The analysis of future prospects is based on technological learning regarding the investment costs of batteries. The major results are: the major disadvantages of battery electric vehicles (BEVs) are the high capital costs, mainly due to the battery, and a low driving range in comparison to conventional vehicles. These problems could be reduced with plug-in hybrids (PHEVs) and range extenders (REXs). These technologies have lower CO₂ emissions in the whole energy supply chain than conventional vehicles, but unlike BEVs they are not zero-emission vehicles at the point of use. The number of km driven has a higher impact on total mobility costs than the learning rate. Hence, the use of EVs as taxis and in car-sharing leads to the best economic performance. The most popular EVs are currently full hybrid EVs. They have only slightly higher costs and similar operating ranges to conventional vehicles. But since they are dependent on fossil fuels, they can only be seen as an energy efficiency measure. However, they can serve as a bridging technology as long as BEVs and fuel cell vehicles do not gain high popularity, and, together with PHEVs and REXs, contribute to faster technological learning and reductions in battery costs.
Regarding the promotion of EVs, the best results could be reached with a combination of monetary and non-monetary incentives, as in Norway, for example. The major conclusion is that to harvest the full environmental benefits of EVs, a very important aspect is the introduction of CO₂-based fuel taxes. This should ensure that the electricity for EVs is generated from renewable energy sources; otherwise, total CO₂ emissions are likely higher than those of conventional cars.
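Since the analysis hinges on total cost of ownership driven by km per year, depreciation time, and interest rate, a minimal TCO sketch may help (all figures hypothetical, not the paper's data):

```python
def annualized_capital_cost(capital, interest_rate, years):
    """Capital recovery factor: alpha = r(1+r)^n / ((1+r)^n - 1)."""
    q = (1 + interest_rate) ** years
    return capital * interest_rate * q / (q - 1)

def total_cost_per_km(capital, interest_rate, years, km_per_year,
                      fuel_cost_per_km, other_om_per_year):
    """Per-km mobility cost: annualized capital + O&M + energy, over km driven."""
    annual = (annualized_capital_cost(capital, interest_rate, years)
              + other_om_per_year + fuel_cost_per_km * km_per_year)
    return annual / km_per_year

# Hypothetical BEV vs. conventional car: higher capital, lower energy cost.
bev = total_cost_per_km(35000, 0.05, 10, 15000, 0.05, 500)
ice = total_cost_per_km(25000, 0.05, 10, 15000, 0.10, 700)
# More km per year (e.g., taxi use) spreads the fixed battery cost thinner.
bev_taxi = total_cost_per_km(35000, 0.05, 10, 60000, 0.05, 500)
```

Because the annualized battery cost is fixed, spreading it over more km, as in taxi or car-sharing use, narrows the gap to conventional cars, matching the conclusion above.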

Keywords: costs, mobility, policy, sustainability

Procedia PDF Downloads 217
1843 The Investigation of Work Stress and Burnout in Nurse Anesthetists: A Cross-Sectional Study

Authors: Yen Ling Liu, Shu-Fen Wu, Chen-Fuh Lam, I-Ling Tsai, Chia-Yu Chen

Abstract:

Purpose: Nurse anesthetists confront extraordinarily high job stress in their daily practice, deriving from fast-track anesthesia care, the risk of perioperative complications, routine rotating shifts, teaching programs and interactions with the surgical team in the operating room. This study investigated the influence of work stress on burnout and turnover intention of nurse anesthetists in a regional general hospital in Southern Taiwan. Methods: This was a descriptive correlational study carried out in 66 full-time nurse anesthetists. Data were collected from March 2017 to June 2017 by in-person interview, and a self-administered structured questionnaire was completed by the interviewee. Outcome measurements included the Practice Environment Scale of the Nursing Work Index (PES-NWI), the Maslach Burnout Inventory (MBI) and nursing staff turnover intention. Numerical data were analyzed by descriptive statistics, independent t-test, or one-way ANOVA. Categorical data were compared using the chi-square test (x²). Datasets were computed with Pearson product-moment correlation and linear regression. Data were analyzed using SPSS 20.0 software. Results: The average score for job burnout was 68.79 ± 16.67 (out of 100). The three major components of burnout were emotional exhaustion (mean score of 26.32), depersonalization (mean score of 13.65), and reduced personal accomplishment (mean score of 24.48). These average scores suggested that these nurse anesthetists were at high risk of burnout, and burnout was inversely correlated with turnover intention (t = -4.048, P < 0.05). Using a linear regression model, emotional exhaustion and depersonalization were the two independent factors that predicted turnover intention in the nurse anesthetists (19.1% of total variance).
Conclusion/Implications for Practice: The study identifies that the high risk of job burnout in nurse anesthetists is not simply derived from physical overload but most likely results from additional emotional and psychological stress. The occurrence of job burnout may affect the quality of nursing work and also influence family harmony, which in turn may increase the turnover rate. A multimodal approach is warranted to reduce work stress and job burnout in nurse anesthetists and to enhance their willingness to contribute to anesthesia care.
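The Pearson product-moment correlation used in the analysis can be sketched in plain Python (the score lists below are invented for illustration, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: higher burnout accompanying higher turnover intention.
burnout = [55, 62, 70, 75, 81, 90]
turnover = [2, 3, 3, 4, 4, 5]
r = pearson_r(burnout, turnover)
# r squared gives the share of variance explained by a simple regression,
# analogous to the 19.1% of variance reported in the study.
```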

Keywords: anesthesia nurses, burnout, job, turnover intention

Procedia PDF Downloads 288
1842 Extended Boolean Petri Nets Generating N-Ary Trees

Authors: Riddhi Jangid, Gajendra Pratap Singh

Abstract:

Petri nets, a mathematical tool, are used for modeling in different areas of computer science, biological networks, chemical systems and many other disciplines. A Petri net model of a given system is created by a graphical representation that describes the properties and behavior of the system. When looking at the behavior of any system, 1-safe Petri nets are of particular interest for applications. Boolean Petri nets constitute the class of 1-safe Petri nets that generate all binary n-vectors in their reachability analysis. We study the class by changing different parameters, like the token counts in the places, and by examining how the structure of the tree changes in the reachability analysis. We discuss here an extended class of Boolean Petri nets that generates n-ary trees in their reachability-based analysis.
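The reachability analysis referred to above can be sketched as a breadth-first search over markings of a 1-safe net; the toy two-place net below (our own example, not the paper's construction) reaches all binary 2-vectors, the defining property of a Boolean Petri net:

```python
from collections import deque

def reachable_markings(transitions, initial):
    """BFS over the reachability set of a 1-safe Petri net.

    transitions: list of (preset, postset) pairs of place-index sets.
    A transition is enabled iff every preset place holds a token and
    every pure postset place is empty (so 1-safeness is preserved).
    """
    seen = {initial}
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for pre, post in transitions:
            if all(m[p] == 1 for p in pre) and all(
                    m[p] == 0 for p in post - pre):
                m2 = list(m)
                for p in pre:
                    m2[p] = 0
                for p in post:
                    m2[p] = 1
                m2 = tuple(m2)
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen

# Toy net: independent "set" and "reset" transitions for each of 2 places.
ts = [(set(), {0}), ({0}, set()), (set(), {1}), ({1}, set())]
markings = reachable_markings(ts, (0, 0))
# The net reaches all four binary 2-vectors: (0,0), (0,1), (1,0), (1,1).
```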

Keywords: marking vector, n-vector, petri nets, reachability

Procedia PDF Downloads 74
1841 Identifying Diabetic Retinopathy Complication by Predictive Techniques in Indian Type 2 Diabetes Mellitus Patients

Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad

Abstract:

Predicting the risk of diabetic retinopathy (DR) in Indian type 2 diabetes patients is immensely necessary. India is the second largest country after China in terms of the number of diabetic patients, yet, to the best of our knowledge, not a single risk score for complications has ever been investigated. Diabetic retinopathy is a serious complication and is the topmost reason for visual impairment across countries. Any type or form of DR has been taken as the event of interest, be it mild, background, or grade I, II, III, and IV DR. A sample was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. Collected variables include patients' data such as sex, age, height, weight, body mass index (BMI), blood sugar fasting (BSF), post prandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high density lipoprotein (HDL), low density lipoprotein (LDL), very low density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity and history of DR. Cox proportional hazard regression is used to design risk scores for the prediction of retinopathy. Model calibration and discrimination are assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic curve (ROC). Overfitting and underfitting of the model are checked by applying regularization techniques, and the best method is selected from ridge, lasso and elastic net regression. The optimal cut-off point is chosen by Youden's index. The five-year probability of DR is predicted by both the survival function and a two-state Markov chain model, and the better technique is identified. The risk scores developed can be applied by doctors and by patients themselves for self-evaluation.
Furthermore, the five-year probabilities can be applied to forecast and monitor the condition of patients. This provides immense benefit for the real-world application of DR prediction in T2DM.
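Cut-off selection by Youden's index, mentioned above, can be sketched as follows (pure Python; the risk scores and outcomes are hypothetical, not the study's data):

```python
def youden_optimal_cutoff(scores, labels):
    """Pick the threshold maximising Youden's J = sensitivity + specificity - 1."""
    best_j, best_t = -1.0, None
    positives = sum(labels)
    negatives = len(labels) - positives
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / positives + tn / negatives - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical 5-year DR risk scores and observed retinopathy status (1 = DR).
risk = [0.1, 0.2, 0.35, 0.4, 0.6, 0.8, 0.9]
dr = [0, 0, 0, 1, 1, 1, 1]
cutoff, j = youden_optimal_cutoff(risk, dr)
# Here the scores separate the classes perfectly, so J reaches its maximum of 1.
```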

Keywords: Cox proportional hazard regression, diabetic retinopathy, ROC curve, type 2 diabetes mellitus

Procedia PDF Downloads 177
1840 Control the Flow of Big Data

Authors: Shizra Waris, Saleem Akhtar

Abstract:

Big data is a research area receiving attention from academia and IT communities. In the digital world, the amounts of data produced and stored have grown sharply within a short period of time. Consequently, this fast-increasing rate of data has created many challenges. In this paper, we use functionalism and structuralism paradigms to analyze the genesis of big data applications and their current trends. This paper presents a complete discussion of state-of-the-art big data technologies based on group and stream data processing, and the strengths and weaknesses of these technologies are analyzed. This study also covers big data analytics techniques, processing methods, some reported case studies from different vendors, several open research challenges and the opportunities brought about by big data. The similarities and differences of these techniques and technologies based on important limitations are also investigated. Emerging technologies are suggested as a solution for big data problems.

Keywords: computer, IT community, industry, big data

Procedia PDF Downloads 187
1839 Hidro-IA: An Artificial Intelligent Tool Applied to Optimize the Operation Planning of Hydrothermal Systems with Historical Streamflow

Authors: Thiago Ribeiro de Alencar, Jacyro Gramulia Junior, Patricia Teixeira Leite

Abstract:

The area of the electricity sector that deals with meeting energy needs through hydroelectric plants in a coordinated manner is called Operation Planning of Hydrothermal Power Systems (OPHPS). Its purpose is to find an operating policy that provides electrical power to the system in a given period, with reliability and minimal cost. Therefore, it is necessary to determine an optimal generation schedule for each hydroelectric plant in each interval, so that the system meets demand reliably, avoiding rationing in years of severe drought, and so that the expected cost of operation over the planning horizon is minimized, defining an appropriate strategy for thermal complementation. Several optimization algorithms specifically applied to this problem have been developed and are in use. Although they provide solutions to various problems encountered, these algorithms have some weaknesses: difficulties in convergence, simplification of the original formulation of the problem, or limitations owing to the complexity of the objective function. An alternative to these challenges is the development of more sophisticated and reliable simulation-optimization techniques that can assist the planning of the operation. Thus, this paper presents the development of a computational tool, namely Hydro-IA, for solving the optimization problem identified, while providing the user with easy handling. The intelligent optimization technique adopted is the Genetic Algorithm (GA), and the programming language is Java. First, the chromosomes were modeled; then the evaluation function of the problem and the operators involved were implemented; finally, the graphical interfaces for user access were drafted. The results with the Genetic Algorithm were compared with the nonlinear programming (NLP) optimization technique. Tests were conducted with seven hydraulically interconnected hydroelectric plants, using historical streamflow from 1953 to 1955.
The comparison between the GA and NLP techniques shows that the operating cost obtained by the GA becomes increasingly smaller than that of the NLP as the number of interconnected hydroelectric plants increases. The program achieved coherent performance in problem resolution without the need to simplify the calculations, together with ease of manipulating the simulation parameters and visualizing the output results.
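The GA workflow described, chromosome modeling, fitness evaluation, and genetic operators, can be sketched in miniature (in Python rather than the tool's Java, with a toy one-variable cost in place of the hydrothermal dispatch problem):

```python
import random

def genetic_minimize(cost, bounds, pop_size=40, generations=120,
                     mutation_rate=0.2, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover, mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if cost(a) < cost(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            w = rng.random()
            child = w * p1 + (1 - w) * p2           # blend crossover
            if rng.random() < mutation_rate:         # Gaussian mutation
                child += rng.gauss(0, 0.1 * (hi - lo))
            children.append(min(max(child, lo), hi))  # clamp to bounds
        pop = children
    return min(pop, key=cost)

# Toy "operating cost" with a known minimum at x = 3; the real problem
# would evaluate the hydrothermal dispatch cost of a candidate schedule.
best = genetic_minimize(lambda x: (x - 3.0) ** 2 + 1.0, (0.0, 10.0))
```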

Keywords: energy, optimization, hydrothermal power systems, artificial intelligence and genetic algorithms

Procedia PDF Downloads 416
1838 A Study of Different Retail Models That Penetrates South African Townships

Authors: Beaula, M. Kruger, Silindisipho, T. Belot

Abstract:

Small informal retailers are considered one of the most important features of developing countries around the world. These small informal retailers form part of local communities in South African townships and are estimated to number more than 100,000 across the country. The township economic landscape has changed over time in South Africa. Traditional small informal retailers in South African townships have faced the challenge of increasing competition: a growing number of local retail shops and foreign-owned shops. There is evidence that South African personal and disposable income has increased amongst black African consumers. Historically, people residing in townships were restricted to informal retail shops; however, this has changed due to the growing number of formal large retail chains entering the township market. The larger retail chains are aware of the improved income levels of middle-income township residents, and as a result larger retailers have followed strategies such as: (1) retail format development; (2) a diversification growth strategy; (3) a market penetration growth strategy and (4) market expansion. This research performed a comparative analysis of the different retail models developed by Pick n Pay, Spar and Shoprite. The research methodology employed for this study was qualitative in nature and made use of a case study to conduct a comparative analysis between larger retailers. A questionnaire was also designed to obtain data from existing smaller retailers. The study found that larger retailers have developed smaller retail formats to compete with the traditional smaller retailers operating in South African townships. Only one of the large retailers offers entrepreneurs a franchise model. One of the big retailers offers the opportunity to employ between 15 and 20 employees, while for the others this is subject to the outcome of a feasibility study.
The responses obtained from the entrepreneurs in the townships were mixed: some found the larger retailers' presence to have a 'negative impact' that has increased competition, while others saw them as a means to obtain a variety of products. This research found that the retail model most beneficial for both the bigger retailer and existing and new entrepreneurs is that of Pick n Pay. The other retail format models are more beneficial for the bigger retailers than for new and existing entrepreneurs.

Keywords: Pick n Pay, retailers, shoprite, spar, townships

Procedia PDF Downloads 189
1837 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms

Authors: Habtamu Ayenew Asegie

Abstract:

Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, households' economies can be diminished and their well-being can fall into trouble. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. As a result, this study aims to predict a household's wealth status using ensemble machine learning (ML) algorithms. In this study, design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset is accessed for this purpose from the Central Statistical Agency (CSA)'s database. Various data pre-processing techniques were employed, and the model training was conducted using scikit-learn Python library functions. Model evaluation was executed using various metrics like accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations of domain experts. An optimal subset of hyper-parameters for the algorithms was selected through the grid search function for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms have accuracies of 91.53%, 88.44%, and 58.55%, respectively.
The findings suggest that some features, like 'Age of household head', 'Total children ever born' in a family, 'Main roof material' of the house, the 'Region' they live in, whether a household uses 'Electricity' or not, and the 'Type of toilet facility' of a household, are determinant factors that should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact scored 82.28% in the domain experts' evaluation. Overall, the study shows ML techniques are effective in predicting the wealth status of households.
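The core idea shared by the ensemble methods compared above, many weak learners combined by voting, can be sketched without any ML library (the features and thresholds below are invented, not the EDHS variables' actual encodings):

```python
from collections import Counter

def stump(feature_index, threshold):
    """A one-rule weak learner: predicts 1 iff the feature exceeds the threshold."""
    return lambda x: 1 if x[feature_index] > threshold else 0

def majority_vote(classifiers, x):
    """Hard-voting ensemble: the most common individual prediction wins."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Hypothetical household features: (head_age, rooms, has_electricity).
# Each stump alone is weak; together they vote on wealth status (1 = wealthy).
ensemble = [stump(0, 40), stump(1, 3), stump(2, 0)]
prediction = majority_vote(ensemble, (55, 4, 1))  # all three stumps vote 1
```

Random Forest, AdaBoost, and the gradient-boosting variants differ mainly in how the individual learners are grown and weighted, not in this basic combination principle.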

Keywords: ensemble machine learning, household wealth status, predictive model, wealth status prediction

Procedia PDF Downloads 34
1836 Persuading ICT Consumers to Disconnect from Work: An Experimental Study on the Influence of Message Frame, Regulatory Focus, Ad Believability and Attitude toward the Ad on Message Effectiveness

Authors: Katharina Ninaus, Ralf Terlutter, Sandra Diehl

Abstract:

Information and communication technologies (ICT) have become pervasive in all areas of modern life, both in work and leisure. Technological developments and particularly the ubiquity of smartphones have made it possible for ICT consumers to be constantly connected to work, fostering an always-on mentality and increasing the pressure to be accessible at all times. However, performing work tasks outside of working hours using ICT results in a lack of mental detachment and recovery from work. It is, therefore, necessary to develop effective behavioral interventions to increase risk awareness of a constant connection to the workplace in the employed population. Drawing on regulatory focus theory, this study aims to investigate the persuasiveness of tailoring messages to individuals’ chronic regulatory focus in order to encourage ICT consumers to set boundaries by defining fixed times for professional accessibility outside of working hours in order to contribute to the well-being of ICT consumers with high ICT involvement in their work life. The experimental study examines the interaction effect between consumers’ chronic regulatory focus (i.e. promotion focus versus prevention focus) and positive or negative message framing (i.e. gain frame versus loss frame) on consumers’ intention to perform the advocated behavior. Based on the assumption that congruent messages create regulatory fit and increase message effectiveness, it is hypothesized that behavioral intention will be higher in the condition of regulatory fit compared to regulatory non-fit. It is further hypothesized that ad believability and attitude toward the ad will mediate the effect of regulatory fit on behavioral intention given that ad believability and ad attitude both determine consumer behavioral responses. 
Results confirm that the interaction between regulatory focus and message frame emerged as a predictor of behavioral intention, such that consumers’ intentions to set boundaries by defining fixed times for professional accessibility outside of working hours increased as congruency with their regulatory focus increased. The loss-framed ad was more effective for consumers with a predominant prevention focus, while the gain-framed ad was more effective for consumers with a predominant promotion focus. Ad believability and attitude toward the ad both emerged as predictors of behavioral intention. Mediation analysis revealed that the direct effect of the interaction between regulatory focus and message frame on behavioral intention was no longer significant when ad believability and ad attitude were included as mediators in the model, indicating full mediation. However, while the indirect effect through ad believability was significant, the indirect effect through attitude toward the ad was not. Hence, regulatory fit increased ad believability, which in turn increased behavioral intention. Ad believability appears to have the superior effect, indicating that behavioral intention does not depend on attitude toward the ad but on whether the ad is perceived as believable. The study shows that the principle of regulatory fit holds true in the context of ICT consumption and responds to calls for more research on mediators of health message framing effects.
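The full-mediation pattern reported above, a direct effect that collapses once the mediator enters the model, can be sketched with two least-squares fits on synthetic data (the effect sizes below are invented, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
fit = rng.integers(0, 2, n).astype(float)              # regulatory fit (0/1)
believability = 2.0 * fit + rng.normal(0, 1, n)        # fit raises believability
intention = 1.5 * believability + rng.normal(0, 1, n)  # which drives intention

def ols_coefs(y, *predictors):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols_coefs(intention, fit)[1]                   # total effect of fit
direct = ols_coefs(intention, fit, believability)[1]   # direct effect, mediator held
# Full mediation: the direct effect collapses toward zero once the
# mediator is in the model, while the total effect stays large.
```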

Keywords: always-on mentality, Information and communication technologies (ICT) consumption, message framing, regulatory focus

Procedia PDF Downloads 206
1835 The Psycho-Linguistic Aspect of Translation Gaps in Teaching English for Specific Purposes

Authors: Elizaveta Startseva, Elena Notina, Irina Bykova, Valentina Ulyumdzhieva, Natallia Zhabo

Abstract:

With the various existing models of intercultural communication that contain a vast number of stages for foreign language acquisition, there is a need for conscious perception of the foreign culture. Such a process is associated with the emergence of linguistic conflict with the consistent students’ desire to solve the problem of the language differences, along with cultural discrepancies. The aim of this study is to present the modern ways and methods of removing psycholinguistic conflict through skills development in professional translation and intercultural communication. The study was conducted in groups of 1-4-year students of Medical Institute and Agro-Technological Institute RUDN university. In the course of training, students got knowledge in such disciplines as basic grammar and vocabulary of the English language, phonetics, lexicology, introduction to linguistics, theory of translation, annotating and referencing media texts and texts in specialty. The students learned to present their research work, participated in the University and exit conferences with their reports and presentations. Common strategies of removing linguistic and cultural conflict can be attributed to the development of such abilities of a language personality as a commitment to communication and cooperation, the formation of cultural awareness and empathy of other cultures of the individual, realistic self-esteem, emotional stability, tolerance, etc. The process of mastering a foreign language and culture of the target language leads to a reduplication of linguistic identity, which leads to successive formation of the so-called 'secondary linguistic personality.' In our study, we tried to approach the problem comprehensively, focusing on the translation gaps for technical and non-technical language still missing such a typology which could classify all of the lacunas on the same principle. 
When acquiring this background knowledge, students learn to overcome the difficulties posed by the national-specific and linguistic differences of the cultures in contact, i.e., to eliminate the gaps (to fill them in and compensate for them). Compensation is the initial phase of gap elimination and is followed, in some cases but not others, by the filling of semantic voids (plenus). The concept of plenus occurs in most cases of translation gaps, for example in the transcription and transliteration of culture-specific items and exoticisms, and in replication (reproduction of the morphemic structure of words or idioms). In all the above cases, the translator's task is to ensure an identical response from the receptors of the original and translated texts, since any statement is created with the goal of achieving a communicative effect, and hence pragmatic potential is the most important part of its content. The practical value of our work lies in improving the methodology of teaching English for specific purposes on the basis of the psycholinguistic concept of the secondary language personality.

Keywords: lacuna, language barrier, plenus, secondary language personality

Procedia PDF Downloads 281
1834 The Urgent Quest for an Alliance between the Global North and Global South to Manage the Risk of Refugees and Asylum Seekers

Authors: Mulindwa Gerald

Abstract:

Forced migration is believed to be the most pressing issue in migration studies today; it is therefore of paramount importance to examine the efficacy of the prevailing laws, treaties, conventions, and global policies of refugee management. It suffices to note that the existing policies are vague and ambiguous, encouraging hospitality but not assessing the socio-economic impact on refugees and their host communities alike. The commentary around offshore arrangements, such as the UK-Rwanda scheme, and their legal implications makes the issue even more pressing. These are issues that need to be amplified and captured in migration policies. In Uganda, a small landlocked country in East Africa, new faces continually appeared: refugees from the Congo and Rwanda, the neighbouring countries to the west and south-west respectively. The refugees would migrate to Uganda with no idea whatsoever how they would meet the daily needs of life: no food, no shelter, no clothing. This prompts one to conscientiously interrogate the policy issues surrounding refugee management. The 1951 Convention sets a number of obligations on states, and the conundrum faced by those interested in migration studies is ensuring maximum compliance with these obligations given the resource challenges. States have a duty to protect refugees in accordance with Article 14 of the Universal Declaration of Human Rights, taken up by the 1951 Convention; this underpins rights such as the most important right of refugees, the principle of non-refoulement, which prohibits the expulsion or return of refugees or asylum seekers. The International Organization for Migration's projection of the number of migrants globally by 2050 was overwhelmingly surpassed by 2019 due to the wars and conflicts experienced in different parts of the globe, as well as natural calamities and tough economic conditions.
This is a descriptive analysis employing a qualitative research design based on a case study involving both desk research and field study. Qualitative approaches such as interview guides, document review, and direct observation brought the experience and the social, behavioural, and cultural aspects of the respondents into the study; since qualitative research uses subjective information and is not limited to rigidly definable variables, it helped to explore the research area. We believe this paper will trigger perspectives and spark a conversation on the pressing global issue of refugees and asylum seekers. It suggests viable solutions to the management challenges and makes recommendations, for instance that no refugees or asylum seekers be turned away at any border, and that a concerted effort of all global players ensure that refugees are protected efficiently.

Keywords: management, migration, refugees, rights

Procedia PDF Downloads 48
1833 Exploring Deep Neural Network Compression: An Overview

Authors: Ghorab Sara, Meziani Lila, Rubin Harvey Stuart

Abstract:

The rapid growth of deep learning has led to intricate and resource-intensive deep neural networks widely used in computer vision tasks. However, their complexity results in high computational demands and memory usage, hindering real-time application. To address this, research focuses on model compression techniques. The paper provides an overview of recent advancements in compressing neural networks and categorizes the various methods into four main approaches: network pruning, quantization, network decomposition, and knowledge distillation. This paper aims to provide a comprehensive outline of both the advantages and limitations of each method.
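Two of the surveyed compression approaches, pruning and quantization, can be illustrated in a few lines. The sketch below is illustrative only and is not drawn from the paper: it applies magnitude-based weight pruning and uniform 8-bit quantization to a random weight matrix.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity` fraction is zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def uniform_quantize(weights: np.ndarray, n_bits: int = 8) -> np.ndarray:
    """Uniform affine quantization to n_bits levels, then dequantize to float."""
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / (2 ** n_bits - 1)
    q = np.round((weights - lo) / scale)
    return q * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"sparsity: {np.mean(w_pruned == 0):.2f}")  # prints: sparsity: 0.90
w_q = uniform_quantize(w, n_bits=8)
print(f"max quantization error: {np.abs(w - w_q).max():.4f}")
```

In practice both techniques are applied to trained networks and followed by fine-tuning; the paper surveys these along with low-rank decomposition and knowledge distillation.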

Keywords: model compression, deep neural network, pruning, knowledge distillation, quantization, low-rank decomposition

Procedia PDF Downloads 37
1832 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested for various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that the models performed more accurately on MNIST than on colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. So far, however, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not been developed for colored images to determine how much better they are than classical approaches; only a few models, such as quantum variational circuits, accept colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into greyscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used.
The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may further increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be superior to classical machine learning approaches in terms of processing speed and accuracy when used for classification of colored classes.
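The greyscale pre-processing step described above can be sketched as follows. The luminance coefficients and the [0, π] angle range for subsequent single-qubit angle encoding are common conventions, not values taken from the paper.

```python
import numpy as np

def rgb_to_grey(images: np.ndarray) -> np.ndarray:
    """ITU-R BT.601 luminance conversion for a batch of HxWx3 images."""
    coeffs = np.array([0.299, 0.587, 0.114])
    return images @ coeffs

def to_rotation_angles(grey: np.ndarray) -> np.ndarray:
    """Map pixel intensities in [0, 255] to rotation angles in [0, pi],
    a usual range for single-qubit angle encoding."""
    return grey / 255.0 * np.pi

batch = np.random.default_rng(1).integers(0, 256, size=(4, 32, 32, 3)).astype(float)
grey = rgb_to_grey(batch)            # shape (4, 32, 32)
angles = to_rotation_angles(grey)    # ready to feed into rotation gates
print(grey.shape, float(angles.min()) >= 0.0, float(angles.max()) <= np.pi)
```

In a full hybrid pipeline, each angle would parameterize a rotation gate in the quantum circuit, with measurement outcomes returned to the classical optimizer.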

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 121
1831 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model

Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung

Abstract:

The aim of the present study was to explore chemical dermal exposure assessment models developed abroad and to evaluate their feasibility for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: UK Control of Substances Hazardous to Health (COSHH), Europe Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), Netherlands Dose-Related Effect Assessment Model (DREAM), Netherlands Stoffenmanager (STOFFEN), Nicaragua Dermal Exposure Ranking Method (DERM), and USA/Canada Public Health Engineering Department (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of a dermal exposure assessment model. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson's test. Results show that the strength of COSHH's decision factors could not be determined, because the results for all evaluated industries fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency have a positive correlation. There is a positive correlation between skin exposure, relative working time, and working environment in the DREAM model. In the RISKOFDERM model, the actual exposure situation and exposure time have a positive correlation.
We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models showed poor correlation, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to the results, both the DERM and RISKOFDERM models are suitable for these selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
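Pearson's test, used above to compare semi-quantitative model scores with quantitative exposure estimates, can be computed directly. The scores below are hypothetical, for illustration only.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical scores: semi-quantitative model ranks vs. quantitative estimates
semi = [2.1, 3.4, 1.2, 4.8, 3.9]
quant = [0.8, 1.5, 0.4, 2.2, 1.7]
print(round(pearson_r(semi, quant), 3))
```

A coefficient near 1 (as for DERM and RISKOFDERM above) indicates the semi-quantitative ranking tracks the quantitative estimate well; a significance test on the coefficient then requires the sample size.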

Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation

Procedia PDF Downloads 162
1830 A Redesigned Pedagogy in Introductory Programming Reduces Failure and Withdrawal Rates by Half

Authors: Said Fares, Mary Fares

Abstract:

It is well documented that introductory computer programming courses are difficult and that failure rates are high. The aim of this project was to reduce the high failure and withdrawal rates in learning to program. This paper presents a number of changes in module organization and instructional delivery in teaching CS1: daily out-of-class help sessions and tutoring services, interactive lectures and laboratories, online resources, and timely feedback were introduced. Five years of data on 563 students in 21 sections were collected and analyzed. The primary results show that the failure and withdrawal rates were cut by more than half. Student surveys indicate a positive evaluation of the modified instructional approach and overall satisfaction with the course, and consequently higher success and retention rates.

Keywords: failure rate, interactive learning, student engagement, CS1

Procedia PDF Downloads 303
1829 A Genetic-Neural-Network Modeling Approach for Self-Heating in GaN High Electron Mobility Transistors

Authors: Anwar Jarndal

Abstract:

In this paper, a genetic-neural-network (GNN) based large-signal model for GaN HEMTs is presented along with its parameter extraction procedure. The model is easy to construct and implement in CAD software and requires only DC and S-parameter measurements. An improved decomposition technique is used to model the self-heating effect. Two GNN models are constructed to simulate the isothermal drain current and the power dissipation, respectively. The two models are then composed to simulate the drain current. The modeling procedure was applied to a packaged GaN-on-Si HEMT, and the developed model is validated by comparing its large-signal simulation with measured data. Very good agreement between simulation and measurement is obtained.
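The composition of an isothermal current model with a power-dissipation model can be illustrated as an electro-thermal fixed-point iteration: dissipated power raises the channel temperature, which in turn reduces the current. The toy I-V expression, thermal resistance, and derating coefficient below are illustrative placeholders for the paper's GNN sub-models, not extracted values.

```python
def drain_current_with_self_heating(v_ds, v_gs, t_ambient=300.0, r_th=20.0,
                                    tol=1e-9, max_iter=100):
    """Fixed-point iteration coupling an isothermal current model with
    self-heating: P = V_ds * I raises channel temperature via thermal
    resistance R_th, which reduces the current. A toy isothermal model
    stands in for the paper's GNN sub-models."""
    def i_iso(v_ds, v_gs, temp):
        # hypothetical isothermal I-V: current falls 0.2%/K above ambient
        return 0.05 * v_gs * v_ds * (1.0 - 0.002 * (temp - 300.0))

    temp = t_ambient
    current = i_iso(v_ds, v_gs, temp)
    for _ in range(max_iter):
        power = v_ds * current
        temp = t_ambient + r_th * power        # self-heated channel temperature
        new_current = i_iso(v_ds, v_gs, temp)  # re-evaluate model at new T
        if abs(new_current - current) < tol:
            current = new_current
            break
        current = new_current
    return current, temp

i, t = drain_current_with_self_heating(v_ds=10.0, v_gs=1.0)
print(f"I = {i:.4f} A, channel T = {t:.1f} K")
```

With these placeholder values the iteration contracts quickly; in the paper's procedure, both sub-models are neural networks whose parameters are found by genetic optimization.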

Keywords: GaN HEMT, computer-aided design and modeling, neural networks, genetic optimization

Procedia PDF Downloads 376
1828 Effect of Signal Acquisition Procedure on Imagined Speech Classification Accuracy

Authors: M.R Asghari Bejestani, Gh. R. Mohammad Khani, V.R. Nafisi

Abstract:

Imagined speech recognition is one of the most interesting approaches to BCI development, and a lot of work has been done in this area. Many different experiments have been designed, and hundreds of combinations of feature extraction methods and classifiers have been examined. Reported classification accuracies range from chance level to more than 90%. Based on the non-stationary nature of brain signals, we have introduced three classification modes according to the time difference between inter- and intra-class samples. These modes can explain the diversity of reported results and predict the range of classification accuracies to expect from a given signal acquisition procedure. In this paper, a few examples are illustrated by inspecting the results of some previous works.

Keywords: brain computer interface, silent talk, imagined speech, classification, signal processing

Procedia PDF Downloads 147
1827 Through the Robot’s Eyes: A Comparison of Robot-Piloted, Virtual Reality, and Computer Based Exposure for Fear of Injections

Authors: Bonnie Clough, Tamara Ownsworth, Vladimir Estivill-Castro, Matt Stainer, Rene Hexel, Andrew Bulmer, Wendy Moyle, Allison Waters, David Neumann, Jayke Bennett

Abstract:

The success of global vaccination programs relies on the uptake of vaccines to achieve herd immunity. Yet many individuals do not obtain vaccines or venipuncture procedures when needed. Whilst health education may be effective for individuals who are hesitant due to safety or efficacy concerns, for many the primary concern relates to blood or injection fear or phobia (BII). BII is highly prevalent and associated with a range of negative health impacts, at both individual and population levels. Exposure therapy is an efficacious treatment for specific phobias, including BII, but has high patient dropout and low implementation by therapists. Whilst virtual reality approaches to exposure therapy may be more acceptable, they have similarly low rates of implementation by therapists and are often difficult to tailor to an individual client's needs. It was proposed that a piloted robot may adequately facilitate fear induction and be an acceptable approach to exposure therapy. The current study examined fear induction responses, acceptability, and feasibility of a piloted robot for BII exposure. A Nao humanoid robot was programmed to connect with a virtual reality head-mounted display, enabling live streaming and exploration of real environments from a distance. Thirty adult participants with BII fear were randomly assigned to robot-pilot or virtual reality exposure conditions in a laboratory-based fear exposure task. All participants also completed a computer-based two-dimensional exposure task, with the order of conditions counterbalanced across participants. Measures included fear (heart rate variability, galvanic skin response, stress indices, and subjective units of distress), engagement with the feared stimulus (eye gaze: time to first fixation and total number of fixations), acceptability, and perceived treatment credibility.
Preliminary results indicate that fear responses can be adequately induced via a robot-piloted platform. Further results will be discussed, as will implications for the treatment of BII phobia and other fears. It is anticipated that piloted robots may provide a useful platform for facilitating exposure therapy, being more acceptable than in-vivo exposure and more flexible than virtual reality exposure.

Keywords: anxiety, digital mental health, exposure therapy, phobia, robot, virtual reality

Procedia PDF Downloads 72
1826 QSAR Study on Diverse Compounds for Effects on Thermal Stability of a Monoclonal Antibody

Authors: Olubukayo-Opeyemi Oyetayo, Oscar Mendez-Lucio, Andreas Bender, Hans Kiefer

Abstract:

The thermal melting curve of a protein provides information on its conformational stability and could provide cues on its aggregation behavior. Naturally occurring osmolytes have been shown to improve the thermal stability of most proteins in a concentration-dependent manner. They are therefore commonly employed as additives in therapeutic protein purification and formulation. A number of intertwined and seemingly conflicting mechanisms have been put forward to explain the observed stabilizing effects, the most prominent being the preferential exclusion mechanism. We attempted to probe and summarize molecular mechanisms for thermal stabilization of a monoclonal antibody (mAb) by developing quantitative structure-activity relationships using a rationally selected library of 120 osmolyte-like compounds in the polyhydric alcohol, amino acid, and methylamine classes. Thermal stabilization potencies were experimentally determined by thermal shift assays based on differential scanning fluorimetry. The cross-validated QSAR model was developed by partial least squares regression using descriptors generated from the Molecular Operating Environment software. Careful evaluation of the results using the variable importance in projection (VIP) parameter and the regression coefficients guided the selection of the descriptors most relevant to mAb thermal stability. For the mAb studied, at pH 7, the thermal stabilization effects of the tested compounds correlated positively with their fractional polar surface area and inversely with their fractional hydrophobic surface area. We cannot claim that the observed trends are universal for osmolyte-protein interactions, because of protein-specific effects; however, this approach should guide the quick selection of (de)stabilizing compounds for a protein from a chemical library.
Further work with a large variety of proteins and at different pH values would help the derivation of a solid explanation as to the nature of favorable osmolyte-protein interactions for improved thermal stability. This approach may be beneficial in the design of novel protein stabilizers with optimal property values, especially when the influence of solution conditions like the pH and buffer species and the protein properties are factored in.
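The core of a partial least squares regression, used above to build the QSAR model, can be sketched for a single latent component (NIPALS-style). The descriptors and response below are synthetic stand-ins for the surface-area descriptors and measured stabilization potencies, not the paper's data.

```python
import numpy as np

def pls1_one_component(X, y):
    """Single-component PLS1 (NIPALS): returns means and regression
    coefficients so that y_hat = y_mean + (X - X_mean) @ beta."""
    X_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - X_mean, y - y_mean
    w = Xc.T @ yc
    w /= np.linalg.norm(w)          # direction of maximal covariance with y
    t = Xc @ w                      # latent scores
    q = (yc @ t) / (t @ t)          # y-loading (regression of y on scores)
    beta = w * q
    return X_mean, y_mean, beta

rng = np.random.default_rng(2)
# hypothetical descriptors: e.g. fractional polar / hydrophobic surface area
X = rng.uniform(size=(40, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.05, size=40)
X_mean, y_mean, beta = pls1_one_component(X, y)
y_hat = y_mean + (X - X_mean) @ beta
r = np.corrcoef(y, y_hat)[0, 1]
print(f"one-component PLS fit correlation: {r:.2f}")
```

Real QSAR workflows extract several components, cross-validate the component count, and inspect VIP scores as described above; this sketch only shows the mechanics of one latent direction.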

Keywords: thermal stability, monoclonal antibodies, quantitative structure-activity relationships, osmolytes

Procedia PDF Downloads 326
1825 Optimization Approach to Integrated Production-Inventory-Routing Problem for Oxygen Supply Chains

Authors: Yena Lee, Vassilis M. Charitopoulos, Karthik Thyagarajan, Ian Morris, Jose M. Pinto, Lazaros G. Papageorgiou

Abstract:

With globalisation, better coordination of production and distribution decisions has become increasingly important for industrial gas companies in order to remain competitive in the marketplace. In this work, we investigate a problem that integrates production, inventory, and routing decisions in a liquid oxygen supply chain. The oxygen supply chain consists of production facilities, external third-party suppliers, and multiple customers, including hospitals and industrial customers. The product produced by the plants or sourced from the competitors, i.e., third-party suppliers, is distributed by a fleet of heterogeneous vehicles to satisfy customer demands. The objective is to minimise the total operating cost, comprising production, third-party, and transportation costs. The key production decisions include production and inventory levels and the amount sourced from third-party suppliers, while the distribution decisions involve customer allocation, delivery timing, delivery amount, and vehicle routing. The optimisation of the coordinated production, inventory, and routing decisions is a challenging problem, especially for large instances. Thus, we present a two-stage procedure to solve the integrated problem efficiently. First, the problem is formulated as a mixed-integer linear programming (MILP) model with a simplified routing component. The solution of this first-stage MILP model yields the optimal customer allocation, production and inventory levels, and delivery timing and amount. Then, we fix these decisions and solve a detailed routing problem. In the second stage, we propose a column generation scheme to address the computational complexity of the resulting detailed routing problem. A case study considering a real-life oxygen supply chain in the UK is presented to illustrate the capability of the proposed models and solution method.
Furthermore, the solutions from the proposed approach are compared with those of existing metaheuristic techniques (e.g., guided local search and tabu search algorithms) to evaluate its efficiency.
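The division of labour between the two stages can be illustrated with a toy second-stage routing step: once the first-stage decisions (customer allocation, delivery amounts) are fixed, each vehicle's route is built over its allocated customers. A nearest-neighbour heuristic stands in here for the paper's column-generation scheme; the coordinates are hypothetical.

```python
import math

def nearest_neighbour_route(depot, customers):
    """Greedy route construction over customers already allocated to one
    vehicle by the first-stage model; a simple stand-in for the paper's
    column-generation routing stage."""
    route, remaining = [depot], dict(customers)
    pos = depot
    while remaining:
        name, loc = min(remaining.items(),
                        key=lambda kv: math.dist(pos, kv[1]))
        route.append(loc)
        pos = loc
        del remaining[name]
    route.append(depot)  # vehicle returns to the depot
    return route

def route_cost(route):
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))

# hypothetical first-stage output: three customers allocated to one vehicle
allocated = {"hospital_A": (1.0, 0.0), "plant_B": (2.0, 1.0), "clinic_C": (0.0, 2.0)}
route = nearest_neighbour_route((0.0, 0.0), allocated)
print(f"route length: {route_cost(route):.2f}")
```

Column generation improves on such a heuristic by pricing out candidate routes against dual values of the allocation constraints, but the stage boundary, routing after allocation, is the same.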

Keywords: production planning, inventory routing, column generation, mixed-integer linear programming

Procedia PDF Downloads 108
1824 Digital Structural Monitoring Tools @ADaPT for Cracks Initiation and Growth due to Mechanical Damage Mechanism

Authors: Faizul Azly Abd Dzubir, Muhammad F. Othman

Abstract:

Conventional structural health monitoring for mechanical equipment uses inspection data from non-destructive testing (NDT) during plant shutdown windows, together with fitness-for-service evaluation, to estimate the integrity of equipment prone to crack damage. Yet this forecast is fraught with uncertainty, because it is often based on assumptions about future operational parameters, and the prediction is neither continuous nor online. Advanced Diagnostic and Prognostic Technology (ADaPT) uses acoustic emission (AE) technology and a stochastic prognostic model to provide real-time monitoring and prediction of mechanical defects or cracks. The forecast can help the plant authority handle cracked equipment before it ruptures and causes an unscheduled shutdown of the facility. ADaPT employs process historical data trending, finite element analysis, fitness-for-service assessment, and probabilistic statistical analysis to develop a prediction model for crack initiation and growth due to mechanical damage. The prediction model is combined with live equipment operating data for real-time prediction of the remaining life span before fracture. ADaPT was first deployed at a hot combined feed exchanger (HCFE) that had suffered creep crack damage. The tool predicted crack initiation at the top weldment area by April 2019, and during the shutdown window in April 2019 a crack was indeed discovered and repaired. Furthermore, ADaPT successfully advised the plant owner to run at full capacity, improving output by up to 7%, until April 2019. ADaPT was also used on a coke drum with extensive fatigue cracking. The initial cracks were declared safe with ADaPT, with remaining crack lifetimes extended another five months, just in time for another planned facility downtime to execute the repair.
The prediction model, when combined with plant information data, allows plant operators to continuously monitor crack propagation caused by mechanical damage for improved maintenance planning and to avoid costly shutdowns to repair immediately.
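One standard way to turn monitored crack sizes into a remaining-life estimate, in the spirit of the prognostic model described above, is to integrate a crack-growth law such as the Paris law. The law and all parameter values below are textbook illustrations, not taken from ADaPT.

```python
import math

def cycles_to_critical(a0, a_crit, C=1e-11, m=3.0, delta_sigma=100.0, Y=1.0,
                       da=1e-5):
    """Numerically integrate the Paris crack-growth law
    da/dN = C * (delta_K)^m, with delta_K = Y * delta_sigma * sqrt(pi * a),
    from initial crack size a0 to critical size a_crit (sizes in metres,
    delta_sigma in MPa). Parameter values are illustrative only."""
    n_cycles, a = 0.0, a0
    while a < a_crit:
        delta_k = Y * delta_sigma * math.sqrt(math.pi * a)  # stress intensity range
        n_cycles += da / (C * delta_k ** m)                 # cycles for this increment
        a += da
    return n_cycles

n = cycles_to_critical(a0=0.001, a_crit=0.01)
print(f"estimated remaining life: {n:.3e} cycles")
```

In a live system such as the one described, the measured crack size continually re-initialises `a0`, so the remaining-life estimate is updated as the equipment operates rather than only at shutdown inspections.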

Keywords: mechanical damage, cracks, continuous monitoring tool, remaining life, acoustic emission, prognostic model

Procedia PDF Downloads 71
1823 Tunnel Convergence Monitoring by Distributed Fiber Optics Embedded into Concrete

Authors: R. Farhoud, G. Hermand, S. Delepine-lesoille

Abstract:

Cigeo, the future French radioactive waste disposal facility, is designed to store intermediate-level and high-level long-lived French radioactive waste. Intermediate-level waste cells are tunnel-like, about 400 m long with a 65 m² cross-section, and are equipped with several concrete layers, which can be grouted in situ or composed of pre-grouted tunnel elements. The operating space inside the cells, which allows waste containers to be placed or removed, should be monitored for several decades without any maintenance. To provide the required information, a design was developed and tested in situ in Andra's underground research laboratory (URL), 500 m below the surface. Based on distributed optical fiber sensors (OFS), with Brillouin backscattering interrogation for strain and Raman backscattering interrogation for temperature, the design consists of two loops of OFS at two different radii around the monitored section (orthoradial strains) and a longitudinal run. Strains measured by the distributed OFS cables were compared to classical vibrating wire extensometers (VWE) and platinum probes (Pt). The installation comprised two cables sensitive to both strain and temperature and one sensitive to temperature only. Between the sensitive part and the instruments, all cables were connected via hybrid cables to reduce cost. The connection was made using two techniques: splicing the fibers in situ after installation, or fitting each fiber with a connector and simply plugging them together in situ. Another challenge was installing the OFS cables without interruption along a tunnel built in several sections. A first success is the survival rate of the sensors after installation and the quality of the measurements: 100% of the OFS cables intended for long-term monitoring survived installation. A few new configurations were tested with relative success, and the measurements obtained were very promising.
Indeed, after three years of data, no difference was observed between the OFS cables or between the connection methods, and the strains fit well with the VWE and Pt sensors placed at the same locations. Data from the Brillouin instrument, which is sensitive to both strain and temperature, were compensated using data from the Raman instrument, which is sensitive to temperature only and interrogates a separate fiber. These results provide confidence for the next steps of the qualification process, which consist of testing several data treatment approaches for direct analysis.
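The Brillouin/Raman compensation step can be sketched as solving a linear shift model for strain once the Raman temperature reading is known. The shift model and the coefficients below are illustrative, not the calibrated values from the experiment.

```python
def compensate_strain(brillouin_shift_mhz, delta_temp_k,
                      c_epsilon=0.05, c_temp=1.0):
    """Separate strain from temperature in a Brillouin measurement using an
    independent Raman temperature reading, assuming a linear model:
        total shift = C_eps * strain + C_T * delta_T.
    Coefficients are illustrative (MHz per microstrain, MHz per kelvin),
    not calibrated values from the Cigeo demonstrator."""
    return (brillouin_shift_mhz - c_temp * delta_temp_k) / c_epsilon

# e.g. a 75 MHz Brillouin shift observed while Raman reports a 25 K rise
strain_microstrain = compensate_strain(75.0, 25.0)
print(f"compensated strain: {strain_microstrain:.0f} microstrain")
```

Running the Raman interrogation on a strain-isolated fiber, as in the design above, is what makes the temperature term separable from the mechanical strain.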

Keywords: monitoring, fiber optic, sensor, data treatment

Procedia PDF Downloads 123
1822 Sustainable Organization for Sustainable Strategy: An Empirical Evidence

Authors: Lucia Varra, Marzia Timolo

Abstract:

Scholars' interest in corporate sustainability has strengthened in recent years, in parallel with the growing need to undertake paths of cultural and organizational change as a way to greater competitiveness and stakeholder satisfaction. Studies on business sustainability, while integrating the three dimensions of sustainability long present in economic approaches (economic, environmental, and social), have not produced an organic construct that brings together strategic management, corporate social responsibility, and, even less, organizational issues. Some important questions therefore remain open: Which organizational structure and which operational mechanisms are coherent with, or conducive to, a sustainability strategy? Existing studies appear fragmented, although some aspects are widely acknowledged as important: knowledge management, human resource management, leadership, innovation, etc. The construction of a model of the sustainable organization that supports a sustainability strategy can no longer be postponed, nor can its connection with the main practices for measuring corporate social responsibility performance. The paper aims to identify the organizational characteristics of a sustainable corporation. To this end, from a theoretical point of view the work examines the main existing literary contributions, and from a practical point of view it presents a business case of a service organization that has pursued a sustainability strategy for years.
This paper is divided into two parts. The first part reviews the main articles on strategic management and the main organizational issues raised by the literature, such as knowledge management, leadership, and innovation; it then proposes a modeling of the main variables examined by scholars and their integration with international CSR measurement standards. In the second part, using case study methodology, the hypotheses and structure of the proposed model, which aims to integrate strategic issues with organizational aspects and the measurement of sustainability performance, are applied to an Italian company in which organizational and human resource management interventions are in place to align strategic decisions with the structure and operating mechanisms of the organization. The case presented supports the hypotheses of the model.

Keywords: CSR, strategic management, sustainable leadership, sustainable human resource management, sustainable organization

Procedia PDF Downloads 97
1821 Simulation of an Active Controlled Vibration Isolation System for Astronaut’s Exercise Platform

Authors: Shield B. Lin, Sameer Abdali

Abstract:

Computer simulations were performed using MATLAB/Simulink for a vibration isolation system for an astronaut's exercise platform. Simulation parameters were initially based on an ongoing experiment in a laboratory at NASA Johnson Space Center; the authors later expanded the simulations to include other parameters. A discrete proportional-integral-derivative controller with a low-pass filter, commanding a linear actuator, served as the active control unit, pushing and pulling a counterweight to balance the disturbance forces. A spring-damper device was used as an optional passive control unit. Simulation results indicated that such a design could achieve near-complete vibration isolation with small displacements of the exercise platform.
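A minimal version of such a simulation, a discrete PID loop with a low-pass-filtered derivative term driving a mass against a constant disturbance force, can be written without Simulink. All gains and the plant model below are illustrative, not the study's parameters.

```python
def simulate_pid(kp=400.0, ki=200.0, kd=300.0, mass=50.0, dt=0.001,
                 steps=10000, disturbance=10.0, alpha=0.1):
    """Discrete PID with a first-order low-pass filter on the derivative
    term, driving a point mass back to zero displacement against a
    constant disturbance. Gains and mass are illustrative placeholders."""
    x, v = 0.0, 0.0                       # platform displacement and velocity
    integral, d_filt, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 0.0 - x                     # setpoint is zero displacement
        integral += err * dt
        d_raw = (err - prev_err) / dt
        d_filt += alpha * (d_raw - d_filt)  # low-pass filtered derivative
        prev_err = err
        u = kp * err + ki * integral + kd * d_filt  # actuator command
        accel = (disturbance + u) / mass  # Euler-integrated point-mass plant
        v += accel * dt
        x += v * dt
    return x

final_x = simulate_pid()
print(f"final displacement after 10 s: {final_x:.5f} m")
```

The integral term is what removes the steady-state offset that a constant disturbance would otherwise leave; the derivative filter keeps measurement noise from being amplified by the D term.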

Keywords: control, counterweight, isolation, vibration

Procedia PDF Downloads 144
1820 Review of Literature: Using Technology to Help Language Learners at Improving Their Language Skills

Authors: Eyup Bayram Guzel, Osman Tunc

Abstract:

People have long been interested in what technology offers across a range of human necessities, and it has become a part of human life. In this study, experimental studies were reviewed to examine how technology helps language learners improve their phonemic awareness, reading comprehension, and vocabulary development skills. In conclusion, the experimental studies demonstrated that students showed improvements of up to 70% in phonological awareness, up to 76% in reading comprehension, and up to 77% in vocabulary development. Given these positive outcomes, the wider use of computer-assisted technologies is encouraged in order to meet the diverse needs of students.

Keywords: technology, phonemic awareness, reading comprehension, vocabulary development

Procedia PDF Downloads 277
1819 Disrupting Traditional Industries: A Scenario-Based Experiment on How Blockchain-Enabled Trust and Transparency Transform Nonprofit Organizations

Authors: Michael Mertel, Lars Friedrich, Kai-Ingo Voigt

Abstract:

Based on principal-agent theory, an information asymmetry exists in the traditional donation process: consumers cannot verify whether nonprofit organizations (NPOs) use raised funds for the designated cause after the transaction has taken place (hidden action). Charity organizations have therefore tried for decades to appear transparent and gain trust using the same marketing instruments (e.g., releasing project success reports). However, none of these measures can guarantee consumers that charities will use their donations for the stated purpose. With awareness of the misuse of donations rising due to the Ukraine conflict (e.g., funding crime), consumers are increasingly concerned about the destination of their donations. Innovative charities like the Human Rights Foundation have therefore started to accept donations via blockchain. Blockchain technology has the potential to establish profound trust and transparency in the donation process: consumers can publicly track the progress of their donation at any time after deciding to donate, which ensures that the charity is not using donations against their original intent. Hence, the aim is to investigate the effect of blockchain-enabled transactions on the willingness to donate. Sample and Design: To investigate consumers' behavior, we use a scenario-based experiment. After removing participants (e.g., due to failed attention checks), 3192 potential donors participated (47.9% female, 62.4% with a bachelor's degree or above). Procedure: We randomly assigned the participants to one of two scenarios. In both conditions, the participants read a scenario about a fictive charity organization called "Helper NPO." Afterward, the participants answered questions regarding their perception of the charity. Manipulation: The first scenario (n = 1405) represents a typical donation process, where consumers donate money without any option to track and trace.
The second scenario (n = 1787) represents a donation process via blockchain, where consumers can track and trace their donations. A t-test demonstrates a positive effect of donating via blockchain on participants' willingness to donate (mean difference = 0.667, p < .001, Cohen's d = 0.482). A mediation analysis shows significant effects for the mediation of transparency (estimate = 0.199, p < .001), trust (estimate = 0.144, p < .001), and transparency and trust (estimate = 0.158, p < .001). The total effect of blockchain usage on participants' willingness to donate (estimate = 0.690, p < .001) consists of the direct effect (estimate = 0.189, p < .001) and the indirect effects via transparency and trust (estimate = 0.501, p < .001). Furthermore, consumers' affinity for technology moderates the direct effect of blockchain usage on participants' willingness to donate (estimate = 0.150, p < .001). Donating via blockchain is a promising way for charities to engage consumers for several reasons: (1) Charities can emphasize trust and transparency in their advertising campaigns. (2) Established charities can target new customer segments by specifically engaging technology-affine consumers. (3) Charities can raise international funds without previous barriers (e.g., setting up bank accounts). Nevertheless, increased transparency can also backfire (e.g., through the disclosure of costs); such cases require further research.
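The effect-size arithmetic behind the reported statistics can be sketched as follows: an independent-samples mean difference expressed as Cohen's d using the pooled standard deviation. The group means and SDs below are illustrative, not from the study; the abstract reports only the mean difference (0.667) and the resulting d (0.482), which together imply a pooled SD of about 0.667 / 0.482 ≈ 1.38.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical group statistics chosen to be consistent with the reported
# mean difference of 0.667 (n1 = 1787 blockchain, n2 = 1405 traditional):
d = cohens_d(mean1=5.0, mean2=4.333, sd1=1.384, sd2=1.384, n1=1787, n2=1405)
print(round(d, 3))
```

Note that d standardizes the raw mean difference by the pooled within-group variability, which is why the same mean difference of 0.667 would yield a larger or smaller effect size depending on the group SDs.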

Keywords: blockchain, social sector, transparency, trust

Procedia PDF Downloads 93
1818 Iranian English as Foreign Language Teachers' Psychological Well-Being across Gender: During the Pandemic

Authors: Fatemeh Asadi Farsad, Sima Modirkhameneh

Abstract:

The purpose of this study was to explore the pattern of Psychological Well-Being (PWB) of Iranian male and female EFL teachers during the pandemic. It was intended to see whether such a drastic change in the context and mode of teaching affects teachers' PWB. Furthermore, possible differences between the six elements of PWB of Iranian male vs. female EFL teachers during the pandemic were investigated. A further purpose was to find out the EFL teachers' perceptions of any modifications, and of the factors leading to such modifications, in their PWB during the pandemic. For this investigation, a total of 81 EFL teachers (59 female, 22 male) with an age range of 25 to 35 were conveniently sampled from different cities in Iran. Ryff's PWB questionnaire was sent to participant teachers through online platforms to elicit data on their PWB. As for their perceptions of the possible modifications and the factors involved in PWB during the pandemic, semi-structured interviews were conducted with both sample groups. The findings revealed that male EFL teachers had the highest mean on personal growth, followed by purpose in life and self-acceptance, and the lowest mean on environmental mastery. With a slightly similar pattern, female EFL teachers had the highest mean on personal growth, followed by purpose in life and positive relationships with others, with the lowest mean on environmental mastery. However, no significant difference was observed between the male and female groups' overall means on elements of PWB. Additionally, participants perceived that their anxiety level in online classes changed due to factors such as (1) computer literacy skills, (2) lack of social communication and interaction with colleagues and students, (3) online class management, (4) overwhelming workloads, and (5) time management.
The study concludes with suggestions for effective online teaching preparation that takes teachers' PWB into account, especially in crises such as the COVID-19 pandemic. The findings can inform reforms of educational policy aimed at enhancing EFL teachers' PWB through computer literacy and stress-management courses. It is also suggested that, to proactively support teachers' mental health, they be provided with access to advisors and psychologists, free of charge where possible. Limitations: One limitation is the small number of participants (81); future replications should include more participants for more reliable findings. Another is the gender imbalance of the sample, which future studies should address. A further limitation is the narrow set of data-gathering tools; future studies could use observations, diaries, and narratives for richer insights. The study also focused on a single model of PWB, calling for further research on other models in the literature. Considering the wide effect of the COVID-19 pandemic, future studies should consider additional variables (e.g., teaching experience, age, income) to better understand Iranian EFL teachers' vulnerabilities and strengths.
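The per-dimension comparison described above amounts to averaging subscale scores per group and ranking Ryff's six PWB dimensions. A minimal sketch with made-up subscale means (the study's raw data are not reported in the abstract) illustrates the procedure:

```python
def rank_dimensions(scores):
    """scores: dict mapping a PWB dimension to a list of per-teacher subscale means.
    Returns the dimension names ordered from highest to lowest group mean."""
    means = {dim: sum(vals) / len(vals) for dim, vals in scores.items()}
    return sorted(means, key=means.get, reverse=True)

# Hypothetical group data mirroring the reported pattern for male teachers
# (highest: personal growth; lowest: environmental mastery):
male = {
    "personal growth":       [4.8, 4.6],
    "purpose in life":       [4.5, 4.4],
    "self-acceptance":       [4.3, 4.2],
    "positive relations":    [4.0, 4.1],
    "autonomy":              [3.9, 4.0],
    "environmental mastery": [3.5, 3.6],
}
ranking = rank_dimensions(male)
print(ranking[0], "...", ranking[-1])
```

The same ranking, computed separately for each gender group, yields the dimension-level pattern the abstract reports, while the overall between-group comparison found no significant difference.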

Keywords: online teaching, psychological well-being, female and male EFL teachers, pandemic

Procedia PDF Downloads 42