Search results for: coordinate measuring machines (CMM)
405 Machine Learning in Gravity Models: An Application to International Recycling Trade Flow
Authors: Shan Zhang, Peter Suechting
Abstract:
Predicting trade patterns is critical to decision-making in public and private domains, especially in the current context of trade disputes among major economies. In the past, U.S. recycling has relied heavily on strong demand for recyclable materials overseas. However, starting in 2017, a series of new recycling policies (bans and higher inspection standards) was enacted by multiple countries that were the primary importers of recyclables from the U.S. prior to that point. As the global trade flow of recycling shifts, some new importers, mostly developing countries in South and Southeast Asia, have been overwhelmed by the sheer quantities of scrap materials they have received. As the leading exporter of recyclable materials, the U.S. now has a pressing need to build its recycling industry domestically. With respect to the global trade in scrap materials used for recycling, the interest in this paper is (1) predicting how the export of recyclable materials from the U.S. might vary over time, and (2) predicting how international trade flows for recyclables might change in the future. Focusing on three major recyclable materials with a history of trade, this study uses data-driven and machine learning (ML) algorithms---supervised (shrinkage and tree methods) and unsupervised (neural network method)---to decipher the international trade pattern of recycling. Forecasting the potential trade values of recyclables in the future could help importing countries, to which those materials will shift next, to prepare related trade policies. Such policies can assist policymakers in minimizing negative environmental externalities and in finding the optimal amount of recyclables needed by each country. Such forecasts can also help exporting countries, like the U.S., understand the importance of a healthy domestic recycling industry. The preliminary results suggest that gravity models---in addition to a particular selection of macroeconomic predictor variables---are appropriate predictors of the total export value of recyclables. With the inclusion of variables measuring aspects of the political conditions (trade tariffs and bans), predictions show that recyclable materials are shifting from more policy-restricted countries to less policy-restricted countries in international recycling trade. Those countries also tend to have high manufacturing activity as a percentage of their GDP.
Keywords: environmental economics, machine learning, recycling, international trade
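As a rough illustration of the gravity-model framework this abstract builds on (not the authors' actual model, variables, or data), the log-linearized gravity equation can be estimated with standard regression tools; every variable name and number below is synthetic and hypothetical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical panel of bilateral recycling-trade observations.
n = 200
gdp_exporter = rng.uniform(1e2, 1e4, n)   # synthetic GDP levels
gdp_importer = rng.uniform(1e2, 1e4, n)
distance_km = rng.uniform(500, 15000, n)
tariff_dummy = rng.integers(0, 2, n)      # 1 if a restrictive trade policy is in place

# Classic gravity model: T_ij = G * Y_i^a * Y_j^b / D_ij^c, log-linearized for estimation.
log_trade = (0.8 * np.log(gdp_exporter) + 0.9 * np.log(gdp_importer)
             - 1.1 * np.log(distance_km) - 0.5 * tariff_dummy
             + rng.normal(0, 0.3, n))

X = np.column_stack([np.log(gdp_exporter), np.log(gdp_importer),
                     np.log(distance_km), tariff_dummy])
model = LinearRegression().fit(X, log_trade)
print("estimated elasticities:", model.coef_)
```

Shrinkage and tree methods of the kind mentioned in the abstract would be dropped in where the plain linear regression sits in this sketch.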
Procedia PDF Downloads 168
404 Quantum Coherence Sets the Quantum Speed Limit for Mixed States
Authors: Debasis Mondal, Chandan Datta, S. K. Sazim
Abstract:
Quantum coherence is a key resource like entanglement and discord in quantum information theory. Wigner-Yanase skew information, which was shown to be the quantum part of the uncertainty, has recently been projected as an observable measure of quantum coherence. On the other hand, the quantum speed limit (QSL) has been established as an important notion for developing the ultra-speed quantum computer and communication channel. Here, we show that these two quantities are related and thus cast coherence as a resource to control the speed of quantum communication. In this work, we address three basic and fundamental questions. There have been rigorous attempts to achieve more and tighter evolution time bounds and to generalize them for mixed states. However, we are yet to know (i) what is the ultimate limit of quantum speed? (ii) Can we measure this speed of quantum evolution in interferometry by measuring a physically realizable quantity? Most of the bounds in the literature are either not measurable in interference experiments or not tight enough. As a result, they cannot be effectively used in experiments on quantum metrology, quantum thermodynamics, and quantum communication, and especially in Unruh effect detection et cetera, where a small fluctuation in a parameter needs to be detected. Therefore, a search for the tightest yet experimentally realisable bound is a need of the hour. It will be much more interesting if one can relate various properties of the states or operations, such as coherence, asymmetry, dimension, quantum correlations et cetera, to the QSL. Although these understandings may help us to control and manipulate the speed of communication, apart from particular cases like the Josephson junction and the multipartite scenario, there has been little advancement in this direction. Therefore, the third question we ask: (iii) Can we relate such quantities with the QSL? In this paper, we address these fundamental questions and show that quantum coherence or asymmetry plays an important role in setting the QSL. An important question in the study of the quantum speed limit may be how it behaves under classical mixing and partial elimination of states. This may help us to properly choose a state or evolution operator to control the speed limit. In this paper, we try to address this question and show that the product of the time bound of the evolution and the quantum part of the uncertainty in energy or quantum coherence or asymmetry of the state with respect to the evolution operator decreases under classical mixing and partial elimination of states.
Keywords: completely positive trace preserving maps, quantum coherence, quantum speed limit, Wigner-Yanase skew information
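For orientation, these are the standard textbook definitions of the two central quantities named above, the Wigner-Yanase skew information and a Mandelstam-Tamm-type speed limit; they are reference expressions, not the tighter bounds derived in this paper:

```latex
% Wigner-Yanase skew information of a state \rho with respect to the Hamiltonian H
I(\rho, H) = -\tfrac{1}{2}\,\operatorname{Tr}\!\left([\sqrt{\rho},\, H]^{2}\right)

% Mandelstam-Tamm quantum speed limit for evolution to an orthogonal state
\tau \;\geq\; \frac{\pi \hbar}{2\,\Delta E}, \qquad
\Delta E = \sqrt{\langle H^{2}\rangle - \langle H\rangle^{2}}
```

Bounds of the kind pursued in the abstract relate the evolution time to coherence-like quantities such as $I(\rho, H)$ rather than to the full energy variance alone.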
Procedia PDF Downloads 353
403 Surface Enhanced Infrared Absorption for Detection of Ultra Trace of 3,4-Methylene Dioxy-Methamphetamine (MDMA)
Authors: Sultan Ben Jaber
Abstract:
The optical properties of molecules exhibit dramatic changes when the molecules are adsorbed close to nano-structured metallic surfaces such as gold and silver nanomaterials. This phenomenon has opened a wide range of research aimed at improving the efficiency of conventional spectroscopies. A well-known technique that has been an intensive focus of study is surface-enhanced Raman spectroscopy (SERS); since the first observation of the SERS phenomenon, researchers have published a great number of articles on the potential mechanisms behind this effect as well as on developing materials to maximize the enhancement. Infrared and Raman spectroscopy are complementary techniques; thus, surface-enhanced infrared absorption (SEIRA) also shows a noticeable enhancement of molecules under mid-IR excitation on nano-metallic structured substrates. In SEIRA, vibrational modes that give a change in dipole moment perpendicular to the nano-metallic substrate are enhanced up to 200 times more than the modes of the free molecule. SEIRA spectroscopy is promising for the characterization and identification of adsorbed molecules on metallic surfaces, especially at trace levels. IR reflection-absorption spectroscopy (IRAS) is a well-known technique for measuring IR spectra of adsorbed molecules on metallic surfaces. However, the sensitivity of SEIRA spectroscopy is up to 50 times higher than that of IRAS. SEIRA enhancement has been observed for a wide range of molecules adsorbed on metallic substrates such as Au, Ag, Pd, Pt, Al, and Ni, but Au and Ag substrates exhibit the highest enhancement among the mentioned substrates. In this work, trace levels of 3,4-methylenedioxymethamphetamine (MDMA) have been detected using gold nanoparticle (AuNP) substrates with surface-enhanced infrared absorption (SEIRA). AuNPs were first prepared and washed, then mixed with different concentrations of MDMA samples. The process of fabricating the substrate prior to SEIRA measurements included mixing of AuNPs and MDMA samples followed by vigorous stirring. The stirring step is particularly crucial, as stirring allows molecules to be robustly adsorbed on the AuNPs. Thus, remarkable SEIRA was observed for MDMA samples even at trace levels, showing the robustness of our approach to preparing SEIRA substrates.
Keywords: surface-enhanced infrared absorption (SEIRA), gold nanoparticles (AuNPs), amphetamines, methylenedioxymethamphetamine (MDMA), enhancement factor
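As a reference point, the degree of signal enhancement in surface-enhanced spectroscopies is usually quantified by an enhancement factor that normalizes the enhanced and unenhanced intensities to the number of molecules probed; this is a commonly used definition, not a value reported in this abstract:

```latex
% Common definition of the enhancement factor (EF) in surface-enhanced spectroscopy
\mathrm{EF} \;=\; \frac{I_{\mathrm{SEIRA}} / N_{\mathrm{SEIRA}}}{I_{\mathrm{ref}} / N_{\mathrm{ref}}}
```

Here $I$ denotes the measured band intensity and $N$ the number of molecules contributing to each signal.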
Procedia PDF Downloads 70
402 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress
Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin
Abstract:
Stress causes deleterious effects at the physical, psychological, and organizational levels, which highlights the need to use effective coping strategies to deal with it. Several coping models exist, but they don't integrate the different strategies in a coherent way, nor do they take into account new research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from the review of the scientific literature on coping and from a qualitative study carried out among workers with low or high levels of stress, as well as from an analysis of clinical cases. The model allows one to understand under what circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes Specific Strategies in controllable situations (the Modification of the Situation and the Resignation-Disempowerment), Specific Strategies in non-controllable situations (Acceptance and Stubborn Relentlessness), as well as so-called General Strategies (Wellbeing and Avoidance). This study is intended to undertake and present the process of development and validation of an instrument to measure coping strategies based on this model. An initial pool of items has been generated from the conceptual definitions, and three expert judges have validated the content. Of these, 18 items have been selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used for the validation of the questionnaire. Concerning the reliability of the instrument, the indices observed following the inter-rater agreement (Krippendorff's alpha) and the calculation of the coefficients for internal consistency (Cronbach's alpha) are satisfactory. To evaluate the construct validity, a confirmatory factor analysis using MPlus supports the existence of a model with six factors. The results of this analysis also suggest that this configuration is superior to other alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrate the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess the strategies used to cope with stress and thus prevent mental health issues.
Keywords: acceptance, coping strategies, stress, validation process
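For readers unfamiliar with the internal-consistency index mentioned above, a minimal sketch of how Cronbach's alpha is computed from an items-by-respondents score matrix is shown below; the responses are synthetic, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = questionnaire items."""
    k = scores.shape[1]                               # number of items
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 300 respondents answering 18 Likert-type items (1-5),
# mirroring the sample size and item count reported in the abstract.
rng = np.random.default_rng(42)
latent = rng.normal(size=(300, 1))
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(300, 18))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```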
Procedia PDF Downloads 339
401 Using Daily Light Integral Concept to Construct the Ecological Plant Design Strategy of Urban Landscape
Authors: Chuang-Hung Lin, Cheng-Yuan Hsu, Jia-Yan Lin
Abstract:
It is an indispensable strategy to adopt a greenery approach on architectural bases so as to improve ecological habitats, decrease the heat-island effect, purify air quality, and relieve surface runoff as well as noise pollution, all in an attempt to achieve a sustainable environment. What we can achieve with plant design in terms of visual quality and carbon dioxide fixation depends on whether or not we can make appropriate use of greenery according to the nature of the architectural base. To achieve this goal, architects and landscape architects need to be provided with sufficient local references. Current greenery studies focus mainly on the urban heat-island effect at a large scale. Most architects still rely on people with years of expertise regarding the selection and disposition of planting at the microclimate scale. Therefore, environmental design, which integrates science and aesthetics, requires fundamental research on landscape environment technology, separate from building environment technology. By doing so, we can create mutual benefits between green buildings and the environment. This issue is extremely important for the greening design of the bases of green buildings in cities and various open spaces. The purpose of this study is to establish plant selection and allocation strategies under different building sunshade levels. Initially, with the shading of sunshine on the greening bases as the starting point, the effects of the shades produced by different building types on the greening strategies were analyzed. Then, by measuring PAR (photosynthetically active radiation), the relative DLI (daily light integral) was calculated, and a DLI map was established in order to evaluate the effects of building shading on the environmental greening, thereby serving as a reference for plant selection and allocation. The results are to be applied in the evaluation of the environmental greening of green buildings and to establish a "right plant, right place" design strategy of multi-level ecological greening for application in urban design and landscape design development, as well as greening criteria to feed back into eco-city green buildings.
Keywords: daily light integral, plant design, urban open space
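For reference, the daily light integral is obtained by integrating the measured PAR photon flux over the photoperiod; the sketch below shows that standard conversion with hypothetical readings, not measurements from this study:

```python
# Convert instantaneous PAR readings (PPFD, in umol m^-2 s^-1) into a daily light
# integral (DLI, in mol m^-2 d^-1) by integrating over the photoperiod.
def daily_light_integral(ppfd_readings_umol, interval_s):
    """ppfd_readings_umol: PAR samples taken every `interval_s` seconds over one day."""
    total_umol = sum(reading * interval_s for reading in ppfd_readings_umol)
    return total_umol / 1e6  # micromoles -> moles

# Hypothetical example: hourly readings over a 12-hour photoperiod under partial building shade.
readings = [50, 120, 300, 450, 600, 700, 720, 650, 500, 320, 150, 60]
print(f"DLI = {daily_light_integral(readings, 3600):.1f} mol m^-2 d^-1")
```

Repeating this calculation point by point across a site is what produces the DLI map described above.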
Procedia PDF Downloads 510
400 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence
Authors: Nasser Salah Eldin Mohammed Salih Shebka
Abstract:
Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which in turn are reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering, and knowledge generation. Although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, thus causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are affected greatly by the very materialistic nature of the cognitive sciences. This nature caused what we define as methodological deficiencies in the nature of theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields, but also extend to cover the scientific nature of the cognitive sciences. The methodological deficiencies we investigated in our work are: (1) the segregation between cognitive abilities in knowledge-driven models; (2) the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories; and (3) the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires that we present a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that it is easy to apply in structures of knowledge representation systems, but outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning alters the role of the existence and time factors in the framework environment of the knowledge structure and, therefore, in knowledge representation conceptual theories. The findings of our work indicate the necessity to differentiate between two comparative concepts when addressing the relation between the existence and time parameters and that of the structure of human knowledge. The topics presented throughout the paper can also be viewed as an evaluation criterion to determine AI's capability to achieve its ultimate objectives.
Ultimately, we discuss some of the implications of our findings. These do not suggest that scientific progress has reached its peak, or that human scientific evolution has reached a point where it is no longer possible to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge; they simply imply that, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.
Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic
Procedia PDF Downloads 112
399 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine if they can provide improvements or if current methods offer superior results. Data from the 16S rRNA gene, a universal marker, was used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the Wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
Keywords: DNA encoding, machine learning, Fourier transform, Fourier transformation
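To make two of the compared encodings concrete, the sketch below shows one-hot and k-mer frequency encodings of a DNA string; the sequence is a toy example, not one of the 16S rRNA records used in the study:

```python
import numpy as np
from itertools import product

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA sequence as a (len(seq), 4) binary matrix."""
    index = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4), dtype=np.int8)
    for pos, base in enumerate(seq):
        if base in index:                 # skip ambiguous bases such as 'N'
            mat[pos, index[base]] = 1
    return mat

def kmer_counts(seq: str, k: int = 3) -> np.ndarray:
    """Encode a DNA sequence as a fixed-length vector of k-mer frequencies."""
    kmers = ["".join(p) for p in product(BASES, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in index:
            vec[index[km]] += 1
    return vec / max(vec.sum(), 1)        # normalize counts to frequencies

seq = "ACGTACGTGGCA"
print(one_hot(seq).shape)                 # (12, 4)
print(kmer_counts(seq, k=3)[:8])          # first few 3-mer frequencies
```

The Fourier- and wavelet-based encodings mentioned in the abstract would instead transform a numeric mapping of the sequence (for example, the one-hot columns) into spectral coefficients.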
Procedia PDF Downloads 23
398 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels
Authors: Joshua Buli, David Pietrowski, Samuel Britton
Abstract:
Processing SAR data usually requires constraints in extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground plane projection, with or without terrain as a component, all to better view SAR data in an image domain comparable to what a human would view, to ease interpretation. An alternate but computationally heavy method to make use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched-filtering, motion compensation, etc.), the data is then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time history data for the reflectivity values for each pulse summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D-point cloud size. Backprojection processing algorithms are embarrassingly parallel since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for accurate reflectivity representation of a scene. Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow for SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization
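To make the per-voxel accumulation concrete, here is a heavily simplified CPU-side NumPy sketch of time-domain backprojection; the geometry, bin spacing, and variable names are hypothetical, and the antenna pattern, range interpolation, and GPU kernel used in a real system are omitted:

```python
import numpy as np

def backproject(range_profiles, platform_positions, voxels, freq_center, c=3e8):
    """
    range_profiles:     (n_pulses, n_bins) complex range-compressed data
    platform_positions: (n_pulses, 3) antenna phase-center positions [m]
    voxels:             (n_voxels, 3) 3D reference points [m]
    Returns a complex reflectivity estimate per voxel.
    """
    n_pulses, n_bins = range_profiles.shape
    bin_spacing = 1.0                              # metres per range bin (hypothetical)
    image = np.zeros(len(voxels), dtype=complex)

    for p in range(n_pulses):                      # every pulse contributes to every voxel
        distances = np.linalg.norm(voxels - platform_positions[p], axis=1)
        bins = np.clip((distances / bin_spacing).astype(int), 0, n_bins - 1)
        samples = range_profiles[p, bins]
        # Re-apply the two-way propagation phase so the contributions add coherently.
        phase = np.exp(1j * 4 * np.pi * freq_center * distances / c)
        image += samples * phase
    return image
```

Each voxel's accumulation is independent of every other voxel, which is exactly the property that lets the computation be spread across thousands of GPU threads as described above.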
Procedia PDF Downloads 84
397 Is Presence of Psychotic Features Themselves Carry a Risk for Metabolic Syndrome?
Authors: Rady A., Elsheshai A., Elsawy M., Nagui R.
Abstract:
Background and Aim: Metabolic syndrome affects around 20% of the general population, and authors have incriminated antipsychotics as a serious risk factor that may provoke such derangement. The aim of our study is to assess metabolic syndrome in patients presenting psychotic features (delusions and hallucinations), whether due to schizophrenia or a mood disorder, and to compare results among drug-naïve patients, patients on medication, and healthy controls. Subjects and Methods: The study recruited 40 schizophrenic patients, half of them drug-naïve and the other half on antipsychotics; 40 patients with mood disorder with psychotic features, half of them drug-naïve and the other half on medication; and 20 healthy controls. Exclusion criteria were applied in order to exclude patients already having endocrine or metabolic disorders that may interfere with the results obtained, to minimize confounding bias. Metabolic syndrome was assessed by measuring parameters including weight, body mass index, waist circumference, triglyceride level, HDL, fasting glucose, fasting insulin, and insulin resistance. Results: No difference was found when comparing drug-naïve patients to those on medication in either the schizophrenia or the psychotic mood disorder arm. Schizophrenic patients, whether on medication or drug-naïve, showed a difference from the control group for fasting glucose, and schizophrenic patients on medication also showed a difference in insulin resistance compared to the control group. On the other hand, patients with psychotic mood disorder, whether drug-naïve or on medication, showed a difference from the control group for fasting insulin level. Those on medication also differed from the control group for insulin resistance. Conclusion: Our study did not reveal a difference in metabolic syndrome among patients with psychotic features whether on medication or drug-naïve. Only patients with psychotic features on medication showed insulin resistance. Schizophrenic patients, drug-naïve or on medication, tend to show higher fasting glucose, while patients with psychotic mood disorder, whether drug-naïve or on medication, tend to show higher fasting insulin. This study suggests that the presence of psychotic features itself, regardless of being on medication or not, carries a risk for insulin resistance and metabolic syndrome. Limitation: This study is limited by the number of participants, and larger numbers should be included in future studies in order to extrapolate the results. Longitudinal cohort studies are needed in order to evaluate this hypothesis.
Keywords: schizophrenia, metabolic syndrome, psychosis, insulin, resistance
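The abstract does not state which estimator of insulin resistance was used; a common choice in clinical studies of this kind is the HOMA-IR index, quoted here only as a reference formula (the constant 405 is the standard HOMA-IR scaling for these units, not a value taken from this study):

```latex
\mathrm{HOMA\text{-}IR} \;=\;
\frac{\text{fasting insulin}\ (\mu\mathrm{U/mL}) \times \text{fasting glucose}\ (\mathrm{mg/dL})}{405}
```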
Procedia PDF Downloads 535
396 Exploring the Use of Augmented Reality for Laboratory Lectures in Distance Learning
Authors: Michele Gattullo, Vito M. Manghisi, Alessandro Evangelista, Enricoandrea Laviola
Abstract:
In this work, we explored the use of Augmented Reality (AR) to support students in laboratory lectures in Distance Learning (DL), designing an application that proved to be ready for use in the next semester. AR could help students in the understanding of complex concepts as well as increase their motivation in the learning process. However, despite many prototypes in the literature, it is still little used in schools and universities. This is mainly due to the perceived limited advantages relative to the investment costs, especially regarding the changes needed in teaching modalities. However, with the spread of the epidemiological emergency due to SARS-CoV-2, schools and universities were forced into a very rapid redefinition of consolidated processes towards forms of Distance Learning. Despite its many advantages, DL suffers from the impossibility of carrying out practical activities, which are of crucial importance in STEM (Science, Technology, Engineering, and Math) didactics. In this context, the perceived advantages of AR have increased considerably, since teachers are now more prepared for new teaching modalities and AR allows students to carry out practical activities on their own instead of being physically present in laboratories. In this work, we designed an AR application for the support of engineering students in the understanding of assembly drawings of complex machines. Traditionally, this skill is acquired in the first years of the bachelor's degree in industrial engineering, through laboratory activities where the teacher shows the corresponding components (e.g., bearings, screws, shafts) in a real machine and their representation in the assembly drawing. This research aims to explore the effectiveness of AR in allowing students to acquire this skill on their own without physically being in the laboratory. In a preliminary phase, we interviewed students to understand the main issues in the learning of this subject. This survey revealed that students had difficulty identifying machine components in an assembly drawing, matching the 2D representation of a component to its real shape, and understanding the functionality of a component within the machine. We developed a mobile application using Unity3D, aiming to solve the mentioned issues. We designed the application in collaboration with the course professors. Natural feature tracking was used to associate the 2D printed assembly drawing with the corresponding 3D virtual model. The application can be displayed on students' tablets or smartphones. Users can interact by selecting a component from a part list on the device. Then, 3D representations of components appear on the printed drawing, coupled with 3D virtual labels for their location and identification. Users can also watch a 3D animation to learn how components are assembled. Students evaluated the application through a questionnaire based on the System Usability Scale (SUS). The survey was provided to 15 students selected among those who participated in the preliminary interview. The mean SUS score was 83 (SD 12.9) over a maximum of 100, allowing teachers to use the AR application in their courses. Another important finding is that almost all the students revealed that this application would provide significant support for comprehension on their own.
Keywords: augmented reality, distance learning, STEM didactics, technology in education
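For readers unfamiliar with how the reported SUS value is derived, a minimal sketch of the standard System Usability Scale scoring procedure is given below; the questionnaire responses are synthetic, not the study's data:

```python
import statistics

def sus_score(responses):
    """responses: list of 10 Likert answers (1-5) for one participant, in SUS item order."""
    odd = sum(responses[i] - 1 for i in range(0, 10, 2))   # items 1, 3, 5, 7, 9
    even = sum(5 - responses[i] for i in range(1, 10, 2))  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5                              # scale the 0-40 sum to 0-100

# Hypothetical questionnaires from three participants.
participants = [
    [5, 2, 4, 1, 5, 2, 5, 1, 4, 2],
    [4, 2, 4, 2, 4, 1, 5, 2, 5, 1],
    [3, 3, 4, 2, 4, 2, 4, 2, 4, 3],
]
scores = [sus_score(r) for r in participants]
print(f"mean SUS = {statistics.mean(scores):.1f}, SD = {statistics.stdev(scores):.1f}")
```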
Procedia PDF Downloads 128
395 Spatial Pattern of Farm Mechanization: A Micro Level Study of Western Trans-Ghaghara Plain, India
Authors: Zafar Tabrez, Nizamuddin Khan
Abstract:
Agriculture in India in the pre-green revolution period was mostly controlled by terrain, climate, and edaphic factors. But after the introduction of innovative factors and technological inputs, the green revolution occurred and the agricultural scene witnessed great change. In the development of India's agriculture, the speedy and extensive introduction of technological change is one of the crucial factors. The technological change consists of the adoption of farming techniques such as the use of fertilisers, pesticides and fungicides, improved varieties of seeds, modern agricultural implements, improved irrigation facilities, and contour bunding for the conservation of moisture and soil, which are developed through research and calculated to bring about diversification, increased production, and greater economic returns to the farmers. The green revolution in India took place during the late 1960s, equipped with technological inputs like high-yielding variety seeds and assured irrigation, as well as modern machines and implements. Initially, the revolution started in Punjab, Haryana, and western Uttar Pradesh. With the efforts of the government, agricultural planners, and policymakers, the modern technocratic agricultural development scheme was later also implemented and introduced in backward and marginal regions of the country. The agriculture sector occupies centre stage in India's social security and overall economic welfare. The country has attained self-sufficiency in food grain production and also has a sufficient buffer stock. India's first Prime Minister, Jawaharlal Nehru, said, 'everything else can wait but not agriculture'. There is still continuous change in technological inputs and cropping patterns. Keeping these points in view, the author attempts to investigate extensively the mechanization of agriculture and its change, selecting the western Trans-Ghaghara plain as a case study with the block as the unit of study. It includes the districts of Gonda, Balrampur, Bahraich, and Shravasti, which incorporate 44 blocks. The study is based on secondary block-level data for the years 1997 and 2007. It may be observed that there is a wide range of variation and change in farm mechanization, i.e., agricultural machinery such as wooden and iron ploughs, advanced harrows and cultivators, advanced threshing machines, sprayers, advanced sowing instruments, and tractors. It may be further noted that, due to the continuous decline in the size of land holdings and the outflow of people to the same nature of work or to employment in non-agricultural sectors, the magnitude and direction of agricultural systems are affected in the study area, which is one of the marginalized regions of Uttar Pradesh, India.
Keywords: agriculture, technological inputs, farm mechanization, food production, cropping pattern
Procedia PDF Downloads 312
394 Robotic Solution for Nuclear Facility Safety and Monitoring System
Authors: Altab Hossain, Shakerul Islam, Golamur R. Khan, Abu Zafar M. Salahuddin
Abstract:
An effective identification of breakdowns is of premier importance for the safe and reliable operation of Nuclear Power Plants (NPPs) and their associated facilities. A great number of monitoring and diagnosis methodologies are applied and used worldwide in areas such as industry, automobiles, hospitals, and power plants to detect and reduce human disasters. The potential consequences of several hazardous activities may harm the society using nuclear and its associated facilities. Hence, one of the most popular and effective methods to ensure safety, monitor the entire nuclear facility, and enable risk-free operation without human intervention during hazardous situations is the use of a robot. Therefore, in this study, an advanced autonomous robot has been designed and developed that can monitor several parameters in the NPP to ensure safety and carry out risky jobs in case of a nuclear disaster. The robot consists of an autonomous track-following unit and a data processing and transmitting unit; it can follow a straight line and take turns at banks greater than 90 degrees. The developed robot can analyze various parameters such as temperature, altitude, radiation, obstacles, and humidity, as well as detect fire, measure distance, perform ultrasonic scans, and measure the heat of any particular object. It has the ability to broadcast a live stream and can record the data to its own server memory. There is a separate control unit constructed with a baseboard which processes the recorded data and a transmitter which transmits the processed data. To make the robot user-friendly, the code is developed in such a way that a user can control any of the robotic arms according to the type of work. To control the robot from any place and without the track, advanced code has been developed to allow manual override. Through this process, an administrator who has log-in permission through the Dynamic Host Configuration Protocol (DHCP) can hand over control of the robot. In this way, the robot is provided with maximum nuclear security against being hacked. Beyond NPPs, this robot can be used to maximize the real-time monitoring system of any nuclear facility as well as nuclear material transportation and decomposition systems.
Keywords: nuclear power plant, radiation, dynamic host configuration protocol, nuclear security
Procedia PDF Downloads 209
393 International Retirement Migration of Westerners to Thailand: Well-Being and Future Migration Plans
Authors: Kanokwan Tangchitnusorn, Patcharawalai Wongboonsin
Abstract:
Following the ‘Golden Age of Welfare’, which brought post-war prosperity to European citizens in the 1950s, the world has witnessed increasing mobility across borders of older citizens of First World countries. Then, in the 1990s, the international retirement migration (IRM) of older persons became a prominent trend, one that requires the integration of several fields of knowledge to explain, i.e., migration studies, tourism studies, as well as social gerontology. However, while IRM to developed destinations in Europe (e.g., Spain, Malta, Portugal, Italy) and IRM to developing countries like Mexico, Panama, and Morocco have been studied extensively in recent decades due to their massive migration volumes, the study of IRM to remoter destinations has remained relatively sparse and incomplete. Developing countries in Southeast Asia have noticed an increasing number of retired expats, particularly Thailand, where the number of foreigners applying for a retirement visa increased from 10,709 in 2005 to 60,046 in 2014. Additionally, it was evident that the majority of Thailand's retirement visa applicants were Westerners, i.e., citizens of the United Kingdom, the United States, Germany, and the Nordic countries, respectively. As this trend has become popular in Thailand only in recent decades, little is known about the IRM populations, their well-being, and their future migration plans. This study aimed to examine the subjective well-being, or self-evaluations of one's own well-being, among Western retirees in Thailand, as well as their future migration plans, i.e., whether they planned to stay for life or otherwise. The author employed a mixed method to obtain both quantitative and qualitative data during October 2015 – May 2016, including 330 self-administered questionnaires (246 online and 84 hard-copy responses) and 21 in-depth interviews with Western residents in Nan (2), Pattaya (4), and Chiang Mai (15). Drawing on the integration of previous subjective well-being measurements (i.e., the Personal Wellbeing Index (PWI), the Global AgeWatch Index, and the OECD guidelines on measuring subjective well-being), this study measures the subjective well-being of Western retirees in Thailand in seven dimensions: standard of living, health status, personal relationships, social connections, environmental quality, personal security, and local infrastructure.
Keywords: international retirement migration, ageing, mobility, wellbeing, Western, Thailand
Procedia PDF Downloads 343
392 The Relationship between Facebook, Religiosity and Academic Performance
Authors: Nooraisah Katmon, Hartini Jaafar, Hazianti Abdul Halim, Jessnor Elmy Mat Jizat
Abstract:
Our study empirically examines the effect of students' activities on Facebook and of religiosity on academic performance. We extend prior research in this area in a number of ways. First, given the paucity of research in this area, particularly from the Asian context, we provide evidence from a developing country, Malaysia. Second, our sample is drawn from Sultan Idris Education University in Malaysia, whose graduates are unique since they are expected to be able to work in both education and industry environments and are presumed to play significant roles in shaping the intellectual development of future students at Malaysian secondary schools and the Malaysian economy in general. Third, we control for religiosity when examining the association between Facebook and academic performance, something that has been predominantly neglected by prior studies. Fourth, unlike prior studies that circulate around the Christian sphere in measuring religiosity, we provide evidence from the Islamic perspective, where the acts of worship and practices are much more comprehensive than their Christian counterparts. Fifth, we examine whether Facebook activities and religiosity complement or substitute for each other in improving students' academic performance. Our sample comprises 60 undergraduates. Our results show that students with a high number of friends on Facebook and frequent engagement in Facebook activities, such as sharing links, sending messages, posting photos, and tagging videos, as well as spending long hours on Facebook, are generally associated with lower academic performance. Our results also indicate that students' engagement in religious activities promotes better academic performance. When we examine the potential interaction effect between Facebook and religiosity, our results reveal that religiosity is effective in reducing students' interest in Facebook, hence leading to better academic achievement. In other words, religious students are less interested in joining activities on Facebook, which makes them perform better than their counterparts. The findings from this study should assist university management in shaping university policies and curricula to regulate and manage students' activities in order to enhance overall student quality. Moreover, the findings are also of use to policymakers such as the Malaysian Communications and Multimedia Commission in regulating policy on students' access to and activities on Facebook.
Keywords: facebook, religiosity, academic performance, effect of student activities
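As a rough illustration of how the complementary-versus-substitutive question above is typically tested, an interaction term between the two predictors can be added to a regression model; this is not the authors' actual specification, and every variable name and data point below is synthetic:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 60  # matches the sample size reported in the abstract; the data itself is synthetic

df = pd.DataFrame({
    "facebook_hours": rng.uniform(0, 6, n),
    "religiosity": rng.uniform(1, 5, n),
})
df["gpa"] = (3.0 - 0.15 * df["facebook_hours"] + 0.10 * df["religiosity"]
             + 0.05 * df["facebook_hours"] * df["religiosity"]
             + rng.normal(0, 0.2, n))

# The sign of the interaction coefficient indicates whether the two factors
# complement (positive) or substitute for (negative) each other.
model = smf.ols("gpa ~ facebook_hours * religiosity", data=df).fit()
print(model.params)
```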
Procedia PDF Downloads 303
391 Assessing the High Rate of Deforestation Caused by the Operations of Timber Industries in Ghana
Authors: Obed Asamoah
Abstract:
Forests are vital for human survival and well-being. In recent years, humanity has played an increasingly significant role in the modification of the global environment. The high rate of deforestation in Ghana is of primary national concern, as the forests provide many ecosystem services and functions that support the country's predominantly agrarian economy and foreign earnings. Ghana's forests currently serve as a major carbon sink that helps to mitigate climate change. Ghana's forests, both reserves and off-reserves, are under pressure from deforestation. The causes of deforestation are varied but can broadly be categorized into anthropogenic and natural factors. Among the anthropogenic factors, increased wood fuel collection, clearing of forests for agriculture, illegal and poorly regulated timber extraction, social and environmental conflicts, and increasing urbanization and industrialization are the primary known causes of the loss of forests and woodlands. Mineral exploitation in the forest areas is considered one of the major causes of deforestation in Ghana. Mining activities, especially gold mining by both licensed mining companies and illegal mining groups, locally known as 'galamsey', also cause damage to the nation's forest reserves. Several works have been conducted regarding the causes of the high rate of deforestation in Ghana; major attention has been placed on illegal logging and the use of forest lands for illegal farming and mining activities. Less emphasis has been placed on the timber production companies, their harvesting methods in the forests of Ghana, and the other activities they carry out in the forest. The main objective of the work is to find out the harvesting methods and the activities of the timber production companies and their effects on the forests in Ghana. Both qualitative and quantitative research methods were used in the research work. The study population comprised 20 timber industries (sawmills) in the forest areas of Ghana. These companies were selected randomly. The cluster sampling technique was used in selecting the respondents. Both primary and secondary data were employed. In the study, it was observed that most of the timber production companies do not know the age, the weight, or the distance covered from the harvesting site to the loading site in the forest. It was also observed that old and heavy machines are used by timber production companies in their operations in the forest, which compacts the soil, prevents regeneration, and enhances soil erosion. It was observed that timber production companies do not abide by the rules and regulations governing their operations in the forest. The high rate of corruption among officials of the Ghana Forestry Commission makes the officials lax, so they do not embark on proper monitoring of the operations of the timber production companies, which allows the timber companies to cause more harm to the forest. In order to curb this situation, the Ghana Forestry Commission, together with the Ministry of Lands and Natural Resources, should monitor the activities of the timber production companies and sanction all companies that engage in foul play in the forest. The commission should also pay more attention to the 'fell one, plant 10' policy to enhance regeneration in both reserve and off-reserve forests.
Keywords: companies, deforestation, forest, Ghana, timber
Procedia PDF Downloads 197
390 Antibacterial Effects of Some Medicinal and Aromatic Plant Extracts on Pathogenic Bacteria Isolated from Pear Orchards
Authors: Kubilay Kurtulus Bastas
Abstract:
Bacterial diseases are very destructive and cause economic losses in pears. Plant extracts are promising for the management of plant diseases because they are environmentally safe and long-lasting, and extracts of certain plants contain alkaloids, tannins, quinones, coumarins, phenolic compounds, and phytoalexins. In this study, bacteria were isolated from different parts of pear trees exhibiting characteristic symptoms of bacterial diseases in Central Anatolia, Turkey. The pathogenic bacteria were identified by morphological, physiological, biochemical, and molecular methods as fire blight (Erwinia amylovora (39%)), bacterial blossom blast and blister bark (Pseudomonas syringae pv. syringae (22%)), and crown gall (Rhizobium radiobacter (1%)) from different pear cultivars, and the virulence levels of the pathogens were determined with pathogenicity tests. The air-dried material of the 25 plants was ground into a fine powder, and extraction was performed at room temperature by maceration with 80% (v/v) methanol/distilled water. The minimum inhibitory concentration (MIC) values were determined by using a modified disc diffusion method at five different concentrations, and streptomycin sulphate was used as the control chemical. Bacterial suspensions were prepared at densities of 10⁸ CFU ml⁻¹, and 100 µl of bacterial suspension was spread on TSA medium. Antimicrobial activity was evaluated by measuring the inhibition zones in reference to the test organisms. Among the tested plants, Origanum vulgare, Hedera helix, Satureja hortensis, Rhus coriaria, Eucalyptus globulus, Rosmarinus officinalis, Ocimum basilicum, Salvia officinalis, Cuminum cyminum, and Thymus vulgaris showed good antibacterial activity and inhibited the growth of the pathogens, with inhibition zone diameters ranging from 7 to 27 mm at 20% (w/v) in absolute methanol under in vitro conditions. In vivo, the highest efficacy was determined as 27% in reducing tumor formation by R. radiobacter, and 48% and 41% in reducing shoot blight caused by E. amylovora and P. s. pv. syringae on pear seedlings, respectively. The data obtained indicated that some plant extracts may be used against bacterial diseases of pome fruits within sustainable and organic management programs.
Keywords: bacteria, eco-friendly management, organic, pear, plant extract
Procedia PDF Downloads 335
389 Safety Evaluation of Post-Consumer Recycled PET Materials in Chilean Industry by Overall Migration Tests
Authors: Evelyn Ilabaca, Ximena Valenzuela, Alejandra Torres, María José Galotto, Abel Guarda
Abstract:
One of the biggest problems in the food packaging industry, especially with plastic materials, is the fact that these materials are usually obtained from non-renewable resources and also remain as waste after use, causing environmental issues. This is an international concern, and particular attention is given to reduction, reuse, and recycling strategies for decreasing the waste from the plastic packaging industry. In general, polyethylenes represent most plastic waste, and the recycling process of post-consumer polyethylene terephthalate (PCR-PET) has been studied. The US Food and Drug Administration (FDA), the European Food Safety Authority (EFSA), and the Southern Common Market (MERCOSUR) have generated different legislative documents to control the use of PCR-PET in the production of plastic packaging intended for direct food contact, in order to ensure the capacity of the recycling process to remove possible contaminants that can migrate into food. Consequently, it is necessary to demonstrate by a challenge test that the recycling process is able to remove specific contaminants, yielding a recycled plastic that is safe for human health. These documents establish that the concentration limit for substitute contaminants in PET is 220 ppb (µg/kg) and the specific migration limit is 10 ppb (µg/kg) for each contaminant, in addition to ensuring that the sensorial characteristics of the food are not affected. Moreover, under Commission Regulation (EU) N°10/2011 on plastic materials and articles intended to come into contact with food, it is established that the overall migration limit is 10 mg of substances per 1 dm² of surface area of the plastic material. Thus, the aim of this work is to determine the safety of PCR-PET-containing food packaging materials in Chile by measuring their overall migration and comparing it with the limits established at the international level. This information will serve as a basis for a regulation to control and regulate the use of recycled plastic materials in the manufacture of plastic packaging intended to be in direct contact with food. The methodology used involves a procedure according to EN-1186:2002 with some modifications. The food simulants used were ethanol 10% (v/v) and acetic acid 3% (v/v) as aqueous food simulants, and ethanol 95% (v/v) and isooctane as substitutes for fatty food simulants. In this study, preliminary results showed that Chilean food packaging plastics with different PCR-PET percentages comply with the European legislation for food of aqueous character.
Keywords: contaminants, polyethylene terephthalate, plastic food packaging, recycling
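For context, the calculation behind gravimetric overall-migration tests of this kind is simply the blank-corrected mass of non-volatile residue transferred to the simulant divided by the contact area; the sketch below uses hypothetical numbers, not results from this study:

```python
def overall_migration_mg_per_dm2(residue_mass_mg: float, blank_mass_mg: float,
                                 contact_area_dm2: float) -> float:
    """Gravimetric overall migration: blank-corrected residue mass per unit contact area."""
    return (residue_mass_mg - blank_mass_mg) / contact_area_dm2

# Hypothetical test specimen: 1 dm^2 of PCR-PET film exposed to 3% acetic acid.
om = overall_migration_mg_per_dm2(residue_mass_mg=6.8, blank_mass_mg=0.4, contact_area_dm2=1.0)
print(f"overall migration = {om:.1f} mg/dm^2 (EU limit: 10 mg/dm^2)")
```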
Procedia PDF Downloads 276
388 Effect of Celebrity Endorsements and Social Media Influencers on Brand Loyalty: A Comparative Study
Authors: Dhruv Saini, Megha Sharma, Sharad Gupta
Abstract:
This research examines the use of celebrity endorsements and social media influencers and how they help in enhancing consumers' brand loyalty. The study treats brand image as the link between the two. However, choosing the right celebrity or social media influencer is not an easy task, and it is essential for a brand to select the right ambassador for advertising its products and selling them to the ultimate consumer. The purpose of the study is to relate celebrity endorsement to brand image and brand loyalty, to relate social media influencers to brand image and brand loyalty, and then to compare the two by measuring the effects of both simultaneously and analyzing which of the two has a greater impact on consumers' brand loyalty. The study mainly focuses on four major variables, namely celebrity endorsement, social media influencers, brand image, and brand loyalty. The study also focuses on the interdependence and relationships among these variables and how they are linked with each other. The study also aims to determine which of the two, celebrity endorsement or social media influencers, has a greater impact on increasing or enhancing loyalty for a brand. Earlier, celebrity endorsers had a major impact on consumers' brand loyalty, but with time, social media influencers have also come to play a vital role in shaping consumers' brand loyalty and are now competing with celebrity endorsers. Brand image also has a vital role to play in enhancing brand loyalty in the minds of consumers, as a well-known brand and a better perception of the brand lead to the retention of more and more consumers. Both celebrity endorsement and social media influencers are double-edged swords, as each has a number of positives as well as negatives, so they are to be compared keeping in mind their adverse effects. Examination of the current market situation has shown that celebrity recommendations work best when properly integrated with product strengths. Advertisers agree that celebrity endorsement does not guarantee sales, but it can create buzz and make the consumer feel better about the product, which is also what customers should expect of a real star delivering on the promise. On the other hand, the results of the studies lead to a variety of conclusions. Some influential people on social media had a positive impact on the product image, and one of the conclusions is that the product image had a positive impact on consumers. Moreover, the results of the present study indicate that influencers did not directly affect consumers' purchase intention, but instead produced a positive result indirectly through brand image, which would further lead to brand loyalty.
Keywords: brand image, brand loyalty, celebrity endorsement, social media influencer
Procedia PDF Downloads 193
387 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors
Authors: Jakob Krause
Abstract:
The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in situations in which they are needed the most. In this paper, we start from the assumption that risk is a notion that changes over time and that therefore past data points only have limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by balancing the two adverse forces of estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data generating process changes over time. Hence, in this paper, we give a quantitative theory on how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the last iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe. Hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept that is carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate. However, only dependence has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results in order to analyze a paragraph in the Basel 3 framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling
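As a toy illustration of the convergence-versus-representativeness tradeoff described above (not the paper's formal machinery), one can scan over window lengths of past data and pick the one with the smallest estimated mean-squared error when the data-generating law drifts over time; everything below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic non-stationary returns: the underlying mean drifts slowly over time.
T = 2000
true_mean = np.cumsum(rng.normal(0, 0.01, T))           # slowly changing "law"
returns = true_mean + rng.normal(0, 1.0, T)

def mse_for_window(window: int, n_eval: int = 500) -> float:
    """Estimate today's mean with the last `window` points, scored against the true drift."""
    errors = []
    for t in range(T - n_eval, T):
        estimate = returns[t - window:t].mean()          # more data -> lower estimator variance
        errors.append((estimate - true_mean[t]) ** 2)    # but older data is less representative
    return float(np.mean(errors))

windows = [10, 25, 50, 100, 250, 500, 1000]
print({w: round(mse_for_window(w), 3) for w in windows})
print("window minimizing estimated MSE:", min(windows, key=mse_for_window))
```

The optimal window is finite precisely because the two error sources pull in opposite directions, which is the tradeoff the paper formalizes.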
Procedia PDF Downloads 148
386 Prosodic Realization of Focus in the Public Speeches Delivered by Spanish Learners of English and English Native Speakers
Authors: Raúl Jiménez Vilches
Abstract:
Native (L1) speakers can prosodically mark one part of an utterance and make it more relevant than the rest of the constituents. Conversely, non-native (L2) speakers encounter problems when it comes to prosodically marking information structure in English. In fact, the L2 speaker's choice for the prosodic realization of focus is not so clear and often obscures the intended pragmatic meaning and the communicative value in general. This paper reports some of the findings obtained in an L2 prosodic training course for Spanish learners of English within the context of public speaking. More specifically, it analyses the effects of the course experiment in relation to the non-native production of the tonic syllable to mark focus and compares it with the public speeches delivered by native English speakers. The whole experimental training was executed throughout eighteen input sessions (1,440 minutes total time), and all the sessions took place in the classroom. In particular, the first part of the course provided explicit instruction on the recognition and production of the tonic syllable and how the tonic syllable is used to express focus. The non-native and native oral presentations were acoustically analyzed using Praat software for speech analysis (7,356 words in total). The investigation adopted mixed and embedded methodologies. Quantitative information is needed when acoustically measuring the phonetic realization of focus. Qualitative data such as questionnaires, interviews, and observations were also used to interpret the quantitative data. The embedded experiment design was implemented through the analysis of the public speeches before and after the intervention. Results indicate that, even after the L2 prosodic training course, Spanish learners of English still show some major inconsistencies in marking focus effectively. Although there was occasional improvement regarding the choice of location and word classes, Spanish learners were, in general, far from achieving results similar to the ones obtained by the English native speakers in the two types of focus. The prosodic realization of focus seems to be one of the hardest areas of the English prosodic system for Spanish learners to master. A funded research project is in the process of moving the present classroom-based experiment to an online environment (mobile app) and determining whether focus is used more effectively through CAPT (Computer-Assisted Pronunciation Training) tools.
Keywords: focus, prosody, public speaking, Spanish learners of English
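As a rough sketch of the kind of acoustic measurement described above, the praat-parselmouth Python bindings can extract a Praat-style pitch and intensity contour around a tonic syllable; this is not the authors' actual Praat analysis, and the file name and time window below are hypothetical (a real study would take them from TextGrid annotations):

```python
import numpy as np
import parselmouth  # assumes the praat-parselmouth package is installed

# Hypothetical recording of one utterance from a public speech.
snd = parselmouth.Sound("speech_sample.wav")
pitch = snd.to_pitch()            # F0 contour (Hz)
intensity = snd.to_intensity()    # intensity contour (dB)

# Hypothetical tonic-syllable window, in seconds.
t_start, t_end = 1.20, 1.55

f0 = pitch.selected_array["frequency"]
f0_times = pitch.xs()
f0_mask = (f0_times >= t_start) & (f0_times <= t_end) & (f0 > 0)  # drop unvoiced frames
print("mean F0 on tonic syllable: %.1f Hz" % f0[f0_mask].mean())

db = intensity.values[0]
db_times = intensity.xs()
db_mask = (db_times >= t_start) & (db_times <= t_end)
print("peak intensity on tonic syllable: %.1f dB" % db[db_mask].max())
```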
Procedia PDF Downloads 99385 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice
Authors: T. Ewetumo, K. D. Adedayo, Festus Ben
Abstract:
Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice, there is very little information in the literature, which seriously hampers the specification of processing procedures. This research work describes the development of an instrument for the automated measurement of the thermal conductivity and thermal diffusivity of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured over time at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time taken for the sample to attain its peak temperature and the time duration over a fixed diffusion distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. The instrument was calibrated by determining the thermal properties of distilled water; error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used to measure the thermal properties of banana, orange and watermelon. Thermal conductivity values of 0.593, 0.598 and 0.586 W/(m·°C) and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷ and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well, as it measured the thermal conductivity and thermal diffusivity of the tropical fruit juice samples with statistical analysis (ANOVA) showing no significant difference (p>0.05) between the literature standards and the estimated averages of each sample investigated with the developed instrument.Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation
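The conductivity calculation described above follows the transient line-heat-source relation ΔT(t) ≈ (q/4πk)·ln(t) + C, so k is obtained from the slope of ΔT versus ln(t). A minimal Python sketch of that fitting step is shown below; the synthetic temperature data are only for demonstration and are not the instrument's measurements.

```python
import numpy as np

Q_PER_LENGTH = 16.33  # W/m, the linear power input quoted above

def line_source_conductivity(times_s, temp_rise_k, q_per_length=Q_PER_LENGTH):
    """Transient line-heat-source model: dT(t) ~ (q / (4*pi*k)) * ln(t) + C,
    so k follows from the slope of the temperature rise versus ln(t)."""
    slope, _ = np.polyfit(np.log(times_s), temp_rise_k, 1)
    return q_per_length / (4.0 * np.pi * slope)

# Synthetic demonstration data: 4 s sampling for 240 s, generated for k ~ 0.59 W/(m*K)
t = np.arange(4.0, 244.0, 4.0)
k_true = 0.59
dT = Q_PER_LENGTH / (4 * np.pi * k_true) * np.log(t) \
     + 0.05 * np.random.default_rng(1).standard_normal(t.size)
print(f"estimated k = {line_source_conductivity(t, dT):.3f} W/(m*K)")
```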
Procedia PDF Downloads 357384 Investigating the Sloshing Characteristics of a Liquid by Using an Image Processing Method
Authors: Ufuk Tosun, Reza Aghazadeh, Mehmet Bülent Özer
Abstract:
This study puts forward a method to analyze the sloshing characteristics of the liquid in a tuned sloshing absorber system by using image processing tools. Tuned sloshing vibration absorbers have recently attracted researchers’ attention as seismic load dampers in structures due to their practical and logistical convenience. The absorber is a liquid that sloshes and applies a force in opposite phase to the motion of the structure. Experimental characterization of the sloshing behavior can be utilized as a means of verifying the results of numerical analysis. It can also be used to identify the accuracy of assumptions related to the motion of the liquid. There are extensive theoretical and experimental studies in the literature related to the dynamical and structural behavior of tuned sloshing dampers. Most of these works attempt to estimate aspects of the sloshing behavior of the liquid such as the free-surface motion and the total force applied by the liquid to the wall of the container. For these purposes, sensors such as load cells and ultrasonic sensors are prevalent in experimental works. Load cells are only capable of measuring force and require conducting tests both with and without liquid to obtain the pure sloshing force. Ultrasonic level sensors give point-wise measurements and hence cannot capture the whole free-surface motion; furthermore, in the case of liquid splashing they may give incorrect data. In this work, a method for evaluating the sloshing wave height by using camera recordings and image processing techniques is presented. In this method, the motion of the liquid and its container, made of a transparent material, is recorded by a high-speed camera aligned with the free surface of the liquid. The video captured by the camera is processed frame by frame using the MATLAB Image Processing Toolbox. The process starts with cropping the desired region. By recognizing the regions containing liquid and eliminating noise and liquid splashing, the final picture depicting the free surface of the liquid is obtained. This picture is then used to obtain the height of the liquid along the length of the container. The process was verified by ultrasonic sensors that measured the fluid height at points on the liquid surface.Keywords: fluid structure interaction, image processing, sloshing, tuned liquid damper
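A minimal Python/OpenCV sketch of the frame-processing chain described above (crop, denoise, threshold, locate the topmost liquid pixel per column) is given below as an illustration. The abstract's implementation uses the MATLAB Image Processing Toolbox; the ROI, threshold value, and the assumption that the liquid appears brighter than the background are all placeholders.

```python
import cv2
import numpy as np

def free_surface_profile(frame_bgr, roi, threshold=60):
    """Estimate the liquid free-surface height (in pixels) along the container length
    from one video frame. roi = (x, y, w, h) crops the transparent container.
    Assumes the liquid appears brighter than the background; invert the threshold otherwise."""
    x, y, w, h = roi
    gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)                      # suppress splash speckle
    _, mask = cv2.threshold(blur, threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    heights = np.full(w, np.nan)
    for col in range(w):
        rows = np.flatnonzero(mask[:, col])                       # liquid pixels in this column
        if rows.size:
            heights[col] = h - rows.min()                         # topmost liquid pixel -> wave height
    return heights

# Usage sketch (hypothetical file name and region of interest):
# cap = cv2.VideoCapture("sloshing_test.avi")
# ok, frame = cap.read()
# if ok:
#     profile = free_surface_profile(frame, roi=(100, 50, 400, 300))
```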
Procedia PDF Downloads 344383 Intervention Program for Emotional Management in Disruptive Situations Through Self-Compassion and Compassion
Authors: M. Bassas, J. Grané-Morcillo, J. Segura, J. M. Soldevila
Abstract:
Mental health prevention is key in a society where, according to the World Health Organization, suicide is the fourth leading cause of death worldwide. Compassion is closely linked to personal growth, which shows once again that prevention-based therapies remain an urgent social need. In this sense, a growing body of research demonstrates how cultivating a compassionate mind can help alleviate and prevent a variety of psychological problems. In the early 21st century, there has been a boom in third-generation compassion-based therapies, although there is a lack of empirical evidence of their efficacy. This study proposes a psychotherapy method (the ‘Being Method’) whose central axis revolves around emotional management through the cultivation of compassion. Therefore, the objective of this research was to analyze the effectiveness of this method with regard to the emotional changes experienced when we focus on what concerns us through the filter of compassion. The Being Method was born from the influence of Buddhist philosophy and contemporary psychology based mainly on Western rationalist currents. A quantitative cross-sectional study was carried out on a sample of women between 18 and 53 years old (n=47; Mage=36.02; SDage=11.86) interested in personal growth, in which the following six measuring instruments were administered: the Peace of Mind Scale (PoM), the Rosenberg Self-Esteem Scale (RSES), the Subjective Happiness Scale (SHS), two scales of the Compassionate Action and Engagement Scales (CAES), the Coping Response Inventory for Adults (CRI-A) and the Cognitive-Behavioral Strategies Evaluation Scale (MOLDES). Following an experimental approach, participants were divided into an experimental group and a control group. Longitudinal analysis was also carried out through a pre-post program comparison. Pre-post comparison outcomes indicated significant differences (p<.05) before versus after the therapy in the variables Peace of Mind, Self-esteem, Happiness, Self-compassion (A-B) and Compassion (A-B), in several mental molds, as well as in several coping strategies. Between-groups tests also showed significantly higher means in the experimental group. Thus, these outcomes highlight the effectiveness of the therapy, which improved all the analyzed dimensions. The social, clinical and research implications are discussed.Keywords: being method, compassion, effectiveness, emotional management, intervention program, personal growth therapy
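For illustration only, the snippet below shows the kind of paired pre-post significance test (with an effect size) that underlies results such as the p<.05 differences reported above; the data are synthetic, and the scale, gain, and variability are assumptions, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic pre/post scores for one outcome (e.g., a Peace of Mind total score);
# n=47 matches the sample size above, but the values and effect are invented.
pre = rng.normal(20, 4, size=47)
post = pre + rng.normal(1.5, 3, size=47)              # assume a modest average gain

t_stat, p_value = stats.ttest_rel(post, pre)          # paired pre-post comparison
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)             # effect size for paired samples
print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```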
Procedia PDF Downloads 41382 High Performance Liquid Cooling Garment (LCG) Using ThermoCore
Authors: Venkat Kamavaram, Ravi Pare
Abstract:
Modern warfighters experience extreme environmental conditions in many of their operational and training activities. In temperatures exceeding 95°F, the body’s temperature regulation can no longer cool through convection and radiation; in this case, the only cooling mechanism is evaporation. However, evaporative cooling is often compromised by excessive humidity, and natural cooling mechanisms can be further compromised by clothing and protective gear, which trap hot air and moisture close to the body. Creating an efficient heat-extraction apparel system that is also lightweight and does not hinder the dexterity or mobility of personnel working in extreme temperatures is a difficult technical challenge, and one that needs to be addressed to increase the probability of future success for the US military. To address this challenge, Oceanit Laboratories, Inc. has developed and patented a Liquid Cooled Garment (LCG) more effective than any on the market today. Oceanit’s LCG is a form-fitting garment with a network of thermally conductive tubes that extracts body heat and can be worn under all authorized and chemical/biological protective clothing. Oceanit specifically designed and developed ThermoCore®, a thermally conductive polymer, for use in this apparel, optimizing the product for thermal conductivity, mechanical properties, manufacturability, and performance temperatures. Thermal manikin tests were conducted in accordance with ASTM F2371, Standard Test Method for Measuring the Heat Removal Rate of Personal Cooling Systems Using a Sweating Heated Manikin, in an environmental chamber using a 20-zone sweating thermal manikin. Manikin test results have shown that Oceanit’s LCG provides significantly higher heat extraction under the same environmental conditions than the currently fielded Environmental Control Vest (ECV) while at the same time reducing the weight. Oceanit’s LCG vests performed nearly 30% better in extracting body heat while weighing 15% less than the ECV. No cooling garments on the market provide the same thermal extraction performance, form factor, and reduced weight as Oceanit’s LCG. The two cooling garments that are commercially available and most commonly used are the Environmental Control Vest (ECV) and the Microclimate Cooling Garment (MCG).Keywords: thermally conductive composite, tubing, garment design, form fitting vest, thermocore
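As a back-of-the-envelope companion to the ASTM F2371 manikin results quoted above, the sketch below computes a coolant-side heat-removal estimate from flow rate and inlet/outlet temperatures (Q = ṁ·cp·ΔT). This is a generic energy balance, not the ASTM procedure or Oceanit's test data, and the flow rate, temperatures, and fluid properties are assumed values.

```python
CP_COOLANT = 4186.0  # J/(kg*K), specific heat of a water-based coolant (assumed)

def coolant_heat_removal_w(flow_lpm, t_in_c, t_out_c, rho=1000.0, cp=CP_COOLANT):
    """Coolant-side energy balance: Q = m_dot * cp * (T_out - T_in), where T_out is the
    coolant returning from the garment after being warmed by the body."""
    m_dot = rho * (flow_lpm / 1000.0) / 60.0   # kg/s from litres per minute
    return m_dot * cp * (t_out_c - t_in_c)

# Hypothetical operating point: 0.5 L/min of coolant warmed from 18 degC to 24 degC
print(f"heat extracted ~ {coolant_heat_removal_w(0.5, 18.0, 24.0):.0f} W")
```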
Procedia PDF Downloads 114381 Control Performance Simulation and Analysis for Microgravity Vibration Isolation System Onboard Chinese Space Station
Authors: Wei Liu, Shuquan Wang, Yang Gao
Abstract:
The Microgravity Science Experiment Rack (MSER) will be onboard the TianHe (TH) spacecraft, planned to be launched in 2018. TH is one module of the Chinese Space Station. The Microgravity Vibration Isolation System (MVIS), which is MSER’s core part, is used to isolate disturbances from TH and provide a high-level microgravity environment for the science experiment payload. MVIS is a two-stage vibration isolation system consisting of a Follow Unit (FU) and an Experiment Support Unit (ESU). The FU is linked to MSER by umbilical cables, and the ESU is suspended within the FU without physical connection. The FU’s position and attitude relative to TH are measured by a binocular vision measuring system, and its acceleration and angular velocity are measured by accelerometers and gyroscopes. Air-jet thrusters are used to generate the force and moment that control the FU’s motion. The measurement module on the ESU contains a set of Position-Sensing Detectors (PSDs) sensing the ESU’s position and attitude relative to the FU, as well as accelerometers and gyroscopes sensing the ESU’s acceleration and angular velocity. Electromagnetic actuators are used to control the ESU’s motion. Firstly, the linearized equations of the FU’s motion relative to TH and the ESU’s motion relative to the FU are derived, laying the foundation for control system design and simulation analysis. Subsequently, two control schemes are proposed. In one scheme, the ESU tracks the FU and the FU tracks TH, abbreviated as E-F-T. In the other, the FU tracks the ESU and the ESU tracks TH, abbreviated as F-E-T. In addition, the motion spaces are constrained within ±15 mm and ±2° between the FU and the ESU, and within ±300 mm between the FU and TH or between the ESU and TH. A Proportional-Integral-Derivative (PID) controller is designed to control the FU’s position and attitude. The ESU’s controller includes an acceleration feedback loop and a relative position feedback loop: a Proportional-Integral (PI) controller in the acceleration feedback loop reduces the ESU’s acceleration level, and a PID controller in the relative position feedback loop is used to avoid collision. Finally, simulations of E-F-T and F-E-T are performed considering various uncertainties, disturbances and motion space constraints. The simulation results for E-F-T showed that the control performance was 0 to -20 dB for vibration frequencies from 0.01 to 0.1 Hz, and that vibration was attenuated by 40 dB per decade above 0.1 Hz. The simulation results for F-E-T showed that vibration was attenuated by 20 dB per decade starting from 0.01 Hz.Keywords: microgravity science experiment rack, microgravity vibration isolation system, PID control, vibration isolation performance
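To make the control structure concrete, here is a minimal discrete PID loop of the type used for the FU position/attitude control, closed around a toy one-dimensional double-integrator plant; the gains, sample time, mass, and thruster saturation limit are placeholders, not the MVIS flight parameters.

```python
class PID:
    """Discrete PID controller of the kind used for the FU position/attitude loop;
    gains, sample time, and limits here are placeholders, not the flight values."""
    def __init__(self, kp, ki, kd, dt, u_max=None):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_max = u_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        if self.u_max is not None:                    # air-jet thruster saturation
            u = max(-self.u_max, min(self.u_max, u))
        return u

# Toy 1-D closed loop: drive the FU position relative to TH toward zero
dt, mass = 0.01, 50.0                                 # assumed sample time [s] and FU mass [kg]
pid = PID(kp=40.0, ki=2.0, kd=60.0, dt=dt, u_max=5.0)
x, v = 0.10, 0.0                                      # start 100 mm off-centre, at rest
for _ in range(int(20 / dt)):                         # simulate 20 s
    f = pid.update(0.0, x)
    v += (f / mass) * dt
    x += v * dt
print(f"position after 20 s: {x * 1000:.2f} mm")
```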
Procedia PDF Downloads 160380 Effect of Human Resources Accounting on Financial Performance of Banks in Nigeria
Authors: Oti Ibiam, Alexanda O. Kalu
Abstract:
Human resource accounting is the process of identifying and measuring data about human resources and communicating this information to interested parties in order to enable meaningful investment decisions. In recent times, firms’ focus has shifted to human resource accounting so as to ensure efficiency and effectiveness in their operations. This study focused on the effect of human resource accounting on the financial performance of banks in Nigeria. The problem that led to the study revolves around the current trend whereby Nigerian banks do not efficiently account for the input of human resources in their annual statements: instead of capitalizing human resources in their statement of financial position, they expense them in their income statement, thereby reducing their profit after tax. The broad objective of this study is to determine the extent to which human resource accounting affects the financial performance and value of Nigerian banks. This study is considered significant because, universally, there are still grey areas to be sorted out on the subject of human resource accounting. To achieve the study objectives, the researcher gathered data from sixteen commercial banks. Data were collected from both primary and secondary sources using an ex-post facto research design. The data collected were then tabulated and analyzed using multiple regression analysis. The result of hypothesis one revealed that there is a significant relationship between capitalized human resource cost and post-capitalization profit before tax of banks in Nigeria. The finding of hypothesis two revealed that the association between capitalized human resource cost and post-capitalization net worth of banks in Nigeria is significant. The finding of hypothesis three revealed that there is a significant difference between pre- and post-capitalization profit before tax of banks in Nigeria. The study concludes that human resource accounting positively influenced the financial performance of banks in Nigeria within the period under study. It is recommended that standards be set for human resource identification and measurement in the banking sector, and that the management of commercial banks in Nigeria develop a proper appreciation of human resource accounting; this will enable managers to take the right decisions regarding investment in human resources. The study also recommends that policies on enhancing the post-capitalization profit before tax of banks in Nigeria should pay close attention to capitalized human resource cost, net worth and total assets, as these variables significantly influenced the post-capitalization profit before tax of the studied banks. The limitation of the study centers on the limited number of years and companies covered.Keywords: capitalization, human resources cost, profit before tax, net worth
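A minimal sketch of the estimation step described above (a multiple regression of post-capitalization profit before tax on capitalized human resource cost, net worth and total assets) is shown below using Python's statsmodels; the bank-year figures are synthetic placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Synthetic bank-year panel (the study covered sixteen banks); the figures are
# placeholders that only illustrate the estimation step, not the paper's data.
n = 16 * 8
df = pd.DataFrame({
    "cap_hr_cost":  rng.gamma(2.0, 1.5e9, n),   # capitalized human resource cost
    "net_worth":    rng.gamma(3.0, 4.0e9, n),
    "total_assets": rng.gamma(4.0, 2.0e10, n),
})
df["profit_before_tax"] = (0.08 * df["cap_hr_cost"] + 0.05 * df["net_worth"]
                           + 0.01 * df["total_assets"] + rng.normal(0, 5e8, n))

X = sm.add_constant(df[["cap_hr_cost", "net_worth", "total_assets"]])
model = sm.OLS(df["profit_before_tax"], X).fit()
print(model.summary().tables[1])                 # coefficients, t-statistics, p-values
```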
Procedia PDF Downloads 150379 Investigation of Antimicrobial Activity of Dielectric Barrier Discharge Oxygen Plasma Combined with ZnO NPs-Treated Cotton Fabric Coated with Natural Green Tea Leaf Extracts
Authors: Fatma A. Mohamed, Hend M. Ahmed
Abstract:
This research explores the antimicrobial effects of dielectric barrier discharge (DBD) oxygen plasma treatment combined with ZnO NPs on cotton fabric, focusing on various treatment durations (5, 10, 15, 20, and 30 minutes) and discharge powers (15.5–17.35 W) at a flow rate of 0.5 L/min. After treatment with oxygen plasma and ZnO NPs, the fabric was printed with green tea (Camellia sinensis) extract at five different concentrations. The study evaluated the treatment's effectiveness by analyzing surface wettability, specifically through wet-out time and hydrophilicity, as well as by measuring contact angles. To investigate the chemical changes on the fabric's surface, attenuated total reflectance–Fourier transform infrared (ATR-FTIR) spectroscopy was employed to identify the functional groups formed as a result of the plasma treatment. This comprehensive approach aims to understand how DBD oxygen plasma treatment and ZnO nanoparticles change the properties of cotton fabric and enhance its antimicrobial potential, paving the way for innovative applications in textiles. In addition to the chemical analysis, the surface morphology of the O₂ plasma/ZnO NPs-treated cotton fabric was examined using scanning electron microscopy (SEM). FTIR analysis revealed an increase in polar functional groups (-COOH, -OH, and C=O) on the fabric's surface, contributing to enhanced hydrophilicity and functionality. The antimicrobial properties were evaluated using qualitative and quantitative methods, including agar plate assays and modified Hohenstein tests against Staphylococcus aureus and Escherichia coli. The results indicated a significant improvement in antimicrobial effectiveness for the cotton fabric treated with plasma and coated with the natural extracts, and this efficacy was maintained even after four washing cycles. This research demonstrates that oxygen DBD plasma/ZnO NPs treatment, combined with the absorption of tea and tulsi leaf extracts, presents a promising strategy for developing natural antimicrobial textiles. This approach is particularly relevant given the increasing medical and healthcare demands for effective antimicrobial materials. Overall, the method not only enhances the absorption of plant extracts but also significantly boosts antimicrobial efficacy, offering valuable insights for future textile applications.Keywords: cotton, ZnO NPs, green tea leaf, antimicrobial activity, DBD oxygen plasma
Procedia PDF Downloads 9378 Airborne CO₂ Lidar Measurements for Atmospheric Carbon and Transport: America (ACT-America) Project and Active Sensing of CO₂ Emissions over Nights, Days, and Seasons 2017-2018 Field Campaigns
Authors: Joel F. Campbell, Bing Lin, Michael Obland, Susan Kooi, Tai-Fang Fan, Byron Meadows, Edward Browell, Wayne Erxleben, Doug McGregor, Jeremy Dobler, Sandip Pal, Christopher O'Dell, Ken Davis
Abstract:
The Active Sensing of CO₂ Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) is a NASA Langley Research Center instrument, funded by NASA’s Science Mission Directorate, that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO₂) mixing ratios in support of the NASA ASCENDS mission. The ACES instrument, an Intensity-Modulated Continuous-Wave (IM-CW) lidar, was designed for high-altitude aircraft operations and can be applied directly to space instrumentation to meet the ASCENDS mission requirements. The ACES design demonstrates advanced technologies critical for developing an airborne simulator and a spaceborne instrument with lower platform demands on size, mass, and power, and with improved performance. Atmospheric Carbon and Transport – America (ACT-America) is an Earth Venture Suborbital-2 (EVS-2) mission sponsored by the Earth Science Division of NASA’s Science Mission Directorate. A major objective is to enhance knowledge of the sources/sinks and transport of atmospheric CO₂ through the application of remote and in situ airborne measurements of CO₂ and other atmospheric properties on a range of spatial and temporal scales. ACT-America consists of five campaigns to measure regional carbon and evaluate transport under various meteorological conditions in three regions of the continental United States. Regional CO₂ distributions in the lower atmosphere were observed from the C-130 aircraft by the Harris Corp. Multi-Frequency Fiber Laser Lidar (MFLL) and the ACES lidar. The airborne lidars provide unique data that complement the more traditional in situ sensors. This presentation shows the applications of CO₂ lidars in support of these science needs.Keywords: CO₂ measurement, IMCW, CW lidar, laser spectroscopy
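As background for how an IM-CW lidar recovers the path delay of its modulated return, the sketch below cross-correlates a swept intensity-modulation waveform with a delayed, attenuated, noisy echo and converts the correlation-peak lag into range; the modulation band, sweep length, and range are arbitrary illustrative values, not ACES or MFLL parameters.

```python
import numpy as np
from scipy.signal import chirp, correlate

C = 3.0e8        # speed of light, m/s
FS = 2.0e6       # sample rate, Hz (assumed)
T = 0.01         # sweep duration, s (assumed)

t = np.arange(0, T, 1 / FS)
tx = chirp(t, f0=50e3, f1=500e3, t1=T)           # swept intensity-modulation waveform

true_range = 9000.0                              # m, an arbitrary one-way path length
delay = int(round(2 * true_range / C * FS))      # round-trip delay in samples

rx = np.zeros_like(tx)
rx[delay:] = 0.05 * tx[:tx.size - delay]                            # attenuated, delayed echo
rx += 0.02 * np.random.default_rng(0).standard_normal(rx.size)      # detector noise

xcorr = correlate(rx, tx, mode="full")
lag = np.argmax(xcorr) - (tx.size - 1)                              # lag of the correlation peak
print(f"estimated range: {lag / FS * C / 2 / 1e3:.2f} km (true {true_range / 1e3:.2f} km)")
```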
Procedia PDF Downloads 162377 Inclusive Education in Early Childhood Settings: Fostering a Diverse Learning Environment
Authors: Rodrique Watong Tchounkeu
Abstract:
This paper investigated the implementation and impact of inclusive education practices in early childhood settings (ages 3-6) with the overarching aim of fostering a diverse learning environment. The primary objectives were to assess the then-current state of inclusive practices, identify effective methodologies for accommodating diverse learning needs, and evaluate the outcomes of implementing inclusive education in early childhood settings. To achieve these objectives, a mixed-methods approach was employed, combining qualitative interviews with early childhood educators and parents, along with quantitative surveys distributed to a diverse sample of participants. The qualitative phase involved semi-structured interviews with 30 educators and 50 parents, selected through purposive sampling. The interviews aimed to gather insights into the challenges faced in implementing inclusive education, the strategies employed, and the perceived benefits and drawbacks. The quantitative phase included surveys administered to 300 early childhood educators across various settings, measuring their familiarity with inclusive practices, their perceived efficacy, and their willingness to adapt teaching methods. The results revealed a significant gap between the theoretical understanding and practical implementation of inclusive education in early childhood settings. While educators demonstrated a high level of theoretical knowledge, they faced challenges in effectively translating these concepts into practice. Parental perspectives highlighted the importance of collaboration between educators and parents in supporting inclusive education. The surveys indicated a positive correlation between educators' familiarity with inclusive practices and their willingness to adapt teaching methods, emphasizing the need for targeted professional development. The implications of this study suggested the necessity for comprehensive training programs for early childhood educators focused on the practical implementation of inclusive education strategies. Additionally, fostering stronger partnerships between educators and parents was crucial for creating a supportive learning environment for all children. By addressing these findings, this research contributed to the advancement of inclusive education practices in early childhood settings, ultimately leading to more inclusive and effective learning environments for diverse groups of young learners.Keywords: inclusive education, early childhood settings, diverse learning, young learners, practical implementation, parental collaboration
Procedia PDF Downloads 67376 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) can be considered as operating in two phases: (1) observation/measurement, i.e., the accumulation of the gathered data at each sensor node; and (2) the transfer of the collected data to a processing center (e.g., fusion servers) within the SN. An underwater sensor network can therefore be defined as a sensor network deployed underwater to monitor underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time and illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, stakeholder perspectives, and domain specificity. On the other hand, it helps reduce both the complexity of and the time spent on the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that the design activity for complex systems can be improved through the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step, via models and a simulation approach, to consolidate the system design.Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
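To give a flavour of what a design-time constraint over MO model elements can look like, the following Python sketch declares a few toy model classes and a validation pass that reports violations; the class names, attributes, and constraints are hypothetical simplifications and do not reflect the actual ArchiMO meta-model or its EA views.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical, heavily simplified rendering of a few MO model elements with one
# design-time validation pass; it does not reflect the actual ArchiMO meta-model.

@dataclass
class Hydrophone:
    name: str
    depth_m: float
    max_rated_depth_m: float

@dataclass
class FusionServer:
    name: str
    max_inputs: int

@dataclass
class MarineObservatory:
    sensors: List[Hydrophone] = field(default_factory=list)
    server: Optional[FusionServer] = None

    def check_constraints(self) -> List[str]:
        """Report design-time constraint violations as human-readable messages."""
        violations = []
        for s in self.sensors:
            if s.depth_m > s.max_rated_depth_m:
                violations.append(f"{s.name}: deployment depth exceeds rated depth")
        if self.server and len(self.sensors) > self.server.max_inputs:
            violations.append(f"{self.server.name}: more sensors than fusion-server inputs")
        return violations

mo = MarineObservatory(
    sensors=[Hydrophone("H1", 950, 1000), Hydrophone("H2", 1200, 1000)],
    server=FusionServer("FS1", max_inputs=8),
)
print(mo.check_constraints() or "model satisfies all checked constraints")
```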
Procedia PDF Downloads 177