Search results for: energy efficient performance
323 Thinking Lean in ICU: A Time Motion Study Quantifying ICU Nurses’ Multitasking Time Allocation
Authors: Fatma Refaat Ahmed, Sally Mohamed Farghaly
Abstract:
Context: Intensive care unit (ICU) nurses often face pressure and constraints in their work, leading to the rationing of care when demands exceed available time and resources. Observations suggest that ICU nurses are frequently distracted from their core nursing roles by non-core tasks. This study aims to provide evidence on ICU nurses' multitasking activities and explore the association between nurses' personal and clinical characteristics and their time allocation. Research Aim: The aim of this study is to quantify the time spent by ICU nurses on multitasking activities and investigate the relationship between their personal and clinical characteristics and time allocation. Methodology: A self-observation form utilizing the "Diary" recording method was used to record the number of tasks performed by ICU nurses and the time allocated to each task category. Nurses also reported on the distractions encountered during their nursing activities. A convenience sample of 60 ICU nurses participated in the study, with each nurse observed for one nursing shift (6 hours), amounting to a total of 360 hours. The study was conducted in two ICUs within a university teaching hospital in Alexandria, Egypt. Findings: The results showed that ICU nurses completed 2,730 direct patient-related tasks and 1,037 indirect tasks during the 360-hour observation period. Nurses spent an average of 33.65 minutes on ventilator care-related tasks, 14.88 minutes on tube care-related tasks, and 10.77 minutes on inpatient care-related tasks. Additionally, nurses spent an average of 17.70 minutes on indirect care tasks per hour. The study identified correlations between nursing time and nurses' personal and clinical characteristics. Theoretical Importance: This study contributes to the existing research on ICU nurses' multitasking activities and their relationship with personal and clinical characteristics. The findings shed light on the significant time spent by ICU nurses on direct care for mechanically ventilated patients and the distractions that require attention from ICU managers. Data Collection: Data were collected using self-observation forms completed by participating ICU nurses. The forms recorded the number of tasks performed, the time allocated to each task category, and any distractions encountered during nursing activities. Analysis Procedures: The collected data were analyzed to quantify the time spent on different tasks by ICU nurses. Correlations were also examined between nursing time and nurses' personal and clinical characteristics. Question Addressed: This study addressed the question of how ICU nurses allocate their time across multitasking activities and whether there is an association between nurses' personal and clinical characteristics and time allocation. Conclusion: The findings of this study emphasize the need for a lean evaluation of ICU nurses' activities to identify and address potential gaps in patient care and distractions. Implementing lean techniques can improve efficiency, safety, clinical outcomes, and satisfaction for both patients and nurses, ultimately enhancing the quality of care and organizational performance in the ICU setting.
Keywords: motion study, ICU nurse, lean, nursing time, multitasking activities
Procedia PDF Downloads 68
322 Physico-Chemical Characterization of Vegetable Oils from Oleaginous Seeds (Croton megalocarpus, Ricinus communis L., and Gossypium hirsutum L.)
Authors: Patrizia Firmani, Sara Perucchini, Irene Rapone, Raffella Borrelli, Stefano Chiaberge, Manuela Grande, Rosamaria Marrazzo, Alberto Savoini, Andrea Siviero, Silvia Spera, Fabio Vago, Davide Deriu, Sergio Fanutti, Alessandro Oldani
Abstract:
According to the Renewable Energy Directive II, the use of palm oil in diesel will be gradually reduced from 2023 and should reach zero in 2030 due to the deforestation caused by its production. Eni aims at finding alternative feedstocks for its biorefineries to eliminate the use of palm oil by 2023. Therefore, the ideal vegetable oils to be used in bio-refineries are those obtainable from plants that grow in marginal lands and with low impact on the food-and-feed chain; hence, Eni research is studying the possibility of using oleaginous seeds, such as castor, croton, and cotton, to extract the oils to be exploited as feedstock in bio-refineries. To verify their suitability for the upgrading processes, an analytical protocol for their characterization has been drawn up and applied. The analytical characterizations include a step of water and ashes content determination, elemental analysis (CHNS analysis, X-Ray Fluorescence, Inductively Coupled Plasma - Optical Emission Spectroscopy, ICP-Mass Spectrometry), and total acid number determination. Gas chromatography coupled to a flame ionization detector (GC-FID) is used to quantify the lipid content in terms of free fatty acids, mono-, di- and triacylglycerols, and fatty acid composition. Finally, Nuclear Magnetic Resonance and Fourier Transform-Infrared spectroscopies are exploited with GC-MS and Fourier Transform-Ion Cyclotron Resonance to study the composition of the oils. This work focuses on the GC-FID analysis of the lipid fraction of these oils, as the main constituent and of greatest interest for bio-refinery processes. Specifically, the lipid component of the extracted oil was quantified after sample silanization and transmethylation: silanization allows the elution of high-boiling compounds and is useful for determining the quantity of free acids and glycerides in oils, while transmethylation leads to a mixture of fatty acid esters and glycerol, thus allowing evaluation of the composition of glycerides in terms of Fatty Acid Methyl Esters (FAME). Cotton oil was extracted from cotton oilcake, croton oil was obtained by seed pressing and by accelerated solvent extraction (ASE) of seeds and oilcake, while castor oil came from seed pressing (not performed in Eni laboratories). GC-FID analyses showed that the cotton oil consists of about 90% triglycerides and about 6% diglycerides, while free fatty acids account for about 2%. In terms of FAME, C18 acids make up 70% of the total, and linoleic acid is the major constituent. Palmitic acid is present at 17.5%, while the other acids are in low concentration (<1%). Both analyses show the presence of non-gas-chromatographable compounds. Croton oils from seed pressing and extraction mainly contain triglycerides (98%). Concerning FAME, the main component is linoleic acid (approx. 80%). Oilcake croton oil shows a higher abundance of diglycerides (6% vs. ca. 2%) and a lower content of triglycerides (38% vs. 98%) compared to the previous oils. Finally, castor oil is mostly composed of triacylglycerols (about 69%), followed by diglycerides (about 10%). About 85.2% of total FAME is ricinoleic acid, as a constituent of triricinolein, the most abundant triglyceride of castor oil. Based on the analytical results, these oils represent feedstocks of interest for possible exploitation as advanced biofuels.
Keywords: analytical protocol, biofuels, biorefinery, gas chromatography, vegetable oil
Procedia PDF Downloads 144
321 The Effect of Emotional Intelligence on Physiological Stress of Managers
Authors: Mikko Salminen, Simo Järvelä, Niklas Ravaja
Abstract:
One of the central models of emotional intelligence (EI) is that of Mayer and Salovey, which includes the ability to monitor one's own feelings and emotions and those of others, the ability to discriminate between different emotions, and the ability to use this information to guide thinking and actions. There is a vast amount of previous research in which positive links between EI and, for example, leadership successfulness, work outcomes, work wellbeing, and organizational climate have been reported. EI also plays a role in the effectiveness of work teams, and the effects of EI are especially prominent in jobs requiring emotional labor. Thus, the organizational context must also be taken into account when considering the effects of EI on work outcomes. Based on previous research, it is suggested that EI can also protect managers from the negative consequences of stress. Stress may have many detrimental effects on the manager's performance in essential work tasks. Previous studies have highlighted the effects of stress not only on health but also, for example, on cognitive tasks such as decision-making, which is important in managerial work. The motivation for the current study came from the notion that, unfortunately, many stressed individuals may not be aware of this circumstance; periods of stress-induced physiological arousal may be prolonged if there is not enough time for recovery. To tackle this problem, physiological stress levels of managers were collected using recordings of heart rate variability (HRV). The goal was to use this data to provide the managers with feedback on their stress levels. The managers could access this feedback using a www-based learning environment. In the learning environment, in addition to the feedback on stress level and other collected data, developmental tasks were also provided. For example, those with high stress levels were sent instructions for mindfulness exercises. The current study focuses on the relation between the measured physiological stress levels and the EI of the managers. In a pilot study, 33 managers from various fields wore the Firstbeat Bodyguard HRV measurement devices for three consecutive days and nights. From the collected HRV data, periods (minutes) of stress and recovery were detected using dedicated software. The effects of EI on HRV-calculated stress indexes were studied using the Linear Mixed Models procedure in SPSS. There was a statistically significant effect of total EI, defined as the average score of Schutte's emotional intelligence test, on the percentage of stress minutes during the whole measurement period (p=.025). More stress minutes were detected for those managers who had lower emotional intelligence. It is suggested that high EI provided managers with better tools to cope with stress. Managing one's own emotions helps the manager control possible negative emotions evoked by, e.g., critical feedback or an increasing workload. High-EI managers may also be more competent in detecting the emotions of others, which would lead to smoother interactions and fewer conflicts. Given the recent trend toward different quantified-self applications, it is suggested that the monitoring of bio-signals would prove to be a fruitful direction for further developing new tools for managerial and leadership coaching.
Keywords: emotional intelligence, leadership, heart rate variability, personality, stress
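For readers who want to reproduce this kind of analysis outside SPSS, below is a minimal sketch of the reported linear mixed model using Python's statsmodels; the data layout, column names, and file name are illustrative assumptions, not the study's materials.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per manager per measurement day, with the
# HRV-derived percentage of stress minutes, the Schutte EI score, and a
# manager identifier for the repeated measures (all names hypothetical).
df = pd.read_csv("hrv_stress.csv")  # hypothetical file, not the study's data

# Mixed model: fixed effect of total EI on % stress minutes,
# random intercept per manager.
model = smf.mixedlm("stress_pct ~ total_ei", data=df, groups=df["manager_id"])
result = model.fit()
print(result.summary())  # a negative total_ei coefficient would mirror the reported effect
```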
Procedia PDF Downloads 226
320 The Role of Movement Quality after Osgood-Schlatter Disease in an Amateur Football Player: A Case Study
Authors: D. Pogliana, A. Maso, N. Milani, D. Panzin, S. Rivaroli, J. Konin
Abstract:
This case aims to identify the role of movement quality during the final stage of return to sport (RTS) in a 13-year-old male amateur football player after he passed the acute phase of bilateral Osgood-Schlatter disease (OSD). A year after passing the acute phase of OSD, during which he abstained from physical activity, the patient reported bilateral anterior knee pain at the beginning of football sport activity. Interventions: After a check by the orthopedist, who recommended physiotherapy sessions for the correction of motor patterns and isometric reinforcement of the quadriceps muscles, the rehabilitation intervention was developed over 7 weeks through 14 sessions of neuro-motor training (NMT) with a frequency of two weekly sessions and six sessions of muscle strengthening with a frequency of one weekly session. The sessions of NMT were carried out through free-body exercises (or with overloads) with visual bio-feedback with the help of two cameras (one with an anterior view and one with a lateral view of the subject) and a big touch screen. The aim of these NMT sessions was to modify the dysfunctional motor patterns evaluated by the 2D motion analysis test. The test was carried out at the beginning and at the end of the rehabilitation course and included five movements: single-leg squat (SLS), drop jump (DJ), single-leg hop (SLH), lateral shuffle (LS), and change of direction (COD). Each of these movements was evaluated through video analysis of dynamic knee valgus, pelvic tilt, trunk control, shock absorption, and motor strategy. A free image analysis software (Kinovea) was then used to calculate scores. Results: The baseline assessment of the subject showed a total score of 59% on the right limb and 64% on the left limb (considering an optimal score to be above 85%), with large deficits in shock absorption capabilities, the presence of dynamic knee valgus, and dysfunctional motor strategies defined as "quadriceps dominant." After six weeks of training, the subject achieved a total score of 80% on the right limb and 86% on the left limb, with significant improvements in shock absorption capabilities, the presence of dynamic knee valgus, and the employment of more hip-oriented motor strategies on both lower limbs. The improvements shown in dynamic knee valgus, greater hip-oriented motor strategies, and improved shock absorption identified through six weeks of the NMT program can help a teenage amateur football player manage anterior knee pain during sports activity. In conclusion, NMT was a good choice to help a 13-year-old male amateur football player return to performance without pain after OSD and can also be used with similar athletes in other team sports.
Keywords: movement analysis, neuro-motor training, knee pain, movement strategies
Procedia PDF Downloads 133
319 Traditional Lifestyles of the 'Mbuti' Indigenous Communities and the Relationship with the Preservation of Natural Resources in the Landscape of the Okapi Wildlife Reserve in a Context of Socio-cultural Upheaval, Democratic Republic of Congo
Authors: Chales Mumbere Musavandalo, Lucie B. Mugherwa, Gloire Kayitoghera Mulondi, Naanson Bweya, Muyisa Musongora, Francis Lelo Nzuzi
Abstract:
The landscape of the Okapi Wildlife Reserve in the Democratic Republic of Congo harbors a large community of Mbuti indigenous peoples, often described as the guardians of nature. Living in and off the forest has long been a sustainable strategy for preserving natural resources. This strategy, seen as a form of eco-responsible citizenship, draws upon ethnobotanical knowledge passed down through generations. However, these indigenous communities are facing socio-cultural upheaval, which impacts their traditional way of life. This study aims to assess the relationship between the Mbuti indigenous people's way of life and the preservation of the Okapi Wildlife Reserve. The study was conducted under the assumption that, despite socio-cultural upheavals, the forest and its resources remain central to the Mbuti way of life. The study was conducted in six encampments, three of which were located inside the forest and two in the anthropized zone. The methodological approach initially involved group interviews in the six Mbuti encampments. The objective of these interviews was to determine how these people perceive the various services provided by the forest and the resources obtained from this habitat. The technique of using pebbles was adopted to adapt the exercise of weighting services and resources to the understanding of these people. Subsequently, the study carried out ethnobotanical surveys to identify the wood resources frequently used by these communities. Third, this survey was complemented by a transect inventory, 1,000 m in length and 25 m in width, in order to enhance the understanding of the abundance of these resources around the camps. Two transects were installed in each camp to carry out this inventory. Traditionally, the Mbuti communities sustain their livelihood through hunting, fishing, gathering for self-consumption, and basketry. The Manniophyton fulvum-based net remains the main hunting tool. The primary forest and the swamp are the two habitats from which these peoples derive the majority of their resources. However, with the arrival of the Bantu people, who introduced agriculture based on cocoa production, the Mbuti communities started providing services to the Bantu in the form of labor and field guarding. This cultural symbiosis between Mbuti and Bantu has also led to non-traditional practices, such as the use of hunting rifles instead of nets and fishing nets instead of creels. The socio-economic and ecological environment in which the Mbuti communities live is changing rapidly, including the resources they depend on. When the time factor is incorporated into their perception of ecosystem services, only their future (p-value = 0.0121), the provision of wood for energy (p-value = 0.1976), and construction (p-value = 0.2548) would be closely associated with the forest. For other services, such as food supply, medicine, and hunting, adaptation to Bantu customs is conceivable. Additionally, the abundance of wood used by the Mbuti people was high around encampments located in intact forests and low around those in anthropized areas. The traditional way of life of the Mbuti communities is influenced by this cultural symbiosis, reflected in their habits and the availability of resources. The land tenure security of Mbuti areas is crucial to preserving their tradition and forest biodiversity. Conservation efforts in the Okapi Wildlife Reserve must consider this cultural dynamism and promote positive values for the flagship species. The oversight of subsistence hunting is imperative to curtail the transition of these communities to poaching.
Keywords: traditional life, conservation, Indigenous people, cultural symbiosis, forest
Procedia PDF Downloads 59
318 The Distribution of Prevalent Supplemental Nutrition Assistance Program-Authorized Food Store Formats Differ by U.S. Region and Rurality: Implications for Food Access and Obesity Linkages
Authors: Bailey Houghtaling, Elena Serrano, Vivica Kraak, Samantha Harden, George Davis, Sarah Misyak
Abstract:
United States (U.S.) Department of Agriculture Supplemental Nutrition Assistance Program (SNAP) participants are low-income Americans receiving federal dollars for supplemental food and beverage purchases. Participants use a variety of (traditional/non-traditional) SNAP-authorized stores for household dietary purchases; these stores also represent food access points for all Americans. Importantly, consumers' food and beverage purchases from non-traditional store formats tend to be higher in saturated fats, added sugars, and sodium when compared to purchases from traditional (e.g., grocery/supermarket) formats. Overconsumption of energy-dense and low-nutrient food and beverage products contributes to high obesity rates and adverse health outcomes that differ in severity among urban/rural U.S. locations and high/low-income populations. Little is known about the SNAP-authorized food store format landscape nationally, regionally, or by urban-rural status, as traditional formats are currently used as the gold standard in food access research. This research utilized publicly available U.S. databases to fill this large literature gap and to provide insight into modes of food access for vulnerable U.S. populations: (1) the SNAP Retailer Locator, which provides a list of all authorized food stores in the U.S., and (2) Rural-Urban Continuum Codes (RUCC), which categorize U.S. counties as urban (RUCC 1-3) or rural (RUCC 4-9). Frequencies were determined for the highest-occurring food store formats nationally and within two regionally diverse U.S. states: Virginia in the east and California in the west. Store format codes were assigned (e.g., grocery, drug, convenience, mass merchandiser, supercenter, dollar, club, or other). RUCC was applied to investigate state-level urbanity-rurality differences regarding prevalent food store formats, and a chi-square test of independence was used to determine whether food store format distributions significantly (p < 0.05) differed by region or rurality. The resulting research sample that represented highly prevalent SNAP-authorized food stores nationally included 41.25% of all SNAP stores in the U.S. (N=257,839), composed primarily of convenience formats (31.94%), followed by dollar (25.58%), drug (19.24%), traditional (10.87%), supercenter (6.85%), mass merchandiser (1.62%), non-food store or restaurant (1.81%), and club formats (1.09%). Results also indicated that the distribution of prevalent SNAP-authorized formats significantly differed by state. California had a lower proportion of traditional (9.96%) and a higher proportion of drug (28.92%) formats than Virginia (11.55% and 19.97%, respectively; p < 0.001). Virginia also had a higher proportion of dollar formats (26.11%) when compared to California (10.64%) (p < 0.001). Significant differences were also observed for rurality variables (p < 0.001). Prominently, rural Virginia had a significantly higher proportion of dollar formats (41.71%) when compared to urban Virginia (21.78%) and rural California (21.21%). Non-traditional SNAP-authorized formats are highly prevalent and significantly differ in distribution by U.S. region and rurality. The largest proportional difference was observed for dollar formats, where the least nutritious consumer purchases are documented in the literature. Researchers/practitioners should investigate non-traditional food stores at the local level using these research findings and similar applied methodologies to determine how access to various store formats impacts obesity prevalence. For example, dollar stores may be prime targets for interventions to enhance nutritious consumer purchases in rural Virginia, while targeting drug formats in California may be more appropriate.
Keywords: food access, food store format, nutrition interventions, SNAP consumers
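As an illustration of the chi-square test of independence used above, the sketch below runs the same kind of test in Python; the count matrix is a hypothetical stand-in, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical state-by-format store counts (illustrative only).
counts = np.array([
    [996, 2892, 1064],   # California: traditional, drug, dollar
    [1155, 1997, 2611],  # Virginia: traditional, drug, dollar
])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, p={p:.3g}")  # format distributions differ by state if p < 0.05
```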
Procedia PDF Downloads 141
317 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations
Authors: Teng Li, Kamran Mohseni
Abstract:
This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, based on the notion of observability: any feature smaller than the actual resolution (physical or numerical), e.g., the size of the wire in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied to the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence, or sharp interfaces. Over the past several years, the properties of this new regularization technique have been investigated, showing its capability of simultaneously regularizing shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a similar feature with shocks and turbulence, namely the nonlinear irregularity caused by the nonlinear terms in the governing equations, the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of properties from one fluid phase to the other. However, in high-Reynolds-number or low-viscosity flows, the nonlinear terms will generate smaller scales which will sharpen the interface, causing discontinuities. Many numerical methods for two-phase flows fail in the high-Reynolds-number case, while some others depend on the numerical diffusion from the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, which is usually about a grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which does not introduce numerical diffusion, is used for the spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for the time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are particularly examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow
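As a sketch of the idea (the exact filter used by the authors is not given in the abstract), the observable regularization replaces the convective velocity with a low-pass-filtered counterpart at the observable scale \(\alpha\); with a Helmholtz-type filter, the observable incompressible Euler equations can be written as

\[
\frac{\partial \mathbf{u}}{\partial t} + \bar{\mathbf{u}} \cdot \nabla \mathbf{u} = -\nabla p, \qquad
\nabla \cdot \bar{\mathbf{u}} = 0, \qquad
\bar{\mathbf{u}} = \left(1 - \alpha^{2} \nabla^{2}\right)^{-1} \mathbf{u},
\]

where \(\alpha\) is on the order of a grid length, so only features larger than \(\alpha\) are "observable" and advected; the filtering acts only on the convective term and introduces no viscosity.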
Procedia PDF Downloads 502
316 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem
Authors: Nan Xu
Abstract:
In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training, and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the objective of rostering consists of two major components. The first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set-partitioning problem in which exactly one roster is picked for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The current subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with a negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member. A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in the graph. A labeling algorithm is used to solve it. Since the penalization is quadratic, a method to deal with the non-additive shortest path problem using a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating Lines-of-Work for crew members. Here, a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm we propose in this paper has been put into production at a major airline in China, and numerical experiments show that it has a good performance.
Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC
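The quadratic fairness term described above can be sketched in a few lines of Python; the weights and expected averages are illustrative assumptions, not the airline's parameters.

```python
def fairness_penalty(overnights, fly_hours, avg_overnights, avg_fly_hours,
                     w_night=1.0, w_fly=1.0):
    """Quadratic penalty: deviations from the expected averages are squared,
    so several small deviations cost less than one large deviation."""
    return (w_night * (overnights - avg_overnights) ** 2
            + w_fly * (fly_hours - avg_fly_hours) ** 2)

# Two rosters each one overnight off the average of 10 (hypothetical numbers):
print(fairness_penalty(11, 80, 10, 80) + fairness_penalty(9, 80, 10, 80))  # 2.0
# One roster two overnights off the average costs more:
print(fairness_penalty(12, 80, 10, 80))                                    # 4.0
```

This non-additive cost is exactly why the subproblem cannot use a plain shortest-path label; the paper's contribution is a domination condition that stays valid under the quadratic term.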
Procedia PDF Downloads 146
315 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management for the war field, situational awareness, effective planning, monitoring, and other tasks. For example, a 3D visualization of war field data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models to enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth values based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images. It introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEM) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR and the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks. One network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on unannotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method provides fast and accurate decision-making with GIS for the localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune their strategies and distribute resources proficiently.
Keywords: depth, deep learning, geovisualisation, satellite images
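A minimal PyTorch sketch of the two-encoder fusion idea is given below; the layer sizes, channel counts, and overall depth of the network are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FusionDepthNet(nn.Module):
    """Sketch of the imagery-DEM fusion strategy: two encoders whose feature
    maps are concatenated before a shared decoder produces dense depth.
    All sizes are illustrative assumptions."""
    def __init__(self, img_ch=4, dem_ch=1, feat=64):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Conv2d(img_ch, feat, 3, padding=1), nn.ReLU())
        self.dem_enc = nn.Sequential(nn.Conv2d(dem_ch, feat, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 1, 1))  # one depth value per pixel

    def forward(self, imagery, dem):
        fused = torch.cat([self.img_enc(imagery), self.dem_enc(dem)], dim=1)
        return self.decoder(fused)

# Usage on dummy tensors (4 radar/optical bands + 1 DEM band, 64x64 tiles):
depth = FusionDepthNet()(torch.randn(1, 4, 64, 64), torch.randn(1, 1, 64, 64))
print(depth.shape)  # torch.Size([1, 1, 64, 64])
```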
Procedia PDF Downloads 10
314 Methodology for the Determination of Triterpenic Compounds in Apple Extracts
Authors: Mindaugas Liaudanskas, Darius Kviklys, Kristina Zymonė, Raimondas Raudonis, Jonas Viškelis, Norbertas Uselis, Pranas Viškelis, Valdimaras Janulis
Abstract:
Apples are among the most commonly consumed fruits in the world. Based on data from the year 2014, approximately 84.63 million tons of apples are grown per annum. Apples are widely used in the food industry to produce various products and drinks (juice, wine, and cider); they are also consumed unprocessed. Apples in the human diet are an important source of different groups of biologically active compounds that can positively contribute to the prevention of various diseases. They are a source of various biologically active substances, especially vitamins, organic acids, micro- and macro-elements, pectins, and phenolic, triterpenic, and other compounds. Triterpenic compounds, which are characterized by versatile biological activity, are the biologically active compounds found in apples that are among the most promising and most significant for human health. A specific analytical procedure, including sample preparation and High Performance Liquid Chromatography (HPLC) analysis, was developed, optimized, and validated for the detection of triterpenic compounds in samples of whole apples, their peels, and flesh from the widespread apple cultivars 'Aldas', 'Auksis', 'Connel Red', 'Ligol', 'Lodel', and 'Rajka' grown in Lithuanian climatic conditions. The conditions for triterpenic compound extraction were optimized: the solvent of the extraction was 100% (v/v) acetone, and the extraction was performed in an ultrasound bath for 10 min. Isocratic elution (with an eluent ratio of 88% solvent A to 12% solvent B) was performed for rapid separation of the triterpenic compounds. The validation of the methodology was performed on the basis of the ICH recommendations. The following validation characteristics were evaluated: the selectivity of the method (specificity), precision, the detection and quantitation limits of the analytes, and linearity. The obtained parameter values confirm the suitability of the methodology for the analysis of triterpenic compounds. Using the optimized and validated HPLC technique, four triterpenic compounds were separated and identified, and their specificity was confirmed. These compounds were corosolic acid, betulinic acid, oleanolic acid, and ursolic acid. Ursolic acid was the dominant compound in all the tested apple samples. The detected amount of betulinic acid was the lowest of all the identified triterpenic compounds. The greatest amounts of triterpenic compounds were detected in the whole apple and apple peel samples of the 'Lodel' cultivar, and thus apples and apple extracts of this cultivar are potentially valuable for use in medical practice, for the prevention of various diseases, for adjunct therapy, for the isolation of individual compounds with a specific biological effect, and for the development and production of dietary supplements and functional food enriched in biologically active compounds. Acknowledgements: This work was supported by a grant from the Research Council of Lithuania, project No. MIP-17-8.
Keywords: apples, HPLC, triterpenic compounds, validation
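For reference, the ICH recommendations cited above commonly estimate the detection and quantitation limits from calibration data as

\[
\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S},
\]

where \(\sigma\) is the standard deviation of the response and \(S\) is the slope of the calibration curve; the abstract does not state which of the ICH's alternative estimation approaches was actually used, so this is the conventional formulation rather than the authors' specific procedure.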
Procedia PDF Downloads 173
313 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Also, because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for use by researchers to provide the best method for detecting normal signals from abnormal ones. The data are from both genders, and the recording time varies from several seconds to several minutes. All data are also labeled normal or abnormal. Due to the limited accuracy and duration of the ECG recordings and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart and to differentiate different types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancelation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage of this paper, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from abnormal ones. To evaluate the efficiency of the proposed classifiers in this paper, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUCs of the MLP neural network and the SVM were 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal and patient signals yielded better performance. Today, research is aimed at quantitatively analyzing the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the extent of these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that some information in this signal is hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
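The abstract does not enumerate its exact nonlinear features, but the return map it mentions is commonly summarized by the Poincaré descriptors SD1/SD2; the sketch below computes these, assuming R-R intervals have already been extracted with the Pan-Tompkins algorithm.

```python
import numpy as np

def poincare_features(rr):
    """SD1/SD2 of the return (Poincare) map of R-R intervals (in seconds)."""
    x, y = rr[:-1], rr[1:]              # each interval plotted against the next
    sd1 = np.std((y - x) / np.sqrt(2))  # spread across the identity line (short-term variability)
    sd2 = np.std((y + x) / np.sqrt(2))  # spread along the identity line (long-term variability)
    return sd1, sd2, sd1 / sd2          # the ratio is a common nonlinear HRV feature

rr = np.array([0.80, 0.82, 0.78, 0.85, 0.79, 0.81])  # toy R-R series, not study data
print(poincare_features(rr))
```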
Procedia PDF Downloads 262
312 The Seller's Sense: Buying-Selling Perspective Affects the Sensitivity to Expected-Value Differences
Authors: Taher Abofol, Eldad Yechiam, Thorsten Pachur
Abstract:
In four studies, we examined whether sellers and buyers differ not only in subjective price levels for objects (i.e., the endowment effect) but also in their relative accuracy given objects varying in expected value. If, as has been proposed, sellers stand to accrue a more substantial loss than buyers do, then their pricing decisions should be more sensitive to expected-value differences between objects. This is implied by loss aversion, due to the steeper slope of prospect theory's value function for losses than for gains, as well as by the loss attention account, which posits that losses increase the attention invested in a task. Both accounts suggest that losses increase sensitivity to the relative values of different objects, which should result in better alignment of sellers' pricing decisions with the objective value of the objects. Under loss attention, this characteristic should only emerge under certain boundary conditions. In Study 1, a published dataset was reanalyzed in which 152 participants indicated buying or selling prices for monetary lotteries with different expected values. Relative EV sensitivity was calculated for each participant as the Spearman rank correlation between their pricing decisions for the lotteries and the lotteries' expected values. An ANOVA revealed a main effect of perspective (sellers versus buyers), F(1,150) = 85.3, p < .0001, with greater EV sensitivity for sellers. Study 2 examined the prediction (implied by loss attention) that the positive effect of losses on performance emerges particularly under conditions of time constraints. A published dataset was reanalyzed in which 84 participants were asked to provide selling and buying prices for monetary lotteries under three deliberation-time conditions (5, 10, 15 seconds). As in Study 1, an ANOVA revealed greater EV sensitivity for sellers than for buyers, F(1,82) = 9.34, p = .003. Importantly, there was also an interaction of perspective by deliberation time. Post-hoc tests revealed main effects of perspective in the 5s and 10s deliberation-time conditions, but not in the 15s condition. Thus, sellers' EV-sensitivity advantage disappeared with extended deliberation. Study 3 replicated the design of Study 1 but administered the task three times to test whether the effect decays with repeated presentation. The results showed that the difference between buyers' and sellers' EV sensitivity was replicated across repeated task presentations. Study 4 examined the loss-attention prediction that EV-sensitivity differences can be eliminated by manipulations that reduce the differential attention investment of sellers and buyers. This was carried out by randomly mixing selling and buying trials for each participant. The results revealed no differences in EV sensitivity between selling and buying trials. The pattern of results is consistent with an attentional resource-based account of the differences between sellers and buyers. Thus, asking people to price an object from a seller's perspective rather than the buyer's improves the relative accuracy of pricing decisions; subtle changes in the framing of one's perspective in a trading negotiation may improve price accuracy.
Keywords: decision making, endowment effect, pricing, loss aversion, loss attention
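The relative EV-sensitivity measure described above is straightforward to compute; here is a minimal sketch with hypothetical prices (the lottery values are illustrative, not the reanalyzed datasets).

```python
import numpy as np
from scipy.stats import spearmanr

def ev_sensitivity(prices, expected_values):
    """Relative EV sensitivity: the Spearman rank correlation between one
    participant's pricing decisions and the lotteries' expected values."""
    rho, _ = spearmanr(prices, expected_values)
    return rho

# Hypothetical illustration: one seller's prices for five lotteries.
evs = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
prices = np.array([3.0, 3.5, 6.5, 7.0, 11.0])
print(ev_sensitivity(prices, evs))  # 1.0 -> pricing perfectly rank-aligned with EV
```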
Procedia PDF Downloads 345
311 Developing a Product Circularity Index with an Emphasis on Longevity, Repairability, and Material Efficiency
Authors: Lina Psarra, Manogj Sundaresan, Purjeet Sutar
Abstract:
In response to the global imperative for sustainable solutions, this article proposes the development of a comprehensive circularity index applicable to a wide range of products across various industries. The absence of a consensus on a universal metric for assessing circularity performance presents a significant challenge in prioritizing and effectively managing sustainable initiatives. This circularity index serves as a quantitative measure to evaluate the adherence of products, processes, and systems to the principles of a circular economy. Unlike traditional distinct metrics such as recycling rates or material efficiency, this index considers the entire lifecycle of a product in one single metric, also incorporating additional factors such as reusability, scarcity of materials, repairability, and recyclability. Through a systematic approach and a review of existing metrics and past methodologies, this work aims to address this gap by formulating a circularity index that can be applied to diverse product portfolios and assist in comparing the circularity of products on a scale of 0%-100%. Project objectives include developing a formula, designing and implementing a pilot tool based on the developed Product Circularity Index (PCI), evaluating the effectiveness of the formula and tool using real product data, and assessing the feasibility of integration into various sustainability initiatives. The research methodology involves an iterative process of comprehensive research, analysis, and refinement, where key steps include defining circularity parameters, collecting relevant product data, applying the developed formula, and testing the tool in a pilot phase to gather insights and make necessary adjustments. The major findings of the study indicate that the PCI provides a robust framework for evaluating product circularity across various dimensions. The Excel-based pilot tool demonstrated high accuracy and reliability in measuring circularity, and the database proved instrumental in supporting comprehensive assessments. The PCI facilitated the identification of key areas for improvement, enabling more informed decision-making towards circularity and benchmarking across different products, essentially assisting better resource management. In conclusion, the development of the Product Circularity Index represents a significant advancement in global sustainability efforts. By providing a standardized metric, the PCI empowers companies and stakeholders to systematically assess product circularity, track progress, identify improvement areas, and make informed decisions about resource management. This project contributes to the broader discourse on sustainable development by offering a practical approach to enhancing circularity within industrial systems, thus paving the way towards a more resilient and sustainable future.
Keywords: circular economy, circular metrics, circularity assessment, circularity tool, sustainable product design, product circularity index
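The abstract does not publish the PCI formula itself; the sketch below only illustrates the general shape of such an index, a weighted aggregate of normalized sub-scores mapped to a 0%-100% scale, with entirely hypothetical sub-scores and weights.

```python
# Hypothetical sketch of a product circularity index (PCI). The paper's
# actual formula is not given in the abstract; these dimensions and weights
# are illustrative assumptions only.
WEIGHTS = {"longevity": 0.25, "repairability": 0.20, "material_efficiency": 0.20,
           "recyclability": 0.15, "reusability": 0.10, "material_scarcity": 0.10}

def pci(scores: dict) -> float:
    """scores: sub-scores normalized to [0, 1]; returns a PCI on a 0-100% scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return 100 * sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

print(pci({"longevity": 0.8, "repairability": 0.6, "material_efficiency": 0.7,
           "recyclability": 0.9, "reusability": 0.5, "material_scarcity": 0.4}))  # 68.5
```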
Procedia PDF Downloads 28
310 Use of Extended Conversation to Boost Vocabulary Knowledge and Soft Skills in English for Employment Classes
Authors: James G. Matthew, Seonmin Huh, Frank X. Bennett
Abstract:
English for Specific Purposes (ESP) aims to equip learners with the necessary English language skills. Many ESP programs address language skills for job performance, including reading job-related documents and oral proficiency. Within ESP is English for Occupational Purposes (EOP), which centers on developing communicative competence for the globalized workplace. Many ESP and EOP courses lack the content needed to help students progress at work, resulting in the need to create lexical compilations for different professions. It is important to teach communicative competence and soft skills for real job-related problem situations and to address the complexities of the real world to help students be successful in their professions. ESP and EOP research is therefore trying to balance profession-specific educational content with international multi-disciplinary language skills for the globalized workforce. The current study builds upon the existing discussion by developing pedagogy to assist students in their careers through developing a strong practical command of relevant English vocabulary. Our research question focuses on the pedagogy two professors incorporated in their English for employment courses. The current study is a qualitative case study of the modes of teaching delivery for EOP in South Korea. Two foreign professors teaching at two different universities in South Korea volunteered for the study to explore their teaching practices. Both professors' curriculums included the components of employment-related concept vocabulary, business presentations, CV/resume and cover letter preparation, and job interview preparation. All the pre-made recorded video lectures, live online class sessions with students, teachers' lesson plans, teachers' class materials, students' assignments, and midterm and final video conferences were collected for data analysis. The study then focused on unpacking representative patterns in the professors' teaching methods. The professors used their strengths as native speakers to extend class discussion from narrow and restricted conversations to broader opportunities for students to practice authentic English conversation. The teaching methods utilized three main steps to extend the conversation. First, students were taught concept vocabulary. Second, the vocabulary was combined in speaking activities where students had to solve scenarios and were required to expand on the given forms of words and language expressions. Last, the students had conversations in English using the language learned. The conversations observed in both classes were those of authentic, expanded English communication, and this way of expanding concept vocabulary lessons into extended conversation is one representative pedagogical approach that both professors took. Extended English conversation, therefore, is crucial for EOP education.
Keywords: concept vocabulary, english as a foreign language, english for employment, extended conversation
Procedia PDF Downloads 92
309 Determination of the Phytochemicals Composition and Pharmacokinetics of whole Coffee Fruit Caffeine Extract by Liquid Chromatography-Tandem Mass Spectrometry
Authors: Boris Nemzer, Nebiyu Abshiru, Z. B. Pietrzkowski
Abstract:
The coffee cherry is one of the most ubiquitous agricultural commodities and possesses nutritional and health-beneficial properties. Of the two most widely used coffee species, Coffea arabica (Arabica) and Coffea canephora (Robusta), Coffea arabica remains superior due to its sensory properties and, therefore, remains in great demand in the global coffee market. In this study, the phytochemical contents and pharmacokinetics of Coffeeberry® Energy (CBE), a commercially available Arabica whole coffee fruit caffeine extract, are investigated. For phytochemical screening, 20 mg of CBE was dissolved in an aqueous methanol solution for analysis by mass spectrometry (MS). Quantification of the caffeine and chlorogenic acid (CGA) contents of CBE was performed using HPLC. For the bioavailability study, serum samples were collected from human subjects before and at 1, 2, and 3 h post-ingestion of 150 mg of CBE extract. Protein precipitation and extraction were carried out using methanol. Identification of compounds was performed using an untargeted metabolomic approach on a Q-Exactive Orbitrap MS coupled to reversed-phase chromatography. Data processing was performed using Thermo Scientific Compound Discoverer 3.3 software. Phytochemical screening identified a total of 170 compounds, including organic acids, phenolic acids, CGAs, diterpenoids, and hydroxytryptamine. Caffeine and CGAs make up more than, respectively, 70% and 9% of the total CBE composition. For the serum samples, a total of 82 metabolites representing 32 caffeine- and 50 phenolic-derived metabolites were identified. Volcano plot analysis revealed 32 differential metabolites (24 caffeine- and 8 phenolic-derived) that showed an increase in serum level post-CBE dosing. Caffeine, uric acid, and trimethyluric acid isomers exhibited 4- to 10-fold increases in serum abundance post-dosing. 7-Methyluric acid, 1,7-dimethyluric acid, paraxanthine, and theophylline exhibited a minimum 1.5-fold increase in serum level. Among the phenolic-derived metabolites, iso-feruloyl quinic acid isomers (3-, 4-, and 5-iFQA) showed the highest increase in serum level. These compounds were essentially absent in serum collected before dosing. More interestingly, the iFQA isomers were not originally present in the CBE extract, as our phytochemical screen did not identify these compounds. This suggests the potential formation of the isomers during the digestion and absorption processes. Pharmacokinetic parameters (Cmax, Tmax, and AUC0-3h) of caffeine- and phenolic-derived metabolites were also investigated. Caffeine was rapidly absorbed, reaching a maximum concentration (Cmax) of 10.95 µg/ml in just 1 hour. Thereafter, the caffeine level steadily dropped from the peak, although it did not return to baseline within the 3-hour dosing period. The disappearance of caffeine from circulation was mirrored by the rise in the concentrations of its methylxanthine metabolites. Similarly, the serum concentrations of the iFQA isomers steadily increased, reaching their maximum (Cmax: 3-iFQA, 1.54 ng/ml; 4-iFQA, 2.47 ng/ml; 5-iFQA, 2.91 ng/ml) at a Tmax of 1.5 hours. The isomers remained well above baseline during the 3-hour dosing period, allowing them to remain in circulation long enough for absorption into the body. Overall, the current study provides evidence of the potential health benefits of a uniquely formulated whole coffee fruit product. Consumption of this product resulted in a distinct serum profile of bioactive compounds, as demonstrated by the more than 32 metabolites that exhibited a significant change in systemic exposure.
Keywords: phytochemicals, mass spectrometry, pharmacokinetics, differential metabolites, chlorogenic acids
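The pharmacokinetic parameters named above can be computed with the standard (linear) trapezoidal rule; in the sketch below only the caffeine Cmax of 10.95 µg/ml at 1 h is taken from the abstract, and the remaining concentrations are hypothetical fill-ins.

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0])    # hours post-ingestion (sampling times from the study)
c = np.array([0.0, 10.95, 8.0, 5.5])  # serum caffeine, ug/ml; only the 1-h peak is from the abstract

cmax = c.max()               # maximum observed concentration: 10.95 ug/ml
tmax = t[c.argmax()]         # time of the maximum: 1.0 h
auc_0_3h = np.trapz(c, t)    # AUC(0-3h) by the linear trapezoidal rule, ug*h/ml
print(cmax, tmax, auc_0_3h)
```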
Procedia PDF Downloads 68
308 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and significantly reduces radio map construction costs compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
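As an illustration of the t-SNE feature-extraction step described above, the sketch below projects a hybrid fingerprint matrix to a low-dimensional feature space; the received-signal-strength values are random stand-ins, not the paper's survey data, and the dimensions are assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in fingerprint database: 500 reference points x 40 WLAN/LTE signal
# strengths in dBm (randomly generated for illustration only).
rng = np.random.default_rng(0)
rss = rng.normal(-70, 10, size=(500, 40))

# t-SNE projects each fingerprint to a low-dimensional feature vector,
# retaining dominant structure while suppressing noisy dimensions.
tsne = TSNE(n_components=2, perplexity=30, random_state=0)
features = tsne.fit_transform(rss)
print(features.shape)  # (500, 2) -> compact fingerprint features per reference point
```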
Procedia PDF Downloads 42
307 Improving Online Learning Engagement through a Kid-Teach-Kid Approach for High School Students during the Pandemic
Authors: Alexander Huang
Abstract:
Online learning sessions have become an indispensable complement to in-classroom learning sessions in the past two years due to the emergence of Covid-19. Due to social-distancing requirements, many courses and interaction-intensive sessions, ranging from music classes to debate camps, have moved online. However, online learning poses a significant challenge for engaging students effectively during learning sessions. To resolve this problem, Project PWR, a non-profit organization formed by high school students, developed an online kid-teach-kid learning environment to boost students' interest in learning and further improve their engagement during online learning. Fundamentally, the kid-teach-kid learning model creates an affinity space that forms learning groups, where like-minded peers can learn and teach their interests. The role of the teacher can also help a kid identify the instructional task and set the rules and procedures for the activities. The approach also structures initial discussions to reveal a range of ideas, similar experiences, thinking processes, and language use, and provides a lower student-to-teacher ratio, all of which enrich the online learning experiences of upcoming lessons. In this manner, a kid can practice both the teacher role and the student role and accumulate experience in how to convey ideas and questions over online sessions more efficiently and effectively. In this research work, we conducted two case studies involving a 3D-Design course and a Speech and Debate course taught by high-school kids. Through Project PWR, a kid first needs to design a course syllabus based on a provided template to become a student-teacher. Then, the Project PWR academic committee evaluates the syllabus and offers comments and suggestions for changes. Upon approval of a syllabus, an experienced volunteer adult mentor is assigned to interview the student-teacher and monitor the progress of the lectures. Student-teachers construct a comprehensive final evaluation for their students, which they grade at the end of the course. Moreover, each course requires midterm and final evaluations through a set of surveyed replies provided by students to assess the student-teacher's performance. The uniqueness of Project PWR lies in its established kid-teach-kid affinity space. Our research results showed that Project PWR can create a closed-loop system where a student can help a teacher improve and vice versa, thus improving overall student engagement. As a result, Project PWR's approach can train teachers and students to become better online learners and give them a solid understanding of what to prepare for and what to expect from future online classes. The kid-teach-kid learning model can significantly improve students' engagement in online courses through Project PWR, effectively supplementing the traditional teacher-centric model that the Covid-19 pandemic has impacted substantially. Project PWR enables kids to share their interests and bond with one another, making the online learning environment effective and promoting positive and effective personal online one-on-one interactions.
Keywords: kid-teach-kid, affinity space, online learning, engagement, student-teacher
Procedia PDF Downloads 142
306 A Sustainable Training and Feedback Model for Developing the Teaching Capabilities of Sessional Academic Staff
Authors: Nirmani Wijenayake, Louise Lutze-Mann, Lucy Jo, John Wilson, Vivian Yeung, Dean Lovett, Kim Snepvangers
Abstract:
Sessional academic staff at universities have the most influence and impact on student learning, engagement, and experience, as they have the most direct contact with undergraduate students. A blended, technology-enhanced program was created for the development and support of sessional staff to ensure adequate training is provided to deliver quality educational outcomes for students. This program combines innovative mixed-media educational modules, a peer-driven support forum, and face-to-face workshops to provide a comprehensive training and support package for staff. Additionally, the program encourages the development of learning communities and peer mentoring among sessional staff to strengthen their support system. In 2018, the program was piloted with 100 sessional staff in the School of Biotechnology and Biomolecular Sciences to evaluate the effectiveness of this model. As part of the program, rotoscope animations were developed to showcase ‘typical’ interactions between staff and students. These were designed around communication, confidence building, consistency in grading, feedback, diversity awareness, and mental health and wellbeing. When surveyed, 86% of sessional staff found these animations helpful in their teaching. An online platform (Moodle) was set up to disseminate educational resources and teaching tips, to host a discussion forum for peer-to-peer communication, and to build critical thinking and problem-solving skills through scenario-based lessons. The learning analytics from these lessons were essential in identifying difficulties faced by sessional staff and in developing supporting workshops to improve teaching-related outcomes. The face-to-face professional development workshops were run by expert guest speakers on topics such as cultural diversity, stress and anxiety, LGBTIQ awareness, and student engagement. All workshop attendees found them useful, and 88% said the workshops increased interaction with their peers and built a sense of community. The final component of the program used an adaptive e-learning platform to gather feedback from students on sessional staff teaching twice during the semester; the initial feedback gives sessional staff enough time to reflect on their teaching and adjust their performance if necessary, to improve the student experience. Feedback from students and sessional staff on this model has been extremely positive. The training equips sessional staff with knowledge and insights that can provide students with an exceptional learning environment. This program is designed in a flexible and scalable manner so that other faculties or institutions can adapt components for their own training. It is anticipated that the training and support will help build the next generation of educators, who will directly impact the educational experience of students.Keywords: designing effective instruction, enhancing student learning, implementing effective strategies, professional development
Procedia PDF Downloads 128
305 A Nonlinear Feature Selection Method for Hyperspectral Image Classification
Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo
Abstract:
For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon caused by the difficulty of collecting training samples. Hence, many feature selection methods, such as F-score and HSIC (Hilbert-Schmidt Independence Criterion), have been developed to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature, and it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths so that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates that the corresponding feature space is more suitable for classification and that the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also reveal the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as band weights, and the smaller the bandwidth, the larger the weight of the band and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset, and all non-background samples were used to form the testing dataset. A support vector machine was applied to classify the testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively; however, the proposed method selects only 158 features, whereas F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracies increase dramatically using only the first few features: the accuracies for feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84164) is close to the highest classification accuracy, 0.8795. Similar results were obtained for the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method first to determine a suitable feature subset for a specific purpose; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine
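A minimal sketch of the band-weighting idea follows: one RBF bandwidth per band, band importance as the reciprocal bandwidth, and an SVM on the top-ranked subset. The fixed bandwidth values stand in for the genetic-algorithm search described above, and the synthetic data replace the real hyperspectral cubes.

```python
# Minimal sketch: per-band RBF bandwidths as importance scores, then an
# SVM on the top-ranked bands. Synthetic data; the fixed bandwidths
# below stand in for the genetic-algorithm search.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))                  # 300 pixels x 20 bands
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)   # labels driven by bands 3, 7

def generalized_rbf(A, B, s):
    # k(a, b) = exp(-0.5 * sum_d (a_d - b_d)^2 / s_d^2), one s_d per band
    diff = A[:, None, :] - B[None, :, :]
    return np.exp(-0.5 * np.sum(diff**2 / s**2, axis=-1))

bandwidths = np.full(20, 10.0)        # pretend the GA returned large s_d ...
bandwidths[[3, 7]] = 1.0              # ... except for the informative bands

weights = 1.0 / bandwidths            # larger weight = more important band
order = np.argsort(weights)[::-1]     # descending band importance
subset = order[:5]                    # keep the top-ranked bands

svm = SVC(kernel=lambda A, B: generalized_rbf(A, B, bandwidths[subset]))
svm.fit(X[:, subset], y)
print("selected bands:", subset, "training accuracy:", svm.score(X[:, subset], y))
```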
Procedia PDF Downloads 265
304 Teaching Turn-Taking Rules and Pragmatic Principles to Empower EFL Students and Enhance Their Learning in Speaking Modules
Authors: O. F. Elkommos
Abstract:
Teaching and learning EFL speaking modules is one of the most challenging tasks for both instructors and learners. In a student-centered, interactive, communicative language teaching approach, learners and instructors should be aware that the target language must be taught as, and for, communication. Students must be empowered with tools that work on more than one level of their communicative competence, and communicative learning needs a teaching and learning methodology that addresses this goal. Teaching turn-taking rules, pragmatic principles, and speech acts enhances students' sociolinguistic competence, strategic competence, and discourse competence. Sociolinguistic competence entails mastering speech act conventions and illocutionary acts of refusing and agreeing/disagreeing; emotive acts like thanking, apologizing, inviting, and offering; and directives like ordering, requesting, advising, and hinting, among others. Strategic competence includes raising students' awareness of the systemic turn-taking rules for organizing conversation: opening and closing a conversation, adjacency pairs, interrupting, back-channeling, asking for/giving an opinion, agreeing/disagreeing, and using natural fillers for pauses, gaps, speaker selection, self-selection, and silence, among others. With these tools, students can manage a conversation. They are engaged in opportunities to experience natural language not as mere extra student talking time but as empowerment through knowing and using the strategies. They will have the component items they need as well as the opportunity to communicate in the target language on topics of their own interest and choice, which enhances their communicative abilities. Available websites and textbooks now use one or more of these turn-taking or pragmatics tools; these support students' self-study during independent learning hours and provide reinforcement practice through interactive e-learning activities. The students' target is to be able to communicate the intended meaning to an addressee who is in turn able to infer that intended meaning. The combination of these tools encourages students to overcome the struggle over what to say, how to say it, and when to say it. Teaching these rules, principles, and techniques is an awareness-raising method that engages students in activities leading to pragmatic discourse competence. The aim of the paper is to show how the suggested pragmatic model empowers students with tools and systems that support their learning. Supporting students with turn-taking rules and speech act theory, applying both to texts and practical analysis, and using them in speaking classes builds students' pragmatic discourse competence and helps them understand language in context. Students become more spontaneous and ready to learn the discourse-pragmatic dimension of speaking techniques and suitable content; they showed better performance and strong motivation to learn. The model is therefore suggested for speaking modules in EFL classes.Keywords: communicative competence, EFL, empowering learners, enhance learning, speech acts, teaching speaking, turn taking, learner centred, pragmatics
Procedia PDF Downloads 176
303 Corporate Social Responsibility and Corporate Reputation: A Bibliometric Analysis
Authors: Songdi Li, Louise Spry, Tony Woodall
Abstract:
Nowadays, Corporate Social Responsibility (CSR) is becoming a buzzword, and more and more academics are devoting effort to CSR studies. It is believed that CSR can influence Corporate Reputation (CR), and many hold the favourable view that CSR leads to a positive CR. Specifically, CSR-related activities in the reputational context have been associated with excellent financial performance, value creation, and so on. It is also argued that CSR and CR are two sides of one coin; hence, to some extent, doing CSR is equal to establishing a good reputation. Still, there is no consensus on the CSR-CR relationship in the literature; thus, a systematic literature review is much needed. This research conducts a systematic literature review with both bibliometric and content analysis. Data were selected from English-language sources and academic journal articles only; keyword combinations were then applied to identify relevant sources. Data from Scopus and Web of Science (WoS) were gathered for bibliometric analysis: Scopus search results were saved in RIS and CSV formats, and WoS data were saved in TXT and CSV formats, so that the data could be processed in the Bibexcel software for further analysis and later visualised with the software VOSviewer. Content analysis was applied to analyse the data clusters and the key articles. On the topic of CSR-CR, this literature review with bibliometric analysis makes four contributions. First, it develops a systematic study that quantitatively depicts the knowledge structure of CSR and CR by identifying terms closely related to CSR-CR (such as ‘corporate governance’) and clustering the subtopics that emerged in the co-citation analysis. Second, content analysis is performed to gain insight into the findings of the bibliometric analysis in the discussion section, and it highlights insightful implications for the future research agenda; for example, a psychological link between CSR and CR is identified in the results, and emerging economies and qualitative research methods emerge as new elements in the CSR-CR big picture. Third, a multidisciplinary perspective runs through the whole bibliometric mapping and the co-word and co-citation analyses; hence, this work builds an interdisciplinary structure that could lead to an integrated conceptual framework in the future. Finally, Scopus and WoS are compared and contrasted, and Scopus, which offers deeper and more comprehensive data, is suggested as a tool for future bibliometric analysis studies. Overall, this paper fulfils its initial purposes and contributes to the literature. To the authors’ best knowledge, this is the first literature review of CSR-CR research to apply both bibliometric analysis and content analysis; the paper therefore achieves methodological originality, and this dual approach brings the advantage of a comprehensive and semantic exploration of the CSR-CR area in a scientific and realistic manner. Admittedly, this work may contain subjective bias in the selection of search terms and papers; triangulation could reduce this bias to some degree.Keywords: corporate social responsibility, corporate reputation, bibliometric analysis, software program
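As a concrete illustration of the co-word step in this workflow, the short Python sketch below counts keyword co-occurrences from a CSV export; the file name and the semicolon-separated "Author Keywords" column are assumptions modeled on a typical Scopus export, not a schema stated in the paper.

```python
# Minimal sketch of a co-word (keyword co-occurrence) count, the kind of
# matrix Bibexcel/VOSviewer builds. The CSV layout is an assumption.
import csv
import itertools
from collections import Counter

cooccurrence = Counter()
with open("scopus_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Normalize and deduplicate the article's keywords.
        kws = sorted({k.strip().lower()
                      for k in row.get("Author Keywords", "").split(";")
                      if k.strip()})
        # Every unordered keyword pair in one article co-occurs once.
        for a, b in itertools.combinations(kws, 2):
            cooccurrence[(a, b)] += 1

# The strongest links form the clusters a mapping tool would visualise.
for pair, n in cooccurrence.most_common(10):
    print(n, pair)
```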
Procedia PDF Downloads 128
302 A Novel Nanocomposite Membrane Designed for the Treatment of Oil/Gas Produced Water
Authors: Zhaoyang Liu, Detao Qin, Darren Delai Sun
Abstract:
The onshore production of oil and gas (for example, shale gas) generates large quantities of wastewater, referred to as ‘produced water’, which contains high contents of oils and salts. The direct discharge of produced water, if not appropriately treated, can be toxic to the environment and human health. Membrane filtration has been deemed an environmentally friendly and cost-effective technology for treating oily wastewater. However, conventional polymeric membranes have the drawbacks of either a low salt rejection rate or a high membrane fouling tendency when treating oily wastewater. In recent years, forward osmosis (FO) membrane filtration has emerged as a promising technology with the unique advantages of low operating pressure and a lower membrane fouling tendency. However, until now there has been no report of FO membranes specially designed and fabricated for treating oily and salty produced water. In this study, a novel nanocomposite FO membrane was developed specifically for treating oil- and salt-polluted produced water. By leveraging recent advances in nanomaterials and nanotechnology, this nanocomposite FO membrane was designed as a double layer: an underwater-oleophobic selective layer on top of a nanomaterial-infused polymeric support layer. Graphene oxide (GO) nanosheets were selected for the polymeric support layer because they can optimize the pore structure of the support layer, potentially leading to high water flux for FO membranes. In addition, polyvinyl alcohol (PVA) hydrogel was selected as the selective layer because hydrated and chemically crosslinked PVA hydrogel is capable of simultaneously rejecting oil and salt. After the nanocomposite FO membranes were fabricated, the membrane structures were systematically characterized by TEM, FESEM, XRD, ATR-FTIR, surface zeta potential, and contact angle (CA) measurements. The membrane performance for treating produced waters was tested by TOC, COD, and ion chromatography measurements, and the working mechanism of the new membrane was analyzed. Very promising experimental results have been obtained. The incorporation of GO nanosheets reduces the internal concentration polarization (ICP) effect in the polymeric support layer: the structural parameter (S value) of the new FO membrane is reduced by 23%, from 265 ± 31 μm to 205 ± 23 μm, and the membrane tortuosity (τ value) is decreased by 20%, from 2.55 ± 0.19 to 2.02 ± 0.13, which contributes to the decrease of the S value. Moreover, the highly hydrophilic and chemically cross-linked hydrogel selective layer exhibits high antifouling performance against saline oil/water emulsions. Compared with a commercial FO membrane, this new FO membrane possesses three times higher water flux, higher removal efficiencies for oil (>99.9%) and salts (>99.7% for multivalent ions), and a significantly lower membrane fouling tendency (<10%). To our knowledge, this is the first report of a nanocomposite FO membrane with the combined merits of high salt rejection, high oil repellency, and high water flux for treating onshore oil/gas produced waters. Due to its outstanding performance and ease of fabrication, this novel nanocomposite FO membrane has great application potential in the wastewater treatment industry.Keywords: nanocomposite, membrane, polymer, graphene oxide
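The structural parameter and tortuosity figures above are linked by the conventional forward-osmosis relation below; this is a textbook relation supplied for context, under the assumption that the authors use the standard definition, not an equation quoted from the paper.

```latex
% Conventional FO structural parameter (assumed definition, not quoted
% from the paper): support thickness t, tortuosity tau, porosity eps.
\[
  S = \frac{t\,\tau}{\varepsilon}
\]
% At comparable thickness and porosity, the reported ~20% drop in tau
% is the main driver of the ~23% drop in S.
```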
Procedia PDF Downloads 249
301 Integrated Care on Chronic Diseases in Asia-Pacific Countries
Authors: Chang Liu, Hanwen Zhang, Vikash Sharma, Don Eliseo Lucerno-Prisno III, Emmanuel Yujuico, Maulik Chokshi, Prashanthi Krishnakumar, Bach Xuan Tran, Giang Thu Vu, Kamilla Anna Pinter, Shenglan Tang
Abstract:
Background and Aims: Globally, many health systems focus on hospital-based healthcare models targeting acute care and disease treatment, which are not effective in addressing the challenges of ageing populations, chronic conditions, multi-morbidities, and increasingly unhealthy lifestyles. Recently, integrated care programs on chronic diseases have been developed, piloted, and implemented to meet such challenges. However, integrated care programs in the Asia-Pacific region vary in their level of integration, from linkage to coordination to full integration. This study aims to identify and analyze existing cases of integrated care in the Asia-Pacific region and to identify their facilitators and barriers, in order to improve existing cases and inform future ones. Methods: This is a comparative study combining desk-based research and key informant interviews. The countries included represent a good mix of lower-middle-income countries (the Philippines, India, Vietnam, and Fiji), an upper-middle-income country (China), and a high-income country (Singapore) in the Asia-Pacific region. Existing integrated care programs were identified through a scoping review. The trigger, history, general design, beneficiaries, and objectors of each program were summarized, along with the barriers to and facilitators of integrated care, based on key informant interviews. Representative cases in each country were selected and comprehensively analyzed through deep-dive case studies. Results: A total of 87 existing integrated care programs on chronic diseases were found across the six countries: 44 in China, 21 in Singapore, 12 in India, 5 in Vietnam, 4 in the Philippines, and 1 in Fiji. Nine representative cases of integrated care were selected for in-depth description and analysis: two each in China, the Philippines, and Vietnam, and one each in Singapore, India, and Fiji. Population ageing and the rising chronic disease burden were identified as key drivers in almost all six countries. Among the six countries, Singapore has the longest history of integrated care, followed by Fiji, the Philippines, and China, while India and Vietnam have shorter histories. Incentives, technologies, education, and performance evaluation will be crucial for developing strategies to implement future programs and improve existing ones. Conclusion: Integrated care is important for addressing the challenges surrounding the delivery of long-term care. To date, there is an increasing trend of integrated care programs on chronic diseases in the Asia-Pacific region, and all six countries in our study have set integrated care as a direction for their health system transformation.Keywords: integrated healthcare, integrated care delivery, chronic diseases, Asia-Pacific region
Procedia PDF Downloads 135
300 Study on Changes of Land Use impacting the Process of Urbanization, by Using Landsat Data in African Regions: A Case Study in Kigali, Rwanda
Authors: Delphine Mukaneza, Lin Qiao, Wang Pengxin, Li Yan, Chen Yingyi
Abstract:
Human activities gradually change or transform land cover. In this study, we used Landsat TM data to detect land use change in Kigali between 1987 and 2009, applying remote sensing techniques and analyzing the data with ENVI and the GIS software ArcGIS. Six categories of land use were distinguished: bare soil, built-up land, wetland, water, vegetation, and others. With remote sensing techniques, we analyzed land use data for 1987, 1999, and 2009, identified the changed areas, and characterized the dynamics of land use in Kigali City over the 22 years studied. Based on the relevant Landsat data, the research focused on land use change and the role of remote sensing in the process of urbanization. The results show a rapid increase in built-up land between 1987 and 1999 and a large decrease in vegetation caused by the rebuilding of the city after the 1994 genocide, while between 1999 and 2009 both built-up land and vegetation declined after the authority of Kigali City established a Master Plan, under which all constructions outside its scope were demolished. By expanding its urban area, Rwanda's capital, Kigali City, is increasing the internal employment rate and attracting business investors and the service sector to improve its economy, which will support population growth and provide a better life. The overall planning of Kigali City considers the environment, land use, infrastructure, cultural and socio-economic factors, economic development and population forecasts, urban development, and constraint specifications. To achieve these goals, the Government has set out, for the overall planning of Kigali City, staged and detailed descriptions of the design, strategy, and action plan that will guide Kigali planners and members of the public toward more detailed regional plans and practical measures. Land use change is thus a significant indicator of human activity in Kigali and plays an important role in national decision-making. Another aspect to take into account is the natural situation of Kigali City: agriculture does not occupy a dominant position in the region, and with population growth and socio-economic development, the construction area will gradually expand and speed up the process of urbanization. As a developing country, Rwanda has a continuously growing population, a low rate of land utilization, and still-low urbanization. As mentioned earlier, the 1994 genocide massacres, population growth, and urbanization processes have been the factors driving the dramatic changes in land use. Further research should focus on analyzing Rwanda’s natural resources and the social and economic factors that could be driving forces of land use change.Keywords: land use change, urbanization, Kigali City, Landsat
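As an illustration of the post-classification change detection described above, the Python sketch below cross-tabulates two classified scenes into a change matrix; the arrays are synthetic stand-ins, since the study's actual 1987/1999/2009 classifications are not available here.

```python
# Minimal sketch of a post-classification change matrix between two
# classified Landsat scenes; the 6 classes mirror the study's categories.
import numpy as np

CLASSES = ["bare soil", "built-up", "wetland", "water", "vegetation", "other"]
rng = np.random.default_rng(2)
lc_1987 = rng.integers(0, 6, size=(400, 400))   # class index per pixel
lc_2009 = rng.integers(0, 6, size=(400, 400))

# change_matrix[i, j] = number of pixels moving from class i to class j.
change_matrix = np.zeros((6, 6), dtype=int)
np.add.at(change_matrix, (lc_1987.ravel(), lc_2009.ravel()), 1)

# Per-class gains and losses (off-diagonal totals).
for i, name in enumerate(CLASSES):
    gained = change_matrix[:, i].sum() - change_matrix[i, i]
    lost = change_matrix[i, :].sum() - change_matrix[i, i]
    print(f"{name}: +{gained} px, -{lost} px")
```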
Procedia PDF Downloads 307
299 Upsouth: Digitally Empowering Rangatahi (Youth) and Whaanau (Families) to Build Skills in Critical and Creative Thinking to Achieve More Active Citizenship in Aotearoa New Zealand
Authors: Ayla Hoeta
Abstract:
In post-colonial Aotearoa New Zealand, solutions by rangatahi (youth) for rangatahi are essential, as are civic participation and building economic agency in an increasingly tough economic climate. Upsouth was an online community crowdsourcing platform, developed by The Southern Initiative in collaboration with Itsnoon, that provided rangatahi and whānau (family) a safe space to share lived experience, thoughts, and ideas about local kaupapa (issues/topics) of importance to them. The target participants were Māori indigenous peoples and Pacifica groups aged 14-21 years. In the Aotearoa New Zealand context, this participant group is unlikely to engage in traditional consultation processes despite being an essential constituent in helping shape better local communities, whānau, and futures. The Upsouth platform was active for two years, from 2018 to 2019, during which it completed 42 callups with 4300+ participants. The web platform collated the ideas, voices, feedback, and content of users around a callup commissioned by a sponsor, such as Auckland Council, Z Energy, or Auckland Transport. A callup might concern a pressing challenge in a community, such as climate change, a new housing development, or homelessness. Each callup was funded by the sponsor, with Upsouth's main point of difference being that participants were given koha (a money donation) through digital wallets for their ideas. Depending on the quality of what participants uploaded, the koha varied between small micropayments and larger payments. This encouraged participants to develop creative and critical thinking, upskilling for future-focused jobs, enterprise, and democratic skills while earning pocket money at the same time. Upsouth enabled youth-led action and voice, and it empowered rangatahi to be part of a reciprocal and creative economy. Rangatahi were encouraged to express themselves culturally, creatively, and freely, in a way they were free to choose - for example, spoken word, song, dance, video, drawings, and/or poems. This challenges and changes what is considered acceptable community engagement feedback by local government. Many traditional engagement platforms are not as consultative, do not accept diverse types of feedback, and do not incentivise this valuable expression of feedback. Upsouth was also empowering for rangatahi because it allowed them the opportunity to express their opinions directly to the government. Upsouth gained national and international recognition for the way it engaged with youth: it won the Supreme Award and the Accessibility and Transparency Award at Auckland Council’s 2018 Engagement Awards and became a finalist in the 2018 Digital Equity and Accessibility category of International Data Corporation’s Smart City Asia and Pacific Awards. This paper will fully contextualize the challenges of rangatahi and whānau civic engagement in Aotearoa New Zealand and then present a reflective case study of the Upsouth project, with examples from some of the callups. It is intended to form part of the Divided Cities 22 conference New Ground sub-theme as a critical reflection on a design intervention, conceived and implemented by the lead author, to overcome the post-colonial divisions faced by Māori, Pacifica, and minority ethnic rangatahi in Aotearoa New Zealand.Keywords: rangatahi, youth empowerment, civic engagement, enabling, relating, digital platform, participation
Procedia PDF Downloads 81
298 Comparative Appraisal of Polymeric Matrices Synthesis and Characterization Based on Maleic versus Itaconic Anhydride and 3,9-Divinyl-2,4,8,10-Tetraoxaspiro[5.5]-Undecane
Authors: Iordana Neamtu, Aurica P. Chiriac, Loredana E. Nita, Mihai Asandulesa, Elena Butnaru, Nita Tudorachi, Alina Diaconu
Abstract:
In the last decade, the attention of many researchers has focused on the synthesis of innovative “intelligent” copolymer structures with great potential for different uses. This considerable scientific interest is stimulated by the possibility of significant improvements in the physical, mechanical, thermal, and other important specific properties of these materials. Functionalizing polymers during synthesis, by designing a suitable composition with the desired properties and applications, is recognized as a valuable tool. This work presents a comparative study of the properties of the new copolymers poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane) and poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane), obtained by radical polymerization in dioxane using 2,2′-azobis(2-methylpropionitrile) as the free-radical initiator. The comonomers are capable of generating special effects, for example network formation, biodegradability and biocompatibility, gel formation capacity, binding properties, amphiphilicity, good oxidative and thermal stability, good film-forming ability, and temperature and pH sensitivity. Maleic anhydride (MA) and its isostructural analog itaconic anhydride (ITA), as polyfunctional monomers, are widely used in the synthesis of reactive macromolecules with linear, hyperbranched, and self-assembled structures to prepare high-performance engineering, bioengineering, and nanoengineering materials. The incorporation of spiroacetal groups into polymer structures improves solubility and adhesive properties, induces good oxidative and thermal stability, and yields good fiber or film formers with good flexibility and tensile strength. The spiroacetal rings also induce interactions at the ether oxygen, such as hydrogen bonds or coordinate bonds with other functional groups, determining bulkiness and stiffness. The synthesized copolymers are analyzed by DSC, oscillatory and rotational rheological measurements, and dielectric spectroscopy with the aim of elucidating their heating behavior, their solution viscosity as a function of shear rate and temperature, and the relaxation processes and motions of the side-chain functional groups around the main chain or around bonds of the side chain. Acknowledgments: This work was financially supported by the grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-132/2014, “Magnetic biomimetic supports as alternative strategy for bone tissue engineering and repair” (MAGBIOTISS).Keywords: poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane); poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane); DSC; oscillatory and rotational rheological analysis; dielectric spectroscopy
Procedia PDF Downloads 227
297 Academic Staff Development: A Lever to Address the Challenges of the 21st Century University Classroom
Authors: Severino Machingambi
Abstract:
Most academics entering higher education as lecturers in South Africa do not have qualifications in education or teaching. This creates serious problems, since they are not sufficiently equipped with the pedagogical approaches and theories that inform their facilitation of learning. This, arguably, is one of the reasons why higher education institutions are experiencing a high student failure rate. To mitigate this problem, it is critical that higher education institutions devise internal academic staff development programmes to equip academics with pedagogical skills and competencies and so enhance the quality of student learning. This paper reports on how the Teaching and Learning Development Centre of a university used design-based research methodology to conceptualise and implement an academic staff development programme for new academics at a university of technology. This approach revolves around designing, testing, and refining an educational intervention; design-based research is an important methodology for understanding how, when, and why educational innovations work in practice. The need for a professional development course for academics arose because most academics at the university did not have teaching qualifications, and many were employed straight from industry with little understanding of pedagogical approaches. This paper examines three key aspects of the programme: the preliminary phase, the teaching experiment, and the retrospective analysis. The preliminary phase is the stage in which problem identification takes place. The problem this research sought to address is the unsatisfactory academic performance of the majority of students in the institution; it was therefore hypothesized that the problem could be dealt with by professionalising new academics through an academic staff development programme. The teaching experiment phase afforded researchers and participants the opportunity to test and refine the proposed intervention and the design principles upon which it was based. This phase revolved around testing the new academics' professional development programme and created a platform for researchers and academics to experiment with various activities and instructional strategies, such as case studies, observations, discussions, and portfolio building. The teaching experiment phase was followed by the retrospective analysis stage, in which the research team looked back and tried to give a trustworthy account of the teaching/learning process that had taken place. A questionnaire and focus group discussions were used to collect data from participants to evaluate the programme and its implementation. One finding of this study was that academics joining a university need an induction programme that inducts them into the discourse of teaching and learning. The study also revealed that existing academics can be placed on formal study programmes in which they acquire educational qualifications, equipping them with useful classroom discourses. The study therefore concludes that new and existing academics in universities should be supported through induction programmes and placement on formal studies in teaching and learning so that they are capacitated as facilitators of learning.Keywords: academic staff, pedagogy, programme, staff development
Procedia PDF Downloads 133
296 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the many objects surrounding receivers; (2) reflection within a building depends strongly on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals are not powerful enough to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit many IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization, and we employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
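To make the GAN-based radio-map construction idea concrete, the sketch below trains a plain fully connected GAN (a simplification, not the authors' S-DCGAN) to synthesize plausible fingerprint vectors so that fewer surveyed reference points are needed; all dimensions and training settings are illustrative assumptions.

```python
# Minimal sketch of GAN-based radio-map augmentation (not the authors'
# S-DCGAN): a generator learns to synthesize plausible fingerprint
# vectors from a small surveyed set. Shapes and settings are illustrative.
import torch
import torch.nn as nn

N_SOURCES = 40   # WLAN + LTE signal sources per fingerprint (assumed)
LATENT = 16

generator = nn.Sequential(
    nn.Linear(LATENT, 64), nn.ReLU(),
    nn.Linear(64, N_SOURCES),          # fake fingerprint vector
)
discriminator = nn.Sequential(
    nn.Linear(N_SOURCES, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),    # real/fake probability
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(128, N_SOURCES)     # stand-in for surveyed fingerprints

for step in range(200):
    # Discriminator step: push real toward 1, generated toward 0.
    z = torch.randn(128, LATENT)
    fake = generator(z).detach()
    loss_d = bce(discriminator(real), torch.ones(128, 1)) + \
             bce(discriminator(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to fool the discriminator.
    z = torch.randn(128, LATENT)
    loss_g = bce(discriminator(generator(z)), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# generator(z) now yields synthetic fingerprints to densify the radio map.
```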
Procedia PDF Downloads 47
295 Creative Resolutions to Intercultural Conflicts: The Joint Effects of International Experience and Cultural Intelligence
Authors: Thomas Rockstuhl, Soon Ang, Kok Yee Ng, Linn Van Dyne
Abstract:
Intercultural interactions are often challenging and fraught with conflicts. To shed light on how to interact effectively across cultures, academics and practitioners alike have advanced a plethora of intercultural competence models. However, the majority of this work has emphasized distal outcomes, such as job performance and cultural adjustment, rather than proximal outcomes, such as how individuals resolve inevitable intercultural conflicts. As a consequence, the processes by which individuals negotiate challenging intercultural conflicts are not well understood. The current study advances theorizing on intercultural conflict resolution by exploring its antecedents. To this end, we examine creativity – the generation of novel and useful ideas – in the context of resolving cultural conflicts in intercultural interactions. Based on the dual-identity theory of creativity, we propose that individuals with greater international experience will display greater creativity and that this relationship is accentuated by the individual's cultural intelligence. Two studies test these hypotheses. The first study comprises 84 senior university students drawn from an international organizational behavior course. The second study replicates the findings of the first in a sample of 89 executives from eleven countries. Participants in both studies provided protocols of their strategies for resolving two intercultural conflicts, as depicted in two multimedia vignettes of challenging intercultural work-related interactions. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded all strategies for novelty and usefulness following scoring procedures for creativity tasks. Participants also completed online surveys of demographic background information, including their international experience and cultural intelligence. Hierarchical linear modeling showed that, surprisingly, while international experience is positively associated with usefulness, it is unrelated to novelty. Further, a person's cultural intelligence strengthens the positive effect of international experience on usefulness and mitigates the effect of international experience on novelty. Theoretically, our findings offer an important extension to the dual-identity theory of creativity by identifying cultural intelligence as an individual-difference moderator that qualifies the relationship between international experience and creative conflict resolution. In terms of novelty, individuals higher in cultural intelligence seem less susceptible to the rigidity effects of international experience; perhaps they are more capable of assessing which aspects of culture are relevant and of applying relevant experiences when they brainstorm novel ideas. In terms of usefulness, individuals high in cultural intelligence are better able to leverage their international experience to assess the viability of their ideas, because their richer and more organized cultural knowledge structures allow them to assess possible options more efficiently and accurately. In sum, our findings suggest that cultural intelligence is an important and promising intercultural competence that fosters creative resolutions to intercultural conflicts. We hope that our findings stimulate future research on creativity and conflict resolution in intercultural contexts.Keywords: cultural intelligence, intercultural conflict, intercultural creativity, international experience
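For readers who want to see the shape of the analysis, the Python sketch below fits the kind of moderation model the abstract describes (international experience × cultural intelligence, with responses nested within participants); the data and variable names are simulated for illustration and are not the study's dataset.

```python
# Minimal sketch of the moderation test: usefulness ~ intl_exp * cq with
# a random intercept per participant. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, k = 90, 2                              # participants x vignettes
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), k),    # participant id (grouping factor)
    "intl_exp": np.repeat(rng.normal(size=n), k),
    "cq": np.repeat(rng.normal(size=n), k),
})
# Simulated outcome with a positive interaction, echoing the hypothesis.
df["usefulness"] = (0.3 * df.intl_exp + 0.2 * df.cq
                    + 0.25 * df.intl_exp * df.cq
                    + rng.normal(scale=0.5, size=n * k))

# The interaction term carries the hypothesized moderation by CQ.
model = smf.mixedlm("usefulness ~ intl_exp * cq", df, groups=df["pid"])
print(model.fit().summary())
```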
Procedia PDF Downloads 148
294 A Variational Reformulation for the Thermomechanically Coupled Behavior of Shape Memory Alloys
Authors: Elisa Boatti, Ulisse Stefanelli, Alessandro Reali, Ferdinando Auricchio
Abstract:
Thanks to their unusual properties, shape memory alloys (SMAs) are good candidates for advanced applications in a wide range of engineering fields, such as automotive, robotics, civil, biomedical, and aerospace engineering. In the last decades, the ever-growing interest in such materials has prompted several research studies aimed at modeling their complex nonlinear behavior in an effective and robust way. Since the constitutive response of SMAs is strongly thermomechanically coupled, the non-isothermal evolution of the material must be taken into account. The present study considers an existing three-dimensional phenomenological model for SMAs, able to reproduce the main SMA properties while maintaining a simple, user-friendly structure, and proposes a variational reformulation of the full non-isothermal version of the model. While the considered model has been thoroughly assessed in an isothermal setting, the proposed formulation makes it possible to address the full non-isothermal problem. In particular, the reformulation is inspired by the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) formalism and is based on a generalized gradient flow of the total entropy, related to thermal and mechanical variables. This phrasing of the model is new and allows for a discussion of the model from both theoretical and numerical points of view; moreover, it directly implies the dissipativity of the flow. A semi-implicit time-discrete scheme is also presented for the fully coupled thermomechanical system and is proven unconditionally stable and convergent. The corresponding algorithm is then implemented, under a space-homogeneous temperature field assumption, and tested under different conditions. The core of the algorithm is composed of a mechanical subproblem and a thermal subproblem, and the iterative scheme is solved by a generalized Newton method. Numerous uniaxial and biaxial tests are reported to assess the performance of the model and algorithm, including variable imposed strain, strain rate, heat exchange properties, and external temperature. In particular, heat exchange with the environment is the only source of rate dependency in the model. The reported curves clearly display the interdependence between phase transformation strain and material temperature. The full thermomechanical coupling makes it possible to reproduce the exothermic and endothermic effects during forward and backward phase transformation, respectively. The numerical tests have thus demonstrated that the model can appropriately reproduce the coupled SMA behavior under different loading conditions and rates, and the algorithm has proved effective and robust. Further developments are being considered, such as the extension of the formulation to the finite-strain setting and the study of the boundary value problem.Keywords: generalized gradient flow, GENERIC formalism, shape memory alloys, thermomechanical coupling
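Schematically, a generalized gradient flow of the total entropy and its time discretization can be written as below; this is our schematic reading of the abstract's description, with q collecting the thermomechanical state and Ψ a dissipation potential, not the authors' exact equations.

```latex
% Schematic generalized gradient flow of the total entropy S (our
% reading of the abstract, not the paper's equations): q is the state
% (strains, internal variables, temperature), Psi a dissipation potential.
\[
  \partial \Psi(\dot q) \ni \mathrm{D}_q S(q),
  \qquad
  \partial \Psi\!\left(\frac{q^{n+1}-q^{n}}{\Delta t}\right) \ni \mathrm{D}_q S(q^{n+1}),
\]
% with each time step split into a mechanical subproblem (update strains
% and internal variables at fixed temperature) and a thermal subproblem
% (update the temperature), each solved by a generalized Newton method;
% the semi-implicit variant freezes selected couplings at q^n.
```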
Procedia PDF Downloads 221