Search results for: human performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20273

383 Effect of Two Transactional Instructional Strategies on Primary School Pupils’ Achievement in English Language Vocabulary and Reading Comprehension in Ibadan Metropolis, Nigeria

Authors: Eniola Akande

Abstract:

Introduction: English vocabulary and reading comprehension are core to academic achievement in many school subjects. Deficiency in both accounts for dismal performance in internal and external examinations among primary school pupils in Ibadan Metropolis, Nigeria. Previous studies largely focused on factors influencing pupils' achievement in English vocabulary and reading comprehension. In spite of what the literature has shown, the problem persists, implying the need for other kinds of intervention. This study was therefore carried out to determine the effect of two transactional strategies, Picture Walk (PW) and Know-Want to Learn-Learnt (KWL), on primary four pupils' achievement in English vocabulary and reading comprehension in Ibadan Metropolis. The moderating effects of gender and learning style were also examined. Methodology: The study was anchored on Rosenblatt's Transactional Reading and Piaget's Cognitive Development theories; a pretest-posttest control-group quasi-experimental design with a 3x2x3 factorial matrix was adopted. Six public primary schools were purposively selected based on the availability of qualified English language teachers in Primary Education Studies. Six intact classes (one per school) with a total of 101 primary four pupils (48 males and 53 females) participated. The intact classes were randomly assigned to PW (27), KWL (44) and conventional (30) groups. Instruments used were English Vocabulary (r=0.83) and Reading Comprehension (r=0.84) achievement tests, the Pupils' Learning Style Preference Scale (r=0.93) and instructional guides. Treatment lasted six weeks. Data were analysed using descriptive statistics, Analysis of Covariance and the Bonferroni post-hoc test at the 0.05 level of significance. The mean age was 8.86±0.84 years. Result: Treatment had a significant main effect on pupils' reading comprehension (F(2,82)=3.17), but not on English vocabulary. 
Participants in the KWL group obtained the highest post-achievement mean score in reading comprehension (8.93), followed by the PW (8.06) and control (7.21) groups. Pupils' learning style had a significant main effect on achievement in reading comprehension (F(2,82)=4.41), but not on English vocabulary. Pupils with a preference for the tactile learning style had the highest post-achievement mean score in reading comprehension (9.40), followed by the auditory (7.43) and visual (7.37) groups. Gender had no significant main effect on English vocabulary or reading comprehension. There was no significant two-way interaction effect of treatment and gender on pupils' achievement in English vocabulary or reading comprehension. The two-way interaction effect of treatment and learning style on achievement in reading comprehension was significant (F(4,82)=3.37), in favour of pupils with the tactile learning style in the PW group. There was no significant two-way interaction effect of gender and learning style, and the three-way interaction effects were not significant, on either English vocabulary or reading comprehension. Conclusion: The Picture Walk and Know-Want to Learn-Learnt instructional strategies were effective in enhancing pupils' achievement in reading comprehension but not in English vocabulary. Learning style contributed considerably to achievement in reading comprehension but not to English vocabulary. Primary school English language teachers should take pupils' learning style into consideration when adopting both strategies in teaching reading comprehension for improved achievement in the subject.
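As a concrete illustration of the Bonferroni post-hoc adjustment mentioned above: with three treatment groups there are three pairwise comparisons, so each is tested at 0.05/3. A minimal sketch (treating the three pairwise comparisons as the whole test family is our assumption):

```python
from itertools import combinations

def bonferroni_alpha(groups, family_alpha=0.05):
    """Per-comparison significance level for all pairwise post-hoc tests."""
    pairs = list(combinations(groups, 2))  # every pair of treatment groups
    return family_alpha / len(pairs), pairs

# Three treatment groups as in the study: PW, KWL and conventional control.
alpha, pairs = bonferroni_alpha(["PW", "KWL", "Control"])
print(len(pairs), round(alpha, 4))  # 3 pairwise comparisons, alpha = 0.0167
```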

Keywords: comprehension-based intervention, know-want to learn-learnt, learning style, picture walk, primary school pupils

Procedia PDF Downloads 144
382 Efficient Treatment of Azo Dye Wastewater with Simultaneous Energy Generation by Microbial Fuel Cell

Authors: Soumyadeep Bhaduri, Rahul Ghosh, Rahul Shukla, Manaswini Behera

Abstract:

The textile industry consumes a substantial amount of water throughout the processing and production of textile fabrics. This water eventually becomes wastewater, where it acts as an immense damaging nuisance due to its dye content. Wastewater streams contain from 2.0% to 50.0% of the total weight of dye used, depending on the dye class. The management of dye effluent in textile industries presents a formidable challenge to global sustainability. The current focus is on implementing wastewater treatment technologies that enable the recycling of wastewater, reduce energy usage and offset carbon emissions. A microbial fuel cell (MFC) is a device that utilizes microorganisms as a bio-catalyst to effectively treat wastewater while also producing electricity. The MFC harnesses the chemical energy present in wastewater by oxidizing organic compounds in the anodic chamber and reducing an electron acceptor in the cathodic chamber, thereby generating electricity. This research investigates the potential of MFCs to tackle the challenge of azo dye removal while simultaneously generating electricity. Although MFCs are well established for wastewater treatment, their application in dye decolorization with concurrent electricity generation remains relatively unexplored. This study aims to address this gap by assessing the effectiveness of MFCs as a sustainable solution for treating wastewater containing azo dyes. By harnessing microorganisms as biocatalysts, MFCs offer a promising avenue for environmentally friendly dye effluent management. The performance of MFCs in treating azo dyes and generating electricity was evaluated by optimizing the Chemical Oxygen Demand (COD) and Hydraulic Retention Time (HRT) of the influent. COD and HRT values ranged from 1600 mg/L to 2400 mg/L and 5 to 9 days, respectively. Results showed that the maximum open circuit voltage (OCV) reached 648 mV at a COD of 2400 mg/L and an HRT of 5 days. 
Additionally, maximum COD removal of 98% and maximum color removal of 98.91% were achieved at a COD of 1600 mg/L and an HRT of 9 days. Furthermore, the study observed a maximum power density of 19.95 W/m3 at a COD of 2400 mg/L and an HRT of 5 days. Electrochemical analyses, including linear sweep voltammetry (LSV), cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS), were performed to determine the response current and internal resistance of the system. To optimize pH and dye concentration, pH values were varied from 4 to 10, and dye concentrations ranged from 25 mg/L to 175 mg/L. The highest voltage output of 704 mV was recorded at pH 7, while a dye concentration of 100 mg/L yielded the maximum output of 672 mV. This study demonstrates that MFCs offer an efficient and sustainable solution for treating azo dyes in textile industry wastewater while concurrently generating electricity. These findings suggest the potential of MFCs to contribute to environmental remediation and sustainable development efforts on a global scale.
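The removal efficiencies and volumetric power densities reported above follow from simple definitions, sketched below. The influent COD value is taken from the study; the effluent COD, external resistance and reactor volume are invented for illustration only:

```python
def cod_removal_pct(cod_in_mg_l, cod_out_mg_l):
    """COD removal efficiency in percent."""
    return (cod_in_mg_l - cod_out_mg_l) / cod_in_mg_l * 100.0

def volumetric_power_density(voltage_v, resistance_ohm, volume_m3):
    """Power density in W/m^3 for a cell driving an external resistance."""
    return voltage_v ** 2 / resistance_ohm / volume_m3

# Illustrative numbers only: an assumed effluent COD of 32 mg/L from the
# 1600 mg/L influent, and an assumed 100-ohm load on a 0.125 L reactor.
print(round(cod_removal_pct(1600, 32), 1))               # 98.0 % removal
print(round(volumetric_power_density(0.5, 100, 1.25e-4), 2))
```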

Keywords: textile wastewater treatment, microbial fuel cell, renewable energy, sustainable wastewater treatment

Procedia PDF Downloads 23
381 Thinking Lean in ICU: A Time Motion Study Quantifying ICU Nurses’ Multitasking Time Allocation

Authors: Fatma Refaat Ahmed, Sally Mohamed Farghaly

Abstract:

Context: Intensive care unit (ICU) nurses often face pressure and constraints in their work, leading to the rationing of care when demands exceed available time and resources. Observations suggest that ICU nurses are frequently distracted from their core nursing roles by non-core tasks. This study aims to provide evidence on ICU nurses' multitasking activities and explore the association between nurses' personal and clinical characteristics and their time allocation. Research Aim: The aim of this study is to quantify the time spent by ICU nurses on multitasking activities and investigate the relationship between their personal and clinical characteristics and time allocation. Methodology: A self-observation form utilizing the "Diary" recording method was used to record the number of tasks performed by ICU nurses and the time allocated to each task category. Nurses also reported on the distractions encountered during their nursing activities. A convenience sample of 60 ICU nurses participated in the study, with each nurse observed for one nursing shift (6 hours), amounting to a total of 360 hours. The study was conducted in two ICUs within a university teaching hospital in Alexandria, Egypt. Findings: The results showed that ICU nurses completed 2,730 direct patient-related tasks and 1,037 indirect tasks during the 360-hour observation period. Nurses spent an average of 33.65 minutes on ventilator care-related tasks, 14.88 minutes on tube care-related tasks, and 10.77 minutes on inpatient care-related tasks. Additionally, nurses spent an average of 17.70 minutes on indirect care tasks per hour. The study identified correlations between nursing time and nurses' personal and clinical characteristics. Theoretical Importance: This study contributes to the existing research on ICU nurses' multitasking activities and their relationship with personal and clinical characteristics. 
The findings shed light on the significant time spent by ICU nurses on direct care for mechanically ventilated patients and the distractions that require attention from ICU managers. Data Collection: Data were collected using self-observation forms completed by participating ICU nurses. The forms recorded the number of tasks performed, the time allocated to each task category, and any distractions encountered during nursing activities. Analysis Procedures: The collected data were analyzed to quantify the time spent on different tasks by ICU nurses. Correlations were also examined between nursing time and nurses' personal and clinical characteristics. Question Addressed: This study addressed the question of how ICU nurses allocate their time across multitasking activities and whether there is an association between nurses' personal and clinical characteristics and time allocation. Conclusion: The findings of this study emphasize the need for a lean evaluation of ICU nurses' activities to identify and address potential gaps in patient care and distractions. Implementing lean techniques can improve efficiency, safety, clinical outcomes, and satisfaction for both patients and nurses, ultimately enhancing the quality of care and organizational performance in the ICU setting.

Keywords: motion study, ICU nurse, lean, nursing time, multitasking activities

Procedia PDF Downloads 68
380 The Effect of Emotional Intelligence on Physiological Stress of Managers

Authors: Mikko Salminen, Simo Järvelä, Niklas Ravaja

Abstract:

One of the central models of emotional intelligence (EI) is that of Mayer and Salovey, which includes the ability to monitor one's own feelings and emotions and those of others, the ability to discriminate between different emotions, and the ability to use this information to guide thinking and actions. There is a vast amount of previous research in which positive links between EI and, for example, leadership successfulness, work outcomes, work wellbeing and organizational climate have been reported. EI also has a role in the effectiveness of work teams, and the effects of EI are especially prominent in jobs requiring emotional labor. Thus, the organizational context must also be taken into account when considering the effects of EI on work outcomes. Based on previous research, it is suggested that EI can also protect managers from the negative consequences of stress. Stress may have many detrimental effects on a manager's performance in essential work tasks. Previous studies have highlighted the effects of stress not only on health but also, for example, on cognitive tasks such as decision-making, which is important in managerial work. The motivation for the current study came from the notion that, unfortunately, many stressed individuals may not be aware of this circumstance; periods of stress-induced physiological arousal may be prolonged if there is not enough time for recovery. To tackle this problem, physiological stress levels of managers were collected using heart rate variability (HRV) recordings. The goal was to use these data to provide the managers with feedback on their stress levels. The managers could access this feedback in a web-based learning environment. In the learning environment, in addition to the feedback on stress level and other collected data, developmental tasks were also provided. For example, those with high stress levels were sent instructions for mindfulness exercises. 
The current study focuses on the relation between the measured physiological stress levels and the EI of the managers. In a pilot study, 33 managers from various fields wore Firstbeat Bodyguard HRV measurement devices for three consecutive days and nights. From the collected HRV data, periods (minutes) of stress and recovery were detected using dedicated software. The effects of EI on the HRV-derived stress indexes were studied using the Linear Mixed Models procedure in SPSS. There was a statistically significant effect of total EI, defined as the average score on Schutte's emotional intelligence test, on the percentage of stress minutes during the whole measurement period (p=.025). More stress minutes were detected in those managers who had lower emotional intelligence. It is suggested that high EI provided managers with better tools to cope with stress. Managing one's own emotions helps the manager control possible negative emotions evoked by, e.g., critical feedback or increasing workload. High-EI managers may also be more competent in detecting the emotions of others, which would lead to smoother interactions and fewer conflicts. Given the recent trend toward quantified-self applications, it is suggested that monitoring of bio-signals would prove a fruitful direction for developing new tools for managerial and leadership coaching.
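The stress index analysed above is, in essence, the share of measured minutes that the HRV software classifies as stress. A minimal sketch (the per-minute labels are a simplification of the HRV software's output, assumed for illustration):

```python
def stress_minute_pct(minute_labels):
    """Share of measured minutes classified as 'stress', in percent.

    minute_labels: one label per measured minute, e.g. 'stress',
    'recovery' or 'other' (a simplification of real HRV software output).
    """
    n = len(minute_labels)
    return 100.0 * sum(1 for m in minute_labels if m == "stress") / n

# Toy example: 3 stress minutes out of 8 measured minutes.
labels = ["stress", "recovery", "stress", "other",
          "recovery", "stress", "other", "recovery"]
print(round(stress_minute_pct(labels), 1))  # 37.5
```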

Keywords: emotional intelligence, leadership, heart rate variability, personality, stress

Procedia PDF Downloads 226
379 Treatment with Triton-X 100: An Enhancement Approach for Cardboard Bioprocessing

Authors: Ahlam Said Al Azkawi, Nallusamy Sivakumar, Saif Nasser Al Bahri

Abstract:

Diverse approaches and pathways are under development with the aim of eventually producing cellulosic biofuels and other bio-products at commercial scale in “bio-refineries”; however, the key challenge is the complex and energy-consuming processing of the feedstock. To overcome the complications in utilizing naturally occurring lignocellulosic biomass, using waste paper as a feedstock for bio-production may solve the problem. Besides being abundant and cheap, bioprocessing of waste paper has evolved in response to public concern over rising landfill costs and shrinking landfill capacity. Cardboard (CB) is one of the major components of municipal solid waste and one of the most important items to recycle. Although cellulose and hemicellulose are known to constitute 50-70% of cardboard, the lignin surrounding them causes hydrophobic cross-linking, which physically obstructs hydrolysis by rendering the material resistant to enzymatic cleavage. Therefore, pretreatment is required to disrupt this resistance and to enhance the exposure of the targeted carbohydrates to the hydrolytic enzymes. Several pretreatment approaches have been explored, and the best ones would be those that can improve cellulose conversion rates and hydrolytic enzyme performance with minimal cost and downstream processing. One of the promising strategies in this field is the application of surfactants, especially non-ionic surfactants. In this study, Triton-X 100 was used as a surfactant to treat cardboard prior to enzymatic hydrolysis, and the result was compared with acid treatment using 0.1% H2SO4. The effect of the surfactant enhancement was evaluated through its effect on the hydrolysis rate with respect to time, in addition to evaluating the structural changes and modifications by scanning electron microscopy (SEM) and X-ray diffraction (XRD), and through compositional analysis. 
Further work was performed to produce ethanol from CB treated with Triton-X 100 via separate hydrolysis and fermentation (SHF) and simultaneous saccharification and fermentation (SSF). The hydrolysis studies demonstrated an enhancement in saccharification of 35%. After 72 h of hydrolysis, a saccharification rate of 98% was achieved from CB enhanced with Triton-X 100, while only 89% saccharification was achieved from acid-pretreated CB. At 120 h, the saccharification percentage exceeded 100% as reducing sugars continued to increase with time. This enhancement was not supported by any significant changes in the cardboard composition, as the cellulose, hemicellulose and lignin content remained the same after treatment, but obvious structural changes were observed in SEM images. The cellulose fibers were clearly exposed, with much less debris and fewer deposits compared to cardboard without Triton-X 100. The XRD pattern also revealed the ability of the surfactant to remove calcium carbonate, a filler found in waste paper known to have a negative effect on enzymatic hydrolysis. The cellulose crystallinity without surfactant was 73.18% and was reduced to 66.68%, rendering it more amorphous and susceptible to enzymatic attack. Triton-X 100 proved to effectively enhance CB hydrolysis and ultimately had a positive effect on the ethanol yield via SSF. Treating cardboard with only Triton-X 100 was sufficient to enhance enzymatic hydrolysis and ethanol production.
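The saccharification percentages above can be computed with the conventional yield formula, sketched below. The 0.9 anhydro-correction factor and the example masses are our assumptions; the paper does not state the exact definition it used:

```python
def saccharification_pct(reducing_sugar_g, substrate_cellulose_g):
    """Saccharification yield in percent.

    The 0.9 factor converts glucose mass back to anhydro-glucose units in
    cellulose (162/180). This is the conventional formula, assumed here.
    """
    return reducing_sugar_g * 0.9 / substrate_cellulose_g * 100.0

# Illustrative masses: 1.09 g reducing sugar from 1 g cellulose gives a
# yield just under 100%; past that point the percentage exceeds 100, as
# observed in the study at 120 h.
print(round(saccharification_pct(1.09, 1.0), 1))  # 98.1
```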

Keywords: cardboard, enhancement, ethanol, hydrolysis, treatment, Triton-X 100

Procedia PDF Downloads 152
378 The Role of Movement Quality after Osgood-Schlatter Disease in an Amateur Football Player: A Case Study

Authors: D. Pogliana, A. Maso, N. Milani, D. Panzin, S. Rivaroli, J. Konin

Abstract:

This case study aims to identify the role of movement quality during the final stage of return to sport (RTS) in a 13-year-old male amateur football player after passing the acute phase of bilateral Osgood-Schlatter disease (OSD). The patient, a year after passing the acute phase of OSD with abstention from physical activity, reported bilateral anterior knee pain at the beginning of football activity. Interventions: After an orthopedic consultation, which recommended physiotherapy sessions for the correction of motor patterns and isometric reinforcement of the quadriceps muscles, the rehabilitation intervention was developed over 7 weeks through 14 sessions of neuro-motor training (NMT) at a frequency of two weekly sessions and six sessions of muscle strengthening at a frequency of one weekly session. The NMT sessions were carried out through free-body exercises (or with overloads) with visual bio-feedback, with the help of two cameras (one with an anterior view and one with a lateral view of the subject) and a large touch screen. The aim of these NMT sessions was to modify the dysfunctional motor patterns evaluated by the 2D motion analysis test. The test was carried out at the beginning and at the end of the rehabilitation course and included five movements: single-leg squat (SLS), drop jump (DJ), single-leg hop (SLH), lateral shuffle (LS), and change of direction (COD). Each of these movements was evaluated through video analysis of dynamic knee valgus, pelvic tilt, trunk control, shock absorption, and motor strategy. A free image analysis software package (Kinovea) was then used to calculate scores. 
Results: The baseline assessment of the subject showed a total score of 59% on the right limb and 64% on the left limb (considering an optimal score to be above 85%), with large deficits in shock absorption capabilities, the presence of dynamic knee valgus, and dysfunctional motor strategies defined as “quadriceps dominant.” After six weeks of training, the subject achieved a total score of 80% on the right limb and 86% on the left limb, with significant improvements in shock absorption capabilities, a reduction in dynamic knee valgus, and the employment of more hip-oriented motor strategies on both lower limbs. The improvements shown in dynamic knee valgus, greater hip-oriented motor strategies, and improved shock absorption achieved through six weeks of the NMT program can help a teenage amateur football player manage anterior knee pain during sports activity. In conclusion, NMT was a good choice to help a 13-year-old male amateur football player return to performance without pain after OSD and may also be applied to similar athletes in other team sports.

Keywords: movement analysis, neuro-motor training, knee pain, movement strategies

Procedia PDF Downloads 135
377 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, based on the notion of observability: any feature smaller than the actual resolution (physical or numerical), e.g., the size of the wire in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied at the numerical discretization stage, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence or sharp interfaces. Over the past several years, the properties of this new regularization technique have been investigated and shown capable of simultaneously regularizing shocks and turbulence. The observable method has been applied in direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and the flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a feature with shocks and turbulence, namely the nonlinear irregularity caused by the nonlinear terms in the governing equations, the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms will generate smaller scales that sharpen the interface, causing discontinuities. 
Many numerical methods for two-phase flows fail in the high Reynolds number case, while some others depend on numerical diffusion from the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, which is usually about a grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which does not introduce numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are particularly examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as their positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
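The filtering of the convective terms described above can be sketched in equations. In Leray-type inviscid regularizations of this kind, the advecting velocity is replaced by a filtered ("observable") field; a plausible form of the observable incompressible Euler equations, assuming a Helmholtz-type filter (the filter choice and notation are our illustration, not taken verbatim from the paper), is:

```latex
\frac{\partial \mathbf{u}}{\partial t}
  + (\bar{\mathbf{u}} \cdot \nabla)\,\mathbf{u} + \nabla p = 0,
\qquad \nabla \cdot \mathbf{u} = 0,
\qquad \bar{\mathbf{u}} = \left(1 - \alpha^{2}\nabla^{2}\right)^{-1}\mathbf{u},
```

where alpha is the observable scale (on the order of a grid length); as alpha tends to zero the filtered velocity reduces to the unfiltered one and the standard incompressible Euler equations are recovered.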

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 502
376 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with days off, training and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the objective of rostering consists of two major components. The first is to minimize the number of unassigned pairings and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferred over one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem, and exactly one roster for each crew member is picked such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member. 
A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource constrained shortest path problem in the graph. A labeling algorithm is used to solve it. Since the penalization is quadratic, a method to deal with the non-additive shortest path problem using a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows relaxing some soft rules, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating Lines-of-Work for crew members. Here a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm we propose in this paper has been put into production at a major airline in China, and numerical experiments show that it has good performance.
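The quadratic fairness penalty described above, squaring each crew member's deviation from the expected average so that several small deviations cost less than one large one, can be sketched as follows; the weights and the numbers are illustrative only:

```python
def fairness_penalty(overnights, fly_hours, w_night=1.0, w_hour=1.0):
    """Quadratic fairness penalty over all crew members.

    overnights, fly_hours: lists with one value per crew member.
    Deviations from the average are squared, so several small
    deviations are cheaper than one large deviation.
    """
    def quad(xs):
        avg = sum(xs) / len(xs)
        return sum((x - avg) ** 2 for x in xs)
    return w_night * quad(overnights) + w_hour * quad(fly_hours)

# Same totals (12 overnights, 240 fly-hours) split two different ways:
balanced   = fairness_penalty([4, 4, 4], [80, 80, 80])
unbalanced = fairness_penalty([2, 4, 6], [70, 80, 90])
print(balanced, unbalanced)  # 0.0 vs 208.0
```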

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

Procedia PDF Downloads 146
375 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach

Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista

Abstract:

Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management in the field, situational awareness, effective planning, monitoring, and more. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models to enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LIDAR and stereo sensors, monocular depth estimation based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images. It introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEM) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, along with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks. 
One network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on non-annotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate decision-making with GIS for localization of troops, position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune their strategies and distribute resources proficiently.

Keywords: depth, deep learning, geovisualisation, satellite images

Procedia PDF Downloads 10
374 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Due to the relatively simple recording of the electrocardiogram (ECG) signal, this signal is a good tool to show the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for use by researchers to find the best method for detecting normal signals from abnormal ones. The data are from both genders, and the recording time varies from several seconds to several minutes. All data are also labeled normal or abnormal. Due to the low positional accuracy and time limit of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancelation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features, were used to classify normal signals from abnormal ones. 
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of simulations in the MATLAB environment showed that the AUC of the MLP and SVM classifiers was 0.893 and 0.947, respectively. In addition, the results of the proposed algorithm indicated that greater use of nonlinear characteristics in classifying normal and patient signals yielded better performance. Today, research aims to quantitatively analyze the linear and nonlinear, or descriptive and random, nature of the heart rate variability signal, because it has been shown that the amount of these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. The ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease; however, given its limited time accuracy and the fact that some of its information is hidden from the physician's view, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
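The return-map analysis mentioned above can be sketched in a few lines. The SD1/SD2 descriptors below are a standard nonlinear pair derived from the Poincaré (return) map of an R-R series; they are used here as a plausible illustration, not as the paper's exact feature set, and the R-R values are invented rather than taken from the PhysioNet records.

```python
import math
from statistics import pstdev

def poincare_features(rr_intervals):
    """SD1/SD2 nonlinear descriptors of the Poincare (return) map of an
    R-R interval series. SD1 captures short-term variability (spread
    perpendicular to the identity line), SD2 long-term variability
    (spread along it); both are common nonlinear HRV features.
    """
    x = rr_intervals[:-1]   # RR(n)
    y = rr_intervals[1:]    # RR(n+1): successive pairs form the return map
    perp = [(b - a) / math.sqrt(2) for a, b in zip(x, y)]
    along = [(b + a) / math.sqrt(2) for a, b in zip(x, y)]
    sd1, sd2 = pstdev(perp), pstdev(along)
    return sd1, sd2, sd1 / sd2

# Illustrative R-R series in seconds (not from the study's data)
rr = [0.80, 0.82, 0.79, 0.85, 0.81, 0.83, 0.78, 0.84]
sd1, sd2, ratio = poincare_features(rr)
```

Features such as these, together with statistical descriptors, could then be fed to an MLP or SVM classifier as the abstract describes.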

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 262
373 The Seller’s Sense: Buying-Selling Perspective Affects the Sensitivity to Expected-Value Differences

Authors: Taher Abofol, Eldad Yechiam, Thorsten Pachur

Abstract:

In four studies, we examined whether sellers and buyers differ not only in subjective price levels for objects (i.e., the endowment effect) but also in their relative accuracy given objects varying in expected value. If, as has been proposed, sellers stand to accrue a more substantial loss than buyers do, then their pricing decisions should be more sensitive to expected-value differences between objects. This is implied by loss aversion, due to the steeper slope of prospect theory's value function for losses than for gains, as well as by the loss attention account, which posits that losses increase the attention invested in a task. Both accounts suggest that losses increase sensitivity to the relative values of different objects, which should result in better alignment of pricing decisions with the objective value of objects on the part of sellers. Under loss attention, this characteristic should only emerge under certain boundary conditions. In Study 1, a published dataset was reanalyzed in which 152 participants indicated buying or selling prices for monetary lotteries with different expected values. Relative EV sensitivity was calculated for each participant as the Spearman rank correlation between their pricing decisions for the lotteries and the lotteries' expected values. An ANOVA revealed a main effect of perspective (sellers versus buyers), F(1,150) = 85.3, p < .0001, with greater EV sensitivity for sellers. Study 2 examined the prediction (implied by loss attention) that the positive effect of losses on performance emerges particularly under conditions of time constraints. A published dataset was reanalyzed in which 84 participants were asked to provide selling and buying prices for monetary lotteries under three deliberation-time conditions (5, 10, 15 seconds). As in Study 1, an ANOVA revealed greater EV sensitivity for sellers than for buyers, F(1,82) = 9.34, p = .003. Importantly, there was also an interaction of perspective by deliberation time.
Post-hoc tests revealed main effects of perspective in both the 5s and the 10s deliberation-time conditions, but not in the 15s condition. Thus, sellers' EV-sensitivity advantage disappeared with extended deliberation. Study 3 replicated the design of Study 1 but administered the task three times, to test whether the effect decays with repeated presentation. The difference between buyers' and sellers' EV sensitivity was replicated across repeated task presentations. Study 4 examined the loss attention prediction that EV-sensitivity differences can be eliminated by manipulations that reduce the differential attention investment of sellers and buyers. This was carried out by randomly mixing selling and buying trials for each participant. The results revealed no differences in EV sensitivity between selling and buying trials. The pattern of results is consistent with an attentional resource-based account of the differences between sellers and buyers. Thus, asking people to price an object from a seller's perspective rather than a buyer's improves the relative accuracy of pricing decisions; subtle changes in the framing of one's perspective in a trading negotiation may improve price accuracy.
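The EV-sensitivity measure described above is a Spearman rank correlation between one participant's prices and the lotteries' expected values. A minimal pure-Python sketch follows; the lottery values and prices are hypothetical, invented only to illustrate the computation.

```python
def rankdata(values):
    """Ascending 1-based ranks, with ties given their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend the tie group
        avg = (i + j) / 2 + 1           # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical lotteries and one participant's selling prices
expected_values = [2.5, 5.0, 7.5, 10.0, 12.5]
selling_prices = [3.0, 4.5, 8.0, 9.0, 13.0]   # rank-aligned with the EVs
print(spearman(expected_values, selling_prices))  # -> 1.0 (perfect rank agreement)
```

A participant whose prices preserve the lotteries' EV ordering scores 1.0; scrambled prices score lower, giving the per-participant sensitivity measure the studies compare across perspectives.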

Keywords: decision making, endowment effect, pricing, loss aversion, loss attention

Procedia PDF Downloads 345
372 Developing a Product Circularity Index with an Emphasis on Longevity, Repairability, and Material Efficiency

Authors: Lina Psarra, Manogj Sundaresan, Purjeet Sutar

Abstract:

In response to the global imperative for sustainable solutions, this article proposes the development of a comprehensive circularity index applicable to a wide range of products across various industries. The absence of a consensus on a universal metric for assessing circularity performance presents a significant challenge in prioritizing and effectively managing sustainable initiatives. This circularity index serves as a quantitative measure to evaluate the adherence of products, processes, and systems to the principles of a circular economy. Unlike traditional stand-alone metrics such as recycling rates or material efficiency, this index considers the entire lifecycle of a product in a single metric, also incorporating additional factors such as reusability, scarcity of materials, repairability, and recyclability. Through a systematic approach, and by reviewing existing metrics and past methodologies, this work aims to address this gap by formulating a circularity index that can be applied to diverse product portfolios and assist in comparing the circularity of products on a scale of 0-100%. Project objectives include developing a formula, designing and implementing a pilot tool based on the developed Product Circularity Index (PCI), evaluating the effectiveness of the formula and tool using real product data, and assessing the feasibility of integration into various sustainability initiatives. The research methodology involves an iterative process of comprehensive research, analysis, and refinement, where key steps include defining circularity parameters, collecting relevant product data, applying the developed formula, and testing the tool in a pilot phase to gather insights and make necessary adjustments. The major findings of the study indicate that the PCI provides a robust framework for evaluating product circularity across various dimensions.
The Excel-based pilot tool demonstrated high accuracy and reliability in measuring circularity, and the database proved instrumental in supporting comprehensive assessments. The PCI facilitated the identification of key areas for improvement, enabling more informed decision-making towards circularity and benchmarking across different products, ultimately supporting better resource management. In conclusion, the development of the Product Circularity Index represents a significant advancement in global sustainability efforts. By providing a standardized metric, the PCI empowers companies and stakeholders to systematically assess product circularity, track progress, identify improvement areas, and make informed decisions about resource management. This project contributes to the broader discourse on sustainable development by offering a practical approach to enhancing circularity within industrial systems, thus paving the way towards a more resilient and sustainable future.
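A composite index of this kind typically aggregates per-dimension scores into a single 0-100% figure. The sketch below shows one plausible weighted-sum form; the dimension names, weights, and formula are illustrative assumptions, not the paper's actual PCI formula.

```python
def product_circularity_index(scores, weights):
    """Aggregate per-dimension circularity scores (each 0-1) into a single
    0-100% index via a weighted sum. Dimensions and weights are
    illustrative, not the PCI formula from the study.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return 100 * sum(scores[k] * weights[k] for k in weights)

# Hypothetical product assessment
scores = {"longevity": 0.8, "repairability": 0.6, "material_efficiency": 0.7,
          "recyclability": 0.9, "material_scarcity": 0.5}
weights = {"longevity": 0.25, "repairability": 0.25, "material_efficiency": 0.2,
           "recyclability": 0.2, "material_scarcity": 0.1}
pci = product_circularity_index(scores, weights)  # -> 72.0 (up to float rounding)
```

With a shared dimension set and weighting scheme, two products can then be benchmarked directly on the same 0-100% scale, which is the comparison use case the abstract describes.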

Keywords: circular economy, circular metrics, circularity assessment, circularity tool, sustainable product design, product circularity index

Procedia PDF Downloads 29
371 Use of Extended Conversation to Boost Vocabulary Knowledge and Soft Skills in English for Employment Classes

Authors: James G. Matthew, Seonmin Huh, Frank X. Bennett

Abstract:

English for Specific Purposes (ESP) aims to equip learners with the necessary English language skills. Many ESP programs address language skills for job performance, including reading job-related documents and oral proficiency. Within ESP is English for Occupational Purposes (EOP), which centers on developing communicative competence for the globalized workplace. Many ESP and EOP courses lack the content needed to help students progress at work, resulting in the need to create lexical compilations for different professions. It is important to teach communicative competence and soft skills for real job-related problem situations, and to address the complexities of the real world, to help students succeed in their professions. ESP and EOP research therefore tries to balance profession-specific educational content with international multi-disciplinary language skills for the globalized workforce. The current study builds upon the existing discussion by developing pedagogy to assist students in their careers through a strong practical command of relevant English vocabulary. Our research question focuses on the pedagogy two professors incorporated in their English for employment courses. The current study is a qualitative case study of modes of teaching delivery for EOP in South Korea. Two foreign professors teaching at two different universities in South Korea volunteered for the study to explore their teaching practices. Both professors' curriculums included employment-related concept vocabulary, business presentations, CV/resume and cover letter preparation, and job interview preparation. All pre-made recorded video lectures, live online class sessions with students, teachers' lesson plans, teachers' class materials, students' assignments, and midterm and final video conferences were collected for data analysis. The study then focused on unpacking representative patterns in the teaching methods.
The professors used their strengths as native speakers to extend class discussion from narrow, restricted conversations to broader opportunities for students to practice authentic English conversation. The teaching method used three main steps to extend the conversation. Firstly, students were taught concept vocabulary. Secondly, the vocabulary was combined in speaking activities in which students had to solve scenarios and were required to expand on the given word forms and language expressions. Lastly, the students had conversations in English using the language learnt. The conversations observed in both classes were authentic, expanded English communication, and this way of expanding concept vocabulary lessons into extended conversation was one representative pedagogical approach that both professors took. Extended English conversation, therefore, is crucial for EOP education.

Keywords: concept vocabulary, english as a foreign language, english for employment, extended conversation

Procedia PDF Downloads 92
370 Co-Smoldered Digestate Ash as Additive for Anaerobic Digestion of Berry Fruit Waste: Stability and Enhanced Production Rate

Authors: Arinze Ezieke, Antonio Serrano, William Clarke, Denys Villa-Gomez

Abstract:

Berry cultivation results in the discharge of putrescible solid waste of high organic strength, which potentially contributes to environmental degradation, making it imperative to assess options for its complete management. Anaerobic digestion (AD) could be an ideal option when the target is energy generation; however, due to the high carbohydrate composition of berry fruit waste (BFW), the technology could be limited by its high alkalinity requirement, which suggests dosing of additives such as buffers and trace element supplements. Overcoming this limitation in an economically viable way could entail replacing synthetic additives with a recycled by-product. Consequently, ash from the co-smoldering of high-COD AD digestate and coconut coir could be a promising material to enhance the AD of BFW, given its characteristically high pH, alkalinity, and metal concentrations, which are typical of synthetic additives. Therefore, the aim of the research was to evaluate the stability and process performance of the AD of BFW when ash from co-smoldered digestate and coir is supplemented as an alkalinity and trace element (TE) source. A series of batch experiments was performed to ascertain the necessity of alkalinity addition and to see whether the alkalinity and metals in the co-smoldered digestate ash can provide the necessary buffer and TEs for AD of BFW. Triplicate assays were performed in batch systems at an inoculum-to-substrate (I/S) ratio of 2 (on a VS basis), using serum bottles (160 mL) sealed and placed in a heated room (35±0.5 °C) after creating anaerobic conditions. The control experiment contained inoculum and substrate only, while the assays for the optimal total alkalinity concentration and for TEs additionally contained NaHCO3. Total alkalinity concentration refers to the alkalinity of the inoculum and the additives.
The alkalinity and TE potential of the ash were evaluated by supplementing ash (22.574 g/kg) with a total alkalinity concentration equivalent to the pre-determined optimum from NaHCO3, and by dosing ash (0.012-7.574 g/kg) providing varying concentrations of specific essential TEs (Co, Fe, Ni, Se), respectively. The results showed a stable process under all examined conditions. Supplementation of NaHCO3 equivalent to 745 mg CaCO3/L resulted in an optimum TAC of 2000 mg CaCO3/L. An equivalent ash supplementation of 22.574 g/kg achieved this pre-determined optimum total alkalinity concentration, resulting in a stable process with a 92% increase in the methane production rate (323 versus 168 mL CH4/(gVS.d)) but a 36% reduction in cumulative methane production (103 versus 161 mL CH4/gVS). Addition of ash at incremental dosages as a TE source resulted in a reduction in cumulative methane production, with the highest dosage of 7.574 g/kg having the largest effect of -23.5%; however, the seemingly immediate bioavailability of TEs at this high dosage allowed a +15% increase in the methane production rate. With an increased methane production rate, the results demonstrated that the ash at high dosages could be an effective supplementary material for either a buffered or an unbuffered BFW AD system.
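The percentage changes quoted above follow directly from the reported raw figures; a minimal check:

```python
def percent_change(new, old):
    """Relative change of `new` versus `old`, in percent."""
    return 100 * (new - old) / old

# Figures reported in the abstract
rate_change = percent_change(323, 168)        # methane production rate, mL CH4/(gVS.d)
cumulative_change = percent_change(103, 161)  # cumulative methane, mL CH4/gVS
print(round(rate_change))        # -> 92  (the reported 92% increase)
print(round(cumulative_change))  # -> -36 (the reported 36% reduction)
```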

Keywords: anaerobic digestion, alkalinity, co-smoldered digestate ash, trace elements

Procedia PDF Downloads 122
369 Improving Online Learning Engagement through a Kid-Teach-Kid Approach for High School Students during the Pandemic

Authors: Alexander Huang

Abstract:

Online learning sessions have become an indispensable complement to in-classroom learning sessions in the past two years due to the emergence of Covid-19. Due to social distancing requirements, many courses and interaction-intensive sessions, ranging from music classes to debate camps, have moved online. However, online learning poses a significant challenge to engaging students effectively during learning sessions. To resolve this problem, Project PWR, a non-profit organization formed by high school students, developed an online kid-teach-kid learning environment to boost students' learning interests and further improve their engagement during online learning. Fundamentally, the kid-teach-kid learning model creates an affinity space that forms learning groups, where like-minded peers can learn and teach their interests. Taking the teacher role can also help a kid identify the instructional task and set the rules and procedures for the activities. The approach also structures initial discussions to reveal a range of ideas, similar experiences, thinking processes, and language use, and lowers the student-to-teacher ratio, all of which enrich the online learning experience for upcoming lessons. In such a manner, a kid can practice both the teacher role and the student role, accumulating experience in how to convey ideas and questions over an online session more efficiently and effectively. In this research work, we conducted two case studies involving a 3D-Design course and a Speech and Debate course taught by high school kids. Through Project PWR, a kid first needs to design the course syllabus based on a provided template to become a student-teacher. Then, the Project PWR academic committee evaluates the syllabus and offers comments and suggestions for changes. Upon approval of a syllabus, an experienced volunteer adult mentor is assigned to interview the student-teacher and monitor the lectures' progress.
Student-teachers construct a comprehensive final evaluation for their students, which they grade at the end of the course. Moreover, each course requires midterm and final evaluations, conducted through survey replies provided by students, to assess the student-teacher's performance. The uniqueness of Project PWR lies in its established kid-teach-kid affinity space. Our research results showed that Project PWR could create a closed-loop system where a student can help a teacher improve and vice versa, thus improving overall student engagement. As a result, Project PWR's approach can train teachers and students to become better online learners and give them a solid understanding of what to prepare for and what to expect from future online classes. Through Project PWR, the kid-teach-kid learning model can significantly improve students' engagement in online courses, effectively supplementing the traditional teacher-centric model that the Covid-19 pandemic has impacted substantially. Project PWR enables kids to share their interests and bond with one another, making the online learning environment effective and promoting positive and effective personal online one-on-one interactions.

Keywords: kid-teach-kid, affinity space, online learning, engagement, student-teacher

Procedia PDF Downloads 142
368 A Sustainable Training and Feedback Model for Developing the Teaching Capabilities of Sessional Academic Staff

Authors: Nirmani Wijenayake, Louise Lutze-Mann, Lucy Jo, John Wilson, Vivian Yeung, Dean Lovett, Kim Snepvangers

Abstract:

Sessional academic staff at universities have the most influence and impact on student learning, engagement, and experience as they have the most direct contact with undergraduate students. A blended technology-enhanced program was created for the development and support of sessional staff to ensure adequate training is provided to deliver quality educational outcomes for the students. This program combines innovative mixed media educational modules, a peer-driven support forum, and face-to-face workshops to provide a comprehensive training and support package for staff. Additionally, the program encourages the development of learning communities and peer mentoring among the sessional staff to enhance their support system. In 2018, the program was piloted on 100 sessional staff in the School of Biotechnology and Biomolecular Sciences to evaluate the effectiveness of this model. As part of the program, rotoscope animations were developed to showcase ‘typical’ interactions between staff and students. These were designed around communication, confidence building, consistency in grading, feedback, diversity awareness, and mental health and wellbeing. When surveyed, 86% of sessional staff found these animations to be helpful in their teaching. An online platform (Moodle) was set up to disseminate educational resources and teaching tips, to host a discussion forum for peer-to-peer communication and to increase critical thinking and problem-solving skills through scenario-based lessons. The learning analytics from these lessons were essential in identifying difficulties faced by sessional staff to further develop supporting workshops to improve outcomes related to teaching. The face-to-face professional development workshops were run by expert guest speakers on topics such as cultural diversity, stress and anxiety, LGBTIQ and student engagement. 
All attendees of the workshops found them useful, and 88% said they felt the workshops increased interaction with their peers and built a sense of community. The final component of the program was to use an adaptive e-learning platform to gather feedback from the students on sessional staff teaching twice during the semester. The initial feedback provides sessional staff with enough time to reflect on their teaching and, if necessary, adjust their performance to improve the student experience. The feedback from students and the sessional staff on this model has been extremely positive. The training equips the sessional staff with knowledge and insights which can provide students with an exceptional learning environment. This program is designed in a flexible and scalable manner so that other faculties or institutions could adapt components for their own training. It is anticipated that the training and support will help to build the next generation of educators, who will directly impact the educational experience of students.

Keywords: designing effective instruction, enhancing student learning, implementing effective strategies, professional development

Procedia PDF Downloads 128
367 Synthetic Method of Contextual Knowledge Extraction

Authors: Olga Kononova, Sergey Lyapin

Abstract:

The global information society requires transparency and reliability of data, as well as the ability to manage information resources independently: in particular, to search, analyze, and evaluate information, thereby obtaining new expertise. Moreover, it is the satisfaction of society's information needs that increases the efficiency of enterprise management and public administration. The study of structurally organized thematic and semantic contexts of different types, automatically extracted from unstructured data, is one of the important tasks for the application of information technologies in education, science, culture, governance, and business. The objectives of this study are the typologization of contextual knowledge and the selection or creation of effective tools for extracting and analyzing contextual knowledge. Explication of various kinds and forms of contextual knowledge involves the development and use of full-text search information systems. For implementation purposes, the authors use the services of the e-library 'Humanitariana', such as contextual search, different types of queries (paragraph-oriented query, frequency-ranked query), and automatic extraction of knowledge from scientific texts. The multifunctional e-library 'Humanitariana' is implemented in an Internet architecture in a WWS configuration (Web browser / Web server / SQL server). An advantage of using 'Humanitariana' is the possibility of combining the resources of several organizations. Scholars and research groups may work in local network mode and in distributed IT environments, with the ability to access the server resources of any participating organization. The paper discusses some specific cases of contextual knowledge explication using the e-library services and focuses on the possibilities of new types of contextual knowledge. The experimental research base consists of scientific texts about 'e-government' and 'computer games'.
An analysis of trends in the subject-themed texts allowed the authors to propose a content analysis methodology that combines full-text search with automatic construction of a 'terminogramma' and expert analysis of the selected contexts. A 'terminogramma' is laid out as a table containing a column with a frequency-ranked list of words (nouns), as well as columns indicating the absolute frequency (count) and the relative frequency of occurrence of each word (in %). The analysis of the 'e-government' materials showed that the state takes a dominant position in the processes of electronic interaction between the authorities and society in modern Russia. The media attributed the main role in these processes to the government, which provided public services through specialized portals. Factor analysis revealed two factors statistically describing the terms used: human interaction (the user) and the state (government, processes organizer); interaction management (public officer, processes performer) and technology (infrastructure). Isolation of these factors will lead to changes in the model of electronic interaction between government and society. The study also identified the dominant social problems and the prevalence of different categories of subjects of computer gaming in scientific papers from 2005 to 2015. Several types of contextual knowledge were thus identified: micro context; macro context; dynamic context; thematic collection of queries (interactive contextual knowledge expanding the composition of e-library information resources); multimodal context (functional integration of iconographic and full-text resources through a hybrid quasi-semantic search algorithm). Further studies can be pursued both by expanding the resource base on which they are conducted and by developing appropriate tools.
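A 'terminogramma'-style table of the kind described above can be sketched from token counts. The tokens below are invented for illustration, and the noun filtering the method calls for (which would require a POS tagger) is omitted.

```python
from collections import Counter

def terminogramma(tokens):
    """Frequency-ranked word list with absolute counts and relative
    frequencies (in %), in the spirit of the 'terminogramma' table.
    """
    counts = Counter(tokens)
    total = sum(counts.values())
    return [(word, n, round(100 * n / total, 2))
            for word, n in counts.most_common()]

# Invented tokens; a real run would use nouns extracted from corpus texts
tokens = ["government", "service", "government", "portal",
          "service", "government", "citizen"]
for word, absolute, relative in terminogramma(tokens):
    print(word, absolute, relative)
# government 3 42.86
# service 2 28.57
# ...
```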

Keywords: contextual knowledge, contextual search, e-library services, frequency-ranked query, paragraph-oriented query, technologies of the contextual knowledge extraction

Procedia PDF Downloads 359
366 Platform Virtual for Joint Amplitude Measurement Based in MEMS

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez

Abstract:

Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. MC systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment, and medicine, to high-level competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents a technological platform based on inertial and magnetic sensors, intended for joint amplitude monitoring and telerehabilitation processes, with an efficient compromise between cost and technical considerations. The platform offers high social impact possibilities by making telerehabilitation accessible to large population groups in marginal socio-economic sectors, especially in underdeveloped countries where, unlike in developed countries, specialists are scarce and high technology is unavailable or nonexistent. The platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to provide a diagnosis service over the web or other available communication networks. The amplitude information is generated by the sensors and then transferred to a computing device with interfaces that make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the virtual platform showed a good fit to the respective reference system. Analyzing the robotic arm results (estimation errors RMSE 1 = 2.12° and RMSE 2 = 2.28°), it can be observed that during arm motion in either direction the estimation error is negligible; in fact, error appears only during direction reversal, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay which acts as a first-order filter, attenuating signals at large acceleration values, as is the case for a change in the direction of motion.
A damped response of the virtual platform can be seen in other images, where error analysis shows that amplitude is underestimated at maximum amplitude and overestimated at minimum amplitude. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost-quality and precision-accessibility trade-offs optimized. These characteristics, achieved by efficiently using the state of the art in accessible generic sensor and hardware technology, together with adequate software for capture, transmission, analysis, and visualization, provide the capacity to offer good telerehabilitation services, reaching large, often marginalized populations where technologies and specialists are not available but basic communication networks are accessible.
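The RMSE figures quoted above compare estimated joint angles against a reference system. A minimal sketch of that computation follows; the angle samples are hypothetical, not data from the study.

```python
import math

def rmse(estimates, reference):
    """Root-mean-square error between estimated and reference joint angles."""
    assert len(estimates) == len(reference)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, reference))
                     / len(estimates))

# Hypothetical joint-angle samples (degrees): sensor estimate vs. reference
reference = [0.0, 15.0, 30.0, 45.0, 60.0, 45.0, 30.0, 15.0]
estimate = [0.5, 14.2, 31.1, 43.9, 62.3, 44.1, 28.8, 15.6]
print(round(rmse(estimate, reference), 2))  # -> 1.18 (degrees)
```

Errors concentrated at the motion extremes, as the abstract reports, would show up as the largest per-sample residuals at the maximum and minimum amplitude samples.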

Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation

Procedia PDF Downloads 259
365 Cement Matrix Obtained with Recycled Aggregates and Micro/Nanosilica Admixtures

Authors: C. Mazilu, D. P. Georgescu, A. Apostu, R. Deju

Abstract:

Cement mortars and concretes are among the most widely used construction materials in the world, with global cement production expected to grow to approx. 5 billion tons by 2030. However, cement is an energy-intensive material, and the cement industry is responsible for approx. 7% of the world's CO2 emissions. Also, natural aggregates are non-renewable, exhaustible resources which must be used efficiently. One way to reduce the negative impact on the environment is the use of additional hydraulically active materials as a partial substitute for cement in mortars and concretes, and/or the use of recycled concrete aggregates (RCA) for the recovery of construction waste, in accordance with EU Directive 2018/851. One of the most effective hydraulically active admixtures is microsilica and, more recently, with technological development at the nanometric scale, nanosilica. Studies carried out in recent years have shown that the introduction of SiO2 nanoparticles into the cement matrix improves its properties even compared to microsilica. This is due to the very small size of the nanosilica particles (<100 nm) and their very large specific surface, which helps to accelerate cement hydration and acts as a nucleating agent to generate even more calcium silicate hydrate, which densifies and compacts the structure. Cementitious compositions containing recycled concrete aggregates present, in general, inferior properties compared to those obtained with natural aggregates. Depending on the degree of replacement of natural aggregate, the workability of mortars and concretes with RCA decreases, mechanical strengths decrease, and drying shrinkage increases; all determined, in particular, by the old mortar attached to the original aggregate in the RCA, which makes its porosity high and causes the mixture to require more water for preparation.
The present study aims to use micro- and nanosilica to increase the performance of mortars and concretes obtained with RCA. The research focused on two types of cementitious systems: a special mortar composition used for encapsulating Low Level radioactive Waste (LLW), and a structural concrete composition, class C30/37, with the combination of exposure classes XC4+XF1 and consistence (slump) class S4. The mortar was made with 100% recycled aggregate in the 0-5 mm fraction and, in the case of the concrete, 30% recycled aggregate was used for the 4-8 and 8-16 mm fractions, according to EN 206, Annex E. The recycled aggregate was obtained from a concrete made specially for this study, which after 28 days was crushed in a Retsch jaw crusher and then separated into granulometric fractions by sieving. Cement was partially replaced progressively, in the case of the mortar composition, with microsilica (3, 6, 9, 12, 15 wt.%), nanosilica (0.75, 1.5, 2.25 wt.%), and mixtures of micro- and nanosilica. The combination of silica that was optimal from the point of view of mechanical strength was later also used in the concrete composition. For the chosen cementitious compositions, the influence of micro- and/or nanosilica on the properties in the fresh state (workability, rheological characteristics) and the hardened state (mechanical strength, water absorption, freeze-thaw resistance, etc.) is highlighted.

Keywords: cement, recycled concrete aggregates, micro/nanosilica, durability

Procedia PDF Downloads 68
364 Identification and Understanding of Colloidal Destabilization Mechanisms in Geothermal Processes

Authors: Ines Raies, Eric Kohler, Marc Fleury, Béatrice Ledésert

Abstract:

In this work, the impact of clay minerals on the formation damage of sandstone reservoirs is studied to provide a better understanding of deep geothermal reservoir permeability reduction caused by fine-particle dispersion and migration. In some situations, despite the presence of filters in the geothermal loop at the surface, particles smaller than the filter size (<1 µm) may surprisingly generate significant permeability reduction, affecting the overall performance of the geothermal system in the long term. Our study is carried out on cores from a Triassic reservoir in the Paris Basin (Feigneux, 60 km northeast of Paris). Since our first goal was to identify the clays responsible for clogging, a mineralogical characterization of these natural samples was carried out by coupling X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS). The results show that the studied stratigraphic interval contains mostly illite and chlorite particles. Moreover, the spatial arrangement of the clays in the rocks, as well as the morphology and size of the particles, suggests that illite is more easily mobilized than chlorite by the flow in the pore network. Based on these results, illite particles were prepared and used in core flooding experiments in order to better understand the factors leading to the aggregation and deposition of this type of clay particle in geothermal reservoirs under various physicochemical and hydrodynamic conditions. First, the stability of illite suspensions under geothermal conditions was investigated using different characterization techniques, including Dynamic Light Scattering (DLS) and Scanning Transmission Electron Microscopy (STEM). Various parameters such as the hydrodynamic radius (around 100 nm) and the morphology and surface area of the aggregates were measured. 
Then, core-flooding experiments were carried out using sand columns to mimic the permeability decline caused by the injection of illite-containing fluids into sandstone reservoirs. In particular, the effects of the ionic strength, temperature, particle concentration and flow rate of the injected fluid were investigated. When the ionic strength increases, a permeability decline of more than a factor of 2 could be observed at pore velocities representative of in-situ conditions. Further details of particle retention in the columns were obtained from Magnetic Resonance Imaging and X-ray tomography, showing that particle deposition is nonuniform along the column. It is clearly shown that very fine particles, as small as 100 nm, can generate significant permeability reduction under specific conditions in high-permeability porous media representative of the Triassic reservoirs of the Paris Basin. These retention mechanisms are explained within the general framework of the DLVO theory.
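The DLVO framework invoked above balances van der Waals attraction against electrical double-layer repulsion. As general background (a standard sphere-plate form from colloid textbooks, not an expression taken from the study), the total interaction energy of a particle of radius $a$ at separation $h$ can be written:

```latex
V_T(h) = V_{\mathrm{vdW}}(h) + V_{\mathrm{EDL}}(h),
\qquad
V_{\mathrm{vdW}}(h) = -\frac{A\,a}{6h},
\qquad
V_{\mathrm{EDL}}(h) = 64\pi\varepsilon a \left(\frac{k_B T}{z e}\right)^{2} \Gamma^{2}\, e^{-\kappa h},
\quad
\Gamma = \tanh\!\left(\frac{z e \psi_0}{4 k_B T}\right)
```

where $A$ is the Hamaker constant, $\psi_0$ the surface potential, and $\kappa^{-1}$ the Debye length. Since $\kappa \propto \sqrt{I}$, increasing the ionic strength $I$ compresses the double layer and lowers the repulsive barrier, consistent with the stronger particle retention observed at higher ionic strength.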

Keywords: geothermal energy, reinjection, clays, colloids, retention, porosity, permeability decline, clogging, characterization, XRD, SEM-EDS, STEM, DLS, NMR, core flooding experiments

Procedia PDF Downloads 177
363 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon caused by the difficulty of collecting training samples. Hence, many studies have developed feature selection methods, such as F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature, and it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths such that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also show the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as band weights. The smaller the bandwidth, the larger the weight of the band and the more important it is for classification. Hence, sorting the bands by these reciprocals in descending order gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset. 
All non-background samples were used to form the testing dataset. A support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracy increases dramatically when using only the first few features: the accuracies for feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84168) approximates the highest classification accuracy, 0.8795. Similar results were obtained for the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method first to determine a suitable feature subset for a specific purpose, and then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
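The band-weighting idea described in this abstract can be sketched in a short Python example. The kernel form follows the description (one bandwidth per band, within- versus between-class similarity), but the toy data, the exact score definition, and the use of random search in place of the authors' genetic algorithm are illustrative assumptions, not their implementation.

```python
import numpy as np

def rbf_kernel(X, Y, sigma):
    """Generalized RBF kernel with one bandwidth per feature (band)."""
    d = (X[:, None, :] - Y[None, :, :]) ** 2 / (2.0 * sigma ** 2)
    return np.exp(-d.sum(axis=2))

def separability(X, y, sigma):
    """Mean within-class similarity minus mean between-class similarity
    (higher means classes are compact and well separated in feature space)."""
    K = rbf_kernel(X, X, sigma)
    same = (y[:, None] == y[None, :])
    return K[same].mean() - K[~same].mean()

def rank_bands(X, y, n_iter=200, seed=0):
    """Random search over per-band bandwidths (a stand-in for the GA);
    bands are then ranked by the reciprocal of their optimal bandwidth."""
    rng = np.random.default_rng(seed)
    best_score, best_sigma = -np.inf, None
    for _ in range(n_iter):
        sigma = rng.uniform(0.1, 5.0, size=X.shape[1])
        s = separability(X, y, sigma)
        if s > best_score:
            best_score, best_sigma = s, sigma
    weights = 1.0 / best_sigma          # smaller bandwidth -> larger weight
    return np.argsort(weights)[::-1]    # descending importance

# toy data: band 0 separates the two classes, band 1 is pure noise
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(0, 0.1, (30, 2))])
X[30:, 0] += 3.0                        # shift class 1 along band 0 only
y = np.array([0] * 30 + [1] * 30)
order = rank_bands(X, y)
print(order)  # the discriminative band 0 should be ranked first
```

On this toy problem the search assigns a small bandwidth (large weight) to the discriminative band and a large bandwidth to the noise band, reproducing the ranking behaviour the abstract describes.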

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 265
362 Kinetic Evaluation of Sterically Hindered Amines under Partial Oxy-Combustion Conditions

Authors: Sara Camino, Fernando Vega, Mercedes Cano, Benito Navarrete, José A. Camino

Abstract:

Carbon capture and storage (CCS) technologies should play a relevant role in the transition towards low-carbon energy systems in the European Union by 2030. Partial oxy-combustion emerges as a promising CCS approach to mitigating anthropogenic CO₂ emissions. Its advantage with respect to other CCS technologies lies in the production of a flue gas with a higher CO₂ concentration than those provided by conventional air-firing processes. The presence of more CO₂ in the flue gas increases the driving force in the separation process and hence might lead to further reductions in the energy requirements of the overall CO₂ capture process. A more CO₂-concentrated flue gas should enhance CO₂ capture by chemical absorption in terms of both solvent kinetics and CO₂ cyclic capacity. Both affect the performance of the overall CO₂ absorption process by reducing the solvent flow-rate required for a given CO₂ removal efficiency. Lower solvent flow-rates decrease the reboiler duty during the regeneration stage and also reduce equipment size and pumping costs. Moreover, R&D activities in this field are focused on novel solvents and blends that provide lower CO₂ absorption enthalpies and therefore lower energy penalties associated with solvent regeneration. In this respect, sterically hindered amines are considered potential solvents for CO₂ capture: due to their molecular structure, they require little energy during the regeneration process. However, their absorption kinetics are slow, and they must be promoted by blending with faster solvents such as monoethanolamine (MEA) and piperazine (PZ). In this work, the kinetic behavior of two sterically hindered amines was studied under partial oxy-combustion conditions and compared with MEA. A lab-scale semi-batch reactor was used. The CO₂ composition of the synthetic flue gas varied from 15%v/v – conventional coal combustion – to 60%v/v – the maximum CO₂ concentration allowable for optimal partial oxy-combustion operation. 
The first solvent, 2-amino-2-methyl-1-propanol (AMP), showed a hybrid behavior with fast kinetics and a low enthalpy of CO₂ absorption. The second solvent was isophorone diamine (IF), which has a steric hindrance on one of its amino groups; its free amino group increases its cyclic capacity. In general, a higher CO₂ concentration in the flue gas accelerated the CO₂ absorption phenomena, producing higher CO₂ absorption rates. In addition, the CO₂ loading also reached higher values in the experiments using the more CO₂-concentrated flue gas. The steric hindrance gives this solvent a hybrid behavior, between fast and slow kinetic solvents. The kinetic rates observed in all experiments with AMP were higher than those of MEA but lower than those of IF. The kinetic enhancement experienced by AMP at high CO₂ concentration is slightly over 60%, compared with 70%-80% for IF. AMP also improved its CO₂ absorption capacity by 24.7% from 15%v/v to 60%v/v, almost double the improvement achieved by MEA. In the IF experiments, the CO₂ loading increased by around 10% from 15%v/v to 60%v/v CO₂, changing from 1.10 to 1.34 mole CO₂ per mole of solvent, an increase of more than 20%. This hybrid kinetic behavior makes AMP and IF promising solvents for partial oxy-combustion applications.
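The final IF loading figures quoted above can be checked directly against the stated relative increase:

```latex
\frac{1.34 - 1.10}{1.10} \approx 0.218 \;\Rightarrow\; \text{an increase of} \approx 21.8\%
```

which is consistent with the claim of "more than 20%".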

Keywords: absorption, carbon capture, partial oxy-combustion, solvent

Procedia PDF Downloads 190
361 Teaching Turn-Taking Rules and Pragmatic Principles to Empower EFL Students and Enhance Their Learning in Speaking Modules

Authors: O. F. Elkommos

Abstract:

Teaching and learning EFL speaking modules is one of the most challenging of the productive modules for both instructors and learners. In a student-centered, interactive, communicative language teaching approach, learners and instructors should be aware that the target language must be taught as/for communication. Students must be empowered with tools that work on more than one level of their communicative competence. Communicative learning needs a teaching and learning methodology that addresses this goal. Teaching turn-taking rules, pragmatic principles and speech acts will enhance students' sociolinguistic competence and strategic competence together with their discourse competence. Sociolinguistic competence entails mastering speech act conventions and illocutionary acts: refusing and agreeing/disagreeing; emotive acts like thanking, apologizing, inviting and offering; and directives like ordering, requesting, advising and hinting, among others. Strategic competence includes raising students' consciousness of the particular systemic turn-taking rules and organizing techniques: opening and closing a conversation, adjacency pairs, interrupting, back-channeling, asking for/giving an opinion, agreeing/disagreeing, and using natural fillers for pauses, gaps, speaker select, self-select, and silence, among others. With these, students will have the tools to manage a conversation. Students are given opportunities to experience natural language, not as mere extra student talking time but as empowerment through knowing and using the strategies. They will have the component items they need as well as the opportunity to communicate in the target language on topics of their interest and choice. This enhances students' communicative abilities. Available websites and textbooks now use one or more of these tools of turn-taking or pragmatics. These will support students' self-study in their independent learning hours. 
They will also serve as reinforcement practice in interactive e-learning activities. The students' target is to be able to communicate an intended meaning to an addressee who is, in turn, able to infer that intended meaning. The combination of these tools will encourage students to overcome the struggle over what to say, how to say it, and when to say it. Teaching the rules, principles and techniques is an awareness-raising method that engages students in activities leading to pragmatic discourse competence. The aim of this paper is to show how the suggested pragmatic model empowers students with tools and systems that support their learning. Supporting students with turn-taking rules and speech act theory, applying both to texts and practical analysis, and using them in speaking classes empowers students' pragmatic discourse competence and helps them understand language and its context. They become more spontaneous and ready to learn the discourse-pragmatic dimension of speaking techniques and suitable content. Students showed better performance and good motivation to learn. The model is therefore suggested for speaking modules in EFL classes.

Keywords: communicative competence, EFL, empowering learners, enhance learning, speech acts, teaching speaking, turn taking, learner centred, pragmatics

Procedia PDF Downloads 176
360 Corporate Social Responsibility and Corporate Reputation: A Bibliometric Analysis

Authors: Songdi Li, Louise Spry, Tony Woodall

Abstract:

Nowadays, corporate social responsibility (CSR) has become a buzzword, and more and more academics are devoting effort to CSR studies. It is believed that CSR can influence corporate reputation (CR), and many hold the favourable view that CSR leads to a positive CR. Specifically, CSR-related activities in the reputational context have been associated with excellent financial performance, value creation, etc. It is also argued that CSR and CR are two sides of the same coin; hence, to some extent, doing CSR is equal to establishing a good reputation. Still, there is no consensus on the CSR-CR relationship in the literature; thus, a systematic literature review is much needed. This research conducts a systematic literature review combining bibliometric and content analysis. Data were selected from English-language sources and academic journal articles only; keyword combinations were then applied to identify relevant sources. Data from Scopus and Web of Science (WoS) were gathered for bibliometric analysis. Scopus search results were saved in RIS and CSV formats, and WoS data were saved in TXT and CSV formats, in order to process the data in the Bibexcel software for further analysis, later visualised with the software VOSviewer. Content analysis was also applied to analyse the data clusters and the key articles. On the topic of CSR-CR, this literature review with bibliometric analysis makes four contributions. First, it develops a systematic study that quantitatively depicts the knowledge structure of CSR and CR by identifying terms closely related to CSR-CR (such as 'corporate governance') and clustering the subtopics that emerged in the co-citation analysis. Second, content analysis is performed to gain insight into the findings of the bibliometric analysis in the discussion section. 
It also highlights some insightful implications for the future research agenda: for example, a psychological link between CSR and CR is identified from the results, and emerging economies and qualitative research methods are new elements in the CSR-CR big picture. Third, a multidisciplinary perspective runs through the whole bibliometric mapping and the co-word and co-citation analyses; hence, this work builds an interdisciplinary structure that could lead to an integrated conceptual framework in the future. Finally, Scopus and WoS are compared and contrasted, and Scopus, which offers deeper and more comprehensive data, is suggested as the tool for future bibliometric studies. Overall, this paper fulfils its initial purposes and contributes to the literature. To the authors' best knowledge, this is the first literature review of CSR-CR research to apply both bibliometric analysis and content analysis; the paper therefore achieves methodological originality, and this dual approach brings the advantage of a comprehensive and semantic exploration of the CSR-CR area in a scientific and realistic manner. Admittedly, the work may contain subjective bias in the selection of search terms and papers; triangulation could reduce this bias to some degree.
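As a general illustration of the co-word step described above, counting keyword co-occurrences across article records reduces to a few lines. The toy records below are invented; the study itself worked from Scopus/WoS exports processed in Bibexcel and visualised in VOSviewer.

```python
from itertools import combinations
from collections import Counter

def coword_matrix(records):
    """Count pairwise keyword co-occurrences across article records.
    Each record is a list of author keywords, assumed already normalised."""
    pairs = Counter()
    for keywords in records:
        # each unordered keyword pair within one article counts once
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

# toy records standing in for exported keyword fields
records = [
    ["csr", "corporate reputation", "stakeholder theory"],
    ["csr", "corporate reputation", "financial performance"],
    ["csr", "corporate governance"],
]
pairs = coword_matrix(records)
print(pairs[("corporate reputation", "csr")])  # -> 2
```

The resulting pair counts are exactly what co-word mapping tools cluster and plot.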

Keywords: corporate social responsibility, corporate reputation, bibliometric analysis, software program

Procedia PDF Downloads 128
359 Microfabrication and Non-Invasive Imaging of Porous Osteogenic Structures Using Laser-Assisted Technologies

Authors: Irina Alexandra Paun, Mona Mihailescu, Marian Zamfirescu, Catalin Romeo Luculescu, Adriana Maria Acasandrei, Cosmin Catalin Mustaciosu, Roxana Cristina Popescu, Maria Dinescu

Abstract:

A major concern in bone tissue engineering is to develop complex 3D architectures that mimic the natural cell environment, facilitate cell growth in a defined manner and allow the flow transport of nutrients and metabolic waste. In particular, porous structures of controlled pore size and positioning are indispensable for growing human-like bone structures. Another concern is to monitor both the structures and the seeded cells with high spatial resolution and without interfering with the cells' natural environment. The present approach relies on laser-based technologies for fabricating porous biomimetic structures that support the growth of osteoblast-like cells and for their non-invasive 3D imaging. Specifically, the porous structures were built by two photon polymerization - direct writing (2PP_DW) of the commercially available photoresist IP-L780, using the Photonic Professional 3D lithography system. The structures consist of vertical tubes with micrometer-sized heights and diameters in a honeycomb-like spatial arrangement. These were fabricated by irradiating the IP-L780 photoresist with focused laser pulses with a wavelength centered at 780 nm, 120 fs pulse duration and 80 MHz repetition rate. The samples were precisely scanned in 3D by piezo stages, with coarse positioning done by XY motorized stages. The scanning path was programmed through a writing-language (GWL) script developed by Nanoscribe. Following laser irradiation, the unexposed regions of the photoresist were washed out by immersing the samples in Propylene Glycol Monomethyl Ether Acetate (PGMEA). The porous structures were seeded with osteoblast-like MG-63 cells, and their osteogenic potential was tested in vitro. The cell-seeded structures were analyzed in 3D using digital holographic microscopy (DHM). DHM is a marker-free, high-spatial-resolution imaging tool in which hologram acquisition is performed non-invasively, i.e., 
without interfering with the cells' natural environment. Following hologram recording, a digital algorithm provided a 3D image of the sample, as well as information about its refractive index, which is correlated with the intracellular content. The axial resolution of the images went down to the nanoscale, while the temporal scales ranged from milliseconds up to hours. Hologram recording did not involve sample scanning, and the whole image was available in one frame covering a field of view of over 200 μm. Processing the digital holograms provided 3D quantitative information on the porous structures and allowed a quantitative analysis of the cellular response with respect to the porous architectures. The cellular shape and dimensions were found to be influenced by the underlying micro-relief. Furthermore, the intracellular content gave evidence of the beneficial role of the porous structures in promoting osteoblast differentiation. In all, the proposed laser-based protocol emerges as a promising tool for the fabrication and non-invasive imaging of porous constructs for bone tissue engineering. Acknowledgments: This work was supported by a grant of the Romanian Authority for Scientific Research and Innovation, CNCS-UEFISCDI, project PN-II-RU-TE-2014-4-2534 (contract 97 from 01/10/2015) and by UEFISCDI PN-II-PT-PCCA no. 6/2012. A part of this work was performed in the CETAL laser facility, supported by the National Program PN 16 47 - LAPLAS IV.

Keywords: biomimetic, holography, laser, osteoblast, two photon polymerization

Procedia PDF Downloads 273
358 Integrated Care on Chronic Diseases in Asia-Pacific Countries

Authors: Chang Liu, Hanwen Zhang, Vikash Sharma, Don Eliseo Lucerno-Prisno III, Emmanuel Yujuico, Maulik Chokshi, Prashanthi Krishnakumar, Bach Xuan Tran, Giang Thu Vu, Kamilla Anna Pinter, Shenglan Tang

Abstract:

Background and Aims: Globally, many health systems focus on hospital-based healthcare models targeting acute care and disease treatment, which are not effective in addressing the challenges of ageing populations, chronic conditions, multi-morbidity, and increasingly unhealthy lifestyles. Recently, integrated care programs for chronic diseases have been developed, piloted, and implemented to meet these challenges. However, integrated care programs in the Asia-Pacific region vary in their level of integration, from linkage to coordination to full integration. This study aims to identify and analyze existing cases of integrated care in the Asia-Pacific region and to identify their facilitators and barriers in order to improve existing cases and inform future ones. Methods: This is a comparative study combining desk-based research and key informant interviews. The selected countries represent a good mix of lower-middle-income countries (the Philippines, India, Vietnam, and Fiji), an upper-middle-income country (China), and a high-income country (Singapore) in the Asia-Pacific region. Existing integrated care programs were identified through a scoping review. The trigger, history, general design, beneficiaries, and objectives of each program were summarized, along with the barriers and facilitators of integrated care identified in key informant interviews. Representative cases in each country were selected and comprehensively analyzed through deep-dive case studies. Results: A total of 87 existing integrated care programs for chronic diseases were found across all countries: 44 in China, 21 in Singapore, 12 in India, 5 in Vietnam, 4 in the Philippines, and 1 in Fiji. Nine representative cases of integrated care were selected for in-depth description and analysis: 2 each in China, the Philippines, and Vietnam, and 1 each in Singapore, India, and Fiji. 
Population ageing and the rising chronic disease burden were identified as key drivers in almost all six countries. Among them, Singapore has the longest history of integrated care, followed by Fiji, the Philippines, and China, while India and Vietnam have shorter histories. Incentives, technologies, education, and performance evaluation will be crucial for developing strategies to implement future programs and improve existing ones. Conclusion: Integrated care is important for addressing the challenges surrounding the delivery of long-term care. To date, there is an increasing trend of integrated care programs for chronic diseases in the Asia-Pacific region, and all six countries in our study have set integrated care as a direction for their health system transformation.

Keywords: integrated healthcare, integrated care delivery, chronic diseases, Asia-Pacific region

Procedia PDF Downloads 135
357 Comparative Appraisal of Polymeric Matrices Synthesis and Characterization Based on Maleic versus Itaconic Anhydride and 3,9-Divinyl-2,4,8,10-Tetraoxaspiro[5.5]-Undecane

Authors: Iordana Neamtu, Aurica P. Chiriac, Loredana E. Nita, Mihai Asandulesa, Elena Butnaru, Nita Tudorachi, Alina Diaconu

Abstract:

In the last decade, the attention of many researchers has focused on the synthesis of innovative "intelligent" copolymer structures with great potential for different uses. This considerable scientific interest is stimulated by the possibility of significant improvements in the physical, mechanical, thermal and other important specific properties of these materials. Functionalizing a polymer during synthesis, by designing a suitable composition with the desired properties and applications, is recognized as a valuable tool. This work presents a comparative study of the properties of the new copolymers poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane) and poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane), obtained by radical polymerization in dioxane using 2,2′-azobis(2-methylpropionitrile) as the free-radical initiator. The comonomers are capable of generating special effects, for example network formation, biodegradability and biocompatibility, gel formation capacity, binding properties, amphiphilicity, good oxidative and thermal stability, good film formation, and temperature and pH sensitivity. Maleic anhydride (MA) and its isostructural analog itaconic anhydride (ITA), as polyfunctional monomers, are widely used in the synthesis of reactive macromolecules with linear, hyperbranched and self-assembled structures to prepare high-performance engineering, bioengineering and nanoengineering materials. The incorporation of spiroacetal groups in polymer structures improves solubility and adhesive properties, induces good oxidative and thermal stability, and yields good fibers or films with good flexibility and tensile strength. The spiroacetal rings also induce interactions at the ether oxygen, such as hydrogen bonds or coordinate bonds with other functional groups, determining bulkiness and stiffness. 
The synthesized copolymers are analyzed by DSC, oscillatory and rotational rheological measurements and dielectric spectroscopy, with the aim of highlighting the heating behavior and the solution viscosity as a function of shear rate and temperature, and of investigating the relaxation processes and the motion of functional groups present in the side chain around the main chain or the bonds of the side chain. Acknowledgments: This work was financially supported by the grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-132/2014 "Magnetic biomimetic supports as alternative strategy for bone tissue engineering and repair" (MAGBIOTISS).

Keywords: poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane), poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane), DSC, oscillatory and rotational rheological analysis, dielectric spectroscopy

Procedia PDF Downloads 227
356 Academic Staff Development: A Lever to Address the Challenges of the 21st Century University Classroom

Authors: Severino Machingambi

Abstract:

Most academics entering higher education as lecturers in South Africa do not have qualifications in education or teaching. This creates serious problems, since they are not sufficiently equipped with the pedagogical approaches and theories that should inform their facilitation of learning. This, arguably, is one of the reasons why higher education institutions are experiencing high student failure rates. To mitigate this problem, it is critical that higher education institutions devise internal academic staff development programmes to equip academics with pedagogical skills and competencies and so enhance the quality of student learning. This paper reports on how the Teaching and Learning Development Centre of a university of technology used design-based research methodology to conceptualise and implement an academic staff development programme for new academics. This approach revolves around designing, testing and refining an educational intervention; design-based research is an important methodology for understanding how, when, and why educational innovations work in practice. The need for a professional development course arose from the fact that most academics at the university did not have teaching qualifications, and many were employed straight from industry with little understanding of pedagogical approaches. This paper examines three key phases of the programme: the preliminary phase, the teaching experiment and the retrospective analysis. The preliminary phase is the stage in which problem identification takes place. The problem this research sought to address relates to the unsatisfactory academic performance of the majority of students in the institution; it was therefore hypothesized that the problem could be dealt with by professionalising new academics through an academic staff development programme. 
The teaching experiment phase afforded researchers and participants the opportunity to test and refine the proposed intervention and the design principles upon which it was based. It revolved around testing the new academics' professional development programme and created a platform for researchers and academics to experiment with various activities and instructional strategies, such as case studies, observations, discussions and portfolio building. The teaching experiment phase was followed by the retrospective analysis stage, in which the research team looked back and tried to give a trustworthy account of the teaching and learning process that had taken place. A questionnaire and focus group discussions were used to collect data from participants, which helped to evaluate the programme and its implementation. One finding of this study was that academics joining a university need an induction programme that introduces them to the discourse of teaching and learning. The study also revealed that existing academics can be placed on formal study programmes through which they acquire educational qualifications that equip them with useful classroom discourses. The study therefore concludes that new and existing academics in universities should be supported through induction programmes and placement on formal studies in teaching and learning so that they are capacitated as facilitators of learning.

Keywords: academic staff, pedagogy, programme, staff development

Procedia PDF Downloads 133
355 Creative Resolutions to Intercultural Conflicts: The Joint Effects of International Experience and Cultural Intelligence

Authors: Thomas Rockstuhl, Soon Ang, Kok Yee Ng, Linn Van Dyne

Abstract:

Intercultural interactions are often challenging and fraught with conflict. To shed light on how to interact effectively across cultures, academics and practitioners alike have advanced a plethora of intercultural competence models. However, the majority of this work has emphasized distal outcomes, such as job performance and cultural adjustment, rather than proximal outcomes, such as how individuals resolve inevitable intercultural conflicts. As a consequence, the processes by which individuals negotiate challenging intercultural conflicts are not well understood. The current study advances theorizing on intercultural conflict resolution by exploring its antecedents. To this end, we examine creativity – the generation of novel and useful ideas – in the context of resolving cultural conflicts in intercultural interactions. Based on the dual-identity theory of creativity, we propose that individuals with greater international experience will display greater creativity and that this relationship is accentuated by the individual's cultural intelligence. Two studies test these hypotheses. The first comprises 84 senior university students drawn from an international organizational behavior course; the second replicates the findings in a sample of 89 executives from eleven countries. Participants in both studies provided protocols of their strategies for resolving two intercultural conflicts, as depicted in two multimedia vignettes of challenging intercultural work-related interactions. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded all strategies for novelty and usefulness following scoring procedures for creativity tasks. Participants also completed online surveys of demographic background information, including their international experience and cultural intelligence. 
Hierarchical linear modeling showed that, surprisingly, while international experience is positively associated with usefulness, it is unrelated to novelty. Further, a person's cultural intelligence strengthens the positive effect of international experience on usefulness and mitigates the effect of international experience on novelty. Our findings offer an important extension to the dual-identity theory of creativity by identifying cultural intelligence as an individual difference moderator that qualifies the relationship between international experience and creative conflict resolution. In terms of novelty, individuals higher in cultural intelligence seem less susceptible to the rigidity effects of international experience. Perhaps they are more capable of assessing which aspects of culture are relevant and of applying relevant experiences when they brainstorm novel ideas. For usefulness, individuals high in cultural intelligence are better able to leverage their international experience to assess the viability of their ideas, because their richer and more organized cultural knowledge structure allows them to evaluate possible options more efficiently and accurately. In sum, our findings suggest that cultural intelligence is an important and promising intercultural competence that fosters creative resolutions to intercultural conflicts. We hope that our findings stimulate future research on creativity and conflict resolution in intercultural contexts.
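The moderation hypothesis tested above (cultural intelligence strengthening the effect of international experience on usefulness) amounts to estimating an interaction term in a regression model. The following minimal sketch, using simulated data and illustrative effect sizes rather than the study's actual data, shows how such an interaction coefficient would be estimated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical standardized predictors: international experience and CQ score
intl_exp = rng.normal(0.0, 1.0, n)
cq = rng.normal(0.0, 1.0, n)

# Simulated "usefulness" ratings with a positive experience x CQ
# interaction (the moderation hypothesis); coefficients are illustrative
usefulness = (0.4 * intl_exp + 0.2 * cq
              + 0.3 * intl_exp * cq
              + rng.normal(0.0, 0.5, n))

# Design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), intl_exp, cq, intl_exp * cq])
beta, *_ = np.linalg.lstsq(X, usefulness, rcond=None)

print({name: round(b, 2)
       for name, b in zip(["const", "exp", "cq", "exp_x_cq"], beta)})
```

A positive, significant `exp_x_cq` coefficient would indicate that the experience-usefulness slope grows with cultural intelligence; the study itself used hierarchical linear modeling, which additionally accounts for the nesting of repeated conflict protocols within participants.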

Keywords: cultural intelligence, intercultural conflict, intercultural creativity, international experience

Procedia PDF Downloads 148
354 Lignin Valorization: Techno-Economic Analysis of Three Lignin Conversion Routes

Authors: Iris Vural Gursel, Andrea Ramirez

Abstract:

Effective utilization of lignin is an important means for developing economically profitable biorefineries. Current literature suggests that large amounts of lignin will become available in second-generation biorefineries. New conversion technologies will, therefore, be needed to carry lignin transformation well beyond combustion for energy towards high-value products such as chemicals and transportation fuels. In recent years, significant progress on catalysis has been made to improve the transformation of lignin, and new catalytic processes are emerging. In this work, a techno-economic assessment of two of these novel conversion routes and a comparison with the more established lignin pyrolysis route were made. The aim is to provide insights into the potential performance and hotspots in order to guide experimental research and ease commercialization by identifying cost drivers, strengths, and challenges early. The lignin conversion routes selected for detailed assessment were: (non-catalytic) lignin pyrolysis as the benchmark, direct hydrodeoxygenation (HDO) of lignin, and hydrothermal lignin depolymerisation. The products generated were mixed oxygenated aromatic monomers (MOAMON), light organics, heavy organics, and char. For the technical assessment, a base design was developed, followed by process modelling in Aspen using experimental yields. A design capacity of 200 kt/year lignin feed was chosen, equivalent to a 1 Mt/year lignocellulosic biorefinery. The downstream equipment was modelled to achieve the separation of the defined product streams. For determining the external utility requirement, heat integration was considered, and where possible, gases were combusted to cover the heating demand. The models were used to generate the necessary data on material and energy flows. Next, an economic assessment was carried out by estimating operating and capital costs. Return on investment (ROI) and payback period (PBP) were used as indicators.
The results of the process modelling indicate that a series of separation steps is required. The downstream processing was found to be especially demanding in the hydrothermal upgrading process due to the presence of a significant amount of unconverted lignin (34%) and water. External utility requirements were also found to be high. Owing to the complex separations, the hydrothermal upgrading process showed the highest capital cost (50 M€ more than the benchmark), whereas operating costs were highest for the direct HDO process (20 M€/year more than the benchmark) due to the use of hydrogen. Because of its high yields of valuable heavy organics (32%) and MOAMON (24%), the direct HDO process showed the highest ROI (12%) and the shortest PBP (5 years). This process was found to be feasible, with a positive net present value, although the result is very sensitive to the prices used in the calculation. The assessments at this stage are associated with large uncertainties. Nevertheless, they are useful for comparing alternatives and identifying whether a given process merits further consideration. Among the three processes investigated here, the direct HDO process was found to be the most promising.
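The two economic indicators used above can be computed directly from annual cash flows and capital investment. The sketch below uses simple undiscounted definitions and purely hypothetical cost figures, not the study's actual Aspen-derived data:

```python
def roi_and_payback(capex, annual_revenue, annual_opex):
    """Simple (undiscounted) economic indicators for a process design.

    ROI = annual net cash flow / total capital investment
    PBP = total capital investment / annual net cash flow
    """
    net_cash_flow = annual_revenue - annual_opex
    roi = net_cash_flow / capex
    pbp = capex / net_cash_flow
    return roi, pbp

# Hypothetical figures in M EUR for illustration only
roi, pbp = roi_and_payback(capex=100.0, annual_revenue=85.0, annual_opex=65.0)
print(f"ROI = {roi:.0%}, payback = {pbp:.1f} years")
```

Note that the study's reported ROI and PBP values may be based on more detailed conventions (e.g. depreciation in the profit term or discounted cash flows), so the simple reciprocal relationship in this sketch need not hold for the published figures.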

Keywords: biorefinery, economic assessment, lignin conversion, process design

Procedia PDF Downloads 261