Search results for: combining forecasts
141 Empowering South African Female Farmers through Organic Lamb Production: A Cost Analysis Case Study
Authors: J. M. Geyser
Abstract:
Lamb is a popular meat throughout the world, particularly in Europe, the Middle East and Oceania. However, the conventional lamb industry faces challenges related to environmental sustainability, climate change, consumer health and dwindling profit margins. This has stimulated an increasing demand for organic lamb, which is perceived to be more environmentally sustainable and to offer superior quality, taste and nutritional value. This is appealing to farmers, including small-scale and female farmers, as organic lamb often commands a premium price. Despite its advantages, organic lamb production presents challenges, with a significant hurdle being the high production costs encompassing organic certification, lower stocking rates, higher mortality rates and marketing costs. These costs impact the profitability and competitiveness of organic lamb producers, particularly female and small-scale farmers, who often encounter additional obstacles, such as limited access to resources and markets. Therefore, this paper examines the cost of producing organic lambs and its impact on female farmers and raises the research question: “Is organic lamb production the saving grace for female and small-scale farmers?” Objectives include estimating and comparing the production costs and profitability of organic lamb production with conventional lamb production, analyzing influencing factors, and assessing opportunities and challenges for female and small-scale farmers. The hypothesis states that organic lamb production can be a viable and beneficial option for female and small-scale farmers, provided that they can overcome high production costs and access premium markets. The study uses a mixed-method approach, combining qualitative and quantitative data. Qualitative data involves semi-structured interviews with ten female and small-scale farmers engaged in organic lamb production in South Africa. The interviews covered topics such as farm characteristics, practices, cost components, mortality rates, income sources and empowerment indicators. Quantitative data used secondary published information and primary data from a female farmer. The research findings indicate that when a female farmer moves from conventional to organic lamb production, the costs in the first year of organic production exceed those of conventional production by over 100%. This is due to lower stocking rates and higher mortality rates in the organic system. However, costs start decreasing in the second year as stocking rates increase due to manure applications on grazing, and mortality rates fall due to better worm resistance in the flock. In conclusion, this article sheds light on the economic dynamics of organic lamb production, particularly focusing on its impact on female farmers. To empower female farmers and to promote sustainable agricultural practices, it is imperative to understand the cost structures and profitability of organic lamb production.
Keywords: cost analysis, empowerment, female farmers, organic lamb production
Procedia PDF Downloads 75
140 Motivational Profiles of the Entrepreneurial Career in Spanish Businessmen
Authors: Magdalena Suárez-Ortega, M. Fe. Sánchez-García
Abstract:
This paper focuses on the analysis of the motivations that lead people to start and consolidate a business. It is addressed from the framework of planned behavior theory, which recognizes the importance of the social environment and cultural values, both in the decision to undertake business and in business consolidation. Similarly, it is also based on theories of career development, which emphasize the importance of career management competencies and their connections to other vital aspects of people, including their roles within their families and other personal activities. This connects directly with the impact of entrepreneurship on the career and the professional-personal project of each individual. This study is part of the project titled Career Design and Talent Management (Ministry of Economy and Competitiveness of Spain, State Plan 2013-2016 Excellence Ref. EDU2013-45704-P). The aim of the study is to identify and describe entrepreneurial competencies and motivational profiles in a sample of 248 Spanish entrepreneurs, considering the consolidated profile and the profile in transition. In order to obtain the information, the Questionnaire of Motivation and Conditioners of the Entrepreneurial Career (MCEC) was applied. It consists of 67 items and includes four scales (E1-Conflicts in conciliation, E2-Satisfaction in the career path, E3-Motivations to undertake, E4-Guidance Needs). Cluster analysis (a mixed method combining k-means clustering with a hierarchical method) was carried out, characterizing the group profiles according to the categorical variables (chi-square, p = 0.05) and the quantitative variables (ANOVA). The results have allowed us to characterize three motivational profiles differentiated by motivation, the degree of conciliation between personal and professional life, the degree of conflict in conciliation, levels of career satisfaction, and orientation needs (in the entrepreneurial project and life-career). The first profile is formed by extrinsically motivated entrepreneurs, professionally satisfied and without conflict between vital roles. The second profile acts with intrinsic motivation, also associated with family models, and although it shows satisfaction with the professional career, it experiences high conflict between family and professional life. The third is composed of entrepreneurs with high extrinsic motivation and professional dissatisfaction who, at the same time, feel conflict in their professional life due to personal roles. Ultimately, the analysis has allowed us to link the types of entrepreneurs to different levels of motivation, satisfaction, needs, and articulation of professional and personal life, showing characterizations associated with the use of time for leisure and the care of the family. No associations related to gender, age, activity sector, environment (rural, urban, virtual), or the use of time for domestic tasks were identified. The model obtained and its implications for the design of training actions and orientation for entrepreneurs are also discussed.
Keywords: motivation, entrepreneurial career, guidance needs, life-work balance, job satisfaction, assessment
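The clustering and profiling workflow described above can be illustrated with a short sketch: a hierarchical (Ward) step suggests the number of groups and their centroids, k-means refines the partition, and ANOVA / chi-square tests then characterize the profiles. The data, variable names, and the choice of three clusters are illustrative assumptions, not the study's actual code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import f_oneway, chi2_contingency
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(248, 4))  # 248 respondents x 4 scale scores (E1..E4), hypothetical

# Step 1: hierarchical (Ward) clustering fixes k and supplies initial centroids
Z = linkage(X, method="ward")
init_labels = fcluster(Z, t=3, criterion="maxclust")
centroids = np.vstack([X[init_labels == k].mean(axis=0) for k in (1, 2, 3)])

# Step 2: k-means refinement seeded with the hierarchical centroids
km = KMeans(n_clusters=3, init=centroids, n_init=1, random_state=0).fit(X)

# Step 3: characterize profiles: ANOVA for a quantitative scale (e.g., E2) ...
groups = [X[km.labels_ == k, 1] for k in range(3)]
print("ANOVA:", f_oneway(*groups))

# ... and chi-square for a categorical variable (e.g., gender), alpha = 0.05
gender = rng.integers(0, 2, size=248)  # hypothetical categorical variable
table = np.array([[np.sum((km.labels_ == k) & (gender == g)) for g in (0, 1)]
                  for k in range(3)])
print("Chi-square (stat, p):", chi2_contingency(table)[:2])
```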
Procedia PDF Downloads 303
139 Use of a Business Intelligence Software for Interactive Visualization of Data on the Swiss Elite Sports System
Authors: Corinne Zurmuehle, Andreas Christoph Weber
Abstract:
In 2019, the Swiss Federal Institute of Sport Magglingen (SFISM) conducted a mixed-methods study on the Swiss elite sports system, which yielded a large quantity of research data. In a quantitative online survey, 1151 elite sports athletes, 542 coaches, and 102 Performance Directors of national sports federations (NF) submitted their perceptions of the national support measures of the Swiss elite sports system. These data provide an essential database for the further development of the Swiss elite sports system. The results were published in a report presenting them divided into 40 Olympic summer and 14 winter sports (Olympic classification). The authors of this paper assume that, in practice, this division is too unspecific to assess where further measures would be needed. The aim of this paper is to find appropriate parameters for data visualization in order to identify disparities in sports promotion that allow an assessment of where further interventions by Swiss Olympic (the NF umbrella organization) are required. Method: First, the variable 'salary earned from sport' was defined as a variable to measure the impact of elite sports promotion. This variable was chosen because it represents an important indicator of the professionalization of elite athletes and therefore reflects the national-level sports promotion measures applied by Swiss Olympic. Afterwards, the correlation between salary and the Olympic classification was tested by calculating the Eta coefficient. To estimate the appropriate parameters for data visualization, the correlation between salary and four further parameters was analyzed by calculating the Eta coefficient: [a] sport; [b] prioritization (from 1 to 5) of the sports by Swiss Olympic; [c] gender; [d] employment level in sports. Results & Discussion: The analyses reveal a very small correlation between salary and Olympic classification (η² = .011, p = .005). Gender demonstrates an even smaller correlation (η² = .006, p = .014). The parameter prioritization correlated with a small effect (η² = .017, p = .001), as did employment level (η² = .028, p < .001). The highest correlation was identified for the parameter sport, with a moderate effect (η² = .075, p = .047). The analyses show that the disparities in sports promotion cannot be determined by a single parameter but are presumably explained by a combination of several parameters. We argue that the possibility of combining parameters for data visualization should be enabled when the analysis is provided to Swiss Olympic for further strategic decision-making. However, including multiple parameters massively multiplies the number of graphs and is therefore not suitable for practical use. Therefore, we suggest applying interactive dashboards for data visualization using Business Intelligence Software. Practical & Theoretical Contribution: This contribution provides the first attempt to use Business Intelligence Software for strategic decision-making in national-level sports regarding the prioritization of national resources for sports and athletes. It makes it possible to set specific parameters with a significant effect as filters. By using filters, parameters can be combined, compared against each other, and set individually for each strategic decision.
Keywords: data visualization, business intelligence, Swiss elite sports system, strategic decision-making
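The Eta coefficient analysis above amounts to computing the correlation ratio η² (between-group variance over total variance) of salary across a categorical parameter. A minimal sketch with entirely hypothetical data, not the SFISM dataset:

```python
import numpy as np
import pandas as pd

def eta_squared(values: pd.Series, groups: pd.Series) -> float:
    """Eta-squared = between-group sum of squares / total sum of squares."""
    grand_mean = values.mean()
    ss_total = ((values - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2
                     for _, g in values.groupby(groups))
    return ss_between / ss_total

# Hypothetical example: salary grouped by prioritization category (1..5)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "salary": rng.gamma(shape=2.0, scale=10_000, size=1151),
    "priority": rng.integers(1, 6, size=1151),
})
print(f"eta^2 = {eta_squared(df['salary'], df['priority']):.3f}")
```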
Procedia PDF Downloads 90
138 Influence of Counter-Face Roughness on the Friction of Bionic Microstructures
Authors: Haytam Kasem
Abstract:
The problem of quick and easy reversible attachment has become of great importance in different fields of technology. For this reason, during the last decade, a new field of adhesion science has emerged. It is essentially inspired by animals and insects that, during their natural evolution, have developed remarkable biological attachment systems allowing them to adhere to and run on walls and ceilings of uneven surfaces. Potential applications of engineering bio-inspired solutions include climbing robots, handling systems for wafers in nanofabrication facilities, and mobile sensor platforms, to name a few. However, despite the efforts made to apply bio-inspired patterned adhesive surfaces to the biomedical field, they are still in the early stages compared with their conventional uses in the other industries mentioned above. In fact, some critical issues still need to be addressed for the wide usage of bio-inspired patterned surfaces as advanced biomedical platforms. For example, the surface durability and long-term stability of surfaces with high adhesive capacity should be improved, as should the friction and adhesion capacities of these bio-inspired microstructures when contacting rough surfaces. One of the well-known prototypes for bio-inspired attachment systems is the biomimetic wall-shaped hierarchical microstructure for gecko-like attachment. Although the physical background of these attachment systems is widely understood, the influence of counter-face roughness and its relationship with the friction force generated when sliding against a wall-shaped hierarchical microstructure has yet to be fully analyzed and understood. To elucidate the effect of counter-face roughness on the friction of the biomimetic wall-shaped hierarchical microstructure, we replicated the isotropic topography of 12 different surfaces using replicas made of the same epoxy material. The different counter-faces were fully characterized under a 3D optical profilometer to measure roughness parameters. The friction forces generated by the spatula-shaped microstructure in contact with the tested counter-faces were measured on a home-made tribometer and compared with the friction forces generated by the spatulae in contact with a smooth reference. It was found that classical roughness parameters, such as the average roughness Ra and others, could not be utilized to explain topography-related variation in friction force. This led us to the development of an integrated roughness parameter obtained by combining different parameters: the mean asperity radius of curvature (R), the asperity density (η), the standard deviation of asperity heights (σ), and the mean asperity angle (SDQ). This new integrated parameter is capable of explaining the variation in the friction measurement results. Based on the experimental results, we developed and validated an analytical model to predict the variation of the friction force as a function of the counter-face roughness parameters and the applied normal load.
Keywords: friction, bio-mimetic micro-structure, counter-face roughness, analytical model
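For reference, the classical parameters mentioned above have standard definitions over sampled surface heights z_i (textbook forms, not specific to this paper); the exact functional form of the integrated parameter combining R, η, σ, and SDQ is given in the full paper and is not reproduced here.

```latex
R_a = \frac{1}{n}\sum_{i=1}^{n}\left|z_i-\bar{z}\right|,
\qquad
\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(z_i-\bar{z}\right)^{2}}
```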
Procedia PDF Downloads 239
137 Combining Patients Pain Scores Reports with Functionality Scales in Chronic Low Back Pain Patients
Authors: Ivana Knezevic, Kenneth D. Candido, N. Nick Knezevic
Abstract:
Background: While pain intensity scales remain a generally accepted assessment tool, the numeric pain rating score is highly subjective; we nevertheless rely on it to make judgments about treatment effects. Misinterpretation of pain can lead practitioners to underestimate or overestimate the patient’s medical condition. The purpose of this study was to analyze how the numeric rating pain scores given by patients with low back pain correlate with their functional activity levels. Methods: We included 100 consecutive patients with radicular low back pain (LBP) after Institutional Review Board (IRB) approval. Pain scores (numeric rating scale, NRS, responses at rest and in movement) and Oswestry Disability Index (ODI) questionnaire answers were collected 10 times over 12 months. The ODI questionnaire targets a patient’s activities and physical limitations as well as a patient’s ability to manage stationary everyday duties. Statistical analysis was performed using SPSS software version 20. Results: The average duration of LBP was 14±22 months at the beginning of the study. All patients included in the study were between 24 and 78 years old (average 48.85±14); 56% were women and 44% men. Differences between ODI and pain scores in the range from -10% to +10% were considered “normal”. Discrepancies in pain scores were graded as mild between -30% and -11% or between +11% and +30%; moderate between -50% and -31% or between +31% and +50%; and severe if differences exceeded -50% or +50%. Our data showed that pain scores at rest correlated well with ODI in 65% of patients. In 30% of patients, mild discrepancies were present (negative in 21% and positive in 9%); 4% of patients had moderate and 1% severe discrepancies. “Negative discrepancy” means that patients graded their pain scores much higher than their functional ability, and most likely exaggerated their pain. “Positive discrepancy” means that patients graded their pain scores much lower than their functional ability, and most likely underrated their pain. Comparisons between ODI and pain scores during movement showed normal correlation in only 39% of patients. Mild discrepancies were present in 42% (negative in 39% and positive in 3%), moderate in 14% (all negative), and severe in 5% (all negative) of patients. In total, 58% unknowingly exaggerated their pain during movement. Inconsistencies were equal in male and female patients (p=0.606 and p=0.928). Our results showed a negative correlation between patients’ satisfaction and the degree of inconsistency in reported pain. Furthermore, patients taking opioids showed more discrepancies in reported pain intensity scores than did patients taking non-opioid analgesics or no medications for LBP (p=0.038). There was a highly statistically significant correlation between morphine equivalent doses and the level of discrepancy (p<0.0001). Conclusion: We have put emphasis on patient education in pain evaluation as a vital step toward accurate pain level reporting, and we have shown a direct correlation with patients’ satisfaction. Furthermore, we must identify other parameters in defining our patients’ chronic pain conditions, such as functionality scales, quality of life questionnaires, etc., and should move away from an overly simplistic subjective rating scale.
Keywords: pain score, functionality scales, low back pain, lumbar
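The grading rule above translates directly into code. A minimal sketch, assuming (as the abstract implies but does not state) that the 0-10 NRS is rescaled to 0-100% for comparison with the ODI percentage:

```python
def grade_discrepancy(nrs_0_to_10: float, odi_percent: float) -> str:
    """Grade the gap between reported pain and functional disability (ODI)."""
    diff = odi_percent - nrs_0_to_10 * 10.0  # positive => pain underrated
    mag = abs(diff)
    if mag <= 10:
        return "normal"
    elif mag <= 30:
        grade = "mild"
    elif mag <= 50:
        grade = "moderate"
    else:
        grade = "severe"
    sign = "positive (pain underrated)" if diff > 0 else "negative (pain exaggerated)"
    return f"{grade}, {sign}"

# NRS 8/10 (= 80%) vs ODI 40% -> moderate negative discrepancy
print(grade_discrepancy(8, 40))
```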
Procedia PDF Downloads 235
136 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis
Authors: Mehrnaz Mostafavi
Abstract:
The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans
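The abstract does not publish its pipeline, so the following is only a generic illustration of the sentence-level classification it describes: TF-IDF features over report sentences feeding a machine-learning classifier. The example sentences and labels are invented, and the authors' actual SQL/NLP system may differ substantially.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training sentences: 1 = sentence reports a lung nodule, 0 = it does not
train_sentences = [
    "A 6 mm nodule is seen in the right upper lobe.",
    "No pulmonary nodules are identified.",
    "Stable 4 mm left lower lobe nodule, unchanged from prior.",
    "The lungs are clear.",
]
labels = [1, 0, 1, 0]

# TF-IDF sentence features (unigrams + bigrams) feeding a linear classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_sentences, labels)
print(clf.predict(["There is a new 8 mm nodule in the left upper lobe."]))
```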
Procedia PDF Downloads 103
135 Speech Acts of Selected Classroom Encounters: Analyzing the Speech Acts of a Career Technology Lesson
Authors: Michael Amankwaa Adu
Abstract:
Effective communication in the classroom plays a vital role in ensuring successful teaching and learning. In particular, the types of language and speech acts teachers use shape classroom interactions and influence student engagement. This study aims to analyze the speech acts employed by a Career Technology teacher in a junior high school. While much research has focused on speech acts in language classrooms, less attention has been given to how these acts operate in non-language subject areas like technical education. The study explores how different types of speech acts (directives, assertives, expressives, and commissives) are used during three classroom encounters: lesson introduction, content delivery, and classroom management. This research seeks to fill the gap in understanding how teachers of non-language subjects use speech acts to manage classroom dynamics and facilitate learning. The study employs a mixed-methods design, combining qualitative and quantitative approaches. Data was collected through direct classroom observation and audio recordings of a one-hour Career Technology lesson. The transcriptions of the lesson were analyzed using John Searle’s taxonomy of speech acts, classifying the teacher’s utterances into directives, assertives, expressives, and commissives. Results show that directives were the most frequently used speech act, accounting for 59.3% of the teacher's utterances. These speech acts were essential in guiding student behavior, giving instructions, and maintaining classroom control. Assertives made up 20.4% of the speech acts, primarily used for stating facts and reinforcing content. Expressives, at 14.2%, expressed emotions such as approval or frustration, helping to manage the emotional atmosphere of the classroom. Commissives were the least used, representing 6.2% of the speech acts, often used to set expectations or outline future actions. No declarations were observed during the lesson. The findings of this study reveal the critical role that speech acts play in managing classroom behavior and delivering content in technical subjects. Directives were crucial for ensuring students followed instructions and completed tasks, while assertives helped in reinforcing lesson objectives. Expressives contributed to motivating or disciplining students, and commissives, though less frequent, helped set clear expectations for students’ future actions. The absence of declarations suggests that the teacher prioritized guiding students over making formal pronouncements. These insights can inform teaching strategies across various subject areas, demonstrating that a diverse use of speech acts can create a balanced and interactive learning environment. This study contributes to the growing field of pragmatics in education and offers practical recommendations for educators, particularly in non-language classrooms, on how to utilize speech acts to enhance both classroom management and student engagement.
Keywords: classroom interaction, pragmatics, speech acts, teacher communication, career technology
Procedia PDF Downloads 22
134 Role of Calcination Treatment on the Structural Properties and Photocatalytic Activity of Nanorice N-Doped TiO₂ Catalyst
Authors: Totsaporn Suwannaruang, Kitirote Wantala
Abstract:
The purposes of this research were to synthesize a titanium dioxide photocatalyst doped with nitrogen (N-doped TiO₂) by the hydrothermal method and to test the photocatalytic degradation of paraquat under UV and visible light illumination. The effects of calcination treatment temperature on the physical and chemical properties and the photocatalytic efficiencies were also investigated. The characteristics of the calcined N-doped TiO₂ photocatalysts, such as specific surface area, textural properties, bandgap energy, surface morphology, crystallinity, phase structure, elements, and charge states, were investigated by the Brunauer-Emmett-Teller (BET) and Barrett-Joyner-Halenda (BJH) equations, UV-visible diffuse reflectance spectroscopy (UV-Vis-DRS) using the Kubelka-Munk theory, wide-angle X-ray scattering (WAXS), focused ion beam scanning electron microscopy (FIB-SEM), X-ray photoelectron spectroscopy (XPS), and X-ray absorption spectroscopy (XAS), respectively. The results showed that the effect of calcination temperature was significant on surface morphology, crystallinity, specific surface area, pore size diameter, bandgap energy, and nitrogen content level, but insignificant on the phase structure and the oxidation state of the titanium (Ti) atom. The N-doped TiO₂ samples exhibited only the anatase crystalline phase, because the nitrogen dopant in TiO₂ restrained the phase transformation from anatase to rutile. The samples presented a nanorice-like morphology. Particle expansion was found at calcination temperatures of 650 and 700°C, resulting in increased pore size diameter. The bandgap energy determined by the Kubelka-Munk theory was in the range 3.07-3.18 eV, slightly lower than the anatase standard (3.20 eV), indicating that the nitrogen dopant can shift the optical absorption edge of TiO₂ from the UV to the visible light region. Nitrogen content was observed at 100, 300, and 400°C only; the nitrogen disappeared from 500°C onwards. The nitrogen (N) atom can be incorporated into the TiO₂ structure at an interstitial site. The uncalcined (100°C) sample displayed the highest percent paraquat degradation under UV and visible light irradiation, as this sample exhibited both the highest specific surface area and the highest nitrogen content level. Moreover, percent paraquat removal significantly decreased with increasing calcination treatment temperature. The nitrogen content level in TiO₂, combined with the effect of the specific surface area, accelerated the rate of reaction by generating electrons and holes during illumination. Therefore, the specific surface area and the nitrogen content level play important roles in the photocatalytic degradation of paraquat under UV and visible light illumination.
Keywords: restraining phase transformation, interstitial site, chemical charge state, photocatalysis, paraquat degradation
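For context, bandgap estimation from diffuse reflectance typically proceeds via the Kubelka-Munk function and a Tauc-type plot; the standard relations are shown below (the indirect-allowed exponent 1/2 is the usual choice for anatase TiO₂, though the paper's exact fitting procedure may differ):

```latex
F(R_\infty) = \frac{(1-R_\infty)^2}{2R_\infty} = \frac{K}{S},
\qquad
\left[F(R_\infty)\,h\nu\right]^{1/2} \propto \left(h\nu - E_g\right)
```

Here R∞ is the measured diffuse reflectance, K and S are the absorption and scattering coefficients, and Eg is obtained by extrapolating the linear region of the plot to zero.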
Procedia PDF Downloads 158
133 Music Genre Classification Based on Non-Negative Matrix Factorization Features
Authors: Soyon Kim, Edward Kim
Abstract:
In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become increasingly important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers is provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term, time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional long-term feature vectors, NMF-based feature vectors are proposed for use in genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used. However, for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification. In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values that corresponded to the classification probabilities for the 10 genres; an NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features (statistical and modulation spectrum features), the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, while the basic features with NMF-LSM and with NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)
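A compact sketch of the scheme just described: one NMF basis per genre is learned from training feature matrices, a test sample is encoded by its nonnegative weights against the stacked bases (one weight per genre, matching the 10-dimensional feature vectors above), and an RBF-kernel SVM classifies the weight vectors. Sizes, parameters, and data here are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_genres, n_train, n_dim = 4, 30, 60  # hypothetical sizes (paper: 10 genres)

# Training data: nonnegative feature vectors (e.g., log-spectral magnitudes)
train = {g: rng.random((n_train, n_dim)) for g in range(n_genres)}

# Learn one rank-1 NMF basis per genre, so a weight vector has one entry per genre
bases = []
for g in range(n_genres):
    model = NMF(n_components=1, init="nndsvda", max_iter=500)
    model.fit(train[g])
    bases.append(model.components_[0])
B = np.vstack(bases).T  # (n_dim, n_genres): one basis column per genre

def nmf_weights(x: np.ndarray) -> np.ndarray:
    """Nonnegative weights of sample x against the stacked genre bases."""
    w, _ = nnls(B, x)
    return w

# Encode all training samples as weight vectors and fit the SVM
X = np.vstack([[nmf_weights(x) for x in train[g]] for g in range(n_genres)])
y = np.repeat(np.arange(n_genres), n_train)
svm = SVC(kernel="rbf").fit(X, y)
print(svm.predict([nmf_weights(rng.random(n_dim))]))
```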
Procedia PDF Downloads 303
132 3D Interactions in Under Water Acoustic Simulations
Authors: Prabu Duplex
Abstract:
Due to stringent emission regulation targets, a large-scale transition to renewable energy sources is a global challenge, and wind power plays a significant role in the solution. This scenario has led to the construction of offshore wind farms, and several wind farms are planned in shallow waters where marine habitats exist. This raises concerns over the impact of underwater noise on marine species; similar concerns apply to bridge construction in ocean straits, which environmental organisations warn could be devastating to aquatic life, since ocean straits are important places of transit for marine mammals and concentrate some of the highest levels of biodiversity in the world. The investigation of ship noise and piling noise that may occur during bridge construction and operation is therefore vital. Once the source levels are known, the receiver levels can be modelled. With this objective, this work investigates the key requirement for software that can model transmission loss at the high frequencies that may occur during construction or operation phases. Most propagation models are 2D solutions, calculating the propagation loss along a transect, which does not include horizontal refraction, reflection, or diffraction. In many cases, such models provide sufficient accuracy and can provide three-dimensional maps by combining, through interpolation, several two-dimensional (distance and depth) transects. However, in some instances, the use of 2D models may not be sufficient to accurately model the sound propagation. A possible example is a scenario where an island or land mass is situated between the source and receiver. The 2D model will result in a shadow behind the land mass where the modelled transects intersect it, whereas in reality diffraction will occur, causing bending of the sound around the land mass. In such cases, it may be necessary to use a 3D model, which accounts for horizontal diffraction, to accurately represent the sound field. Other scenarios where 2D models may not provide sufficient accuracy are environments characterised by a strongly up-sloping or down-sloping seabed, such as propagation around continental shelves. In line with these objectives, this work addresses the importance of 3D interactions in underwater acoustics by means of a case study. The methodology used in this study can also be used for other 3D underwater sound propagation studies. This work assumes special significance given the increasing interest in using underwater acoustic modeling for environmental impact assessments. Future work includes inter-model comparison in shallow water environments considering more physical processes known to influence sound propagation, such as scattering from the sea surface. Passive acoustic monitoring of the underwater soundscape with distributed hydrophone arrays is also suggested to investigate the 3D propagation effects discussed in this article.
Keywords: underwater acoustics, naval, maritime, cetaceans
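As a point of reference for the transmission-loss modelling discussed above, the textbook far-field terms are as follows (generic relations, not the specific equations used in this study):

```latex
TL(r) = 20\log_{10}\!\left(\frac{r}{1\,\mathrm{m}}\right) + \alpha r
\quad \text{(spherical spreading + absorption)},
\qquad
RL = SL - TL
```

where r is the source-receiver range, α the frequency-dependent absorption coefficient, SL the source level, and RL the received level; 2D and 3D propagation models differ in how they refine TL beyond these idealized terms.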
Procedia PDF Downloads 20
131 Impact of 6-Week Brain Endurance Training on Cognitive and Cycling Performance in Highly Trained Individuals
Authors: W. Staiano, S. Marcora
Abstract:
Introduction: It has been proposed that the acute negative effects of mental fatigue (MF) could potentially become a training stimulus for the brain (brain endurance training, BET), allowing it to adapt and improve its ability to attenuate MF states during sport competitions. Purpose: The aim of this study was to test the efficacy of 6 weeks of BET on cognitive and cycling tests in a group of well-trained subjects. We hypothesised that the combination of BET and standard physical training (SPT) would increase cognitive capacity and cycling performance, by reducing the rating of perceived exertion (RPE) and increasing resilience to fatigue, more than SPT alone. Methods: In a randomized controlled trial design, 26 well-trained participants, after a familiarization session, cycled to exhaustion (TTE) at 80% peak power output (PPO) and, after 90 min rest, at 65% PPO, before and after random allocation to 6 weeks of BET or an active placebo control. Cognitive performance was measured using 30 min of a STROOP colour task performed before the cycling performance. During the training, the BET group performed a series of cognitive tasks for a total of 30 sessions (5 sessions per week), with the duration increasing from 30 to 60 min per session. The placebo group engaged in breathing relaxation training. Both groups were monitored for physical training and were naïve to the purpose of the study. The physiological and perceptual parameters heart rate, lactate (LA), and RPE were recorded during the cycling performances, while subjective workload (NASA TLX scale) was measured during the training. Results: Group (BET vs. placebo) x Test (pre-test vs. post-test) mixed-model ANOVAs revealed a significant interaction for performance at 80% PPO (p = .038) and at 65% PPO (p = .011). In both tests, both groups improved their TTE performance; however, the BET group improved significantly more than placebo. No significant differences were found for heart rate during the TTE cycling tests. LA did not change significantly at rest in either group. However, at completion of the 65% TTE, it was significantly higher (p = 0.043) in the placebo condition compared to BET. RPE measured at ISO-time in BET was significantly lower (80% PPO, p = 0.041; 65% PPO, p = 0.021) compared to placebo. Cognitive results in the STROOP task showed that reaction time in both groups decreased at post-test. However, BET decreased significantly more (p = 0.01) compared to placebo, despite no differences in accuracy. During the training sessions, participants in BET consistently reported, through the NASA TLX questionnaires, significantly higher (p < 0.01) mental demand compared to placebo. No significant differences were found for physical demand. Conclusion: The results of this study provide evidence that combining BET and SPT is more effective than SPT alone in increasing the cognitive and cycling performance of well-trained endurance participants. The cognitive overload produced during the 6 weeks of BET can induce a reduction in the perception of effort at a given power output, and thus improve cycling performance. Moreover, it provides evidence that including neurocognitive interventions will benefit athletes by increasing their mental resilience, without affecting their physical training load and routine.
Keywords: cognitive training, perception of effort, endurance performance, neuro-performance
Procedia PDF Downloads 121
130 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models
Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach
Abstract:
In the paper, a (hierarchical) approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on a statistical homogenization method, the method of conditional moments, combined with the recently introduced notion of the energy-equivalent inhomogeneity which, in this paper, is extended to include thermal effects. After an exposition of the general principles, the approach is applied in the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity of constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows composites with interphases to be analyzed using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill’s energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows consideration of inhomogeneities of various shapes and various models of interphases. This is illustrated for spherical particles with two models of interphases, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments. Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different nanoparticle radii) and on the nanoparticle radius (for a fixed volume fraction of nanoparticles), for the different interphase models, is compared with and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model
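Schematically, the defining condition for the equivalent inhomogeneity can be written as an energy equality (the notation below is introduced here for illustration; the paper's precise statement, including the thermal terms, is more detailed):

```latex
U_{\mathrm{eq}}\!\left(\mathbf{C}^{*},\,\boldsymbol{\alpha}^{*};\,\boldsymbol{\varepsilon}\right)
= U_{\mathrm{inh}}\!\left(\boldsymbol{\varepsilon}\right)
+ U_{\mathrm{int}}\!\left(\boldsymbol{\varepsilon}\right)
\quad \text{for all admissible strains } \boldsymbol{\varepsilon}
```

That is, the single equivalent particle, with constant stiffness C* and thermal expansion α*, stores the same energy as the original inhomogeneity-plus-interphase system under the same loading, after which it is perfectly bonded to the matrix and homogenized by the method of conditional moments.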
Procedia PDF Downloads 186
129 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, an AI system is treated as an operator mapping an input in the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, where q encodes the alphanumeric represented and the language it comes from. These inputs and outputs, along with internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, subtracting their means. The data is then regularized by applying the Kaiser rule to the resulting eigenmatrix, and then whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
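A simplified sketch of a corrector in the spirit of the pipeline just described: center the measurements, whiten the principal subspace retained by the Kaiser rule, and separate the error set from the rest with a single linear functional. The full method uses several pairwise positively correlated clusters (one hyperplane each); this sketch shows only one such hyperplane, on synthetic data, and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(0.0, 1.0, size=(500, 20))   # correct predictions (hypothetical)
Y = rng.normal(1.5, 1.0, size=(20, 20))    # errors to be filtered (hypothetical)

S = np.vstack([M, Y])
mean, std = S.mean(axis=0), S.std(axis=0)

# Kaiser rule on the correlation eigen-spectrum: keep eigenvalues >= 1, then whiten
Sn = (S - mean) / std
evals, evecs = np.linalg.eigh(np.cov(Sn.T))
keep = evals >= 1.0
W = evecs[:, keep] / np.sqrt(evals[keep])   # whitening projection

def project(X):
    return ((X - mean) / std) @ W

# One hyperplane: normal along the mean direction of the whitened error cluster
w = project(Y).mean(axis=0)
w /= np.linalg.norm(w)
threshold = (project(Y) @ w).min()  # flag anything at least as far out as known errors

flags = (project(S) @ w) >= threshold
print(f"flagged {flags.sum()} of {len(S)} samples as suspected errors")
```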
Procedia PDF Downloads 101
128 Study Protocol: Impact of a Sustained Health Promoting Workplace on Stock Price Performance and Beta - A Singapore Case
Authors: Wee Tong Liaw, Elaine Wong Yee Sing
Abstract:
Since 2001, many companies in Singapore have voluntarily participated in the bi-annual Singapore HEALTH Award initiated by the Health Promotion Board of Singapore (HPB). The Singapore HEALTH Award (SHA) is an industry-wide award and assessment process. SHA assesses and recognizes employers in Singapore for implementing a comprehensive and sustainable health promotion programme at their workplaces. The rationale for implementing a sustained health promoting workplace and participating in SHA is obvious when company management is convinced that healthier employees, business productivity, and profitability are positively correlated. However, research or empirical studies on the impact of a sustained health promoting workplace on stock returns are not likely to attract interest in the absence of a systematic and independent assessment of the comprehensiveness and sustainability of a health promoting workplace, which is lacking in most developed economies. The principles of diversification and the mean-variance efficient portfolio in Modern Portfolio Theory developed by Markowitz (1952) laid the foundation for the works of many financial economists and researchers, among others the development of the Capital Asset Pricing Model from the work of Sharpe (1964), Lintner (1965), and Mossin (1966), and the Fama-French Three-Factor Model of Fama and French (1992). This research seeks to support the rationale by studying whether there is a significant relationship between a sustained health promoting workplace and the performance of companies listed on the SGX. The research will form and test hypotheses pertaining to the impact of a sustained health promoting workplace on the performance, including stock returns, of companies that participated in the SHA and companies that did not. In doing so, the research will determine whether corporate and fund managers should consider the significance of a sustained health promoting workplace as a risk factor to explain the stock returns of companies listed on the SGX. With respect to Singapore's stock market, this research will test the significance and relevance of a health promoting workplace using the Singapore Health Award as a proxy for a non-diversifiable risk factor to explain stock returns. The study will examine the significance of a health promoting workplace for a company's performance, study its impact on stock price performance and beta, and examine whether it has higher explanatory power than the traditional single-factor asset pricing model, the Capital Asset Pricing Model (CAPM). Three key questions are pertinent to this study: I) Given a choice, would an investor be better off investing in a listed company with a sustained health promoting workplace, i.e., a Singapore Health Award recipient? II) The Singapore Health Award has four levels, from Bronze and Silver through Gold to Platinum; would an investor be indifferent to the level of the award when investing in a listed company that is a Singapore Health Award recipient? III) Would an asset pricing model combining the Fama-French Three-Factor Model with a 'Singapore Health Award' factor be more accurate than the single-factor Capital Asset Pricing Model and the Three-Factor Model itself?
Keywords: asset pricing model, company's performance, stock prices, sustained health promoting workplace
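In standard notation, the candidate models in question III can be written as follows (the SHA factor and its loading g are labels introduced here to express the abstract's proposal, not an established factor):

```latex
\text{CAPM:}\quad
R_i - R_f = \alpha_i + \beta_i\,(R_m - R_f) + \varepsilon_i
```

```latex
\text{Augmented three-factor model:}\quad
R_i - R_f = \alpha_i + \beta_i\,(R_m - R_f)
          + s_i\,\mathit{SMB} + h_i\,\mathit{HML} + g_i\,\mathit{SHA} + \varepsilon_i
```

Here R_i is the stock return, R_f the risk-free rate, R_m the market return, and SMB/HML the usual Fama-French size and value factors; the comparison in the study is whether adding SHA improves explanatory power.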
Procedia PDF Downloads 370
127 Diagnostic Yield of CT PA and Value of Pre Test Assessments in Predicting the Probability of Pulmonary Embolism
Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran
Abstract:
Acute pulmonary embolism (PE) is a common disease and can be fatal. The clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically. However, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CTPA is the investigation of choice for PE. Its easy availability, accuracy, and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in its overuse. Guidelines have recommended the use of clinical pretest probability tools such as the Wells score to assess the risk of suspected PE. Unfortunately, the implementation of guidelines in clinical practice is inconsistent. This has led to low-risk patients being subjected to unnecessary imaging, exposure to radiation, and possible contrast-related complications. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether or not there was an overuse of CTPA in our service. Methods: CT scans done on patients with suspected PE in our hospital from 1st January 2014 to 31st December 2014 were retrospectively reviewed. Medical records were reviewed to study demographics, clinical presentation, and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in the time period. Mean age was 57 years (24-91 years). The majority of patients presented with shortness of breath (52%). Other presenting symptoms included chest pain (34%), palpitations (6%), collapse (5%), and haemoptysis (5%). A D-dimer test was done in 69%. The overall Wells score was low (<2) in 28%, moderate (>2 - <6) in 47%, and high (>6) in 15% of patients. The Wells score was documented in the medical notes of only 20% of patients. PE was confirmed in 12% (8 male) of patients; 4 had bilateral PEs. In the high-risk group (Wells >6) (n=15), there were 5 diagnosed PEs. In the moderate-risk group (Wells >2 - <6) (n=47), there were 6, and in the low-risk group (Wells <2) (n=28), one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30, consolidation in 20, atelectasis in 15, and a pulmonary nodule in 4 patients; 31 scans were completely normal. Conclusion: The yield of CT for pulmonary embolism was low in our cohort, at 12%. A significant number of our patients who underwent CTPA had a low Wells score. This suggests that CTPA is overutilized in our institution. The Wells score was poorly documented in the medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining the patients' clinical presentations. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining Wells scores with clinical and laboratory assessment may reduce the need for CTPA.
Keywords: CT PA, D dimer, pulmonary embolism, wells score
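The pretest-probability banding used in the audit reduces to a simple rule. A minimal sketch (how scores exactly at the 2 and 6 cut-offs were binned is not stated in the abstract, so the boundary handling below is an assumption):

```python
def wells_band(score: float) -> str:
    """Band a Wells score: low < 2, moderate 2-6, high > 6 (boundary handling assumed)."""
    if score < 2:
        return "low"
    if score <= 6:
        return "moderate"
    return "high"

for s in (1, 4, 7):
    print(s, "->", wells_band(s))
```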
Procedia PDF Downloads 233
126 Carbon Aerogels with Tailored Porosity as Cathode in Li-Ion Capacitors
Authors: María Canal-Rodríguez, María Arnaiz, Natalia Rey-Raap, Ana Arenillas, Jon Ajuria
Abstract:
The constant demand for electrical energy, as well as increasing environmental concern, leads to the necessity of investing in clean and eco-friendly energy sources, which implies the development of enhanced energy storage devices. Li-ion batteries (LIBs) and electrical double layer capacitors (EDLCs) are the most widespread energy storage systems. Batteries are able to store high energy densities, in contrast to capacitors, whose main strengths are the high power density they supply and their long cycle life. The combination of both technologies gave rise to Li-ion capacitors (LICs), which offer all these advantages in a single device. This is achieved by combining a capacitive, supercapacitor-like positive electrode with a faradaic, battery-like negative electrode. Due to their abundance and affordability, dual carbon-based LICs are nowadays the common technology. Normally, an activated carbon (AC) is used as the EDLC-like electrode, while graphite is the material commonly employed as the anode. LICs are potential systems for applications in which high energy and power densities are required, such as kinetic energy recovery systems. Although these devices are already on the market, some drawbacks, like the limited power delivered by graphite or the energy-limiting nature of AC, must be solved to trigger their use. Focusing on the anode, one possibility could be to replace graphite with hard carbon (HC). The better rate capability of the latter increases the power performance of the device. Moreover, the disordered carbonaceous structure of HCs enables storing twice the theoretical capacity of graphite. With respect to the cathode, ACs are characterized by a high volume of micropores, in which the charge is stored. Nevertheless, they normally do not show mesopores, which are important, mainly at high C-rates, as they act as transport channels for the ions to reach the micropores. Usually, the porosity of ACs cannot be tailored, as it strongly depends on the precursor employed to obtain the final carbon. Moreover, they are not characterized by high electrical conductivity, which is an important property for energy storage applications. A possible candidate to substitute ACs is carbon aerogels (CAs). CAs are materials that combine high porosity with high electrical conductivity, normally opposing characteristics in carbon materials. Furthermore, their porous properties can be tailored quite accurately according to the requirements of the application. In the present study, CAs with controlled porosity were obtained from the polymerization of resorcinol and formaldehyde by microwave heating. By varying the synthesis conditions, mainly the amount of precursors and the pH of the precursor solution, carbons with different textural properties were obtained. The way the porous characteristics affect the performance of the cathode was studied by means of a half-cell configuration. The material with the best performance was evaluated as the cathode in a LIC versus a hard carbon anode. An analogous full LIC made with a highly microporous commercial cathode was also assembled for comparison purposes.
Keywords: li-ion capacitors, energy storage, tailored porosity, carbon aerogels
Procedia PDF Downloads 167
125 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review
Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni
Abstract:
Water use in agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim can be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment: assessments of plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring of change, and mapping of irrigated areas. Calculating thresholds for soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining remote sensing (RS), the Internet of Things (IoT), artificial intelligence (AI), and machine learning algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review, structured by surveying about 100 recent research studies, that analyzes the varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based variable rate application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral bands for optimizing classification techniques, and to analyze proximal optical sensors (POSs) for monitoring change. The innovation of this paper lies in categorizing the evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea is then to analyze the sources and magnitudes of the errors arising from the different approaches in these three parts, as reported by recent studies. Additionally, as an overview conclusion, the review decomposes the different approaches into optimized indices, sensor calibration methods, thresholding and prediction models prone to errors, and improvements in classification accuracy for mapping change.
Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing
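Two of the reflectance indices most often used in this literature have standard definitions, shown below with the usual MODIS EVI coefficients; the band values are illustrative:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir: np.ndarray, red: np.ndarray, blue: np.ndarray) -> np.ndarray:
    """Enhanced Vegetation Index (standard MODIS coefficients)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

nir, red, blue = np.array([0.45]), np.array([0.08]), np.array([0.04])
print("NDVI:", ndvi(nir, red), "EVI:", evi(nir, red, blue))
```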
Procedia PDF Downloads 71
124 Employing Remotely Sensed Soil and Vegetation Indices and Predicting by Long Short-Term Memory to Irrigation Scheduling Analysis
Authors: Elham Koohikerade, Silvio Jose Gumiere
Abstract:
In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed following the guidelines of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices such as surface soil moisture (SSM), the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis. By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain volumetric soil moisture (VSM) between 0.25 and 0.30 m³/m³, avoiding under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results: while the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles to accurately predict EVI, NDVI, and NMDI.
Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring
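A minimal sketch of the kind of model described: an LSTM mapping a window of past (SSM, NDVI, EVI, NMDI) observations to the next soil-moisture value. The shapes, hyperparameters, and dummy data are hypothetical; this is not the authors' architecture.

```python
import torch
import torch.nn as nn

class SoilMoistureLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # predict next-step SSM

model = SoilMoistureLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy training batch: 32 windows of 24 hourly time steps, 4 features each
x = torch.rand(32, 24, 4)
y = torch.rand(32, 1)
for _ in range(5):                      # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```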
Procedia PDF Downloads 43123 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks
Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci
Abstract:
The need to reduce fuel consumption and noise during taxi operations at airports, in a scenario of constantly increasing air traffic, has resulted in an effort by the aerospace industry to move towards electric taxiing. This is one of the problems currently being addressed by the SESAR Joint Undertaking (JU), and two main solutions are being proposed. With the first solution, electric motors are installed in the main (or nose) landing gear of the aircraft. With the second solution, manned or unmanned electric tow trucks are used to tow aircraft from the gate to the runway (or vice versa). The presence of the tow trucks results in an increase in vehicle traffic inside the airport. Therefore, it is important to design the system in such a way that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on the use of autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. This system will consist of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool will be responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. There are two main optimization strategies proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements in order to find a global optimum. With the second strategy, called decentralized optimization or the multi-agent approach, the decision authority is distributed among several agents, which could be the aircraft, the tow trucks, and taxiway or runway intersections. This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI will provide information on the movement and status of each aircraft and tow truck, and alert ATC about any impending conflicts. It will also enable ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual Air Traffic Controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness. Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization
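As a simple illustration of the route-optimization component, the sketch below runs Dijkstra's algorithm over a toy taxiway graph whose edge weights are taxi times. The node names and times are hypothetical, and the actual tool described above also handles tow-truck allocation and conflict resolution, which are omitted here.

    import heapq

    def shortest_taxi_route(graph, start, goal):
        """Dijkstra over a taxiway graph; edge weights are taxi times (s)."""
        dist, prev = {start: 0.0}, {}
        pq = [(0.0, start)]
        while pq:
            d, node = heapq.heappop(pq)
            if node == goal:
                break
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry
            for nbr, w in graph.get(node, []):
                nd = d + w
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = nd, node
                    heapq.heappush(pq, (nd, nbr))
        path, node = [goal], goal
        while node != start:
            node = prev[node]     # assumes goal is reachable in this sketch
            path.append(node)
        return path[::-1], dist[goal]

    # hypothetical taxiway fragment: gate -> apron -> parallel taxiways -> holding point
    taxiways = {
        "gate_12": [("apron", 40)],
        "apron":   [("twy_A", 60), ("twy_B", 90)],
        "twy_A":   [("hold_RWY31", 120)],
        "twy_B":   [("hold_RWY31", 70)],
    }
    print(shortest_taxi_route(taxiways, "gate_12", "hold_RWY31"))
    # -> (['gate_12', 'apron', 'twy_B', 'hold_RWY31'], 200.0)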
Procedia PDF Downloads 130122 Barriers to Social Entrepreneurship by Refugees: An Explorative Study How Prior Experience Influences Social Orientation
Authors: D. M. Koers, A. J. Groen, P. D. Englis, R. Harms
Abstract:
We are witnessing the largest displacement of people since World War II. Refugees want to become independent as quickly as possible and build a new, safe future; however, access to the labor market is difficult, and they face many problems that are not easily solved. This makes self-employment, including social entrepreneurship, a valuable alternative. Our research studied refugee-based entrepreneurship and examined whether prior knowledge, unmet personal needs, and contextual factors influence how refugees recognize opportunities and whether this influences their social orientation. In addition, we examined the barriers refugees face when starting up a company in the Netherlands. We used a case study design with a mixed-method approach, combining in-depth interviews and survey data. Data was collected from two Dutch entrepreneurial training programs, with a sample of 27 latent refugee entrepreneurs. Our results show that refugees score high on the social entrepreneurial measures. They perceive themselves as having a strong social vision and are determined to defend a social need. They also score high on sustainability and state that their business ideas improve the quality of life in the long run. Given these findings, it was unexpected that only 5 participants had business ideas with a social orientation. In this group, 37.5% had started a company before, and 77.8% used their personal experience to come up with their business idea. Furthermore, 70.3% had higher professional or academic education. In the interviews, we found that refugees often copy and paste the experience gained in a previous profession onto their new context and expect that it will work well. The social aspect lies in their cultural values and personal beliefs but is not reflected in their business models. One of the reasons could be that the context in which the refugee operates acts as a moderator, suppressing the social mission and social value creation opportunities. Refugees are first and foremost focused on their survival. They do not want to be on social welfare and feel a strong need to be independent. Since they cannot access the labor market easily and face labor market discrimination, they want to start a company. Another factor that explains the lack of social orientation in their business ideas is that social entrepreneurship is not a known concept in their home countries; their idea of entrepreneurship differs substantially. We found that a huge barrier for refugees is their expectations about setting up a business, which are often unrealistic because they have little knowledge of the system, the institutions, and the corresponding red tape. In such instances, the institutional configuration of a country, cultural differences, and perspectives on entrepreneurship can hinder social entrepreneurship. In conclusion, there might be latent potential for social entrepreneurship among refugees, but there are many barriers to overcome. Overcoming these barriers can strengthen local communities and enhance integration. In addition, it has a positive financial impact on the host country, because it reduces the pressure on the social system and stimulates the economy. Keywords: immigrant entrepreneurship, refugee entrepreneurship, social entrepreneurship, prior experience, opportunity recognition
Procedia PDF Downloads 165121 Pedagogy of the Oppressed: Fifty Years Later. Implications for Policy and Reforms
Authors: Mohammad Ibrahim Alladin
Abstract:
Pedagogy of the Oppressed by Paulo Freire was first published in 1970 (New York: Herder and Herder). Since its publication, it has become one of the most cited books in the social sciences, and over a million copies have been sold worldwide. The book has caused a "revolution" in the education world, and its theory has been examined and analysed; it has influenced educational policy, curriculum development, and teacher education. The revolution started half a century ago. Pedagogy of the Oppressed develops a theory of education fitted to the needs of the disenfranchised and marginalized members of capitalist societies. Combining educational and political philosophy, the book offers an analysis of oppression and a theory of liberation. Freire believes that traditional education serves to support the dominance of the powerful within society and thereby maintain the powerful's social, political, and economic status quo. To overcome the oppression endemic to an exploitative society, education must be remade to inspire and enable the oppressed in their struggle for liberation. This new approach to education focuses on consciousness-raising, dialogue, and collaboration between teacher and student in the effort to achieve greater humanization for all. For Freire, education is political and functions either to preserve the current social order or to transform it. The theories of education and revolutionary action he offers in Pedagogy of the Oppressed are addressed to educators committed to the struggle for liberation from oppression. Freire's own commitment to this struggle developed through years of teaching literacy to Brazilian and Chilean peasants and laborers. His efforts at educational and political reform resulted in a brief period of imprisonment, followed by fifteen years of exile from his native Brazil. Pedagogy of the Oppressed begins with Freire asserting the importance of consciousness-raising, or conscientização, as the means of enabling the oppressed to recognize their oppression and commit to the effort to overcome it, taking full responsibility for themselves in the struggle for liberation. He addresses the "fear of freedom," which inhibits the oppressed from assuming this responsibility. He also cautions against the dangers of sectarianism, which can undermine the revolutionary purpose as well as serve as a refuge for the committed conservative. Freire provides an alternative view of education by attacking traditional education and its treatment of knowledge: he is highly critical of how knowledge is imparted and how it is structured in ways that limit the learner's thinking. Hence, education becomes oppressive, and the school functions as an institution of social control. Since the book's publication, education has gone through a series of reforms and, in some areas, total transformation. This paper addresses the following: the role of education in social transformation; the teacher/learner relationship; and critical thinking. The paper essentially examines what has happened in the fifty years since Freire's book appeared. It seeks to explain what happened to Freire's education revolution and what the status is of the movement that started almost fifty years ago. Keywords: pedagogy, reform, curriculum, teacher education
Procedia PDF Downloads 96120 From Victim to Ethical Agent: Oscar Wilde's The Ballad of Reading Gaol as Post-Traumatic Writing
Authors: Mona Salah El-Din Hassanein
Abstract:
Faced with a sudden, unexpected, and overwhelming event, the individual's normal cognitive processing may cease to function, trapping the psyche in "speechless terror," while images, feelings, and sensations are experienced with emotional intensity. Unable to master such a situation, the individual becomes a trauma victim susceptible to traumatic recollections such as intrusive thoughts, flashbacks, and the repetitive re-living of the primal event in a way that blurs the distinction between past and present and forecloses the future. Trauma is timeless, repetitious, and contagious; a trauma observer could fall prey to "secondary victimhood." Central to the process of healing the psychic wounds in the aftermath of trauma is verbalizing the traumatic experience (i.e., putting it into words) – an act which provides a chance for assimilation, testimony, and reevaluation. In light of this paradigm, this paper proposes a reading of Oscar Wilde's The Ballad of Reading Gaol, written shortly after his release from prison, as a post-traumatic text which traces the disruptive effects of the traumatic experience of Wilde's imprisonment for homosexual offences and the ensuing reversal of fortune he endured. Post-traumatic writing demonstrates the process of "working through" a trauma, which may lead to the possibility of ethical agency in the form of a "survivor mission." This paper draws on fundamental concepts and key insights in literary trauma theory, a field characterized by interdisciplinarity, combining the perspectives of critical theory, psychology, psychiatry, psychoanalysis, history, and social studies. Of particular relevance to this paper are the concepts of "vicarious traumatization" and "survivor mission," as The Ballad of Reading Gaol was written in response to Wilde's own prison trauma and the indirect traumatization he experienced as a result of witnessing the execution of a fellow prisoner, whose story forms the narrative base of the poem. The Ballad displays Wilde's sense of mission, which leads him to recognize the social as well as ethical implications of personal tragedy. Through a close textual analysis of The Ballad of Reading Gaol within the framework of literary trauma theory, the paper aims to: (a) demonstrate how the poem's thematic concerns, structure, and rhetorical figures reflect the structure of trauma; (b) highlight Wilde's attempts to come to terms with the effects of the cataclysmic experience which transformed him into a social outcast; and (c) show how Wilde manages to transcend the victim status and assumes the role of ethical agent to voice a critique of the Victorian penal system and of the standards of morality underlying the cruelties practiced against wrongdoers, and to solicit social action. Keywords: The Ballad of Reading Gaol, post-traumatic writing, trauma theory, Wilde
Procedia PDF Downloads 187119 Preparation, Characterization and Photocatalytic Activity of a New Noble Metal Modified TiO2@SrTiO3 and SrTiO3 Photocatalysts
Authors: Ewelina Grabowska, Martyna Marchelek
Abstract:
Among the various semiconductors, nanosized TiO2 has been widely studied due to its high photosensitivity, low cost, low toxicity, and good chemical and thermal stability. However, there are two main drawbacks to the practical application of pure TiO2 films. One is that TiO2 can be excited only by ultraviolet (UV) light due to its intrinsic wide bandgap (3.2 eV for anatase and 3.0 eV for rutile), which limits its practical efficiency for solar energy utilization, since UV light makes up only 4-5% of the solar spectrum. The other is that a high electron-hole recombination rate reduces the photoelectric conversion efficiency of TiO2. In order to overcome the above drawbacks and modify the electronic structure of TiO2, some semiconductors (e.g., CdS, ZnO, PbS, Cu2O, Bi2S3, and CdSe) have been used to prepare coupled TiO2 composites, improving their charge separation efficiency and extending the photoresponse into the visible region. It has been proved that the fabrication of p-n heterostructures by combining n-type TiO2 with p-type semiconductors is an effective way to improve the photoelectric conversion efficiency of TiO2. SrTiO3 is a good candidate for coupling with TiO2 and improving the photocatalytic performance of the composite, because its conduction band edge is more negative than that of TiO2. Due to the potential differences between the band edges of these two semiconductors, the photogenerated electrons transfer from the conduction band of SrTiO3 to that of TiO2, while the photogenerated holes transfer in the opposite direction. The photogenerated charge carriers can thus be efficiently separated, resulting in the enhancement of the photocatalytic properties of the material. Additionally, one of the methods for improving photocatalyst performance is the addition of nanoparticles containing one or two noble metals (Pt, Au, Ag, and Pd) deposited on the semiconductor surface. The proposed mechanisms are: (1) the surface plasmon resonance of noble metal particles is excited by visible light, facilitating the excitation of surface electrons and interfacial electron transfer; (2) energy levels can be produced in the band gap of TiO2 by the dispersion of noble metal nanoparticles in the TiO2 matrix; (3) noble metal nanoparticles deposited on TiO2 act as electron traps, enhancing electron-hole separation. In view of this, we recently obtained a series of TiO2@SrTiO3 and SrTiO3 photocatalysts loaded with noble metal NPs using the photodeposition method. The M-TiO2@SrTiO3 and M-SrTiO3 photocatalysts (M = Rh, Rt, Pt) were studied for the photodegradation of phenol in the aqueous phase under UV-Vis and visible irradiation. Moreover, in the second part of our research, hydroxyl radical formation was investigated. Fluorescence of an irradiated coumarin solution was used as the method of •OH radical detection: coumarin readily reacts with the generated hydroxyl radicals, forming hydroxycoumarins. Although the major hydroxylation product is 5-hydroxycoumarin, only the 7-hydroxy product of coumarin hydroxylation emits fluorescent light. Thus, this method was used only for hydroxyl radical detection, not for determining the concentration of hydroxyl radicals. Keywords: composites TiO2, SrTiO3, photocatalysis, phenol degradation
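The UV-only limitation follows directly from the standard relation between band gap and absorption edge wavelength; a quick back-of-the-envelope check (textbook physics, not a calculation from the paper):

    # absorption edge wavelength from band gap: lambda (nm) = hc/E = 1239.84 / E (eV)
    def absorption_edge_nm(band_gap_ev):
        return 1239.84 / band_gap_ev

    for phase, eg in [("anatase TiO2", 3.2), ("rutile TiO2", 3.0)]:
        print(f"{phase}: Eg = {eg} eV -> absorption edge = {absorption_edge_nm(eg):.0f} nm")
    # ~387 nm and ~413 nm: only the near-UV tail of the solar spectrum is absorbed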
Procedia PDF Downloads 222118 Artificial Cells Capable of Communication by Using Polymer Hydrogel
Authors: Qi Liu, Jiqin Yao, Xiaohu Zhou, Bo Zheng
Abstract:
The first artificial cell was produced by Thomas Chang in the 1950s when he was trying to make a mimic of red blood cells. Since then, many different types of artificial cells have been constructed following one of two approaches: a so-called bottom-up approach, which aims to create a cell from scratch, and a top-down approach, in which genes are sequentially knocked out from organisms until only the minimal genome required for sustaining life remains. In this project, the bottom-up approach was used to build a new cell-free expression system that mimics an artificial cell capable of protein expression and of communicating with other cells. The artificial cells constructed from the bottom-up approach are usually lipid vesicles, polymersomes, hydrogels, or aqueous droplets containing the nucleic acids and the transcription-translation machinery. However, lipid-vesicle-based artificial cells capable of communication present several issues in cell communication research: (1) the lipid vesicles normally lose important functions, such as protein expression, within a few hours; (2) the lipid membrane allows the permeation of only small molecules and limits the types of molecules that can be sensed and released to the surrounding environment for chemical communication; (3) the lipid vesicles are prone to rupture due to imbalances in osmotic pressure. To address these issues, hydrogel-based artificial cells were constructed in this work. To construct the artificial cell, a polyacrylamide hydrogel was functionalized with an Acrylate-PEG-Succinimidyl Carboxymethyl Ester (ACLT-PEG2000-SCM) moiety on the polymer backbone. Proteinaceous factors can then be immobilized on the polymer backbone by the reaction between the primary amines of proteins and the N-hydroxysuccinimide esters (NHS esters) of ACLT-PEG2000-SCM; the plasmid template and ribosomes were encapsulated inside the hydrogel particles. Because the artificial cell can continuously express protein given a supply of nutrients and energy, artificial cell-artificial cell and artificial cell-natural cell communication could be achieved by combining the artificial cell vector with designed plasmids. The plasmids were designed with reference to the quorum sensing (QS) system of bacteria, which largely relies on cognate acyl-homoserine lactone (AHL)/transcription factor pairs. In one communication pair, the "sender" is the artificial or natural cell that produces the AHL signaling molecule by synthesizing the corresponding signal synthase, which catalyzes the conversion of S-adenosyl-L-methionine (SAM) into AHL, while the "receiver" is the artificial or natural cell that senses the quorum sensing signaling molecule from the "sender" and in turn expresses the gene of interest. In the experiment, GFP was first immobilized inside the hydrogel particles to prove that the functionalized hydrogel particles could be used for protein binding. After that, successful artificial cell-artificial cell and artificial cell-natural cell communication was demonstrated by recording the increase in fluorescence signal. The hydrogel-based artificial cell designed in this work can help in studying the complex communication systems of bacteria, and it can also be further developed for therapeutic applications. Keywords: artificial cell, cell-free system, gene circuit, synthetic biology
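The sender/receiver logic described above lends itself to a toy kinetic model. The ODE sketch below, with all rate constants hypothetical and chosen only for illustration, captures the chain of constant AHL synthesis by the sender and Hill-type GFP activation in the receiver:

    import numpy as np
    from scipy.integrate import odeint

    def qs_model(y, t, k_syn, k_deg, v_max, k_half, k_gfp_deg):
        ahl, gfp = y
        d_ahl = k_syn - k_deg * ahl                             # sender: constant AHL synthesis, first-order loss
        d_gfp = v_max * ahl / (k_half + ahl) - k_gfp_deg * gfp  # receiver: Hill-type activation of GFP expression
        return [d_ahl, d_gfp]

    t = np.linspace(0, 600, 601)  # minutes
    sol = odeint(qs_model, [0.0, 0.0], t, args=(0.05, 0.01, 1.0, 2.0, 0.005))
    print(f"t=600 min: AHL = {sol[-1, 0]:.2f} (a.u.), GFP = {sol[-1, 1]:.1f} (a.u.)")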
Procedia PDF Downloads 152117 Synthesis and Characterization of Fibrin/Polyethylene Glycol-Based Interpenetrating Polymer Networks for Dermal Tissue Engineering
Authors: O. Gsib, U. Peirera, C. Egles, S. A. Bencherif
Abstract:
In skin regenerative medicine, one of the critical issues is to produce a three-dimensional scaffold with a porosity optimized for dermal fibroblast infiltration and neovascularization, which exhibits high mechanical properties and displays sufficient wound healing characteristics. In this study, we report on the synthesis and characterization of macroporous sequential interpenetrating polymer networks (IPNs) combining the skin wound healing properties of fibrin with the excellent physical properties of polyethylene glycol (PEG). Fibrin fibers serve as a provisional, biologically active network to promote cell adhesion and proliferation, while PEG provides the mechanical stability to maintain the entire 3D construct. After modifying both PEG and serum albumin (the latter used to promote enzymatic degradability) by adding methacrylate residues (yielding PEGDM and SAM, respectively), fibrin/PEGDM-SAM sequential IPNs were synthesized as follows: macroporous sponges were first produced from PEGDM-SAM hydrogels by a freeze-drying technique and then rehydrated by adding the fibrin precursors. Environmental Scanning Electron Microscopy (ESEM) and Confocal Laser Scanning Microscopy (CLSM) were used to characterize their microstructure. Human dermal fibroblasts were cultivated for one week in the constructs, and different cell culture parameters (viability, morphology, proliferation) were evaluated. Subcutaneous implantations of the scaffolds were conducted in five-week-old male nude mice to investigate their biocompatibility in vivo. We successfully synthesized interconnected, macroporous fibrin/PEGDM-SAM sequential IPNs. The viability of primary dermal fibroblasts was well maintained (above 90%) after 2 days of culture. Cells were able to adhere, spread, and proliferate in the scaffolds, suggesting suitable porosity and intrinsic biological activity of the constructs. The fibrin network adopted a spider-web shape that partially covered the pores, allowing easier cell infiltration into the macroporous structure. To further characterize the in vitro cell behavior, cell proliferation (EdU incorporation, MTS assay) is being studied. Preliminary histological analysis of the animal studies indicated the persistence of the hydrogels even at one month post-implantation and confirmed the absence of an inflammatory response and the good biocompatibility and biointegration of our scaffolds within the surrounding tissues. These results suggest that our fibrin/PEGDM-SAM IPNs could be considered potential candidates for dermal regenerative medicine. Histological analysis will be completed to further assess scaffold remodeling, including de novo extracellular matrix protein synthesis and early-stage angiogenesis. Compression measurements will be conducted to investigate the mechanical properties. Keywords: fibrin, hydrogels for dermal reconstruction, polyethylene glycol, semi-interpenetrating polymer network
Procedia PDF Downloads 237116 Seafloor and Sea Surface Modelling in the East Coast Region of North America
Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk
Abstract:
Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unmapped, as there are many gaps still to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. One solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans. The products offered are compilations of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, Sea Surface Height and marine gravity anomalies can be estimated, and based on the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America – a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge on the quality and usefulness of the data used for seabed and sea surface modeling, and on the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, along with knowledge of the topography of the ocean floor, indirectly inform us about the volume of the entire ocean. The true shape of the ocean surface is further varied by phenomena such as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. Depending on the location of the point, the greater the depth, the lower the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models. Keywords: seafloor, sea surface height, bathymetry, satellite altimetry
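A core step in building such grid models is interpolating scattered soundings onto a regular grid. The following sketch uses SciPy's griddata on synthetic (lon, lat, depth) points with an artificial seamount; the coordinates and depths are invented for illustration, and the study's actual interpolation algorithms may differ.

    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    # synthetic scattered soundings: (lon, lat) positions with depths in meters
    pts = rng.uniform([-70.0, 35.0], [-60.0, 45.0], size=(500, 2))
    depth = -2000 + 1500 * np.exp(-((pts[:, 0] + 65) ** 2 + (pts[:, 1] - 40) ** 2))  # a "seamount"

    # regular target grid and interpolation (method: "linear", "cubic", or "nearest")
    lon_g, lat_g = np.meshgrid(np.linspace(-70, -60, 200), np.linspace(35, 45, 200))
    grid = griddata(pts, depth, (lon_g, lat_g), method="linear")
    print(f"grid depth range: {np.nanmin(grid):.0f} to {np.nanmax(grid):.0f} m")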
Procedia PDF Downloads 81115 Real-Time Neuroimaging for Rehabilitation of Stroke Patients
Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge
Abstract:
Rehabilitation of stroke patients is dominated by classical physiotherapy. A current field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. In particular, if a certain limb is completely paralyzed, neurofeedback is often the last option for treating the patient. Certain exercises, such as imagining the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, such that the corresponding activity takes place in the parts of the cortex neighboring the injured region. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the natural feedback missing due to the absence of movement of the affected limb may be replaced by a synthetic feedback based on motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes, and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback features high specificity and comes in real time without large delay. We describe such an approach, which offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects, from a so-called motor channel set (C4, CZ, C3, CP6, CP4, CP2, CP1, CP3, CP5), the EEG channel featuring the most prominent rhythm in the alpha frequency band. If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent the detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set are not allowed to be in the same range as that of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes. This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, visualized as a color-coded activity map. This approach mitigates the influence of eyelid artifacts on the localization performance. First results from several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases, the results match findings from visual EEG analysis. Frequent eyelid artifacts have no influence on system performance. Furthermore, the system is able to run in real time; due to the design of the frequency transformation, the processing delay is 500 ms. First results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project 'LiveSolo' funded by the Austrian Research Promotion Agency (FFG) (project number: 853263). Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG signal processing, rehabilitation
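The detection logic described above (alpha-band amplitude in the motor channel set, thresholding, and rejection of posterior alpha) can be sketched as follows. The sampling rate, threshold, and rejection ratio are assumptions, not the system's actual parameters, and the source localization step is omitted.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    FS = 256  # Hz; assumed sampling rate
    MOTOR_SET = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]

    def alpha_amplitude(x, fs=FS, band=(8.0, 13.0)):
        """Mean envelope amplitude of one channel in the alpha/µ band."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        return np.abs(hilbert(filtfilt(b, a, x))).mean()

    def detect_mu(eeg, threshold=2.0, ratio=0.8):
        """eeg: dict of channel name -> 1-D array (e.g. a 500 ms window).
        Returns (channel, amplitude) if a µ-rhythm is detected, else None."""
        amps = {ch: alpha_amplitude(sig) for ch, sig in eeg.items()}
        main_ch = max(MOTOR_SET, key=lambda ch: amps[ch])
        if amps[main_ch] < threshold:
            return None  # no prominent rhythm in the motor channel set
        outside = [a for ch, a in amps.items() if ch not in MOTOR_SET]
        if outside and max(outside) > ratio * amps[main_ch]:
            return None  # likely posterior alpha, not a motor µ-rhythm
        return main_ch, amps[main_ch]

    rng = np.random.default_rng(1)
    t = np.arange(FS // 2) / FS  # 500 ms window
    eeg = {ch: rng.standard_normal(FS // 2) for ch in MOTOR_SET + ["O1", "O2", "PZ"]}
    eeg["C3"] += 8 * np.sin(2 * np.pi * 10 * t)  # inject a 10 Hz rhythm at C3
    print(detect_mu(eeg))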
Procedia PDF Downloads 388114 One Pot Synthesis of Cu–Ni–S/Ni Foam for the Simultaneous Removal and Detection of Norfloxacin
Authors: Xincheng Jiang, Yanyan An, Yaoyao Huang, Wei Ding, Manli Sun, Hong Li, Huaili Zheng
Abstract:
Residual antibiotics in the environment pose a threat to ecosystems and human health. Thus, efficient removal and rapid detection of norfloxacin (NOR) in wastewater are very important. The main sources of NOR pollution are agricultural, pharmaceutical-industry, and hospital wastewater. The total consumption of NOR in China can reach 5440 tons per year. It has been found that neither animals nor humans can completely absorb and metabolize NOR, resulting in its excretion into the environment; accordingly, residual NOR has been detected in water bodies. The hazards of NOR in wastewater lie in three aspects: (1) the removal capacity of wastewater treatment plants for NOR is limited (the average removal efficiency of NOR in wastewater treatment plants is reported to be only 68%); (2) NOR entering the environment will lead to the emergence of drug-resistant strains; (3) NOR is toxic to many aquatic species. At present, removal and detection technologies for NOR are applied separately, which leads to a cumbersome operation process. Developing simultaneous adsorption-flocculation removal and FTIR detection of pollutants has three advantages: (1) adsorption-flocculation promotes detection (the enrichment effect on the material surface improves the detection ability); (2) the integration of adsorption-flocculation and detection reduces the material cost and simplifies the operation; (3) FTIR detection endows the water treatment agent with the ability of molecular recognition and semi-quantitative detection of pollutants. Thus, it is of great significance to develop a smart water treatment material with high removal capacity and detection ability for pollutants. This study explored the feasibility of combining a NOR removal method with a semi-quantitative detection method. A magnetic Cu-Ni-S/Ni foam was synthesized by in-situ loading of Cu-Ni-S nanostructures on the surface of Ni foam. The novelty of this material is the combination of adsorption-flocculation and semi-quantitative detection. Batch experiments showed that Cu-Ni-S/Ni foam has a high removal rate for NOR (96.92%), wide pH adaptability (pH = 4.0-10.0), and strong resistance to ion interference (0.1-100 mmol/L). According to the Langmuir fitting model, the removal capacity can reach 417.4 mg/g at 25 °C, which is much higher than that of other water treatment agents reported in most studies. Characterization analysis indicated that the main removal mechanisms are surface complexation, cation bridging, electrostatic attraction, precipitation, and flocculation. Transmission FTIR detection experiments showed that NOR on Cu-Ni-S/Ni foam has easily recognizable FTIR fingerprints; the intensity of the characteristic peaks roughly reflects the NOR concentration. This semi-quantitative detection method has a wide linear range (5-100 mg/L) and a low limit of detection (4.6 mg/L). These results show that Cu-Ni-S/Ni foam has excellent removal performance and semi-quantitative detection ability for NOR molecules. This paper provides a new idea for designing and preparing multi-functional water treatment materials to achieve simultaneous removal and semi-quantitative detection of organic pollutants in water. Keywords: adsorption-flocculation, antibiotics detection, Cu-Ni-S/Ni foam, norfloxacin
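For context, a maximum capacity such as the quoted 417.4 mg/g is obtained by fitting the Langmuir isotherm qe = qmax·KL·Ce/(1 + KL·Ce) to equilibrium data. A minimal fitting sketch with invented data points (not the study's measurements):

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(ce, q_max, k_l):
        """Langmuir isotherm: qe = q_max * K_L * Ce / (1 + K_L * Ce)."""
        return q_max * k_l * ce / (1 + k_l * ce)

    # hypothetical equilibrium data: Ce (mg/L) vs adsorbed amount qe (mg/g)
    ce = np.array([5, 10, 20, 40, 60, 80, 100], dtype=float)
    qe = np.array([95, 170, 265, 340, 375, 395, 405], dtype=float)

    (q_max, k_l), _ = curve_fit(langmuir, ce, qe, p0=(400.0, 0.05))
    print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")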
Procedia PDF Downloads 76113 Metal Contents in Bird Feathers (Columba livia) from Mt Etna Volcano: Volcanic Plume Contribution and Biological Fractionation
Authors: Edda E. Falcone, Cinzia Federico, Sergio Bellomo, Lorenzo Brusca, Manfredi Longo, Walter D’Alessandro
Abstract:
Although trace metals are essential elements for living beings, they can become toxic at high concentrations. Their potential toxicity is related not only to their total content in the environment but mostly to their bioavailability. Volcanoes are important natural metal emitters, and they can deeply affect the quality of air, water, and soils, as well as human health. Trace metals tend to accumulate in the tissues of living organisms, depending on the metal contents in food, air, and water and on the exposure time. Birds are considered bioindicators of interest because their feathers directly reflect the metal uptake from the blood. Birds are exposed to atmospheric pollution through contact with rainfall, dust, and aerosols, and they accumulate metals over the whole life cycle. We report on the first data combining the rainfall metal content in three different areas of Mt Etna, variably fumigated by the volcanic plume, and the metal contents in the feathers of pigeons collected in the same areas. Rainfall samples were collected from three rain gauges placed at different elevations on the eastern flank of the volcano, the flank most exposed to the airborne plume, then filtered, treated with Suprapur-grade HNO₃, and analyzed for Fe, Cr, Co, Ni, Se, Zn, Cu, Sr, Ba, Cd, and As by ICP-MS, and for major ions by ion chromatography. Feathers were collected from single individuals in the same areas where the rain gauges were installed. Additionally, some samples were collected in an urban area little affected by the volcanic plume. The samples were rinsed in MilliQ water and acetone, dried at 50 °C until constant weight, and digested in a 2:1 mixture of Suprapur-grade HNO₃ (65%) and H₂O₂ (30%) for 25-50 mg of sample, in a bath at near-boiling temperature. The solutions were diluted to 20 ml prior to analysis by ICP-MS. The rainfall samples most contaminated by the plume were collected at close distance from the summit craters (less than 6 km) and show lower pH values and higher concentrations of all analyzed metals relative to those from the sites at lower elevation. The analyzed samples are enriched both in metals directly emitted by the volcanic plume and transported by acidic gases (SO₂, HCl, HF) and in metals leached from the airborne volcanic ash. Feathers show different patterns at the different sites, related to the exposure to natural or anthropogenic pollutants. They show abundance ratios similar to rainfall for lithophile elements (Ba, Sr), whereas they are enriched in Zn and Se, known for their antioxidant properties, probably as an adaptive response to the oxidative stress induced by toxic metal exposure. The pigeons revealed a clear heterogeneity of metal uptake in the different parts of the volcano, as an effect of the volcanic plume impact. Additionally, some physiological processes can modify the fate of some metals after uptake, and this offers some insights for translational studies. Keywords: bioindicators, environmental pollution, feathers, trace metals
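One common way to quantify such feather-versus-rainfall patterns is an enrichment factor normalized to a lithophile reference element such as Sr; the abstract does not state that this metric was used, and the concentrations below are invented for illustration only.

    # enrichment factor: EF = (M / Sr)_feather / (M / Sr)_rainfall
    feather  = {"Zn": 120.0, "Se": 2.4, "Ba": 1.1, "Sr": 3.0}   # µg/g, hypothetical
    rainfall = {"Zn": 40.0,  "Se": 0.5, "Ba": 6.0, "Sr": 18.0}  # µg/L, hypothetical

    for metal in ("Zn", "Se", "Ba"):
        ef = (feather[metal] / feather["Sr"]) / (rainfall[metal] / rainfall["Sr"])
        print(f"EF({metal}) = {ef:.1f}")  # EF >> 1 suggests biological enrichment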
Procedia PDF Downloads 143112 Ternary Organic Blend for Semitransparent Solar Cells with Enhanced Short Circuit Current Density
Authors: Mohammed Makha, Jakob Heier, Frank Nüesch, Roland Hany
Abstract:
Organic solar cells (OSCs) have made rapid progress and currently achieve power conversion efficiencies (PCE) of over 10%. OSCs have several merits over other direct light-to-electricity generating cells and can be processed at low cost from solution on flexible substrates over large areas. Moreover, combining organic semiconductors with transparent and conductive electrodes allows for the fabrication of semitransparent OSCs (SM-OSCs). For SM-OSCs, the challenge is to achieve a high average visible transmission (AVT) while maintaining a high short circuit current (Jsc). Typically, the Jsc of SM-OSCs is smaller than when an opaque metal top electrode is used, because the light not absorbed during the first transit through the active layer and the transparent electrode is forward-transmitted out of the device. Recently, OSCs using a ternary blend of organic materials have received attention. This strategy is pursued to extend the light harvesting over the visible range. However, it is a general challenge to manipulate the performance of ternary OSCs in a predictable way, because many key factors affect charge generation and extraction in ternary solar cells. Consequently, device performance is affected by the compatibility between the blend components and the resulting film morphology, the energy levels and bandgaps, and the concentration of the guest material and its location in the active layer. In this work, we report on a solvent-free lamination process for the fabrication of efficient, semitransparent ternary blend OSCs. The ternary blend was composed of PC70BM and the electron donors PBDTTT-C and an NIR-absorbing cyanine dye (Cy7T). Using an opaque metal top electrode, a PCE of 6% was achieved for the optimized binary polymer:fullerene blend (AVT = 56%). However, the PCE dropped to ~2% when the active film thickness was decreased to 30 nm to increase the AVT (75%). Therefore, we resorted to the ternary blend and measured, for non-transparent cells, a PCE of 5.5% using an active polymer:dye:fullerene (0.7:0.3:1.5 wt:wt:wt) film of 95 nm thickness (AVT = 65% when omitting the top electrode). In a second step, the optimized ternary blend was used for the fabrication of SM-OSCs. We used a plastic/metal substrate with a light transmission of over 90% as a transparent top electrode, applied via a lamination process. The interfacial layer between the active layer and the top electrode was optimized in order to improve the charge collection and the contact with the laminated top electrode. We demonstrated a PCE of 3% with an AVT of 51%. The parameter space for ternary OSCs is large, and it is difficult to find the best concentration ratios by trial and error. A rational approach to device optimization is the construction of a ternary blend phase diagram. We discuss our attempts to construct such a phase diagram for the PBDTTT-C:Cy7T:PC70BM system via a combination of Cy7T-selective solvents and atomic force microscopy. From the ternary diagram, morphologies suitable for efficient light-to-current conversion can be identified. We compare experimental OSC data with these predictions. Keywords: organic photovoltaics, ternary phase diagram, ternary organic solar cells, transparent solar cell, lamination
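PCE values such as those quoted above follow from the standard relation PCE = Jsc·Voc·FF/Pin. A minimal check with hypothetical J-V parameters (not the measured ones) under AM1.5G illumination (100 mW/cm²):

    def pce(jsc_ma_cm2, voc_v, ff, p_in_mw_cm2=100.0):
        """Power conversion efficiency (%) from J-V parameters; AM1.5G assumes 100 mW/cm2."""
        return jsc_ma_cm2 * voc_v * ff / p_in_mw_cm2 * 100

    # hypothetical parameters for an opaque ternary cell
    print(f"PCE = {pce(12.5, 0.72, 0.61):.1f} %")  # -> 5.5 %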
Procedia PDF Downloads 264