Search results for: specific assessment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12754

1894 Understanding the Interplay between Consumer Knowledge, Trust and Relationship Satisfaction in Financial Services

Authors: Torben Hansen, Lars Gronholdt, Alexander Josiassen, Anne Martensen

Abstract:

Consumers often exhibit a bias in their knowledge; they often think that they know more or less than they actually do. The concept of 'knowledge over/underconfidence' (O/U) has been used in previous studies to investigate such knowledge bias. O/U appears as a combination of subjective and objective knowledge. Subjective knowledge relates to consumers' perception of their knowledge, while objective knowledge relates to consumers' absolute knowledge measured by objective standards. This separation leads to three scenarios: the consumer can be knowledge calibrated (subjective and objective knowledge are similar), overconfident (subjective knowledge exceeds objective knowledge) or underconfident (objective knowledge exceeds subjective knowledge). Knowledge O/U is a highly useful concept for understanding consumer choice behavior. For example, knowledge-overconfident individuals are likely to exaggerate their ability to make the right choices, are more likely to opt out of necessary information search, spend less time carrying out a specific task than less knowledge-confident consumers, and are more likely to show high financial trading volumes. Using financial services as a case study, this study contributes to previous research by examining how consumer knowledge O/U affects two types of trust (broad-scope trust and narrow-scope trust) and consumer relationship satisfaction. Trust does not only concern consumer trust in individual companies (i.e., narrow-scope trust, NST), but also consumer confidence in the broader business context in which consumers plan and implement their behavior (i.e., broad-scope trust, BST).
NST is defined as 'the expectation that the service provider can be relied on to deliver on its promises', while BST is defined as 'the expectation that companies within a particular business type can generally be relied on to deliver on their promises.' This study expands our understanding of the interplay between consumer knowledge bias, consumer trust, and relationship marketing in two main ways. First, it is demonstrated that the more knowledge over/underconfident a consumer becomes, the higher/lower NST and relationship satisfaction will be. Second, it is demonstrated that BST has a negative moderating effect on the relationship between knowledge O/U and satisfaction, such that knowledge O/U has a stronger positive/negative effect on relationship satisfaction when BST is low rather than high. The data for this study comprise responses from 756 mutual fund investors. Trust is particularly important in consumers' mutual fund behavior because mutual funds have important responsibilities in providing financial advice and in managing consumers' funds.
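The three calibration scenarios described above can be sketched as a small classification routine. The 0-1 score scale and the tolerance threshold below are illustrative assumptions, not part of the study:

```python
def calibration_status(subjective, objective, tolerance=0.1):
    """Classify knowledge calibration from subjective vs. objective scores.

    Scores are assumed to lie on a common 0-1 scale; `tolerance` sets how
    close they must be to count as calibrated (both choices are illustrative).
    """
    if abs(subjective - objective) <= tolerance:
        return "calibrated"
    return "overconfident" if subjective > objective else "underconfident"

print(calibration_status(0.8, 0.5))  # prints "overconfident"
```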

Keywords: knowledge, cognitive bias, trust, customer-seller relationships, financial services

Procedia PDF Downloads 301
1893 The Use of Rotigotine to Improve Hemispatial Neglect in Stroke Patients at the Charing Cross Neurorehabilitation Unit

Authors: Malab Sana Balouch, Meenakshi Nayar

Abstract:

Hemispatial neglect is a common disorder primarily associated with right hemispheric stroke; in the acute phase it can occur up to 82% of the time. Such individuals fail to acknowledge or respond to people and objects in their left field of vision due to deficits in attention and awareness. Persistent hemispatial neglect significantly impedes post-stroke recovery, leading to longer hospital stays, increased functional dependency, longer-term disability in ADLs and increased risk of falls. Recently, evidence has emerged for the use of the dopamine agonist rotigotine in neglect. The aim of our Quality Improvement Project (QIP) is to evaluate and improve the current protocols and practice in the assessment, documentation and management of neglect and rotigotine use at the Neurorehabilitation Unit at Charing Cross Hospital (CNRU). In addition, it sheds light on rotigotine use in the management of hemispatial neglect and paves the way for future research in the field. Our QIP was based in the CNRU. All patients admitted to the CNRU with a right-sided stroke from 2nd February 2018 to 2nd February 2021 were included in the project. Each patient's multidisciplinary team report and hospital notes were searched for information, including bio-data, fulfilment of the inclusion criteria (having hemispatial neglect) and data related to rotigotine use: whether or not the drug was administered, any contraindications in patients who did not receive it, and any therapeutic benefits (subjective or objective improvement in neglect) in those who did receive rotigotine. Data were entered into an Excel sheet and further statistical analysis was done in SPSS 20.0. Of the 80 patients with right-sided strokes, 72.5% were infarcts and 27.5% were hemorrhagic strokes, with the vast majority of both types occurring in the middle cerebral artery (MCA) territory.
A total of 31 (38.8%) of our patients were noted to have hemispatial neglect, with the highest number of cases associated with MCA strokes; almost half of our patients with MCA strokes suffered from neglect. Neglect was more common in male patients. Of the 31 patients with visuospatial neglect, only 16% actually received rotigotine; 80% of those treated showed objective improvement on neglect tests and 20% showed subjective improvement. After thoroughly reviewing the neglect-associated documentation, the following recommendations were put in place. We plan to liaise with the occupational therapy team at our rehabilitation unit to set a battery of tests to be done on all patients presenting with neglect, and recommend clear documentation of the outcome of each neglect screen. We also plan to create two proformas: one for the therapy team, to aid systematic documentation of neglect screens done before and after rotigotine administration, and a second for the medical team, with clear documentation of rotigotine use, its benefits and any contraindications if not administered.

Keywords: hemispatial Neglect, right hemispheric stroke, rotigotine, neglect, dopamine agonist

Procedia PDF Downloads 73
1892 An ICF Framework for Game-Based Experiences in Geriatric Care

Authors: Marlene Rosa, Susana Lopes

Abstract:

Board games have been used for different purposes in geriatric care, demonstrating good results for health in general. However, there is no conceptual framework to help professionals and researchers in this area design intervention programs or plan future studies. The aim of this study was to provide a pilot collection of the serious purposes of board games in geriatric care, using a WHO framework for health and disability. Case studies were developed in seven geriatric residential institutions from the central region of Portugal that are included in the AGILAB program. The AGILAB program is a serious-game-based method to train for, and spread, the implementation of board games in geriatric care. Each institution provided 2 hours/week of experiences using the TATI Hand Game for serious purposes and then completed case-study questions (player characteristics; changes in players' health attributable to this game experience). Two independent researchers read the information and classified it according to International Classification of Functioning, Disability and Health (ICF) categories. Any discrepancy was resolved in a consensus meeting. Results indicate important variability in body functions and structures: specific mental functions (e.g., b140 Attention functions, b144 Memory functions), b156 Perceptual functions, b2 sensory functions and pain (e.g., b230 Hearing functions; b265 Touch function; b280 Sensation of pain), b7 neuromusculoskeletal and movement-related functions (e.g., b730 Muscle power functions; b760 Control of voluntary movement functions; b710 Mobility of joint functions). Less variability was found in activities and participation domains, such as purposeful sensory experiences (d110-d129) (e.g., d115 Listening), communication (d3), d710 basic interpersonal interactions, and d920 recreation and leisure (d9200 Play; d9205 Socializing).
Concluding, this framework, designed from a brief game-based experience, includes mental, perceptual, sensory, neuromusculoskeletal, and movement-related functions and participation in sensory, communication, and leisure domains. More studies, including different experiences and a larger number of users, should be developed to provide a more comprehensive ICF framework for game-based experiences in geriatric care.
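The coding step above (raters assigning ICF categories, then grouping them) can be sketched as a simple lookup. The codes and labels are the ones listed in the abstract, grouped by the standard ICF component letters ('b' = body functions, 'd' = activities and participation):

```python
# ICF categories reported in the study, keyed by code.
icf_codes = {
    "b140": "Attention functions", "b144": "Memory functions",
    "b156": "Perceptual functions", "b230": "Hearing functions",
    "b265": "Touch function", "b280": "Sensation of pain",
    "b730": "Muscle power functions",
    "b760": "Control of voluntary movement functions",
    "b710": "Mobility of joint functions",
    "d115": "Listening", "d710": "Basic interpersonal interactions",
    "d9200": "Play", "d9205": "Socializing",
}

def by_component(codes):
    """Group ICF codes by their leading component letter."""
    groups = {}
    for code in codes:
        groups.setdefault(code[0], []).append(code)
    return groups

groups = by_component(icf_codes)
print(len(groups["b"]), len(groups["d"]))  # prints "9 4"
```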

Keywords: board game, aging, framework, experience

Procedia PDF Downloads 126
1891 Agricultural Education and Research in India: Challenges and Way Forward

Authors: Kiran Kumar Gellaboina, Padmaja Kaja

Abstract:

Agricultural education and research in India need a transformation to serve the needs of farmers and of the nation. The fact that agriculture and allied activities are the main source of livelihood for more than 70% of rural India's population reinforces their importance in the administrative and policy arena. As per India's 2011 Census, agriculture provides employment to approximately 56.6% of the labour force. India has achieved significant growth in agriculture, milk, fish, oilseeds and fruits and vegetables owing to the green, white, blue and yellow revolutions, which have brought prosperity to farmers. Many factors are responsible for these achievements, viz. conducive government policies, the receptivity of farmers, and the establishment of higher agricultural education institutions. A new breed of skilled human resources was instrumental in generating new technologies and in their assessment, refinement and, finally, dissemination to the farming community through extension methods. In order to sustain, diversify and realize the potential of the agriculture sector, it is necessary to develop skilled human resources. Agricultural human resource development is a continuous process undertaken by agricultural universities. The Department of Agricultural Research and Education (DARE) coordinates and promotes agricultural research and education in India. Indian agricultural universities were established on the 'land grant' pattern of the USA, which helped incorporate a number of diverse subjects in the courses as well as provide hands-on practical exposure to students. The State Agricultural Universities (SAUs) were established through legislative acts of the respective states, with major financial support from them, leading to state administrative and policy control. It has been observed that the pace and quality of technology generation and human resource development in many of the SAUs have gone down.
The reasons for this slackening are inadequate state funding, reduced faculty strength, inadequate faculty development programmes, lack of modern infrastructure for education and research, etc. The establishment of new state agricultural universities and new faculties/colleges without the necessary financial and faculty support has aggravated the problem. The present work highlights some of the key issues affecting agricultural education and research in India and the impact they would have on farm productivity and sustainability. Secondary data pertaining to budgetary spending on agricultural education and research will be analyzed. This paper will study the trends in public spending on agricultural education and research and the per capita income of farmers in India. It argues that agricultural education and research have a key role in equipping human resources for enhanced agricultural productivity and sustainable use of natural resources. Further, a total re-orientation of agricultural education, with emphasis on related agricultural social sciences, is needed for effective agricultural policy research.

Keywords: agriculture, challenges, education, research

Procedia PDF Downloads 232
1890 Preparation of Silver and Silver-Gold, Universal and Repeatable, Surface Enhanced Raman Spectroscopy Platforms from SERSitive

Authors: Pawel Albrycht, Monika Ksiezopolska-Gocalska, Robert Holyst

Abstract:

Surface Enhanced Raman Spectroscopy (SERS) is a technique of growing importance, not only in purely scientific research related to analytical chemistry. It finds more and more applications in broadly understood testing - medical, forensic, pharmaceutical, food - and works well everywhere on one condition: that the SERS substrates used give adequate enhancement, repeatability, and homogeneity of the SERS signal. This is a problem that has existed since the invention of the technique. Some laboratories use colloids with silver or gold nanoparticles as SERS amplifiers; others form rough silver or gold surfaces, but the results are generally either weak or unrepeatable. Furthermore, these structures are very often highly specific - they amplify the signal of only a small group of compounds. This means they work with some kinds of analytes, but only those used at the developer's laboratory. When it comes to research on different compounds, completely new SERS substrates are required. That underlay our decision to develop universal substrates for SERS spectroscopy. Generally, each compound has a different affinity for silver and for gold, which have the best SERS properties, and this determines the signal obtained in the SERS spectrum. Our task was to create a platform that gives a characteristic 'fingerprint' of the largest number of compounds with very high repeatability - even at the expense of the enhancement factor (EF), since the possibility to repeat research results is of the utmost importance. Such SERS substrates are offered by the SERSitive company. The applied method is based on cyclic potentiodynamic electrodeposition of silver or silver-gold nanoparticles on the conductive surface of ITO-coated glass at a controlled temperature of the reaction solution.
Silver nanoparticles are supplied in the form of silver nitrate (AgNO₃, 10 mM), gold nanoparticles are derived from tetrachloroauric acid (10 mM), while sodium sulfite (Na₂SO₃, 5 mM) is used as the reductant. To limit and standardize the size of the SERS surface on which nanoparticles are deposited, photolithography is used: we protect the desired ITO-coated glass surface and then etch the unprotected ITO layer, which prevents nanoparticles from settling at those sites. On the prepared surface, we carry out the process described above, obtaining a SERS surface with nanoparticles 50-400 nm in size. The SERSitive platforms exhibit high sensitivity (EF = 10⁵-10⁶), homogeneity, and repeatability (70-80%).
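The enhancement factor quoted above is conventionally estimated by comparing the per-molecule SERS signal with the per-molecule normal Raman signal. The sketch below uses that standard definition with purely illustrative intensities and molecule counts, not values from the paper:

```python
def enhancement_factor(i_sers, n_sers, i_raman, n_raman):
    """Conventional SERS enhancement factor:
    EF = (I_SERS / N_SERS) / (I_RS / N_RS),
    i.e., per-molecule SERS signal over per-molecule normal Raman signal."""
    return (i_sers / n_sers) / (i_raman / n_raman)

# Illustrative only: a 10^4-fold stronger signal from 10^-2 as many molecules
ef = enhancement_factor(1e4, 1e2, 1.0, 1e4)
print(f"EF = {ef:.0e}")  # prints "EF = 1e+06"
```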

Keywords: electrodeposition, nanoparticles, Raman spectroscopy, SERS, SERSitive, SERS platforms, SERS substrates

Procedia PDF Downloads 155
1889 A Unified Approach for Digital Forensics Analysis

Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles

Abstract:

Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the digital footprints that result, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT), databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tool support. It is also increasingly essential to compare and correlate evidence across data sources, and to do so in an efficient and effective manner that enables an investigator to answer high-level questions of the data in a timely fashion without having to trawl through the data and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to investigate cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used.
For example, in a child abduction, an investigation team might have evidence from a range of sources, including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.

Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool

Procedia PDF Downloads 196
1888 Effects of Group Cognitive Restructuring and Rational Emotive Behavioral Therapy on Psychological Distress of Awaiting-Trial Inmates in Correctional Centers in North-West, Nigeria

Authors: Muhammad Shafi’U Adamu

Abstract:

This study examined the effects of two Cognitive Behavioral Therapy (CBT) approaches, Cognitive Restructuring (CR) and Rational Emotive Behavioral Therapy (REBT), on the psychological distress of awaiting-trial inmates in correctional centers in North-West Nigeria. The study had four specific objectives, four research questions, and four null hypotheses. It used a quasi-experimental design involving pre-test and post-test. The population comprised all 7,962 awaiting-trial inmates in correctional centers in North-West Nigeria; 131 awaiting-trial inmates from three intact correctional centers were sampled using the census technique and randomly assigned to three groups (CR, REBT and control). The Kessler Psychological Distress Scale (K10) was adapted for data collection. The instrument was validated by experts and subjected to a pilot study, yielding a Cronbach's alpha reliability coefficient of 0.772. Each group received treatment for 8 consecutive weeks (60 minutes/week). Data collected from the field were subjected to descriptive statistics (mean, standard deviation and mean difference) to answer the research questions. Inferential statistics (ANOVA and the independent-samples t-test) were used to test the null hypotheses at the p ≤ 0.05 level of significance. Results revealed no significant difference among the pre-treatment mean scores of the experimental and control groups. Statistical evidence showed a significant difference among the post-treatment mean scores of the three groups, and results of the post hoc multiple-comparison test indicated a post-treatment reduction of psychological distress in the awaiting-trial inmates. Output also showed a significant difference between the post-treatment psychological distress mean scores of male and female awaiting-trial inmates, but no such difference in those exposed to REBT.
The study recommends that a standardized, structured CBT counseling treatment be designed for correctional centers across Nigeria, and that CBT counseling techniques be used in the treatment of psychological distress in both correctional and clinical settings.
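The ANOVA step used above can be sketched from first principles. The K10 scores below are made-up illustrations, not the study's data:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: ratio of the between-group
    mean square to the within-group mean square across k groups."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical post-treatment K10 scores for CR, REBT and control groups
cr, rebt, control = [18, 20, 17, 19], [16, 15, 17, 16], [30, 28, 31, 29]
f_stat = one_way_anova_f(cr, rebt, control)  # a large F suggests group differences
```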

Keywords: awaiting-trial inmates, cognitive restructuring, correctional centers, rational emotive behavioral therapy

Procedia PDF Downloads 76
1887 Satellite Data to Understand Changes in Carbon Dioxide for Surface Mining and Green Zone

Authors: Carla Palencia-Aguilar

Abstract:

In order to attain the 2050 zero-emissions goal, it is necessary to know how carbon dioxide changes over time, from emissions in the mining industry to attenuation in green zones, so as to establish realistic goals and redirect efforts to reduce greenhouse effects. Two methods were used to compute the amount of CO2 (in tons) in specific mining zones in Colombia. The former used net primary productivity (NPP) from MODIS MOD17A3HGF for the years 2000 to 2021. The latter used MODIS MYD021KM bands 33 to 36, with up to 644 data points distributed across 7 sites corresponding to surface mineral mining of coal, nickel, iron and limestone. The green zones selected were located in the proximity of the studied sites, but further than 1 km away to avoid information overlap. Year 2012 was selected for method 2 to compare the results with data provided by the Colombian government and determine the range of values. Some data were compared with 2022 MODIS energy values and converted to ktons of CO2 using the EPA's Greenhouse Gas Equivalencies Calculator. The results showed that nickel mining was the least polluting, with 81 kton of CO2 eq. on average and a maximum of 102 kton of CO2 eq. per year, with green zones attenuating carbon dioxide by 103 kton of CO2 on average and 125 kton maximum per year over the last 22 years. Following nickel was coal, with an average of 152 kton of CO2 per year and a maximum of 188, values very similar to those of the subjacent green zones, whose average and maximum were 157 and 190 kton of CO2, respectively. Iron had results similar to the 3 limestone sites, with average values of 287 kton of CO2 for mining and 310 kton for green zones, and maximum values of 310 kton for iron mining and 356 kton for green zones.
One of the limestone sites exceeded the others, with an average value of 441 kton per year and a maximum of 490 kton per year, even though it had higher attenuation by green zones than a nearby limestone site (3.5 km apart): 371 kton versus 281 kton on average, and a maximum of 416 kton versus 323 kton. Such vegetation contribution is not enough, meaning that the manufacturing process at the most polluting site should be improved. Comparing bands 33 to 36 for the years 2012 and 2022 from January to August shows that, on average, the ktons of CO2 were similar for mining sites and green zones, indicating an average yearly balance between carbon dioxide emissions and attenuation. However, efforts to improve manufacturing processes are needed to counter carbon dioxide effects, especially during emission peaks, because the surrounding vegetation cannot fully attenuate them.
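The average figures reported above imply a small net sink at each site type. A minimal sketch tabulating the abstract's own averages (kton CO2 eq. per year):

```python
# Average annual values (kton CO2 eq.) as reported in the abstract:
# mining emissions vs. attenuation by the surrounding green zones.
sites = {
    "nickel": {"mining": 81, "green_zone": 103},
    "coal": {"mining": 152, "green_zone": 157},
    "iron/limestone": {"mining": 287, "green_zone": 310},
}

# Positive balance: vegetation attenuates more than the site emits on average
net_balance = {name: v["green_zone"] - v["mining"] for name, v in sites.items()}
print(net_balance)  # prints {'nickel': 22, 'coal': 5, 'iron/limestone': 23}
```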

Keywords: carbon dioxide, MODIS, surface mining, vegetation

Procedia PDF Downloads 101
1886 A Culture-Contrastive Analysis of the Communication between Discourse Participants in European Editorials

Authors: Melanie Kerschner

Abstract:

Language is our main means of social interaction, and news journalism, especially opinion discourse, holds a powerful position in this context. Editorials can be regarded as encounters of different, partially contradictory relationships between discourse participants, constructed through the editorial voice. Their primary goal is to shape public opinion by commenting on events already addressed by other journalistic genres in the given newspaper. In doing so, the author tries to establish a consensus with the reader over the negotiated matter (i.e. the news event). At the same time, he/she claims authority over the 'correct' description and evaluation of an event. Yet how can the relationship and the interaction between the discourse participants, i.e. the journalist, the reader and the news actors represented in the editorial, best be visualized and studied from a cross-cultural perspective? The present research project attempts to give insights into the role of (media) culture in British, Italian and German editorials. For this purpose the presenter will propose a basic framework, the so-called 'pyramid of discourse participants', comprising the author, the reader, two types of news actors and the semantic macro-structure (as a meta-level of analysis). Based on this framework, the following questions will be addressed:
• Which strategies does the author employ to persuade the reader and to prompt him to give his opinion (in the comment section)?
• In which ways (and with which linguistic tools) is editorial opinion expressed?
• Does the author use adjectives, adverbials and modal verbs to evaluate news actors, their actions and the current state of affairs, or does he/she prefer nominal labels?
• Which influence do language choice and the related media culture have on the representation of news events in editorials?
• To what extent does the social context of a given media culture influence the amount of criticism and the way it is mediated so that it remains culturally acceptable?
The culture-contrastive study will examine 45 editorials (15 per media culture) from six national quality papers that are similar in distribution, importance and the kind of envisaged readership, to draw valuable conclusions about culturally-motivated similarities and differences in the coverage and assessment of news events. The thematic orientation of the editorials will be the NSA scandal and the reactions of various countries, as this topic was and still is relevant to each of the three media cultures. Starting out from the 'pyramid of discourse participants' as the underlying framework, eight different criteria will be assigned to the individual discourse participants in the micro-analysis of the editorials. For the purpose of illustration, a single criterion, referring to the salience of authorial opinion, will be selected to demonstrate how the pyramid of discourse participants can be applied as a basis for empirical analysis. Extracts from the corpus shall furthermore enhance understanding.

Keywords: micro-analysis of editorials, culture-contrastive research, media culture, interaction between discourse participants, evaluation

Procedia PDF Downloads 515
1885 Damage-Based Seismic Design and Evaluation of Reinforced Concrete Bridges

Authors: Ping-Hsiung Wang, Kuo-Chun Chang

Abstract:

There has been a common trend worldwide in the seismic design and evaluation of bridges towards performance-based methods, where the lateral displacement or the displacement ductility of the bridge column is regarded as an important indicator for performance assessment. However, the seismic response of a bridge to an earthquake is a combined result of cyclic displacements and accumulated energy dissipation, both of which cause damage to the bridge; hence the lateral displacement (ductility) alone is insufficient to tell its actual seismic performance. This study proposes a damage-based seismic design and evaluation method for reinforced concrete bridges on the basis of newly developed capacity-based inelastic displacement spectra. The capacity-based inelastic displacement spectra, which comprise an inelastic displacement ratio spectrum and a corresponding damage state spectrum, were constructed using a series of nonlinear time history analyses and a versatile, smooth hysteresis model. The smooth model can take into account the effects of various design parameters of RC bridge columns and correlates the column's strength deterioration with the Park and Ang damage index. It was shown that the damage index not only accurately predicts the onset of strength deterioration, but is also a good indicator of the actual visible damage condition of the column regardless of its loading history (i.e., a similar damage index corresponds to a similar actual damage condition for identically designed columns subjected to very different cyclic loading protocols as well as earthquake loading), providing better insight into the seismic performance of bridges.
Besides, the computed spectra show that the inelastic displacement ratio for far-field ground motions approximately conforms to the equal-displacement rule when the structural period is larger than around 0.8 s, but that for near-fault ground motions it departs from the rule across the whole considered spectral region. Furthermore, near-fault ground motions lead to a significantly greater inelastic displacement ratio and damage index than far-field ground motions, and most practical design scenarios cannot survive the considered near-fault ground motion when the strength reduction factor of the bridge is not less than 5.0. Finally, a spectrum formula is presented as a function of structural period, strength reduction factor, and various column design parameters for far-field and near-fault ground motions by means of regression analysis of the computed spectra. Based on the developed spectrum formula, a design example of a bridge is presented to illustrate the proposed damage-based seismic design and evaluation method, in which the damage state of the bridge is used as the performance objective.
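For reference, the Park and Ang damage index mentioned above is conventionally written as follows (the standard form from the literature, not reproduced from this abstract):

```latex
% Park-Ang damage index: peak deformation demand plus
% normalized cumulative hysteretic energy.
D \;=\; \frac{\delta_M}{\delta_u}
   \;+\; \frac{\beta}{Q_y\,\delta_u}\int \mathrm{d}E_h
```

where δ_M is the maximum deformation under earthquake loading, δ_u the ultimate deformation under monotonic loading, Q_y the yield strength, E_h the absorbed hysteretic energy, and β a calibration parameter; D ≥ 1 is usually taken to indicate collapse.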

Keywords: damage index, far-field, near-fault, reinforced concrete bridge, seismic design and evaluation

Procedia PDF Downloads 125
1884 Screening Maize for Compatibility with F. oxysporum to Enhance Striga asiatica (L.) Kuntze Resistance

Authors: Admire Isaac Tichafa Shayanowako, Mark Laing, Hussein Shimelis

Abstract:

Striga asiatica is among the leading biotic constraints to maize production in small-holder farming communities in southern Africa. However, confirmed sources of resistance to the parasitic weed remain limited. Conventional breeding programmes have progressed slowly due to the complex inheritance of Striga resistance; hence there is a need for more innovative approaches. This study aimed to achieve partial resistance as well as to breed for compatibility with Fusarium oxysporum f.sp. strigae, a soil fungus that is highly specific in its pathogenicity. Agar gel and paper roll assays, in conjunction with a glasshouse pot trial, were done to select genotypes based on their potential to stimulate germination of Striga and to test the efficacy of Fusarium oxysporum as a biocontrol agent. Results from the agar gel assays showed moderate to high potential for the release of strigolactones among the 33 open-pollinated varieties (OPVs). Maximum Striga germination distances from the host root of 1.38 cm and up to 46% germination were observed in most of the populations. Considerable resistance was observed in the landrace '8lines', which had the lowest Striga germination percentage (19%) at a maximum distance of 0.93 cm, compared with the resistant check Z-DPLO-DTC1, which had 23% germination at a distance of 1.4 cm. The number of Fusarium colony-forming units differed significantly (p < 0.05) amongst the genotypes grown between germination papers. The number of crown roots, the length of the primary root and the fresh weight of shoot and roots were highly correlated with Fusarium macrospore counts. Pot trials showed significant differences between the Fusarium-coated and uncoated treatments in terms of plant height, leaf counts, anthesis-silking intervals, Striga counts, Striga damage rating and Striga vigour. Striga emergence counts and Striga flower numbers were low in Fusarium-treated pots.
Plants in Fusarium-treated pots showed no significant difference in height compared with the control treatment, suggesting that Foxy 2 reduces the severity of Striga damage. Variability among Fusarium-treated genotypes with respect to the traits under evaluation indicates varying degrees of compatibility with the biocontrol agent.

Keywords: maize, Striga asiatica, resistance, compatibility, F. oxysporum

Procedia PDF Downloads 250
1883 Lactate Biostimulation for Remediation of Aquifers Affected by Recalcitrant Sources of Chloromethanes

Authors: Diana Puigserver Cuerda, Jofre Herrero Ferran, José M. Carmona Perez

Abstract:

In the transition zone between aquifers and basal aquitards, DNAPL pools of chlorinated solvents are more recalcitrant than at other depths in the aquifer. Although degradation of carbon tetrachloride (CT) and chloroform (CF) occurs in this zone, it is a slow process, which is why an adequate remediation strategy is necessary. The working hypothesis of this study is that biostimulation of the transition zone of an aquifer contaminated by CT and CF can be an effective remediation strategy. This hypothesis was tested at a site on an unconfined aquifer where the major contaminants were CT and CF of industrial origin and where the hydrochemical background was rich in other compounds that can hinder the natural attenuation of chloromethanes. Field studies and five laboratory microcosm experiments were carried out on groundwater and sediments to identify: i) the degradation processes of CT and CF; ii) the structure of the microbial communities; and iii) the microorganisms implicated in this degradation. For this, concentrations of contaminants and co-contaminants (nitrate and sulfate), Compound Specific Isotope Analysis, molecular techniques (Denaturing Gradient Gel Electrophoresis) and clone library analysis were used.
The main results were: i) degradation of CT and CF occurred in groundwater and in the less conductive sediments; ii) sulfate-reducing conditions in the transition zone were strong and similar to those in the source of contamination; iii) two microorganisms (Azospira suillum and a bacterium of the order Clostridiales) compatible with a role in the reductive dechlorination of CT, CF and their degradation products (dichloromethane and chloromethane) were identified in the transition zone in both the field and the lab experiments; iv) these two microorganisms were present at the high starting concentrations of the microcosm experiments (similar to those in the DNAPL source) and remained present until the last day of the lactate biostimulation; and v) lactate biostimulation gave rise to the fastest and highest degradation rates and promoted the elimination of other electron acceptors (e.g. nitrate and sulfate). These results are evidence that lactate biostimulation can be effective in remediating both source and plume, especially in the transition zone, and they highlight the environmental relevance of treating contaminated transition zones in industrial contexts similar to the one studied.
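The Compound Specific Isotope Analysis mentioned above is typically interpreted through the Rayleigh model, which relates the isotopic shift of the remaining contaminant to the fraction degraded. A minimal sketch; the δ¹³C shift and the enrichment factor ε below are illustrative assumptions, not values from this study:

```python
def degradation_extent(delta_0, delta_t, epsilon):
    """Estimate the fraction of contaminant degraded from carbon
    isotope shifts using the Rayleigh model.

    delta_0, delta_t: initial and measured d13C (per mil)
    epsilon: isotopic enrichment factor (per mil; negative for
             normal isotope effects)
    Returns the degraded fraction B = 1 - f.
    """
    # Remaining fraction f from the Rayleigh equation:
    # (delta_t + 1000) / (delta_0 + 1000) = f ** (epsilon / 1000)
    ratio = (delta_t + 1000.0) / (delta_0 + 1000.0)
    f = ratio ** (1000.0 / epsilon)
    return 1.0 - f

# Example: d13C shifts from -25 to -20 per mil with epsilon = -15
print(round(degradation_extent(-25.0, -20.0, -15.0), 2))  # -> 0.29
```

Note that a more negative ε makes the same isotopic shift imply less degradation, which is why a site- or culture-specific ε is needed for quantitative estimates.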

Keywords: Azospira suillum, lactate biostimulation of carbon tetrachloride and chloroform, reductive dechlorination, transition zone between aquifer and aquitard

Procedia PDF Downloads 176
1882 Use of Activated Carbon from Olive Stone for CO₂ Capture in Porous Mortars

Authors: A. González-Caro, A. M. Merino-Lechuga, D. Suescum-Morales, E. Fernández-Ledesma, J. R. Jiménez, J. M. Fernández-Rodríguez

Abstract:

Climate change is one of the most significant issues today. Since the 19th century, the rise in temperature has been driven not only by natural variability but mainly by human activities, above all the burning of fossil fuels such as coal, oil and gas. The boom in the construction sector in recent years is also one of the main contributors to CO₂ emissions into the atmosphere; for example, for every tonne of cement produced, one tonne of CO₂ is emitted into the atmosphere. Most of the research in this sector focuses on reducing the large environmental impact generated during the manufacture of building materials. Specifically, this research focuses on the recovery of waste from olive oil mills. Spain is the world's largest producer of olive oil, and this sector generates a large amount of waste and by-products such as olive pits, 'alpechín' or 'alpeorujo'. Pyrolysis of olive stones yields activated carbon, a process that develops an extensive internal pore structure in the carbon. This study is based on the manufacture of porous mortars with Portland cement and natural limestone sand, with additions of 5% and 10% activated carbon. Two curing environments were used: i) a dry chamber, with a humidity of 65 ± 10%, a temperature of 21 ± 2 ºC and an atmospheric CO₂ concentration (approximately 0.04%); ii) an accelerated carbonation chamber, with a humidity of 65 ± 10%, a temperature of 21 ± 2 ºC and a CO₂ concentration of 5%. In addition to eliminating waste from an industry, the aim of this study is to reduce atmospheric CO₂. For this purpose, a physicochemical and mineralogical characterisation of all raw materials was first carried out, using techniques such as X-ray fluorescence and X-ray diffraction. The particle size and specific surface area of the activated carbon were determined.
Subsequently, tests were carried out on the hardened mortar, such as thermogravimetric analysis (to determine the percentage of CO₂ capture), as well as mechanical properties, density, porosity, and water absorption. It was concluded that the activated carbon acts as a sink for CO₂, causing it to be trapped inside the voids. This increases CO₂ capture by 300% with the addition of 10% activated carbon at 7 days of curing. There was an increase in compressive strength of 17.5% with the CO₂ chamber after 7 days of curing using 10% activated carbon compared to the dry chamber.
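The thermogravimetric quantification of CO₂ capture reduces to a simple mass balance: mass lost in the carbonate decomposition window is counted as released CO₂. A sketch with purely illustrative numbers (the 50 mg sample mass and the individual mass losses are assumptions), chosen so the relative increase matches the ~300% reported above:

```python
def co2_capture_percent(mass_initial_mg, mass_loss_mg):
    """CO2 captured as a percentage of the initial sample mass,
    taking the TGA mass loss in the carbonate decomposition
    window (roughly 550-850 C) as released CO2."""
    return 100.0 * mass_loss_mg / mass_initial_mg

# Hypothetical TGA readings for a reference mortar and one with
# 10% activated carbon (values are illustrative only):
reference = co2_capture_percent(50.0, 1.0)   # 2.0 % CO2 by mass
carbon_10 = co2_capture_percent(50.0, 4.0)   # 8.0 % CO2 by mass
increase = 100.0 * (carbon_10 - reference) / reference
print(increase)  # 300.0 -> the ~300% increase reported above
```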

Keywords: olive stone, activated carbon, porous mortar, CO₂ capture, circular economy

Procedia PDF Downloads 63
1881 Microalgae Hydrothermal Liquefaction Process Optimization and Comprehension to Produce High Quality Biofuel

Authors: Lucie Matricon, Anne Roubaud, Geert Haarlemmer, Christophe Geantet

Abstract:

Introduction: This case discusses the management of two floor of mouth (FOM) Squamous Cell Carcinomas (SCC) not identified upon initial biopsy. Case Report: A 51-year-old male presented with right FOM erythroleukoplakia. Relevant medical history included alcohol dependence syndrome and alcoholic liver disease. Relevant drug therapy encompassed acamprosate, folic acid, hydroxocobalamin and thiamine. The patient had a 55.5 pack-year smoking history and alcohol dependence from age 14, drinking 16 units/day. FOM incisional biopsy and histopathological analysis diagnosed carcinoma in situ. Treatment involved wide local excision. Specimen analysis revealed two separate foci of pT1 moderately differentiated SCCs. Carcinoma staging scans revealed no pathological lymphadenopathy and no local invasion or metastasis. The SCCs had been excised completely but with narrow margins. MDT discussion concluded that, in view of the field changes, it would be difficult to identify specific areas needing further excision, although techniques such as Lugol's iodine were considered. Further surgical resection, surgical neck management and sentinel lymph node biopsy were offered. The patient declined intervention; primary management involved close monitoring alongside alcohol and smoking cessation referral. Discussion: Narrow excisional margins can increase carcinoma recurrence risk. Biopsy failed to identify the SCCs, despite sampling an area of clinical concern. For gross field change, multiple incisional biopsies should be considered to increase the chance of accurate diagnosis and appropriate treatment. The coupling of tobacco and alcohol has a synergistic effect, exponentially increasing the relative risk of oral carcinoma development. Tobacco and alcohol control is fundamental in reducing treatment-related side effects, recurrence risk, and second primary cancer development.

Keywords: microalgae, biofuels, hydrothermal liquefaction, biomass

Procedia PDF Downloads 133
1880 The Affordances and Challenges of Online Learning and Teaching for Secondary School Students

Authors: Hahido Samaras

Abstract:

In many cases, especially with the pandemic playing a major role in fast-tracking the growth of the digital industry, online learning has become a necessity or even a standard educational model nowadays, reliably overcoming barriers such as location, time and cost and frequently combined with a face-to-face format (e.g., in blended learning). This being the case, it is evident that students in many parts of the world, as well as their parents, will increasingly need to become aware of the pros and cons of online versus traditional courses. This fast-growing mode of learning, accelerated during the years of the pandemic, presents an abundance of exciting options especially suited to the many secondary school students in remote parts of the world where access to stimulating educational settings and opportunities for a variety of learning alternatives are scarce, adding advantages such as flexibility, affordability, engagement, flow and personalization of the learning experience. However, online learning can also present several challenges, such as reduced student motivation, fewer social interactions in natural settings, gaps in digital literacy, and technical issues, to name a few. Therefore, educational researchers will need to conduct further studies focusing on the benefits and weaknesses of online learning vs. traditional learning, while instructional designers propose ways of enhancing student motivation and engagement in virtual environments. Similarly, teachers will be required to become more and more technology-capable, at the same time developing their knowledge about their students' particular characteristics and needs so as to match them with the affordances the technology offers. And, of course, schools, education programs, and policymakers will have to invest in powerful tools and advanced courses for online instruction.
By developing digital courses that incorporate intentional opportunities for community-building and interaction in the learning environment, as well as taking care to include built-in design principles and strategies that align learning outcomes with learning assignments, activities, and assessment practices, rewarding academic experiences can emerge for all students. This paper raises various issues regarding the effectiveness of online learning for students by reviewing a large number of research studies related to the usefulness and impact of online learning following the COVID-19-induced digital education shift. It also discusses what students, teachers, decision-makers, and parents have reported about this mode of learning to date. Best practices are proposed for parties involved in the development of online learning materials, particularly for secondary school students, as there is a need for educators and developers to be increasingly concerned about the impact of virtual learning environments on student learning and wellbeing.

Keywords: blended learning, online learning, secondary schools, virtual environments

Procedia PDF Downloads 100
1879 Nano-Sized Iron Oxides/ZnMe Layered Double Hydroxides as Highly Efficient Fenton-Like Catalysts for Degrading Specific Pharmaceutical Agents

Authors: Marius Sebastian Secula, Mihaela Darie, Gabriela Carja

Abstract:

Persistent organic pollutants discharged by various industries or urban regions into aquatic ecosystems represent a serious threat to fauna and human health. Endocrine disrupting compounds are known to have toxic effects even at very low concentrations. The anti-inflammatory agent ibuprofen is an endocrine disrupting compound and is considered as the model pollutant in the present study. The use of light energy to meet the latest requirements concerning wastewater discharge demands highly performant and robust photo-catalysts, and much effort has been devoted to obtaining efficient photo-responsive materials. Among the promising photo-catalysts, layered double hydroxides (LDHs) have attracted significant consideration, especially due to their compositional flexibility, high surface area and tailored redox features. This work presents Fe(II) self-supported on ZnMeLDHs (Me = Al3+, Fe3+, Cr3+) as novel efficient photo-catalysts for Fenton-like catalysis. The co-precipitation method was used to prepare ZnAlLDH, ZnFeAlLDH and ZnCrLDH (Zn2+/Me3+ = 2 molar ratio). Fe(II) was self-supported on the LDH matrices by the reconstruction method, at two different weight concentrations. X-ray diffraction (XRD), thermogravimetric analysis (TG/DTG), Fourier transform infrared spectroscopy (FTIR) and transmission electron microscopy (TEM) were used to investigate the structural and textural properties and the micromorphology of the catalysts. The Fe(II)/ZnMeLDH nano-hybrids were tested for the degradation of a model pharmaceutical agent, the anti-inflammatory drug ibuprofen, by photocatalysis and photo-Fenton catalysis, respectively. The results point out that embedding Fe(II) into ZnFeAlLDH and ZnCrLDH leads to a slight enhancement of ibuprofen degradation under light irradiation, whereas in the case of ZnAlLDH the degradation is relatively slow. A remarkable enhancement of ibuprofen degradation was found in the case of Fe(II)/ZnMeLDHs under the photo-Fenton process.
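Comparisons between photocatalytic and photo-Fenton runs such as those above are commonly summarized by an apparent first-order rate constant fitted to the pollutant decay, ln(C/C0) = -kt. A minimal sketch; the ibuprofen time series below is illustrative, not data from this study:

```python
import math

def first_order_k(times_min, conc_ratio):
    """Least-squares slope of ln(C/C0) vs t: the apparent
    first-order rate constant k (min^-1) used to compare
    degradation runs."""
    y = [math.log(c) for c in conc_ratio]
    n = len(times_min)
    tm = sum(times_min) / n
    ym = sum(y) / n
    num = sum((t - tm) * (v - ym) for t, v in zip(times_min, y))
    den = sum((t - tm) ** 2 for t in times_min)
    return -num / den  # negate: decay slope is negative

# Illustrative ibuprofen decay (C/C0) under photo-Fenton:
t = [0, 30, 60, 90]
ratio = [1.0, 0.55, 0.30, 0.17]
print(round(first_order_k(t, ratio), 4))  # -> 0.0197 (min^-1)
```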
Acknowledgements: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405.

Keywords: layered double hydroxide, heterogeneous Fenton, micropollutant, photocatalysis

Procedia PDF Downloads 295
1878 Efficient Field-Oriented Motor Control on Resource-Constrained Microcontrollers for Optimal Performance without Specialized Hardware

Authors: Nishita Jaiswal, Apoorv Mohan Satpute

Abstract:

The increasing demand for efficient, cost-effective motor control systems in the automotive industry has driven the need for advanced, highly optimized control algorithms. Field-Oriented Control (FOC) has established itself as the leading approach for motor control, offering precise and dynamic regulation of torque, speed, and position. However, as energy efficiency becomes more critical in modern applications, implementing FOC on low-power, cost-sensitive microcontrollers poses significant challenges due to the limited availability of computational and hardware resources. Currently, most solutions rely on high-performance 32-bit microcontrollers or Application-Specific Integrated Circuits (ASICs) equipped with Floating Point Units (FPUs) and Hardware Accelerator Units (HAUs). These advanced platforms enable rapid computation and simplify the execution of complex control algorithms like FOC. However, these benefits come at the expense of higher cost, increased power consumption, and added system complexity, drawbacks that limit their suitability for embedded systems with strict power and budget constraints, where achieving energy and execution efficiency without compromising performance is essential. In this paper, we present an alternative approach that utilizes optimized data representation and computation techniques on a 16-bit microcontroller without FPUs or HAUs. By carefully choosing data formats and employing fixed-point arithmetic, we demonstrate how the precision and computational efficiency required for FOC can be maintained in resource-constrained environments. This approach eliminates the performance overhead associated with floating-point operations and hardware acceleration, providing a more practical solution in terms of cost, scalability and execution-time efficiency, allowing faster response in motor control applications.
Furthermore, it enhances system design flexibility, making it particularly well-suited for applications that demand stringent control over power consumption and costs.
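As a concrete illustration of the fixed-point approach described above, the sketch below simulates Q15 arithmetic (1 sign bit, 15 fractional bits), a format commonly used on FPU-less 16-bit MCUs, and applies it to a Clarke transform step from the FOC pipeline. The Python model stands in for what would be integer C on the target; the specific Q15 format is an assumption, as the paper's exact representation is not given:

```python
Q = 15
ONE = 1 << Q  # 1.0 in Q15

def to_q15(x):
    """Convert a float in [-1, 1) to Q15 fixed point."""
    return int(round(x * ONE))

def q15_mul(a, b):
    """Q15 multiply: 16x16 -> 32-bit product, shifted back to
    Q15. On a 16-bit MCU this is one hardware multiply plus an
    arithmetic shift -- no floating point needed."""
    return (a * b) >> Q

def clarke_alpha_beta(ia, ib):
    """Clarke transform: project three-phase currents (balanced,
    so ic is implied) onto the stationary alpha-beta frame.
    ia, ib are Q15 phase currents; returns (i_alpha, i_beta)."""
    INV_SQRT3 = to_q15(1 / 3 ** 0.5)  # 1/sqrt(3) as a Q15 constant
    i_alpha = ia
    i_beta = q15_mul(ia + 2 * ib, INV_SQRT3)
    return i_alpha, i_beta

ia, ib = to_q15(0.5), to_q15(-0.25)
print(clarke_alpha_beta(ia, ib))  # -> (16384, 0)
```

Precomputing constants such as 1/sqrt(3) in Q15 and replacing every floating-point multiply with a multiply-and-shift is the essence of the overhead elimination the abstract describes.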

Keywords: field-oriented control, fixed-point arithmetic, floating point unit, hardware accelerator unit, motor control systems

Procedia PDF Downloads 15
1877 Mathematics as the Foundation for the STEM Disciplines: Different Pedagogical Strategies Addressed

Authors: Marion G. Ben-Jacob, David Wang

Abstract:

There is a mathematics requirement for entry-level college and university students, especially those who plan to study STEM (Science, Technology, Engineering and Mathematics). Most of them take College Algebra, and to continue their studies, they need to succeed in this course. Different pedagogical strategies are employed to promote the success of our students. There is, of course, the traditional method of teaching: lecture, examples, and problems for students to solve. The Emporium Model, another pedagogical approach, replaces traditional lectures with a learning resource center model featuring interactive software and on-demand personalized assistance. This presentation will compare these two methods of pedagogy and report the results of a study comparing them. Math is the foundation for science, technology, and engineering. It is generally used in STEM to find patterns in data; these patterns can be used to test relationships, draw general conclusions about data, and model the real world. In STEM, solutions to problems are analyzed, reasoned, and interpreted using math abilities in an assortment of real-world scenarios. This presentation will examine specific examples of how math is used in the different STEM disciplines. Math becomes practical in science when it is used to model natural and artificial experiments to identify a problem and develop a solution for it. As we analyze data, we are using math to find the statistical correlation between cause and effect. Professionals who use math in science include data scientists, biologists and geologists. Without math, most technology would not be possible: math is the basis of binary, and without programming, you just have the hardware. Addition, subtraction, multiplication, and division are used in almost every program written, and mathematical algorithms are inherent in software as well.
Mechanical engineers analyze scientific data to design robots by applying math and software. Electrical engineers use math to help design and test electrical equipment; they also use math when creating computer simulations and designing new products. Chemical engineers often use mathematics in the lab, where advanced computer software aids their research and production processes by modeling theoretical synthesis techniques and properties of chemical compounds. Mathematics mastery is crucial for success in the STEM disciplines. Pedagogical research on formative strategies and the necessary topics to be covered is essential.

Keywords: emporium model, mathematics, pedagogy, STEM

Procedia PDF Downloads 75
1876 The Analyzer: Clustering Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human Computer Interaction

Authors: Dona Shaini Abhilasha Nanayakkara, Kurugamage Jude Pravinda Gregory Perera

Abstract:

E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper focuses on recommending that businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points from users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, which are easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced, personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that utilizing TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms.
By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty and ultimately drive sales.
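A minimal sketch of the standardize-then-cluster core that a system like TheAnalyzer could rest on, here with plain Lloyd's k-means; the five analytics values per user are invented for illustration, and the paper's actual pipeline (including its dimensionality reduction step) is not reproduced:

```python
import random

def standardize(rows):
    """Z-score each column so analytics on different scales
    (e.g. session length vs. click count) are comparable."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [(sum((x - m) ** 2 for x in c) / len(c)) ** 0.5 or 1.0
            for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(r, means, stds)]
            for r in rows]

def kmeans(rows, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign each user profile to the
    nearest centroid, then move centroids to cluster means."""
    rng = random.Random(seed)
    centroids = rng.sample(rows, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for r in rows:
            d = [sum((a - b) ** 2 for a, b in zip(r, c))
                 for c in centroids]
            clusters[d.index(min(d))].append(r)
        centroids = [[sum(col) / len(col) for col in zip(*cl)]
                     if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Five hypothetical analytics per user (feature names/values are
# assumptions, not the paper's):
profiles = [[12, 3, 0.9, 40, 1], [11, 2, 0.8, 38, 1],
            [2, 9, 0.1, 5, 7], [3, 8, 0.2, 6, 6]]
centroids, clusters = kmeans(standardize(profiles), k=2)
print(sorted(len(c) for c in clusters))  # two groups of 2
```

Domain-expert business rules would then be attached per resulting cluster, as the abstract describes.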

Keywords: data clustering, data standardization, dimensionality reduction, human computer interaction, user profiling

Procedia PDF Downloads 74
1875 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy

Authors: Paul R Armstrong

Abstract:

Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool in which the remaining 90% are hybrid siblings. This introduces time constraints on DH production, and manual sorting is often not accurate. Automated sorting based on the chemical composition of the kernel can be effective, but devices, namely NMR, have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform to accurately identify DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel from maize crosses intentionally developed to create larger-than-normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares regression (PLS) model for oil, or for a categorical reference value of 1 (DH kernel) or 2 (hybrid kernel), and then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model, developed from all crosses to predict oil content, to sort each induction cross; the second was the development of a specific model from a single induction cross, in which approximately fifty DH and one hundred hybrid kernels were used. This second approach used a categorical reference value of 1 or 2, instead of oil content, for the PLS model, and kernels selected for the calibration set were manually referenced by traditional commercial methods using coloration of the tip cap and germ areas.
The generalized PLS oil model statistics were R² = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting by this model extracted 55% to 85% of haploid kernels from the four induction crosses. The second method, generating a model for each cross, yielded model statistics ranging from R² = 0.96 to 0.98 and RMSE from 0.08 to 0.10. Sorting in this case resulted in 100% correct classification but required models that were cross-specific. In summary, the first, generalized oil-model method could be used to sort a significant number of kernels from a kernel pool but did not approach the accuracy of a sorting model developed from a single cross. The penalty for the second method is that a PLS model must be developed for each individual cross. In conclusion, both methods could find useful application in the sorting of DH from hybrid kernels.
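Once per-kernel oil has been predicted by the PLS model, the sorting step itself is a simple threshold decision, since haploid kernels carry the lower oil content in these high-oil induction crosses. A sketch with invented oil predictions and an assumed 4% threshold (the study's actual cut-offs are not given):

```python
def sort_kernels(predicted_oil, threshold):
    """Split kernels into putative haploids and hybrids by
    comparing the NIR-predicted oil content of each kernel
    against an oil threshold (haploids are the low-oil class
    in these crosses)."""
    haploids = [o for o in predicted_oil if o < threshold]
    hybrids = [o for o in predicted_oil if o >= threshold]
    return haploids, hybrids

# Hypothetical per-kernel oil predictions (% dry mass); the
# 1.5-2% DH/hybrid gap mirrors the crosses described above:
oil = [3.1, 3.3, 3.0, 5.2, 5.5, 5.0, 4.9]
dh, hyb = sort_kernels(oil, threshold=4.0)
print(len(dh), len(hyb))  # -> 3 4
```

The categorical (1 vs. 2) calibration described above replaces the oil value with a class score but leaves this thresholding logic unchanged.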

Keywords: NIR, haploids, maize, sorting

Procedia PDF Downloads 302
1874 Positive Disruption: Towards a Definition of Artist-in-Residence Impact on Organisational Creativity

Authors: Denise Bianco

Abstract:

Several studies on innovation and creativity in organisations emphasise the need to expand horizons and take on alternative and unexpected views to produce something new. This paper theorises the potential impact artists can have as creative catalysts when working embedded in non-artistic organisations. It begins from an understanding that in today's ever-changing scenario, organisations increasingly seek to open up new creative thinking through deviant behaviours to produce innovation, and that art residencies need to be critically revised in this specific context in light of their disruptive potential. On the one hand, this paper builds upon recent contributions on workplace creativity and the related concepts of deviance and disruption. Research suggests that creativity is likely to be lower in work contexts where utter conformity is a cardinal value and higher in work contexts that show some tolerance for uncertainty and deviance. On the other hand, this paper draws attention to the Artist-in-Residence as a vehicle for epistemic friction between divergent and convergent thinking, which allows the creation of unparalleled ways of knowing in the dailiness of situated and contextualised social processes. To do so, this contribution brings together insights from the most relevant theories on organisational creativity, unconventional agile methods such as Art Thinking, and direct insights from ethnographic fieldwork in the context of embedded art residencies within work organisations, in order to propose a redefinition of the Artist-in-Residence and its potential impact on organisational creativity. The result is a re-definition of the embedded Artist-in-Residence in organisational settings from a more comprehensive, multi-disciplinary, and relational perspective that builds on three focal points. First, the notion that organisational creativity is a dynamic and synergistic process throughout which an idea is framed by recurrent activities subjected to multiple influences.
Second, the definition of embedded Artist-in-Residence as an assemblage of dynamic, productive relations and unexpected possibilities for new networks of relationality that encourage the recombination of knowledge. Third, and most importantly, the acknowledgment that embedded residencies are, at the very essence, bi-cultural knowledge contexts where creativity flourishes as the result of open-to-change processes that are highly relational, constantly negotiated, and contextualised in time and space.

Keywords: artist-in-residence, convergent and divergent thinking, creativity, creative friction, deviance and creativity

Procedia PDF Downloads 97
1873 The Silent Tuberculosis: A Case Study to Highlight Awareness of a Global Health Disease and Difficulties in Diagnosis

Authors: Susan Scott, Dina Hanna, Bassel Zebian, Gary Ruiz, Sreena Das

Abstract:

Although the number of cases of TB in England has fallen over the last 4 years, it remains an important public health burden, with 1 in 20 cases dying annually. The vast majority of cases present in non-UK-born individuals with social risk factors. We present a case of non-pulmonary TB in a healthy child born in the UK to professional parents. A healthy 10-year-old boy developed acute back pain during school PE. Over the next 5 months, he was seen by various health and allied professionals with worsening back pain and kyphosis. He became increasingly unsteady, and for the 10 days prior to admission to our hospital, he developed fevers. He was admitted to his local hospital for tonsillitis, where he suffered two falls on account of his leg weakness. A spinal X-ray revealed a pathological fracture and gibbus formation, and he was transferred to our unit for further management. On arrival, the patient had lower motor neurone signs in his left leg. He underwent spinal fixation, laminectomy and decompression. Microbiology samples taken intra-operatively confirmed Mycobacterium tuberculosis. He had a positive Mantoux test and T-spot, and treatment was commenced. There was no evidence of immune compromise. The patient was born in the UK, had a BCG scar, and his only travel history was a short holiday in the Philippines two years prior to presentation. The patient continues to have issues around neuropathic pain, mobility, pill burden and mild liver side effects from treatment. Discussion: There is a paucity of case reports on spinal TB in paediatrics, and diagnosis is often difficult due to the non-specific symptomatology. Although prognosis on treatment is good, a delayed diagnosis can have devastating consequences. This case highlights the continued need for a high index of suspicion in a world with changing patterns of migration and increased global travel.
Surgical intervention is limited to the most serious cases, to minimise further neurological damage and improve prognosis. A multi-disciplinary approach remains necessary to deal with the challenges of treatment and rehabilitation.

Keywords: tuberculosis, non-pulmonary TB, public health burden, diagnostic challenge

Procedia PDF Downloads 193
1872 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study

Authors: Jitendra Pratap, Jonathan Sivyer

Abstract:

Introduction: Dual-energy/spectral CT scanning permits the simultaneous acquisition of two x-ray spectral datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g., boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g., iodine density). However, the latter has shown inconsistent results between the two main modes of dual-energy scanning (i.e., dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solution were made using Optiray 350 contrast media diluted in sterile water. The iodine concentrations ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations of 5.0 mg/ml, 7 mg/ml, 10 mg/ml and 15 mg/ml. The vials were scanned in dual-energy mode on a Siemens Somatom Force at 80kV/Sn150kV and 100kV/Sn150kV kilovoltage pairings. The same vials were scanned in spectral mode on a Philips iQon at 120kVp and 140kVp. The images were reconstructed at 5 mm thickness and 5 mm increment using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing was performed on vendor-specific software: Siemens Syngo.via (VB40) for the dual-energy data and Philips IntelliSpace Portal (ver. 12) for the spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy by the dual-layer CT scanner. Although the dual-source images showed a greater degree of deviation in measured iodine density for all vials, the dataset acquired at 80kV/Sn150kV had higher accuracy.
Conclusion: Spectral CT scanning by the dual layer technique has higher accuracy for quantitative measurements of iodine density compared to the dual source technique.
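The vial-level comparison above amounts to scoring each scanner's measured iodine densities against the known prepared concentrations, e.g. by mean bias and RMSE. A sketch with illustrative readings (not the study's data):

```python
def accuracy_stats(known, measured):
    """Per-scanner agreement with ground truth: mean bias and
    root-mean-square error of measured iodine density (mg/ml)
    against the prepared vial concentrations."""
    n = len(known)
    errors = [m - k for k, m in zip(known, measured)]
    bias = sum(errors) / n
    rmse = (sum(e * e for e in errors) / n) ** 0.5
    return bias, rmse

# Illustrative dual-layer readings for three vials:
known = [1.0, 5.0, 10.0]
dual_layer = [1.1, 5.0, 9.9]
bias, rmse = accuracy_stats(known, dual_layer)
print(round(bias, 3), round(rmse, 3))  # -> 0.0 0.082
```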

Keywords: CT, iodine density, spectral, dual-energy

Procedia PDF Downloads 119
1871 Ionometallurgy for Recycling Silver in Silicon Solar Panel

Authors: Emmanuel Billy

Abstract:

This work is part of the CABRISS project (H2020), which aims at developing innovative, cost-effective methods for the extraction of materials from the different sources of PV waste: Si-based panels, thin-film panels and Si water-diluted slurries. Aluminum, silicon, indium, and silver will especially be extracted from these wastes in order to constitute a materials feedstock which can be used later in a closed-loop process. The extraction of metals from silicon solar cells is often an energy-intensive process. It requires either smelting or leaching at elevated temperature, or the use of large quantities of strong acids or bases that require energy to produce. The energy input equates to a significant cost and an associated CO2 footprint, both of which it would be desirable to reduce. There is thus a need to develop more energy-efficient and environmentally compatible processes, and 'ionometallurgy' could offer a new set of environmentally benign processes for metallurgy. This work demonstrates that ionic liquids provide one such method, since they can be used to dissolve and recover silver. The overall process combines leaching, recovery and the possibility of re-using the solution in a closed loop. This study aims to evaluate and compare different ionic liquids for leaching and recovering silver. An electrochemical analysis is first implemented to define the best system for Ag dissolution; the effects of temperature, concentration and oxidizing agent are evaluated by this approach. Further, a comparative study of leaching efficiency between the conventional approach (nitric acid, thiourea) and the ionic liquids (Cu and Al) is conducted. Specific attention has been paid to the selection of the ionic liquids. Electrolytes composed of chelating anions (Cl, Br, I) are used to facilitate the lixiviation and to avoid solubility issues of metallic species and of classical additional ligands.
This approach reduces the cost of the process and facilitates the re-use of the leaching medium. To define the most suitable ionic liquids, electrochemical experiments have been carried out to evaluate the oxidation potential of the silver included in crystalline solar cells. Then, chemical dissolution of the metals from crystalline solar cells was performed for the most promising ionic liquids. After the chemical dissolution, electrodeposition was performed to recover the silver in metallic form.

Keywords: electrodeposition, ionometallurgy, leaching, recycling, silver

Procedia PDF Downloads 247
1870 Shame and Pride in Moral Self-Improvement

Authors: Matt Stichter

Abstract:

Moral development requires learning from one’s failures, but that turns out to be especially challenging when dealing with moral failures. The distress prompted by moral failure can cause responses of defensiveness or disengagement rather than attempts to make amends and work on self-change. The most potentially distressing response to moral failure is shame. However, there appear to be two different senses of “shame” that are conflated in the literature, depending on whether the failure is appraised as the result of a global and unalterable self-defect or a local and alterable one. One of these forms of shame does prompt self-improvement in response to moral failure. This occurs if one views the failure as indicating only a specific (local) defect in one’s identity, something repairable, rather than an overall (global) defect in one’s identity that cannot be fixed. So, if the whole of one’s identity as a morally good person is not being called into question, but only a part, then that is something one could work on to improve. Shame, in this sense, provides motivation for self-improvement to fix this part of the self in the long run, and this would be important for moral development. One factor that appears to affect these different self-attributions in the wake of moral failure can be found in mindset theory, as reactions to moral failure in these two forms of shame resemble how those with a fixed or growth mindset of their own abilities, such as intelligence, react to failure. People fall along a continuum with respect to how they view an ability: either it is a fixed entity that you cannot do much to change, or it is malleable such that you can train to improve it.
These two mindsets, ‘fixed’ versus ‘growth’, have different consequences for how we react to failure: a fixed mindset leads to maladaptive responses because of feelings of helplessness to do better, whereas a growth mindset leads to adaptive responses in which a person puts forth effort to learn how to act better the next time. The parallel is that the way people respond to shame, when it is viewed as indicating a global and unalterable self-defect, mirrors the reactions people have to failure when they hold a fixed mindset. In addition, there may be a similar structure to pride. Pride is, like shame, a self-conscious emotion that arises from internal attributions about the self as the cause of some event. There are also paradoxical results from research on pride, where pride was found to motivate pro-social behavior in some cases but aggression in others. Research suggests that there may be two forms of pride, authentic and hubristic, that are also connected to different self-attributions, depending on whether one feels proud about a particular (local) aspect of the self or about the whole of oneself (global).

Keywords: emotion, mindset, moral development, moral psychology, pride, shame, self-regulation

Procedia PDF Downloads 107
1869 Classification of Emotions in Emergency Call Center Conversations

Authors: Magdalena Igras, Joanna Grzybowska, Mariusz Ziółko

Abstract:

A study of the emotions expressed in emergency phone calls is presented, covering both a statistical analysis of emotion configurations and an attempt to classify emotions automatically. An emergency call is a situation usually accompanied by intense, authentic emotions. They influence (and may inhibit) the communication between caller and responder. In order to support responders in their responsible and psychically exhausting work, we studied when and in which combinations emotions appeared in calls. A corpus of 45 hours of conversations (about 3300 calls) from an emergency call center was collected. Each recording was manually tagged with labels of emotion valence (positive, negative or neutral), type (sadness, tiredness, anxiety, surprise, stress, anger, fury, calm, relief, compassion, satisfaction, amusement, joy) and arousal (weak, typical, varying, high) on the basis of the perceptual judgment of two annotators. As we concluded, basic emotions tend to appear in specific configurations depending on the overall situational context and the attitude of the speaker. After performing the statistical analysis, we distinguished four main types of emotional behavior in callers: worry/helplessness (sadness, tiredness, compassion), alarm (anxiety, intense stress), mistake or neutral request for information (calm, surprise, sometimes with amusement) and pretension/insisting (anger, fury). The frequencies of these profiles were, respectively, 51%, 21%, 18% and 8% of recordings. A model presenting the complex emotional profiles on a two-dimensional (tension-insecurity) plane was introduced. In the acoustic analysis stage, a set of prosodic parameters, as well as Mel-Frequency Cepstral Coefficients (MFCC), was used. Using these parameters, complex emotional states were modeled with machine learning techniques including Gaussian mixture models, decision trees and discriminant analysis.
Results of classification with several methods will be presented and compared with state-of-the-art results obtained for the classification of basic emotions. Future work will include optimization of the algorithm to perform in real time in order to track changes of emotion during a conversation.
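To make the modeling step concrete, here is a minimal sketch that fits one diagonal Gaussian per emotion class and classifies by log-likelihood, a single-component simplification of the Gaussian mixture models named above. The feature vectors (pitch mean, energy, an MFCC-like coefficient) and the class labels are invented placeholders, not the corpus data.

```python
import math

# Hedged sketch: one diagonal Gaussian per class, a simplification of a
# GMM classifier. Training vectors and labels are illustrative only.
train = {
    "alarm":   [[210.0, 0.80, 5.1], [220.0, 0.85, 5.4], [205.0, 0.78, 4.9]],
    "neutral": [[120.0, 0.40, 2.0], [115.0, 0.38, 2.2], [125.0, 0.42, 1.9]],
}

def fit_gaussian(vectors):
    """Per-dimension mean and variance (diagonal covariance)."""
    n, dim = len(vectors), len(vectors[0])
    means = [sum(v[d] for v in vectors) / n for d in range(dim)]
    variances = [max(sum((v[d] - means[d]) ** 2 for v in vectors) / n, 1e-6)
                 for d in range(dim)]
    return means, variances

def log_likelihood(x, means, variances):
    """Log density of x under a diagonal Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * var) + (xi - mu) ** 2 / var)
               for xi, mu, var in zip(x, means, variances))

models = {label: fit_gaussian(vecs) for label, vecs in train.items()}

def classify(x):
    """Pick the class whose Gaussian gives x the highest log-likelihood."""
    return max(models, key=lambda lbl: log_likelihood(x, *models[lbl]))

print(classify([215.0, 0.82, 5.0]))  # prints "alarm"
```

A full GMM would fit several mixture components per class (e.g., via expectation-maximization) and would use many more acoustic features; the decision rule, however, is the same maximum-likelihood comparison shown here.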

Keywords: acoustic analysis, complex emotions, emotion recognition, machine learning

Procedia PDF Downloads 398
1868 Collaborative Data Refinement for Enhanced Ionic Conductivity Prediction in Garnet-Type Materials

Authors: Zakaria Kharbouch, Mustapha Bouchaara, F. Elkouihen, A. Habbal, A. Ratnani, A. Faik

Abstract:

Solid-state lithium-ion batteries have garnered increasing interest in modern energy research due to their potential for safer, more efficient, and sustainable energy storage systems. Among the critical components of these batteries, the electrolyte plays a pivotal role, with LLZO garnet-based electrolytes showing significant promise. Garnet materials offer intrinsic advantages such as high Li-ion conductivity, wide electrochemical stability, and excellent compatibility with lithium metal anodes. However, optimizing ionic conductivity in garnet structures poses a complex challenge, primarily due to the multitude of potential dopants that can be incorporated into the LLZO crystal lattice. The complexity of material design, influenced by numerous dopant options, requires a systematic method to find the most effective combinations. This study highlights the utility of machine learning (ML) techniques in the materials discovery process for navigating the complex range of factors in garnet-based electrolytes. Collaborators from the materials science and ML fields worked with a comprehensive dataset previously employed in a similar study and collected from various literature sources. This dataset served as the foundation for an extensive data refinement phase, in which meticulous error identification, correction, outlier removal, and garnet-specific feature engineering were conducted. This rigorous process substantially improved the dataset's quality, ensuring it accurately captured the underlying physical and chemical principles governing garnet ionic conductivity. The data refinement effort resulted in a significant improvement in the predictive performance of the machine learning model: starting from an accuracy of 0.32, the model ultimately achieved an accuracy of 0.88.
This enhancement highlights the effectiveness of the interdisciplinary approach and underscores the substantial potential of machine learning techniques in materials science research.
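One of the refinement steps mentioned above, outlier removal, can be sketched with the common interquartile-range (IQR) rule. The function and the toy conductivity values below are illustrative assumptions, not the study's dataset or its actual cleaning procedure.

```python
# Hedged sketch: IQR-rule outlier removal, one plausible instance of the
# "outlier removal" refinement step. Values are illustrative only.
def quartiles(sorted_vals):
    """Median-split quartile estimate (assumes at least 4 values)."""
    def median(s):
        m = len(s) // 2
        return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2
    n = len(sorted_vals)
    mid = n // 2
    return median(sorted_vals[:mid]), median(sorted_vals[mid + (n % 2):])

def remove_outliers(values, k=1.5):
    """Keep values within [Q1 - k*IQR, Q3 + k*IQR], preserving order."""
    q1, q3 = quartiles(sorted(values))
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

# hypothetical log10 ionic conductivities (S/cm), one implausible entry
conductivities = [-3.9, -4.1, -3.8, -4.0, -4.2, -3.7, 2.5]
cleaned = remove_outliers(conductivities)
print(cleaned)  # the 2.5 entry is dropped
```

In a real pipeline, a step like this would sit alongside the error correction and garnet-specific feature engineering described above, and the flagged records would typically be inspected rather than silently discarded.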

Keywords: lithium batteries, all-solid-state batteries, machine learning, solid state electrolytes

Procedia PDF Downloads 61
1867 Big Data for Local Decision-Making: Indicators Identified at International Conference on Urban Health 2017

Authors: Dana R. Thomson, Catherine Linard, Sabine Vanhuysse, Jessica E. Steele, Michal Shimoni, Jose Siri, Waleska Caiaffa, Megumi Rosenberg, Eleonore Wolff, Tais Grippa, Stefanos Georganos, Helen Elsey

Abstract:

The Sustainable Development Goals (SDGs) and the Urban Health Equity Assessment and Response Tool (Urban HEART) identify dozens of key indicators to help local decision-makers prioritize and track inequalities in health outcomes. However, presentations and discussions at the International Conference on Urban Health (ICUH) 2017 suggested that additional indicators are needed to make decisions and policies. A local decision-maker may realize that malaria or road accidents are a top priority; however, s/he needs additional health determinant indicators, for example about standing water or traffic, to address the priority and reduce inequalities. Health determinants reflect the physical and social environments that influence health outcomes, often at community and societal levels, and include such indicators as access to quality health facilities, access to safe parks, traffic density, location of slum areas, air pollution, social exclusion, and social networks. Indicator identification and disaggregation are necessarily constrained by available datasets, typically collected about households and individuals in surveys, censuses, and administrative records. Continued advancements in earth observation, data storage, computing and mobile technologies mean that new sources of health determinant indicators derived from 'big data' are becoming available at fine geographic scales. Big data includes high-resolution satellite imagery and aggregated, anonymized mobile phone data. While big data are themselves not representative of the population (e.g., satellite images depict the physical environment), they can provide information about population density, wealth, mobility, and social environments with tremendous detail and accuracy when combined with population-representative survey, census, administrative and health system data.
The aim of this paper is to (1) flag to data scientists important indicators needed by health decision-makers at the city and sub-city scale, ideally free and publicly available, and (2) summarize for local decision-makers new datasets that can be generated from big data, with layperson descriptions of the difficulties in generating them. We include SDG and Urban HEART indicators, as well as indicators mentioned by decision-makers attending ICUH 2017.

Keywords: health determinant, health outcome, mobile phone, remote sensing, satellite imagery, SDG, urban HEART

Procedia PDF Downloads 209
1866 זכור (Remember): An Analysis of Art as a Reflection of Sexual and Gendered Violence against Jewish Women during the Pogroms (1919-1920s) and the Nazi Era (1933-1943)

Authors: Isabella B. Davidman

Abstract:

Violence used against Jewish women in both the Eastern European pogroms and during the Nazi era was specifically gendered, targeting their female identity and the dignity of their womanhood. Not only did these acts of gendered violence dehumanize Jewish women, but they also hurt the Jewish community as a whole. The devastating sexual violence that women endured during the pogroms and the Nazi era caused profound trauma. Out of shame and fear, silence about women’s experiences of sexual abuse manifested in forms that words cannot translate, and women turned to art and other means of storytelling to convey their experiences in visual and non-verbal ways. This paper therefore addresses the historical accounts of gendered violence against Jewish women during the pogroms and the Nazi era, as well as art that reflects on the female experience, in order to understand the emotional impact of these events. To analyze the artwork, a feminist analysis was used to understand the intersection of gender with other systems of inequality, such as systemic anti-Semitism, in women’s lives; this ultimately explained the ways in which cultural productions undermine and reinforce the political and social oppression of women by exploring how art confronts the exploitation of women's bodies. By analyzing the art in the context of specific acts of violence, such as public rape used as a strategic weapon, we are able to understand women’s experiences and how those experiences, in turn, challenged their womanhood. Additionally, these atrocities, which often occurred in public space, were dismissed and forgotten due to the social stigma of rape. In this sense, the experiences of women in the pogroms and the Nazi era were both largely unacknowledged and forgotten. The art produced during those periods, as well as afterwards, therefore gives voice to the profound silence surrounding the narratives of Jewish women.
Sexual violence is a weapon of war used to cause physical and psychological destruction, not merely a by-product of war. In both the early twentieth-century pogroms and the Holocaust, the sexual violence that Jewish women endured was fundamentally the same: the rape of Jewish women became a focal target in the theater of violence. Women were not raped simply because they were women, but specifically because they were Jewish women. Although the events of the pogroms and the Holocaust are in the past, the art that serves as testimony to the experience of Jewish women remains an everlasting reminder of the gendered violence that occurred. Even through covert expressions, such as an embroidered image of a bird eating an apple, the artwork gives voice to the many silenced victims of sexualized and gendered violence.

Keywords: gendered violence, Holocaust, Nazi era, pogroms

Procedia PDF Downloads 104
1865 The Willingness to Pay of People in Taiwan for Flood Protection Standard of Regions

Authors: Takahiro Katayama, Hsueh-Sheng Chang

Abstract:

Global climate change has increased extreme rainfall, leading to serious floods around the world. In recent years, urbanization and population growth have also increased the extent of impervious surfaces, resulting in significant loss of life and property during floods, especially in the urban areas of Taiwan. In the past, the primary governmental response to floods was structural flood control, and the only flood protection standards in use were design standards. However, these design standards for flood control facilities are generally calculated based on current hydrological conditions. In the face of future extreme events, there is a high possibility of surpassing existing design standards, causing direct and indirect damage to the public. To cope with the frequent occurrence of floods in recent years, it has been pointed out that a different standard, called FPSR (Flood Protection Standard of Regions), is needed in Taiwan. FPSR is mainly used for disaster reduction, ensuring that hydraulic facilities can drain a regional flood promptly under a specific return period. FPSR can convey a level of flood risk that is useful for land use planning and reflect the disaster conditions that a region can bear. However, little has been reported on FPSR and its impact on the public in Taiwan. Hence, this study proposes a quantitative procedure to evaluate FPSR. The study aimed to examine the FPSR of a region and public perceptions of and knowledge about FPSR, as well as the public’s WTP (willingness to pay) for FPSR. The research is conducted via a literature review and a questionnaire survey. Firstly, this study reviews the domestic and international research on FPSR and provides its theoretical framework.
Secondly, CVM (Contingent Valuation Method) has been employed to conduct the survey, using a double-bounded dichotomous-choice, close-ended format to elicit households' WTP for raising the protection level and so understand the social costs. The sample comprises citizens living in Taichung City, Taiwan; 700 respondents were chosen for this study. Finally, this research will continue working on the surveys, identifying which factors determine WTP, and providing recommendations for flood adaptation policies in the future.
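As a rough sketch of how double-bounded dichotomous-choice answers translate into WTP bounds, the code below maps each respondent's two yes/no answers to an interval and takes a crude midpoint estimate of mean WTP. The bid values, answers, and the cap on the open upper interval are invented assumptions; a study like this one would normally estimate WTP with an interval-censored likelihood model rather than midpoints.

```python
# Hedged sketch: double-bounded dichotomous-choice responses to WTP
# intervals. Bids and answers are illustrative placeholders only.
def wtp_interval(bid, bid_up, bid_down, first_yes, second_yes, cap):
    """Return the (lower, upper) WTP bounds implied by the two answers."""
    if first_yes and second_yes:
        return (bid_up, cap)        # yes-yes: WTP is at least the higher bid
    if first_yes:
        return (bid, bid_up)        # yes-no: WTP between first and higher bid
    if second_yes:
        return (bid_down, bid)      # no-yes: WTP between lower and first bid
    return (0.0, bid_down)          # no-no: WTP below the lower bid

# (first bid, higher bid, lower bid, answer1, answer2) per respondent
responses = [
    (100, 200, 50, True, True),
    (100, 200, 50, True, False),
    (100, 200, 50, False, True),
    (100, 200, 50, False, False),
]
CAP = 400  # assumed upper bound for the open yes-yes interval

midpoints = [(lo + hi) / 2
             for lo, hi in (wtp_interval(b, u, d, a1, a2, CAP)
                            for b, u, d, a1, a2 in responses)]
mean_wtp = sum(midpoints) / len(midpoints)
print(f"crude mean WTP estimate: {mean_wtp:.1f}")
```

The double-bounded design's appeal is visible here: each respondent contributes a bounded interval rather than a single yes/no, which tightens the information available for estimating the WTP distribution.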

Keywords: climate change, CVM (Contingent Valuation Method), FPSR (Flood Protection Standard of Regions), urban flooding

Procedia PDF Downloads 249