Search results for: market testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6080

140 The Relevance of (Re)Designing Professional Paths with Unemployed Working-Age Adults

Authors: Ana Rodrigues, Maria Cadilhe, Filipa Ferreira, Claudia Pereira, Marta Santos

Abstract:

Professional paths must be understood in the multiplicity of their possible configurations. While some actors tend to represent their path as a harmonious succession of positions in the life cycle, most recognize the existence of unforeseen and uncontrollable bifurcations, caused, for example, by a work accident or by a period of unemployment. Considering the intensified challenges posed by ongoing societal changes (e.g., technological and demographic), and looking at the Portuguese context, where unemployment remains more pronounced in certain age groups, such as individuals aged 45 or over, it is essential to provide those adults with strategies capable of supporting them during professional transitions, a responsibility shared by governments, employers, workers, and educational institutions, among others. Concerned about these issues, Porto City Council launched the challenge of designing and implementing a Lifelong Career Guidance program, which was answered with a customized conceptual and operational model: groWing|Lifelong Career Guidance. A pilot project targeting unemployed working-age adults (35 or older) was carried out, aiming to support them in reconstructing their professional paths through the recovery of their past experiences and through reflection on dimensions such as skills, interests, constraints, and the labor market. An action research approach was used to assess the proposed model, namely the perceived relevance of the theme and of the project, by the adults themselves (N=44), employment professionals (N=15), and local companies (N=15), in an integrated manner. A set of activities was carried out: a train-the-trainer course and a monitoring session with employment professionals; collective and individual sessions with adults, including a monitoring session as well; and a workshop with local companies.
Support materials for individual and collective reflection on professional paths were created and adjusted for each agent involved. An evaluation model was co-built by the different stakeholders. Assessment was carried out through a form created for the purpose, completed at the end of the different activities, which allowed us to collect quantitative and qualitative data. Statistical analysis was performed with SPSS software. Results showed that the participants, as well as the employment professionals and the companies involved, considered both the topic and the project extremely relevant. The adults saw the project as an opportunity to reflect on their paths and become aware of the opportunities and the conditions necessary to achieve their goals; the professionals highlighted the support given by an integrated methodology and the existence of tools to assist the process; the companies valued the opportunity to think about the topic and the initiatives they could implement to diversify their recruitment pool. The results allow us to conclude that, in the local context under study, there is an alignment between the different agents regarding the pertinence of supporting adults with work experience through professional transitions, with the project seen as a relevant strategy to address this issue, which justifies extending it in time and to other working-age adults in the future.

Keywords: professional paths, research action, turning points, lifelong career guidance, relevance

Procedia PDF Downloads 62
139 Sustainable Biostimulant and Bioprotective Compound for the Control of Fungal Diseases in Agricultural Crops

Authors: Geisa Lima Mesquita Zambrosi, Maisa Ciampi Guillardi, Flávia Rodrigues Patrício, Oliveiro Guerreiro Filho

Abstract:

Certified agricultural products are important components of the food industry. However, certifiers have been expanding the list of restricted or prohibited pesticides, limiting the options for phytosanitary control of plant diseases without offering alternatives to farmers. Soybean rust, coffee leaf rust, brown eye spot, and Phoma leaf spot are the main fungal diseases threatening soybean and coffee cultivation worldwide. In conventional farming systems, these diseases are controlled with synthetic fungicides, which, in addition to intensifying the occurrence of fungal resistance, are highly toxic to the environment, farmers, and consumers. In organic, agroecological, or regenerative farming systems, options for plant protection are limited to copper-based compounds, biopesticides, or non-standard homemade products. There is therefore a growing demand for effective bioprotectors with low environmental impact for adoption in more sustainable agricultural systems. To help fill this gap, we have developed a compound based on plant extracts and metallic elements for foliar application. The product has both biostimulant and bioprotective action, which promotes sustainable disease control, increases productivity, and reduces both the dependence on imported technologies and damage to the environment. The product's components have complementary mechanisms that protect against disease by acting directly on the pathogens and by activating the plant's natural defense system. The protective ability of the product against three coffee diseases (coffee leaf rust, brown eye spot, and Phoma leaf spot) and against soybean rust was evaluated, in addition to its ability to promote plant growth. Our goal is to offer an effective alternative for controlling the main coffee and soybean fungal diseases, with a biostimulant effect and low toxicity.
The proposed product can also be part of the integrated management of coffee and soybean diseases in conventional farming, in association with chemical and biological pesticides, offering the market sustainable coffee and soybean with high added value and low residue content. Experiments were carried out under controlled conditions to evaluate the effectiveness of the product in controlling rust, Phoma leaf spot, and cercosporiosis (brown eye spot) in comparison to control inoculated plants that did not receive the product. The in vitro and in vivo effects of the product on the pathogen were evaluated using light microscopy and scanning electron microscopy, respectively. The fungistatic action of the product was demonstrated by reductions of 85% in spore germination and 95% in disease symptom severity on the leaves of coffee plants. The formulation had both a protective effect, acting to prevent infection by coffee leaf rust, and a curative effect, reducing rust symptoms after its establishment.

Keywords: plant disease, natural fungicide, plant health, sustainability, alternative disease management

Procedia PDF Downloads 13
138 Advanced Bio-Fuels for Biorefineries: Incorporation of Waste Tires and Calcium-Based Catalysts to the Pyrolysis of Biomass

Authors: Alberto Veses, Olga Sanhauja, María Soledad Callén, Tomás García

Abstract:

The appropriate use of renewable sources is decisive for minimizing the environmental impact of fossil fuel use. In particular, lignocellulosic biomass is one of the most promising alternatives, since it is the only carbon-containing renewable source that can produce bioproducts similar to fossil fuels without competing with the food market. Among the processes that can valorize lignocellulosic biomass, pyrolysis is attractive because it is the only thermochemical process that produces a liquid biofuel (bio-oil) in a simple way, together with solid and gas fractions that can be used as energy sources to support the process. However, in order to incorporate bio-oils into current infrastructures and process them further in future biorefineries, their quality needs to be improved. Introducing low-cost catalysts and/or incorporating polymer residues into the process are simple, new, low-cost strategies for directly obtaining advanced bio-oils for use in future biorefineries in an economic way. Accordingly, on the basis of previous thermogravimetric analyses, local agricultural wastes such as grape seeds (GS) were selected as the lignocellulosic biomass, while waste tires (WT) were selected as the polymer residue. CaO was selected as the low-cost catalyst based on the group's previous experience. A specially designed fixed-bed reactor using N₂ as the carrier gas was used. This reactor incorporates a vertical mobile liner that allows the feedstock to be introduced into the oven once the selected temperature (550 °C) is reached, ensuring the higher heating rates needed for the process. Obtaining a well-defined phase distribution in the resulting bio-oil is crucial to ensure the viability of the process.
Thus, once the experiments were carried out, not only were two well-defined layers observed for several mixtures (with up to 40 wt.% WT), but an upgraded organic phase, the one considered for processing in further biorefineries, was also obtained. Radical interactions between GS and WT released during the pyrolysis process, together with dehydration reactions enhanced by CaO, can promote the formation of better-quality bio-oils. This was reflected in a reduction of the water and oxygen content of the bio-oil and, hence, a substantial increase in its heating value and its stability. Moreover, not only was the sulphur content reduced relative to the pyrolysis of WT alone, but the potential negative issues related to the strongly acidic environment of conventional bio-oils were also minimized, owing to the bio-oil's basic pH and lower total acid numbers. Acidic compounds produced in the pyrolysis, such as CO₂-like substances, can react with the CaO, minimizing the acidity problems associated with lignocellulosic bio-oils. Moreover, this CO₂ capture promotes H₂ production through the water-gas shift reaction, favoring hydrogen-transfer reactions and improving the final quality of the bio-oil. These results show the great potential of grape seeds for the catalytic co-pyrolysis process with different plastic residues in order to produce a liquid bio-oil that can be considered a high-quality renewable vector.

Keywords: advanced bio-oils, biorefinery, catalytic co-pyrolysis of biomass and waste tires, lignocellulosic biomass

Procedia PDF Downloads 212
137 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments.
It acknowledges that big data processing requires distributed and parallel computing capabilities that span cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
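The workload-placement and cost-modeling ideas above can be illustrated with a minimal greedy heuristic: estimate each provider's compute and egress cost for a job, and place the job where the estimate is lowest. The provider names and prices below are hypothetical, not real cloud pricing; this is a sketch of the idea, not any paper's actual method.

```python
# Minimal sketch of cost-aware workload placement across clouds.
# All provider names and prices are invented for illustration.

from dataclasses import dataclass


@dataclass
class Cloud:
    name: str
    compute_cost_per_hour: float  # $/hour of processing on this provider
    egress_cost_per_gb: float     # $/GB moved out of this provider


def placement_cost(cloud: Cloud, hours: float, egress_gb: float) -> float:
    """Estimated total cost of running one workload on one cloud."""
    return cloud.compute_cost_per_hour * hours + cloud.egress_cost_per_gb * egress_gb


def cheapest_cloud(clouds: list[Cloud], hours: float, egress_gb: float) -> Cloud:
    """Greedy placement: pick the provider minimizing the estimated cost."""
    return min(clouds, key=lambda c: placement_cost(c, hours, egress_gb))


clouds = [
    Cloud("provider-a", compute_cost_per_hour=0.40, egress_cost_per_gb=0.09),
    Cloud("provider-b", compute_cost_per_hour=0.55, egress_cost_per_gb=0.02),
]

# A transfer-heavy job favors the low-egress provider; a compute-heavy
# job favors the cheap-compute provider.
print(cheapest_cloud(clouds, hours=10, egress_gb=500).name)
print(cheapest_cloud(clouds, hours=100, egress_gb=1).name)
```

A real placement engine would add latency, data-gravity, and compliance constraints to the objective, but the trade-off it optimizes is the one sketched here.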

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 39
136 Sustainable Urban Growth of Neighborhoods: A Case Study of Alryad-Khartoum

Authors: Zuhal Eltayeb Awad

Abstract:

The Alryad neighborhood is located in Khartoum town, the administrative center of the capital of Sudan. The neighborhood is one of the high-income residential areas, with low-density villa-type development. It was planned and developed in 1972 with large plots (600–875 m²), wide crossing roads, and a balanced environment. Recently, the area has transformed into a more compact urban form of high-density, mixed-use integrated development with more intensive use of land, including multi-storied apartments. The most important socio-economic process in the neighborhood has been the commercialization and densification of the area, in connection with the displacement of the residential function. This transformation has affected the quality of the neighborhood and the inter-related features of the built environment. A case study approach was chosen to gather the necessary qualitative and quantitative data. A detailed survey of the existing development pattern was carried out over the whole area of Alryad. Data on the built and social environment of the neighborhood were collected through observations, interviews, and secondary data sources. The paper reflects a theoretical and empirical interest in the particular characteristics of the compact neighborhood, with high density and mixed land uses, and their effect on the social wellbeing of the residents, all in the context of sustainable development. The research problem is focused on the challenges of a transformation associated with the compact neighborhood that has created multiple urban problems, e.g., strain on essential services (water supply, electricity, and drainage), congestion of streets, and demand for parking. The main objective of the study is to analyze the transformation of this area from residential use to commercial and administrative use. The study analyzed the current situation of the neighborhood against the five principles of sustainable neighborhoods prepared by UN-Habitat.
The study found that the neighborhood has experienced the changes that occur to inner-city residential areas, and that the process of change was originated by external forces due to the declining economic situation of the whole country. It is evident that non-residential uses have taken place in an uncontrolled, unregulated, and haphazard way, damaging the residential environment and creating deficiencies in infrastructure. The quality of urban life, and in particular the level of privacy, has been reduced, and the neighborhood has gradually changed into a central business district that provides services to the whole of Khartoum town. The change in house type may be attributed to a demand-led housing market and an absence of policy. The results showed that Alryad is not fully sustainable and self-contained, although its street network characteristics and mixed land-use development are compatible with the principles of sustainability. The area of streets represents 27.4% of the total area of the neighborhood. Residential density is 4,620 people/km², lower than the recommendations, and the share of single-use blocks exceeds the 10% limit on block land-use specialization. Most inhabitants have high incomes, so there is no social mix in the neighborhood. The study recommends revising the current zoning regulations in order to control and regulate undesirable development in the neighborhood and to provide new solutions that promote its sustainable development.

Keywords: compact neighborhood, land uses, mixed use, residential area, transformation

Procedia PDF Downloads 102
135 Structural Monitoring of Externally Confined RC Columns with Inadequate Lap-Splices, Using Fibre-Bragg-Grating Sensors

Authors: Petros M. Chronopoulos, Evangelos Z. Astreinidis

Abstract:

A major issue in the structural assessment and rehabilitation of existing RC structures is the inadequate lap-splicing of the longitudinal reinforcement. Although prohibited by modern Design Codes, the practice of arranging lap-splices inside the critical regions of RC elements was commonly applied in the past, and today it is still the rule, at least for conventional new buildings. Therefore, a lot of relevant research is ongoing in many earthquake-prone countries. The rehabilitation of deficient lap-splices of RC elements by means of external confinement is widely accepted as the most efficient technique. If correctly applied, this versatile technique offers a limited increase in flexural capacity and a considerable increase in local ductility and in axial and shear capacities. Moreover, this intervention does not affect the stiffness of the elements or the dynamic characteristics of the structure. The technique has been extensively discussed and researched, contributing to a vast accumulation of technical and scientific knowledge that has been reported in relevant books, reports, and papers, and included in recent Design Codes and Guides. These references mostly deal with modeling and redesign, covering both the enhanced axial and shear capacity (due to the additional external closed hoops or jackets) and the increased ductility (due to the confining action, which prevents the unzipping of lap-splices and the buckling of continuous reinforcement). An analytical and experimental program devoted to RC members with lap-splices has been completed in the Laboratory of RC, NTU Athens, Greece. This program aims at proposing a rational and safe theoretical model and at calibrating the relevant Design Codes' provisions. Tests on forty-two (42) full-scale specimens, covering mostly beams and columns (not walls), strengthened or not, with adequate or inadequate lap-splices, have already been performed and evaluated.
In this paper, the results of twelve (12) specimens under fully reversed cyclic actions are presented and discussed. In eight (8) specimens the lap-splices were inadequate (splicing lengths of 20 or 30 bar diameters), and they were retrofitted before testing by means of additional external confinement. The two most commonly applied confining materials, steel and FRPs, were used in this study. More specifically, jackets made of CFRP wraps or light cages made of mild steel were applied. The main parameters of these tests were (i) the degree of confinement (internal and external) and (ii) the length of the lap-splices, equal to 20, 30 or 45 bar diameters. The tests were thoroughly instrumented and monitored by means of conventional (LVDTs, strain gages, etc.) and innovative (optic fibre-Bragg-grating) sensors. This allowed a thorough investigation of the most influential design parameter, namely the hoop stress developed in the confining material. Based on these test results and on comparisons with the provisions of modern Design Codes, it can be argued that shorter (than the normative) lap-splices, commonly found in old structures, can still be effective and safe (at least for lengths above an absolute minimum), depending on the required ductility, provided that a properly arranged and adequately detailed external confinement is applied.
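To illustrate why the hoop stress is the governing design parameter, the lateral confining pressure a thin jacket delivers to a circular core follows from the classic equilibrium relation f_l = 2 t σ_h / D. The relation is standard; the jacket modulus, thickness, strain, and diameter below are hypothetical illustration values, not measurements from the tests described here (and rectangular sections, as tested in practice, require additional shape factors).

```python
# Thin-jacket equilibrium sketch: the lateral confining pressure f_l
# delivered by a jacket of thickness t at hoop stress s_h around a
# circular core of diameter D is f_l = 2 * t * s_h / D.
# All numbers below are hypothetical, for illustration only.

def hoop_stress(jacket_modulus_mpa: float, hoop_strain: float) -> float:
    """Hoop stress (MPa) from a measured hoop strain, assuming an elastic jacket."""
    return jacket_modulus_mpa * hoop_strain


def confining_pressure(t_mm: float, s_h_mpa: float, d_mm: float) -> float:
    """Lateral confining pressure (MPa): f_l = 2 t s_h / D."""
    return 2.0 * t_mm * s_h_mpa / d_mm


# Example: a CFRP wrap with E ≈ 230 GPa at a measured hoop strain of
# 0.002, jacket thickness 0.5 mm, around a 300 mm circular core.
s_h = hoop_stress(230_000.0, 0.002)        # ≈ 460 MPa
f_l = confining_pressure(0.5, s_h, 300.0)  # ≈ 1.53 MPa
print(round(f_l, 2))
```

This is the quantity an FBG sensor wound in the hoop direction measures directly (via strain), which is why such instrumentation targets the confinement demand so well.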

Keywords: concrete, fibre-Bragg-grating sensors, lap-splices, retrofitting / rehabilitation

Procedia PDF Downloads 227
134 Beneath the Leisurely Surface: An Analysis of the Piano Lesson Frenzy among Chinese Middle-Class Parents

Authors: Yijie Wang, Tianyue Wang

Abstract:

In the past two decades, there has been a great ‘piano lesson frenzy’ among Chinese middle-class families, with a large number of parents adding piano training to their children’s extra-curricular lists. Superficially, the frenzy reflects a rather ‘leisurely’ attitude: parents typically claim that piano lessons are ‘just for fun’ and will hopefully make children’s lives more exciting. However, closer scrutiny reveals great social-status anxiety hidden beneath this ‘leisurely’ surface. Based on pre-interviews with six Chinese middle-class parents who have enthusiastically signed their children up for piano lessons, several tentative analyses are made: 1. Owing to a series of historical and social factors, the Chinese middle class has yet to establish its cultural norms, resulting in great confusion concerning how to cultivate cultural tastes in their offspring. And partly because the middle-class status of the current Chinese generation is mostly self-acquired rather than inherited, parents are much less confident about their cultural resources, which require long-term accumulation, than about their material ones. Both factors combine to produce a blind, overcompensating enthusiasm for culture-related education, of which the piano frenzy is but one demonstration. 2. The piano has become the object of the frenzy partly because of its inherent characteristics and partly because of socially constructed ones. Costly, large in size, imported from another culture, and so forth, the piano has acquired the meaning of being exclusive, high-end, and exotic, rendering it a token of top-tier status among Chinese people, and piano lessons for offspring have therefore become parents’ paths towards a kind of ‘symbolic elevation’. A child playing piano is an exhibition of, as well as a psychological assurance of, the family’s middle-class status. 3.
A closer look at children’s piano training process reveals much more anxiety than leisure. Despite parents’ claim that ‘piano is mainly for kids to have fun,’ the whole process is evidently of a rather ‘ascetic’ nature, with demands for diligence and a sense of time urgency throughout, and with technique rather than flair or style being emphasized. This means either that the apparent ‘piano-for-fun’ stance is inauthentic and merely other motives in disguise, or that Chinese middle-class parents are not yet capable of shaking off the sense of anxiety even when they sincerely intend to. 4. When viewed in relation to the Chinese formal school system as well as the job market at large, it can be said that by signing children up for piano lessons, parents are consciously or unconsciously seeking to prepare for, or reduce the risks of, their children’s future social mobility. In the face of possible failure in the highly crucial, highly competitive formal school system, piano playing as an extra-curricular activity may conveniently be turned into an alternative career path. Besides, in contemporary China, as the occupational structure changes and school-related certificates decline in value, aspects such as a person’s overall deportment, which can be gained or proved through piano learning, have been gaining in significance.

Keywords: extra-curriculum activities, middle class, piano lesson frenzy, status anxiety

Procedia PDF Downloads 217
133 Higher-Level Return to Female Karate Competition Following Multiple Patella Dislocations

Authors: A. Maso, C. Bellissimo, G. Facchinetti, N. Milani, D. Panzin, D. Pogliana, L. Garlaschelli, L. Rivaroli, S. Rivaroli, M. Zurek, J. Konin

Abstract:

A 15-year-old female karate athlete experienced two unilateral patella dislocations: one contact and one non-contact. These injuries jeopardized her plans to compete at the regional and national competitions, as she was unable to perform at a high level. Despite the injuries and other complicating factors, she was able to modify her training timeline and successfully perform, finishing third at the National Cup. Initial findings were: pain of 8/10 on the numeric rating scale during isometric karate stances, taking the stairs, and long walks; a positive rasp test; palpation pain of 9/10 at the lateral patella joint; pain during open kinetic chain exercise at 0°-45° and closed kinetic chain exercise at 30°-90°; and retraction/stiffness of the tensor fasciae latae, vastus lateralis, and psoas muscles. Foot hyperpronation, an internally rotated femur, and knee flexion of 15° were the postural findings. Exercise was prescribed three days/week for three weeks, including exercise-based rehabilitation and soft tissue mobilization with massage and foam rolling. After three weeks, pain during activities of daily living had improved to 5/10, and soft tissue stiffness had decreased. An additional four weeks of exercise-based rehabilitation followed. At this time, axial x-rays and a TA-GT CT scan were taken, and an orthopaedic medical check recommended continuing conservative treatment. At week seven, she performed two of four karate position techniques without pain and two of four with pain. An isokinetic test performed at week 12 demonstrated a 10% strength deficit and a 6% resistance deficit, both in the left hamstrings, as well as an 8% strength and resistance surplus in the left quadriceps. No pain was present during activities of daily living or sports activity, allowing return-to-play training to begin. A return-to-play plan was developed in collaboration with her trainer, her father, a physiotherapist, a sports scientist, an osteopath, and a nutritionist.
Within four and five months, non-athlete and athlete movement quality analysis tests, respectively, were performed. The team agreed on a return-to-play goal of seven months and a highest-level return-to-competition goal of nine months from the start of rehabilitation. This included three days/week of training and repeated testing of movement quality before the return to competition, with detectable improvements from 77% to 93%. The initial goals of the rehabilitation plan stressed the importance of a team approach: collaboration with the patient's father and trainer was important to ensure a safe and timely return to competition. The possibility of achieving the goals was strongly related to orthopaedic decision-making and to progress during the first weeks of rehabilitation. Without complications or setbacks, the patient was able to return successfully to her highest level of competition: she returned to participation after five months of rehabilitation and training, and to competition at the national level in nine months. The successful return was the result of a team approach and a compliant patient with clear goals.

Keywords: karate, knee, performance, rehabilitation

Procedia PDF Downloads 76
132 'iTheory': Mobile Way to Music Fundamentals

Authors: Marina Karaseva

Abstract:

The beginning of our century marked a new digital epoch in education. In the last decade, the newest stage of this process was initiated by touch-screen mobile devices and the program applications written for them. The touch capabilities of these devices are of special importance for music majors learning the fundamentals of music. The phenomenon of touching, firstly, makes it realistic to play on the screen as on a musical instrument and, secondly, helps students learn music theory by listening to its sound elements by ear. Nowadays we can identify several levels of such mobile applications: from basic ones devoted to elementary music training, such as interval and chord recognition, to more advanced applications that deal with the perception of modes beyond major and minor, ethnic timbres, and complicated rhythms. The main purpose of this paper is to identify the main tendencies in this process and to demonstrate the most innovative features of music theory applications on iOS and Android, the most commonly used systems. Methodological recommendations on how to use this digital material will be given for professional music education at different levels. These recommendations are based on the author's more than ten years of ‘iTheory’ teaching experience. In this paper, we classify all types of ‘iTheory’ mobile applications into several groups according to their methodological goals. The general concepts given below are demonstrated with concrete examples. The most numerous group of programs consists of simulators for studying notes with audio-visual links. The link-pair types are as follows: sound — musical notation, which may be used like flashcards for studying words and letters; sound — key; and sound — string (basically, the guitar's). The second large group consists of test programs containing a game component.
As a rule, they are based on exercises in ear identification and reconstruction by voice: sounds and intervals by their sounding, harmonic and melodic, as well as music modes, rhythmic patterns, chords, and selected instrumental timbres. Some programs aim to establish aural links between the concepts of music theory and their musical embodiments. There are also programs focused on the development of operative musical memory (repeating sounded phrases and transposing them to a new pitch) as well as on perfect-pitch training. In addition, a number of programs for improvisation skills have been developed. An absolute-pitch system of solmization is the common basis for mobile programs; however, it is also possible to find programs focused on the relative-pitch system of solfège. In the App Store and Google Play there are also many free instrument-simulator programs — piano, guitar, celesta, violin, organ. These programs may be effective for individual and group exercises in ear training or composition classes. The great variety and good sound quality of these programs now give musicians a unique opportunity to master their musical abilities in a shorter time. That is why such teaching material may be a way to the effective study of music theory.
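The core of an interval-identification drill of the kind described above can be sketched in a few lines: map semitone distances to interval names and check a student's answer. The name table is standard music theory; the quiz scaffolding (MIDI note numbers, function names) is only an illustration, not the design of any particular app.

```python
# Toy sketch of an interval-identification exercise. Notes are given as
# MIDI numbers (middle C = 60); intervals up to one octave are named by
# their semitone distance.

INTERVAL_NAMES = {
    0: "unison", 1: "minor second", 2: "major second", 3: "minor third",
    4: "major third", 5: "perfect fourth", 6: "tritone",
    7: "perfect fifth", 8: "minor sixth", 9: "major sixth",
    10: "minor seventh", 11: "major seventh", 12: "octave",
}


def interval_name(note_a: int, note_b: int) -> str:
    """Name the interval between two MIDI notes (up to one octave apart)."""
    semitones = abs(note_b - note_a)
    if semitones > 12:
        raise ValueError("interval wider than an octave")
    return INTERVAL_NAMES[semitones]


def check_answer(note_a: int, note_b: int, answer: str) -> bool:
    """Core of one quiz round: compare the student's answer to the true name."""
    return answer.strip().lower() == interval_name(note_a, note_b)


print(interval_name(60, 67))             # C4 up to G4 → perfect fifth
print(check_answer(60, 64, "major third"))  # → True
```

A real app would additionally play the two notes (melodically or harmonically) before asking; the checking logic stays the same.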

Keywords: ear training, innovation in music education, music theory, mobile devices

Procedia PDF Downloads 180
131 Academic Achievement in Argentinean College Students: Major Findings in Psychological Assessment

Authors: F. Uriel, M. M. Fernandez Liporace

Abstract:

In the last decade, academic achievement in higher education has become a topic of agenda in Argentina, regarding the high figures of adjustment problems, academic failure and dropout, and the low graduation rates in the context of massive classes and traditional teaching methods. Psychological variables, such as perceived social support, academic motivation, and learning styles and strategies, have much to offer, since their measurement by tests allows a proper diagnosis of their influence on academic achievement. Framed in a major research project, several studies analysed multiple samples, totalling 5,135 students attending Argentinean public universities. The first goal was the identification of statistically significant differences in the psychological variables (perceived social support, learning styles, learning strategies, and academic motivation) by age, gender, and degree of academic advance (freshmen versus sophomores). Thus, an inferential group-differences study for each psychological dependent variable was developed by means of Student's t-tests, given the features of the data distribution. The second goal, aimed at examining associations between the four psychological variables on the one hand, and academic achievement on the other, was addressed by correlational studies, calculating Pearson's coefficients and employing grades as the quantitative indicator of academic achievement. The positive and significant results obtained led to the formulation of different predictive models of academic achievement, which had to be tested in terms of adjustment and predictive power. These models took the four psychological variables mentioned above as predictors, using regression equations, examining predictors individually, in groups of two, and together, analysing indirect effects as well, and adding the degree of academic advance and gender, which had shown their importance within the first goal's findings.
The most relevant results were: first, gender showed no influence on any dependent variable. Second, only good achievers perceived high social support from teachers, and male students were prone to perceive less social support. Third, freshmen exhibited a pragmatic learning style, preferring unstructured environments, the use of examples, and simultaneous-visual processing in learning, whereas sophomores manifested an assimilative learning style, choosing sequential and analytic processing modes; despite these features, freshmen have to deal with abstract contents and sophomores with practical learning situations, due to the study programs in force. Fourth, no differences in academic motivation were found between freshmen and sophomores; however, the latter employ a higher number of more efficient learning strategies. Fifth, freshmen who were low achievers lacked intrinsic motivation. Sixth, model testing showed that social support, learning styles, and academic motivation influence learning strategies, which affect academic achievement in freshmen, particularly males; only learning styles influence achievement in sophomores of both genders, with direct effects. These findings led to the conclusion that educational psychologists, education specialists, teachers, and universities must plan urgent and major changes. These must be applied in renewed and better study programs, syllabi, and classes, as well as tutoring and training systems. Such developments should be targeted at the support and empowerment of students in their academic pathways, and therefore at the upgrade of learning quality, especially in the case of freshmen, male freshmen, and low achievers.
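As a sketch of the correlational step described above (Pearson's coefficients with grades as the achievement indicator, followed by a one-predictor regression equation), the following Python fragment uses invented placeholder scores, not the study's 5,135-student dataset:

```python
import numpy as np

# Hypothetical placeholder data: a psychological predictor (e.g., a
# learning-strategies score) and course grades for a handful of students.
strategy_score = np.array([12.0, 15.0, 9.0, 18.0, 14.0, 20.0])
grades         = np.array([ 6.0,  7.5, 5.0,  9.0,  7.0, 9.5])

# Pearson's r, as used in the study's correlational step.
r = np.corrcoef(strategy_score, grades)[0, 1]

# One-predictor regression, grades ~ a * score + b: the simplest form
# of the predictive models the abstract describes.
a, b = np.polyfit(strategy_score, grades, 1)
predicted = a * strategy_score + b
```

The full models in the study add further predictors and indirect effects; this fragment only shows the basic mechanics of the correlation-then-regression pipeline.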

Keywords: academic achievement, academic motivation, coping, learning strategies, learning styles, perceived social support

Procedia PDF Downloads 96
130 Impact of Primary Care Telemedicine Consultations On Health Care Resource Utilisation: A Systematic Review

Authors: Anastasia Constantinou, Stephen Morris

Abstract:

Background: The adoption of synchronous and asynchronous telemedicine modalities for primary care consultations has increased exponentially since the COVID-19 pandemic. However, there is limited understanding of how virtual consultations influence healthcare resource utilization and other quality measures, including safety, timeliness, efficiency, patient and provider satisfaction, cost-effectiveness, and environmental impact. Aim: To quantify the rates of follow-up visits, emergency department visits, hospitalizations, requests for investigations, and prescriptions, and to comment on the effect on different quality measures associated with the different telemedicine modalities used for primary care services and primary care referrals to secondary care. Design and setting: Systematic review in primary care. Methods: A systematic search was carried out across three databases (Medline, PubMed, and Scopus) between August and November 2023, using terms related to telemedicine, general practice, electronic referrals, follow-up, use, and efficiency, supported by citation searching. This was followed by screening according to pre-defined criteria, data extraction, and critical appraisal. Narrative synthesis and meta-analysis of quantitative data were used to summarize findings. Results: The search identified 2,230 studies; 50 studies are included in this review. There was a prevalence of asynchronous modalities in both primary care services (68%) and referrals from primary care to secondary care (83%); most study participants were female (63.3%), with a mean age of 48.2 years. The average follow-up rate for virtual consultations in primary care was 28.4% (eVisits: 36.8%, secure messages: 18.7%, videoconference: 23.5%), with no significant difference between them or compared with face-to-face consultations.
There was an average annual reduction in primary care visits of 0.09/patient, an increase in telephone visits of 0.20/patient, an increase in ED encounters of 0.011/patient, an increase in hospitalizations of 0.02/patient, and an increase in out-of-hours visits of 0.019/patient. Laboratory testing was requested on average for 10.9% of telemedicine patients, imaging or procedures for 5.6%, and prescriptions for 58.7% of patients. For referrals to secondary care, on average 36.7% of virtual referrals required a follow-up visit, with the average follow-up rate for electronic referrals being higher than for videoconferencing (39.2% vs 23%, p=0.167). Technical failures were reported on average for 1.4% of virtual consultations in primary care. Using carbon footprint estimates, we calculate that the use of telemedicine in primary care services can potentially provide a net decrease in carbon footprint of 0.592 kg CO2/patient/year. When follow-up rates are taken into account, we estimate that virtual consultations reduce the carbon footprint of primary care services by 2.3 times, and of secondary care referrals by 2.2 times. No major concerns regarding quality of care or patient satisfaction were identified. Five of the seven studies that addressed cost-effectiveness reported increased savings. Conclusions: Telemedicine provides quality, cost-effective, and environmentally sustainable care for patients in primary care, with inconclusive evidence regarding the rates of subsequent healthcare utilization. The evidence is limited by heterogeneous, small-scale studies and a lack of prospective comparative studies. Further research to identify the most appropriate telemedicine modality for different patient populations, clinical presentations, and types of service provision (e.g., follow-up of patients rather than initial diagnosis), as well as further education for patients and providers alike on how to make best use of this service, is expected to improve outcomes and influence practice.
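The carbon-accounting logic behind adjusting savings for follow-up rates can be sketched as follows. The 28.4% follow-up rate comes from the review; the travel and IT-overhead figures are invented placeholders, not estimates from the included studies:

```python
def net_carbon_saving(avoided_travel_kg, it_overhead_kg, follow_up_rate):
    """Net CO2 saving per virtual consultation.

    A consultation only truly avoids a patient journey when it does NOT
    trigger an in-person follow-up visit; the IT overhead (devices, data
    transfer) is incurred either way. All inputs are illustrative.
    """
    return (1.0 - follow_up_rate) * avoided_travel_kg - it_overhead_kg

# 28.4% average follow-up rate from the review; other figures invented.
example_saving = net_carbon_saving(avoided_travel_kg=1.0,
                                   it_overhead_kg=0.2,
                                   follow_up_rate=0.284)
```

The review's published per-patient figures would substitute directly for these placeholders.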

Keywords: telemedicine, healthcare utilisation, digital interventions, environmental impact, sustainable healthcare

Procedia PDF Downloads 33
129 Differential Expression Profile Analysis of DNA Repair Genes in Mycobacterium Leprae by qPCR

Authors: Mukul Sharma, Madhusmita Das, Sundeep Chaitanya Vedithi

Abstract:

Leprosy is a chronic human disease caused by Mycobacterium leprae, which cannot be cultured in vitro. Though treatable with multidrug therapy (MDT), the bacterium has recently been reported to be resistant to multiple antibiotics. Targeting DNA replication and repair pathways can serve as the foundation for developing new anti-leprosy drugs. Due to the absence of an axenic culture medium for the propagation of M. leprae, studying cellular processes, especially those belonging to DNA repair pathways, is challenging. The genome of M. leprae harbors several protein-coding genes with no previously assigned function, known as 'hypothetical proteins'. Here, we report the identification and expression of known and hypothetical DNA repair genes from a human skin biopsy and mouse footpads that are involved in base excision repair (BER), direct reversal repair (DR), and the SOS response. Initially, a bioinformatics approach based on sequence similarity and the identification of known protein domains was employed to screen the hypothetical proteins in the genome of M. leprae that are potentially related to DNA repair mechanisms. Before testing on clinical samples, pure stocks of M. leprae reference DNA (NHDP63 strain) were used to construct standard curves to validate the qPCR experiments and identify their lower detection limit. Primers were designed to amplify the respective transcripts, and PCR products of the predicted size were obtained. Later, excisional skin biopsies of newly diagnosed untreated, treated, and drug-resistant leprosy cases from SIHR & LC hospital, Vellore, India were taken for the extraction of RNA. To determine the presence of the predicted transcripts, cDNA was generated from M. leprae mRNA isolated from clinically confirmed leprosy skin biopsy specimens across all the study groups. Melting curve analysis was performed to determine the integrity of the amplification and to rule out primer-dimer formation.
The Ct values obtained from qPCR were fitted to the standard curve to determine transcript copy numbers. The same procedure was applied to M. leprae extracted from footpads of nude mice infected with drug-sensitive and drug-resistant strains. 16S rRNA was used as a positive control. For all 16 genes involved in BER, DR, and SOS, a differential expression pattern was observed in terms of Ct values in the mouse samples when compared to the human samples; this was because of the different host and its immune response. However, no drastic variation in gene expression levels was observed among the human samples except for the nth gene. The higher expression of the nth gene could be because of mutations that may be associated with sequence diversity and drug resistance, which suggests an important role in the repair mechanism and remains to be explored. In both human and mouse samples, the SOS genes lexA and recA and the BER genes alkB and ogt were efficiently expressed to deal with possible DNA damage. Together, the results of the present study suggest that DNA repair genes are constitutively expressed and may provide a reference for molecular diagnosis, therapeutic target selection, determination of treatment, and prognostic judgment in M. leprae pathogenesis.
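The standard-curve step (fitting Ct against known copy numbers, then interpolating unknowns) can be sketched in Python. The calibration Ct values below are invented for illustration and are not the study's measurements:

```python
import numpy as np

# Hypothetical calibration points from serial dilutions of reference
# DNA: known copy numbers and the Ct values measured for each dilution.
copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
ct     = np.array([18.1, 21.4, 24.8, 28.1, 31.5])

# Standard curve: Ct is linear in log10(copies). A slope near -3.32
# corresponds to ~100% amplification efficiency.
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0   # fraction; ~1.0 for a good assay

def ct_to_copies(ct_value):
    """Interpolate an unknown sample's transcript copy number."""
    return 10 ** ((ct_value - intercept) / slope)
```

Samples from the different study groups would then be compared on the copy-number scale rather than on raw Ct values.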

Keywords: DNA repair, human biopsy, hypothetical proteins, mouse footpads, Mycobacterium leprae, qPCR

Procedia PDF Downloads 78
128 To Examine Perceptions and Associations of Shock Food Labelling and to Assess the Impact on Consumer Behaviour: A Quasi-Experimental Approach

Authors: Amy Heaps, Amy Burns, Una McMahon-Beattie

Abstract:

Shock and fear tactics have been used to encourage consumer behaviour change within the UK regarding lifestyle choices such as smoking and alcohol abuse, yet such measures have not been applied to food labels to encourage healthier purchasing decisions. Obesity levels are continuing to rise within the UK, despite efforts made by government and charitable bodies to encourage consumer behavioural changes that would positively influence fat, salt, and sugar intake. We know that taking extreme measures to shock consumers into behavioural change has worked previously; for example, the anti-smoking television adverts and new standardised cigarette and tobacco packaging have reduced the number of UK adults who smoke or encouraged those who are currently trying to quit. The USA has also introduced new front-of-pack labelling, which is clear, easy to read, and includes concise health warnings on products high in fat, salt, or sugar. This model has been successful, with consumers reducing purchases of products carrying these warning labels. Therefore, investigating whether shock labels would have an impact on UK consumer behaviour and purchasing decisions would help to fill the gap within this research field. This study aims to develop an understanding of consumers' initial responses to shock advertising, with an interest in the perceived long-term effect of shock advertising on consumer food purchasing decisions, behaviour, and attitudes, and it achieves this through a mixed-methods approach with a sample of 25 participants aged between 22 and 60. Within this research, mock shock labels were developed, including a graphic image, a health warning, and get-help information. These labels were made for products (available within the UK) with large market shares which were high in either fat, salt, or sugar.
Online focus groups and mouse-tracking experiments were used to explore these initial responses and perceived impacts. Preliminary results have shown that consumers believe that the use of graphic images, combined with a health warning, would encourage consumer behaviour change and influence their purchasing decisions regarding products high in fat, salt, and sugar. The preliminary main findings show that graphic mock shock labels may have an impact on consumer behaviour and purchasing decisions, which would, in turn, encourage healthier lifestyles. Focus group results show that 72% of participants indicated that these shock labels would have an impact on their purchasing decisions. During the mouse-tracking trials, this increased to 80% of participants, suggesting that more exposure to shock labels may have a bigger impact on potential consumer behaviour and purchasing-decision change. In conclusion, preliminary results indicate that graphic shock labels will impact consumer purchasing decisions. The findings allow for a deeper understanding of initial emotional responses to these graphic labels. However, more research is needed to test the longevity of these labels' effect on consumer purchasing decisions, but this research exercise is demonstrably the foundation for future detailed work.

Keywords: consumer behavior, decision making, labelling legislation, purchasing decisions, shock advertising, shock labelling

Procedia PDF Downloads 40
127 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)

Authors: Ahmad Kayvani Fard, Yehia Manawi

Abstract:

Qatar's primary source of fresh water is seawater desalination. Amongst the major processes that are commercially available on the market, the most common large-scale techniques are Multi-Stage Flash distillation (MSF), Multi-Effect Distillation (MED), and Reverse Osmosis (RO). Although commonly used, these three processes are highly expensive owing to high energy input requirements and high operating costs allied with maintenance and the stress induced on the systems in harsh alkaline media. Besides cost, the environmental footprint of these desalination techniques is significant: from damage to the marine ecosystem, to extensive land use, to the discharge of tons of greenhouse gases and a huge carbon footprint. Among the less energy-consuming techniques based on membrane separation that are being sought to reduce both the carbon footprint and operating costs is membrane distillation (MD). First emerging in the 1960s, MD is an alternative technology for water desalination that has attracted increasing attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD compared to other commercially available technologies (MSF and MED) and especially RO are the reduction of membrane and module stress due to the absence of trans-membrane pressure, less impact of contaminant fouling on the distillate because only water vapor is transferred, the ability to utilize low-grade or waste heat from the oil and gas industries to heat the feed to the required temperature difference across the membrane, superior water quality, and relatively lower capital and operating costs. To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested.
The objective of this study is to analyze the characteristics and morphology of a membrane suitable for DCMD through SEM imaging and contact angle measurement, and to study the water quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and laboratory data are used to compare DCMD distillate quality with that of other desalination techniques and standards. SEM analysis showed that the PTFE membrane used for the study has a contact angle of 127° and a highly porous surface, supported by a less porous, larger-pore-size PP membrane. ICP and IC analyses of the effect of feed salinity and temperature on distillate water quality showed that, for any salinity and different feed temperatures (up to 70 °C), the electrical conductivity of the distillate was less than 5 μS/cm with 99.99% salt rejection. DCMD thus proved to be a feasible and effective process capable of consistently producing high-quality distillate from very high-salinity feed solutions (i.e., 100,000 mg/L TDS), with a substantial quality difference compared to other desalination methods such as RO and MSF.
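The vapor-pressure difference that drives flux in DCMD can be estimated with the Antoine equation; the constants below are the commonly tabulated ones for water (pressure in mmHg, temperature in °C, valid roughly 1 to 100 °C), and the sketch uses a pure-water approximation that ignores the small activity reduction caused by feed salinity:

```python
def water_vapor_pressure_mmhg(t_celsius):
    """Saturation vapour pressure of water via the Antoine equation
    (commonly tabulated constants; pressure in mmHg, T in Celsius)."""
    A, B, C = 8.07131, 1730.63, 233.426
    return 10 ** (A - B / (C + t_celsius))

def dcmd_driving_force(t_feed, t_permeate):
    """Trans-membrane vapour-pressure difference driving DCMD flux
    (pure-water approximation: feed salinity would lower the feed-side
    pressure slightly via water activity)."""
    return water_vapor_pressure_mmhg(t_feed) - water_vapor_pressure_mmhg(t_permeate)
```

For a 70 °C feed against a 20 °C permeate, the driving force is on the order of 200 mmHg, which is why operating well below the boiling point still yields useful flux.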

Keywords: membrane distillation, waste heat, seawater desalination, membrane, freshwater, direct contact membrane distillation

Procedia PDF Downloads 204
126 Skin-to-Skin Contact Simulation: Improving Health Outcomes for Medically Fragile Newborns in the Neonatal Intensive Care Unit

Authors: Gabriella Zarlenga, Martha L. Hall

Abstract:

Introduction: Premature infants are at risk of neurodevelopmental deficits and hospital readmissions, which can increase the financial burden on the health care system and families. Kangaroo care (skin-to-skin contact) is a practice that can improve preterm infant health outcomes. Preterm infants can achieve adequate body temperature, heartbeat, and breathing regulation by lying directly on the mother's abdomen and between her breasts. Due to some infants' conditions, however, kangaroo care is not always a feasible intervention. The purpose of this proof-of-concept research project is to create a device which simulates skin-to-skin contact for preterm infants not eligible for kangaroo care, with the aim of promoting the infant's health outcomes, reducing the incidence of serious neonatal and early childhood illnesses, and/or improving cognitive, social, and emotional aspects of development. Methods: The study design is a proof of concept based on a three-phase approach: (1) an observational study and data analysis of the standard of care for two groups of preterm infants, (2) design and concept development of a novel device for preterm infants not currently eligible for standard kangaroo care, and (3) prototyping, laboratory testing, and evaluation of the novel device against current assessment parameters of kangaroo care. A single-center study will be conducted in an area hospital offering Level III neonatal intensive care. Eligible participants include newborns born premature (28-30 weeks) admitted to the NICU. The study design includes two groups: a control group receiving standard kangaroo care and an experimental group not eligible for kangaroo care. Based on behavioral analysis of observational video data collected in the NICU, the device will be created to simulate the mother's body using electrical components in a thermoplastic polymer housing covered in silicone.
It will be designed with a microprocessor that controls the simulated respiration, heartbeat, and body temperature of the 'simulated caregiver' by using a pneumatic lung, vibration sensors (heartbeat), pressure sensors (weight/position), and resistive film to measure temperature. A slight contour of the simulator surface may be integrated to help position the infant correctly. Control and monitoring of the skin-to-skin contact simulator would be performed locally via an integrated touchscreen. The unit would have built-in Wi-Fi connectivity as well as an optional Bluetooth connection through which the respiration and heart rate could be synced with a parent or caregiver. A camera would be integrated, allowing a video stream of the infant in the simulator to be sent to a monitoring location. Findings: Expected outcomes are stabilization of respiratory and cardiac rates and thermoregulation in those infants not eligible for skin-to-skin contact with their mothers, with real-time Bluetooth syncing of the mother's vitals to the device to mimic the experience in the womb. Results of this study will benefit clinical practice by creating a new standard of care for premature neonates in the NICU who are deprived of skin-to-skin contact due to various health restrictions.
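One safety-relevant step in such a design is limiting caregiver-synced vitals before they drive the actuators. The sketch below illustrates that clamping step only; the range values are invented placeholders, not clinical parameters from the study:

```python
# Minimal sketch of the parameter-sync step: caregiver vitals streamed
# over Bluetooth are clamped to configured ranges before the
# microprocessor drives the pneumatic lung, vibration motor, and heater.
# All ranges below are illustrative placeholders, not clinical values.
SAFE_RANGES = {
    "heart_rate_bpm":  (100, 160),
    "respiration_bpm": (30, 60),
    "surface_temp_c":  (36.0, 37.5),
}

def clamp_vitals(streamed):
    """Return actuator set-points limited to the configured ranges;
    missing keys fall back to the lower bound."""
    safe = {}
    for key, (lo, hi) in SAFE_RANGES.items():
        safe[key] = min(max(streamed.get(key, lo), lo), hi)
    return safe
```

An adult caregiver's resting heart rate, for example, would be raised to the configured floor rather than passed through to the neonate-facing actuators.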

Keywords: kangaroo care, wearable technology, pre-term infants, medical design

Procedia PDF Downloads 134
125 Financing the Welfare State in the United States: The Recent American Economic and Ideological Challenges

Authors: Rafat Fazeli, Reza Fazeli

Abstract:

This paper focuses on the study of the welfare state and the social wage in the leading liberal economy of the United States. The welfare state acquired broad acceptance as a major socioeconomic achievement of liberal democracy in the Western industrialized countries during the postwar boom period. The modern and modified vision of capitalist democracy offered, on the one hand, the possibility of a high growth rate and, on the other hand, the possibility of the continued progression of a comprehensive system of social support for a wider population. The economic crises of the 1970s provided the ground for a great shift in economic policy and ideology in several Western countries, most notably the United States and the United Kingdom (and to a lesser extent Canada under Prime Minister Brian Mulroney). In the 1980s, the free-market-oriented reforms undertaken under Reagan and Thatcher greatly affected the economic outlook not only of the United States and the United Kingdom, but of the whole Western world. The movement behind this shift in policy is often called neo-conservatism. The neoconservatives blamed transfer programs for the decline in economic performance during the 1970s and argued that cuts in spending were required to return to the golden age of full employment. The agenda of both the Reagan and Thatcher administrations was rolling back the welfare state, and their budgets included a wide range of cuts to social programs. The question is: how successful were Reagan's and Thatcher's efforts to achieve retrenchment? The paper involves an empirical study concerning the distributive role of the welfare state in the two countries. Other studies have often concentrated on the redistributive effect of fiscal policy on different income brackets. This study examines the net benefit/burden position of the working population with respect to state expenditures and taxes in the postwar period.
This measurement will enable us to find out whether the working population has received a net gain (or net social wage). This study will discuss how the expansion of social expenditures and the trend of the 'net social wage' can be linked to distinct forms of economic and social organization. It provides an empirical foundation for analyzing the growing significance of the 'social wage', or the collectivization of consumption, and the share of social or collective consumption in the total consumption of the working population in recent decades. The paper addresses three other major questions. The first is whether the expansion of social expenditures has posed any drag on capital accumulation and economic growth; the findings of this study provide an analytical foundation for evaluating the neoconservative claim that the welfare state is itself the source of the economic stagnation that leads to the crisis of the welfare state. The second is whether the increasing ideological challenges from the right and the competitive pressures of globalization have led to retrenchment of the American welfare state in recent decades. The third is how social policies have performed in the presence of rising inequality in recent decades.
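The net benefit/burden measurement described above reduces, at its core, to a subtraction of taxes paid from benefits received by the working population. The sketch below uses invented figures purely to show the sign convention:

```python
def net_social_wage(benefits_received, taxes_paid):
    """Net benefit/burden position of the working population:
    positive => a net social wage (state transfers exceed taxes paid);
    negative => a net burden."""
    return benefits_received - taxes_paid

# Illustrative, invented figures (e.g., billions of dollars in one year):
example = net_social_wage(benefits_received=410.0, taxes_paid=385.0)
```

In the study itself, both sides of this subtraction are built up from detailed expenditure and tax incidence data across the postwar period.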

Keywords: the welfare state, social wage, The United States, limits to growth

Procedia PDF Downloads 184
124 A Randomized, Controlled Trial To Test Behavior Change Techniques (BCTS) To Improve Low Intensity Physical Activity In Older Adults

Authors: Ciaran Friel, Jerry Suls, Patrick Robles, Frank Vicari, Joan Duer-Hefele, Karina W. Davidson

Abstract:

Physical activity guidelines focus on increasing moderate-intensity activity for older adults, but adherence to recommendations remains low. This is despite the fact that scientific evidence supports that any increase in physical activity is positively correlated with health benefits. Behavior change techniques (BCTs) have demonstrated effectiveness in reducing sedentary behavior and promoting physical activity. This pilot study uses a Personalized Trials (N-of-1) design to evaluate the efficacy of using four BCTs to promote an increase in low-intensity physical activity (2,000 steps of walking per day) in adults aged 45-75 years. The four BCTs tested were goal setting, action planning, feedback, and self-monitoring. BCTs were tested in random order and delivered by text message prompts requiring a participant response. The study recruited health system employees in the target age range, without mobility restrictions, who demonstrated interest in increasing their daily activity by a minimum of 2,000 steps per day for a minimum of five days per week. Participants were sent a Fitbit Charge 4 fitness tracker with an established study account and password. Participants were recommended to wear the Fitbit device 24/7, but were required to wear it for a minimum of ten hours per day. Baseline physical activity was measured by the Fitbit for two weeks. Participants then engaged with a clinical research coordinator to review comprehension of the text message content and the required actions for each of the BCTs to be tested. Participants then selected a consistent daily time at which they would receive their text message prompt. In the eight-week intervention phase of the study, participants received each of the four BCTs, in random order, for a two-week period. Text message prompts were delivered daily at the time selected by the participant.
All prompts required an interactive response from participants and could include recording their detailed plan for walking or their daily step goal (action planning, goal setting). Additionally, participants could be directed to a study dashboard to view their step counts or compare themselves with peers (self-monitoring, feedback). At the end of each two-week testing interval, participants were asked to complete the Self-Efficacy for Walking Scale (SEW_Dur), a validated measure that assesses the participant's confidence in walking incremental distances, and a survey measuring their satisfaction with the individual BCT they had tested. At the end of their trial, participants received a personalized summary of their step data in response to each individual BCT. Analysis will examine the novel individual-level heterogeneity of treatment effect made possible by the N-of-1 design, and pool results across participants to efficiently estimate the overall efficacy of the selected behavior change techniques in increasing low-intensity walking by 2,000 steps, five days per week. Self-efficacy will be explored as the likely mechanism of action prompting behavior change. This study will inform providers and demonstrate the feasibility of the N-of-1 study design to effectively promote physical activity as a component of healthy aging.
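The per-participant randomization of BCT order in this N-of-1 design can be sketched as follows; the seeding scheme is an illustrative choice for reproducibility, not the study's actual protocol:

```python
import random

BCTS = ["goal setting", "action planning", "feedback", "self-monitoring"]

def assign_bct_order(participant_id, seed=2024):
    """Return a reproducible, participant-specific random ordering of
    the four BCTs, each to be delivered for one two-week block."""
    rng = random.Random(f"{seed}-{participant_id}")  # str seeds are valid
    order = BCTS[:]          # copy so the module-level list is untouched
    rng.shuffle(order)
    return order
```

Because each participant serves as their own control, every participant receives all four techniques; only the order varies between individuals.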

Keywords: aging, exercise, habit, walking

Procedia PDF Downloads 103
123 Refurbishment Methods to Enhance Energy Efficiency of Brick Veneer Residential Buildings in Victoria

Authors: Hamid Reza Tabatabaiefar, Bita Mansoury, Mohammad Javad Khadivi Zand

Abstract:

The current energy and climate change impacts of the residential building sector in Australia are significant. Thus, the Australian Government has introduced more stringent regulations to improve building energy efficiency. In 2006, the Australian residential building sector consumed about 11% (around 440 petajoules) of the total primary energy, resulting in total greenhouse gas emissions of 9.65 million tonnes CO2-eq. Gas and electricity consumption by residential dwellings contributed 30% and 52%, respectively, of the total primary energy utilised by this sector. Around 40% of the total energy consumption of Australian buildings goes to heating and cooling, due to the low thermal performance of the buildings. The thermal performance of a building determines the amount of energy used for heating and cooling, which profoundly influences energy efficiency. Employing sustainable design principles and the effective use of construction materials can play a crucial role in improving the thermal performance of new and existing buildings. Even though awareness has been raised, the design phase of refurbishment projects is often problematic. One of the issues concerning the refurbishment of residential buildings is the consumer market, where most work consists of moderate refurbishment jobs, often without the assistance of an architect and partly without a building permit. The individual and often fragmented approach results in a lack of efficiency. Most importantly, the decisions taken in the early stages of the design determine the final result; however, the assessment of environmental performance only happens at the end of the design process, as a reflection of the design outcome. Finally, studies have identified a lack of knowledge, experience, and best-practice examples as barriers in refurbishment projects.
In the context of sustainable development and the need to reduce energy demand, refurbishing the ageing residential building stock constitutes a necessary action. Not only does it provide huge potential for energy savings, but it is also economically and socially relevant. Although the advantages have been identified, existing guidelines come in the form of general suggestions that fail to address the diversity of each project. As a result, it has been recognised that there is a strong need to develop guidelines for the optimised retrofitting of existing residential buildings in order to improve their energy performance. The current study investigates the effectiveness of different energy retrofitting techniques and examines the impact of employing those methods on the energy consumption of residential brick veneer buildings in Victoria (Australia). Proposing different remedial solutions for improving the energy performance of residential brick veneer buildings, annual energy usage analyses were carried out in the simulation stage to determine the heating and cooling energy consumption of the buildings under each proposed retrofitting technique. The results of employing the different retrofitting methods were then examined and compared in order to identify the most efficient and cost-effective remedial solution for improving the energy performance of those buildings, with respect to the climate conditions in Victoria and the construction materials of the studied benchmark building.
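Comparing retrofit options for cost-effectiveness often comes down to a simple payback calculation: upfront cost divided by the annual energy-cost saving the simulation predicts. The option names and all figures below are invented placeholders, not results from the study's simulations:

```python
# Illustrative retrofit options (invented names and figures, AUD):
options = {
    "ceiling insulation":     {"cost": 1800.0, "annual_saving": 450.0},
    "wall cavity insulation": {"cost": 4200.0, "annual_saving": 600.0},
    "double glazing":         {"cost": 9000.0, "annual_saving": 500.0},
    "draught sealing":        {"cost":  400.0, "annual_saving": 120.0},
}

def simple_payback_years(opt):
    """Years for cumulative energy-cost savings to repay the upfront cost."""
    return opt["cost"] / opt["annual_saving"]

# The most cost-effective option is the one with the shortest payback.
best = min(options, key=lambda name: simple_payback_years(options[name]))
```

A fuller comparison, as in the study, would also weigh absolute energy reduction and climate-specific performance rather than payback alone.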

Keywords: brick veneer residential buildings, building energy efficiency, climate change impacts, cost effective remedial solution, energy performance, sustainable design principles

Procedia PDF Downloads 264
122 Measuring Digital Literacy in the Chilean Workforce

Authors: Carolina Busco, Daniela Osses

Abstract:

The development of digital literacy has become a fundamental element that allows for citizen inclusion, access to quality jobs, and a labor market capable of responding to the digital economy. No methodological instruments are available in Chile to measure the workforce's digital literacy and improve national policies on this matter. Thus, the objective of this research is to develop a survey to measure digital literacy in a sample of 200 Chilean workers. The dimensions considered in the instrument are sociodemographics, access to infrastructure, digital education, digital skills, and the ability to use e-government services. To develop a digital literacy model of indicators and a research instrument for this purpose, along with an exploratory analysis of data using factor analysis, we used an empirical, quantitative-qualitative, exploratory, non-probabilistic, and cross-sectional research design. The research instrument is a survey created to measure the variables that make up the conceptual map prepared from the bibliographic review. Before applying the survey, a pilot test was implemented, resulting in several adjustments to the phrasing of some items. A validation test was also applied with six experts, whose observations were incorporated into the final instrument. The survey contained 49 items divided into three sets of questions: i) sociodemographic data; ii) a four-point Likert scale ranked according to the level of agreement; and iii) multiple-choice questions complementing the dimensions. Data collection occurred between January and March 2022. For the factor analysis, we used the answers to the 12 Likert-scale items. The KMO statistic showed a value of 0.626, indicating a medium level of correlation, whereas Bartlett's test yielded a significance value of less than 0.05, and Cronbach's alpha was 0.618.
Taking all factor selection criteria into account, we decided to include and analyze four factors that together explain 53.48% of the accumulated variance. We identified the following factors: i) access to infrastructure and opportunities to develop digital skills at the workplace or educational establishment (15.57%), ii) ability to solve everyday problems using digital tools (14.89%), iii) online tools used to stay connected with others (11.94%), and iv) residential Internet access and speed (11%). The quantitative results were discussed within six focus groups selected using heterogeneous criteria related to the most relevant variables identified in the statistical analysis: upper-class school students, middle-class university students, Ph.D. professors, low-income working women, elderly individuals, and a group of rural workers. The digital divide and its social and economic correlates are evident in the results of this research. In Chile, the items that explain the acquisition of digital tools focus on access to infrastructure, which ultimately places the first filter on the development of digital skills. Therefore, as expressed in the literature review, the advance of these skills differs radically when sociodemographic variables are considered. This increases socioeconomic distances and exclusion criteria, putting those who do not have these skills at a disadvantage and forcing them to seek the assistance of others.
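The internal-consistency figure reported above (Cronbach's alpha of 0.618 over the 12 Likert items) follows a standard formula that can be sketched directly. The respondent data below are synthetic, not the survey's:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return float((k / (k - 1)) * (1 - item_vars.sum() / total_var))

# Synthetic four-point Likert answers for 200 respondents x 12 items,
# driven by one shared latent trait plus per-item noise (placeholder data).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
answers = np.clip(np.round(2.5 + latent + rng.normal(size=(200, 12))), 1, 4)
print(round(cronbach_alpha(answers), 3))
```

Because the synthetic items share a latent trait, the resulting alpha is high; weakly related real items, as in the study, yield lower values such as 0.618.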

Keywords: digital literacy, digital society, workforce digitalization, digital skills

Procedia PDF Downloads 49
121 Weaving Social Development: An Exploratory Study of Adapting Traditional Textiles Using Indigenous Organic Wool for the Modern Interior Textiles Market

Authors: Seema Singh, Puja Anand, Alok Bhasin

Abstract:

The interior design profession aims to create aesthetically pleasing design solutions for human habitats, but growing awareness about depleting environmental resources, both tangible and intangible, and damage to the ecosystem has led to a quest for creating healthy and sustainable interior environments. The paper proposes adapting traditionally produced organic wool textiles for the mainstream interior design industry. This can create sustainable livelihoods whereby eco-friendly bridges can be built between interior designers, consumers, and pastoral communities. This study focuses on traditional textiles produced by two pastoral communities from India that use organic wool from indigenous sheep varieties. The Gaddi communities of Himachal Pradesh use wool from the Gaddi sheep breed to create the Pattu (a multi-purpose textile). The Kurumas of Telangana weave a blanket called the Gongadi, using wool from the Black Deccani variety of sheep. These communities have traditionally reared indigenous sheep breeds for their wool and produce hand-spun and hand-woven textiles for their own consumption, using traditional, chemical-free processes. Based on data collected personally from field visits and documentation of the traditional crafts of these pastoral communities, and using traditionally produced indigenous organic wool, the authors have developed innovative textile samples through design interventions and explorations of dyeing and weaving techniques. As part of the secondary research, the role of pastoralism in sustaining the ecosystems of Himachal Pradesh and Telangana was studied, as was the role of organic wool in creating healthy interior environments. The authors found that natural wool from indigenous sheep breeds can be used to create interior textiles that have the potential to be marketed to an urban audience, and this will help create earnings for pastoral communities.
Literature studies have shown that organic, sustainable wool can reduce indoor pollution and toxicity levels and thereby help create healthier interior environments. Revival of indigenous breeds of sheep can further help rejuvenate dying crafts, and promotion of these indigenous textiles can help sustain traditional ecosystems and the pastoral communities whose way of life is endangered today. Based on their research and findings, the authors propose that adapting traditional textiles has potential for application in interiors, creating eco-friendly spaces. Interior textiles produced through such sustainable processes can help reduce indoor pollution, give livelihood opportunities to traditional economies, and leave an almost zero carbon footprint while being in sync with available natural resources, ultimately benefiting society. The win-win situation for all stakeholders in this eco-friendly model makes it pertinent to rethink how we design lifestyle textiles for interiors. This study illustrates a specific example from two pastoral communities and can serve as a model that works equally well in any community, regardless of geography.

Keywords: design intervention, eco- friendly, healthy interiors, indigenous, organic wool, pastoralism, sustainability

Procedia PDF Downloads 128
120 National Core Indicators - Aging and Disabilities: A Person-Centered Approach to Understanding Quality of Long-Term Services and Supports

Authors: Stephanie Giordano, Rosa Plasencia

Abstract:

In the USA, in 2013, public aging and disability service systems, including Medicaid, undertook an effort to measure the quality of service delivery by examining the experiences and outcomes of people receiving public services. The goal was to develop a survey measuring those experiences and outcomes so that system performance could be assessed for quality improvement. The performance indicators were developed with input from directors of state aging and disability service systems, along with experts and stakeholders in the field across the United States. This effort, National Core Indicators – Aging and Disabilities (NCI-AD), grew out of National Core Indicators – Intellectual and Developmental Disabilities, an effort to measure developmental disability (DD) systems across the states. The survey tool and administration protocol underwent multiple rounds of testing and revision between 2013 and 2015. The measures in the final tool, called the Adult Consumer Survey (ACS), cover not just important indicators of healthcare access and personal safety but also indicators of system quality based on person-centered outcomes. These measures indicate whether service systems support older adults and people with disabilities to live where they want, maintain relationships, engage in their communities, and have choice and control in their everyday lives. Launched in 2015, the NCI-AD Adult Consumer Survey is now used in 23 US states. Surveys are conducted by NCI-AD-trained surveyors via direct conversation with a person receiving public long-term services and supports (LTSS). Until 2020, surveys were conducted only in person; however, after a pilot testing the reliability of videoconference and telephone survey modes, these modes were adopted as acceptable practice.
The survey is administered as a "guided conversation," which allows the surveyor to use wording and terminology best understood by the person surveyed. The survey includes a subset of questions that may be answered by a proxy respondent who knows the person well, if the person receiving services is unable to provide valid responses on their own. Surveyors undergo standardized training on survey administration to ensure fidelity. In addition to the main survey section, a Background Information section collects data on personal and service-related characteristics of the person receiving services; these data are typically collected from state administrative records. This information helps provide greater context around the characteristics of people receiving services. It has also been used in conjunction with outcome measures to examine disparities (including by race and ethnicity, gender, disability, and living arrangement). These measures of quality are critical for public service delivery systems seeking to understand the unique needs of, and improve the lives of, older adults and people with disabilities. Participating states may use these data to identify areas for quality improvement within their service delivery systems, to advocate for specific policy changes, and to better understand the experiences of specific populations of people served.

Keywords: quality of life, long term services and supports, person-centered practices, aging and disability research, survey methodology

Procedia PDF Downloads 85
119 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data

Authors: Nicola Colaninno, Eugenio Morello

Abstract:

The urban environment affects local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. The physical-morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperature (LST). The information provided by ground weather stations is key for assessing local air temperature. However, their spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such approaches may not effectively reflect the real climatic conditions at an interpolated point. Quantifying local UHI over extensive areas based on weather station observations alone is therefore not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been used extensively. Indeed, LST has an indirect but significant influence on air temperature. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach for NSAT estimation that accounts for the spatial non-stationarity of the phenomenon. The model combines on-site air temperature measurements from fixed weather stations with satellite-derived LST. The approach is structured in two main steps.
First, a GWR model has been set up to estimate NSAT at low resolution, combining air temperature from discrete observations retrieved from weather stations (dependent variable) with LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 km spatial resolution, have been employed. Two time periods are considered according to the satellite revisit schedule, i.e., 10:30 am and 9:30 pm. Afterwards, the results have been downscaled to 30 m spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable) and, as predictors, the albedo derived from the multispectral information provided by the Landsat mission and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 m. The area under investigation is the Metropolitan City of Milan, which covers approximately 1,575 km2 and encompasses a population of over 3 million inhabitants. Both models, low resolution (1 km) and high resolution (30 m), have been validated through cross-validation using indicators such as R2, Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milan only, has been employed to test the accuracy of the predicted temperatures, giving an RMSE of 0.6 °C and 0.7 °C for daytime and night-time, respectively.
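The first step described above, a locally weighted regression of station air temperature on satellite LST, can be sketched as follows. The Gaussian kernel, bandwidth, and synthetic station data are illustrative assumptions, not the authors' exact specification:

```python
import numpy as np

def gwr_predict(xy_obs, t_obs, lst_obs, xy_new, lst_new, bandwidth):
    """Geographically weighted regression: at each prediction point, fit a
    locally weighted OLS of air temperature on LST, with Gaussian kernel
    weights decaying with distance from the point."""
    preds = []
    for p, lst_p in zip(xy_new, lst_new):
        d2 = ((xy_obs - p) ** 2).sum(axis=1)            # squared distances
        w = np.exp(-0.5 * d2 / bandwidth**2)            # Gaussian kernel
        X = np.column_stack([np.ones_like(lst_obs), lst_obs])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ t_obs)  # local fit
        preds.append(beta[0] + beta[1] * lst_p)
    return np.array(preds)

# Toy example: 30 synthetic "stations" with NSAT loosely tied to LST.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 50, size=(30, 2))             # station coords, km
lst = 20 + 0.1 * xy[:, 0] + rng.normal(0, 1, 30)  # surface temp, deg C
t_air = 0.8 * lst + 2 + rng.normal(0, 0.3, 30)    # air temp, deg C
pred = gwr_predict(xy, t_air, lst, xy[:5], lst[:5], bandwidth=10.0)
print(np.round(pred - t_air[:5], 2))
```

The downscaling step would repeat the same local-fit idea with albedo and DEM as additional predictor columns in `X`.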

Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing

Procedia PDF Downloads 168
118 Effect of Thermal Treatment on Mechanical Properties of Reduced Activation Ferritic/Martensitic Eurofer Steel Grade

Authors: Athina Puype, Lorenzo Malerba, Nico De Wispelaere, Roumen Petrov, Jilt Sietsma

Abstract:

Reduced activation ferritic/martensitic (RAFM) steels like EUROFER97 are primary candidate structural materials for first-wall application in the future demonstration (DEMO) fusion reactor. Existing steels of this type obtain their functional properties from a two-stage heat treatment, which consists of an annealing stage at 980°C for thirty minutes followed by quenching, and an additional tempering stage at 750°C for two hours. This quench and temper (Q&T) treatment creates a microstructure of tempered martensite with M23C6 carbides (M = Fe, Cr) as the main precipitates, together with carbonitrides of MX type, e.g. TaC and VN. The resulting microstructure determines the mechanical properties of the steel. The ductility is largely determined by the tempered martensite matrix, while the resistance to mechanical degradation, governed by the spatial and size distribution of the precipitates and the martensite crystals, plays a key role in the high-temperature properties of the steel. Unfortunately, the high-temperature response of EUROFER97 is currently insufficient for long-term use in fusion reactors, due to instability of the matrix phase and coarsening of the precipitates under prolonged high-temperature exposure. The objective of this study is to induce grain refinement through appropriate modifications of the processing route in order to increase the high-temperature strength of a lab-cast EUROFER RAFM steel grade, the goal being improved mechanical behavior at elevated temperatures with respect to conventionally heat-treated EUROFER97. A dilatometric study was conducted to assess the effect of the annealing temperature on the mechanical properties after a Q&T treatment. The microstructural features were investigated with scanning electron microscopy (SEM), electron backscatter diffraction (EBSD) and transmission electron microscopy (TEM).
Additionally, hardness measurements, tensile tests at elevated temperatures and Charpy V-notch impact testing of KLST-type MCVN specimens were performed to study the mechanical properties of the furnace-heated lab-cast EUROFER RAFM steel grade. A significant prior austenite grain (PAG) refinement was obtained by lowering the annealing temperature of the conventionally used Q&T treatment for EUROFER97. The reduction of the PAG size results in finer martensitic constituents upon quenching, which offer more nucleation sites for carbide and carbonitride formation upon tempering. The ductile-to-brittle transition temperature (DBTT) was found to decrease with decreasing martensitic block size. Additionally, an increased resistance against high-temperature degradation was achieved in the fine-grained martensitic materials with the smallest precipitates, obtained by tailoring the annealing temperature of the Q&T treatment. It is concluded that the microstructural refinement has a pronounced effect on the DBTT without significant loss of strength and ductility. Further investigation into the optimization of the processing route is recommended to improve the mechanical behavior of RAFM steels at elevated temperatures.

Keywords: ductile-to-brittle transition temperature (DBTT), EUROFER, reduced activation ferritic/martensitic (RAFM) steels, thermal treatments

Procedia PDF Downloads 268
117 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry

Authors: Nadia Belu, Laurenţiu Mihai Ionescu, Agnieszka Misztal

Abstract:

The automotive industry is one of the most important industries in the world, concerning not only the economy but also world culture. In the present financial and economic context, this field faces new challenges posed by the current crisis: companies must maintain product quality and deliver on time at a competitive price in order to achieve customer satisfaction. Two of the quality-management techniques most recommended by the specific standards of the automotive industry for product development are Failure Mode and Effects Analysis (FMEA) and the Control Plan. FMEA is a methodology for risk management and quality improvement aimed at identifying potential causes of failure in products and processes, quantifying them through risk assessment, ranking the identified problems according to their importance, and determining and implementing the related corrective actions. Companies use Control Plans, built from the FMEA results, to evaluate a process or product for strengths and weaknesses and to prevent problems before they occur. Control Plans are written descriptions of the systems used to control and minimize product and process variation. In addition, Control Plans specify the process monitoring and control methods (for example, Special Controls) used to control Special Characteristics. In this paper we propose a computer-aided solution based on Genetic Algorithms in order to reduce the effort of drafting the FMEA analysis and Control Plan reports required for product launch, and to improve the knowledge of development teams for future projects. The solution allows the design team to enter the data required for the FMEA. The actual analysis is performed using Genetic Algorithms to find an optimum between the RPN risk factor and the cost of production. A feature of Genetic Algorithms is that they can be used as a means of finding solutions to multi-criteria optimization problems.
In our case, the three specific FMEA risk factors are considered together with the reduction of production cost. The analysis tool generates final reports for all FMEA processes. The data obtained in the FMEA reports are automatically integrated with the other entered parameters in the Control Plan. The solution is implemented as an application running on an intranet across two servers: one containing the analysis and plan-generation engine, and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser cutting, and bending processes used to manufacture chassis for buses. The advantages of the solution are the efficient elaboration of documents in the current project, by automatically generating the FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The solution we propose is a low-cost alternative to other solutions on the market, implemented using open-source tools.
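The core idea, using a genetic algorithm to trade off FMEA risk (RPN) against production cost, can be sketched minimally as below. The binary chromosome encoding, the action list, the weights, and the GA parameters are all hypothetical illustrations, not the paper's implementation:

```python
import random

# Hypothetical corrective actions: (RPN reduction, cost) pairs -- placeholders.
ACTIONS = [(120, 900), (80, 300), (60, 150), (40, 700), (100, 500), (30, 80)]
BASE_RPN, W_RPN, W_COST = 500, 1.0, 0.1  # assumed weighting of the objectives

def fitness(chrom):
    """Higher is better: penalize residual RPN and the cost of chosen actions."""
    rpn = BASE_RPN - sum(a[0] for a, g in zip(ACTIONS, chrom) if g)
    cost = sum(a[1] for a, g in zip(ACTIONS, chrom) if g)
    return -(W_RPN * max(rpn, 0) + W_COST * cost)

def evolve(pop_size=40, gens=60, pmut=0.1):
    """Elitist GA over binary chromosomes (1 = apply the corrective action)."""
    rng = random.Random(42)
    pop = [[rng.randint(0, 1) for _ in ACTIONS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(ACTIONS))  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < pmut:               # bit-flip mutation
                child[rng.randrange(len(ACTIONS))] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

A production version would replace the scalarized fitness with the paper's actual risk factors and cost model.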

Keywords: automotive industry, FMEA, control plan, automotive technology

Procedia PDF Downloads 378
116 Particle Size Characteristics of Aerosol Jets Produced by A Low Powered E-Cigarette

Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida

Abstract:

Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to deliver nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine, and flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. Moreover, low actuation power is beneficial in aerosol-generating devices since it reduces the emission of toxic chemicals. For e-cigarettes, heating powers below 10 W can be considered low compared with the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, the particle size behavior of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSDs and velocities of e-cigarettes under standard testing conditions at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of the undiluted aerosols of a recent fourth-generation e-cigarette at low powers, up to 6.5 W, using a real-time particle counter (time-of-flight method). The temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets is also examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, applications of PDA to e-cigarette aerosol measurement are rarely reported.
Preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that increasing the heating power from 3.5 W to 6.5 W resulted in enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation, and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression combining exponential, Gaussian, and polynomial (EGP) distributions was proposed and successfully describes the asymmetric PSD. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, as the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical decay of the centerline streamwise mean velocity of the aerosol jet, along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure, and a discussion of the results will be provided. The particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity, and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.
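The count median diameter and geometric standard deviation reported above follow the standard log-normal definitions, which can be computed directly from a particle sample. The sample below is synthetic, generated near the reported range, not the study's data:

```python
import numpy as np

def count_median_and_gsd(diameters_um):
    """Count median diameter and geometric standard deviation of a PSD:
    CMD = exp(mean(ln d)); GSD = exp(std(ln d)).
    These are exact parameters for log-normally distributed data."""
    logd = np.log(diameters_um)
    return float(np.exp(logd.mean())), float(np.exp(logd.std(ddof=1)))

# Synthetic log-normal aerosol sample near the reported range (illustrative).
rng = np.random.default_rng(7)
d = rng.lognormal(mean=np.log(0.70), sigma=np.log(1.35), size=100_000)
cmd, gsd = count_median_and_gsd(d)
print(round(cmd, 3), round(gsd, 3))
```

For an asymmetric, non-log-normal PSD such as the one observed at 6.5 W, these two parameters no longer fully describe the distribution, which motivates the EGP fit.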

Keywords: E-cigarette aerosol, laser doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry

Procedia PDF Downloads 14
115 Operation System for Aluminium-Air Cell: A Strategy to Harvest the Energy from Secondary Aluminium

Authors: Binbin Chen, Dennis Y. C. Leung

Abstract:

The aluminium (Al)-air cell holds a high volumetric capacity density of 8.05 Ah cm-3, benefiting from the trivalence of Al ions. Additional benefits of the Al-air cell are its low price and environmental friendliness. Furthermore, the Al energy conversion process is, in theory, 100% recyclable. Along with a large reserve base of raw material, Al attracts considerable attention as a promising material to be integrated within the global energy system. However, despite early successful applications in military service, several problems prevent Al-air cells from widespread civilian use. The most serious issue is the parasitic corrosion of Al when it contacts the electrolyte. To overcome this problem, super-pure Al alloyed with various traces of metal elements is used to increase corrosion resistance. Nevertheless, high-purity Al alloys are costly and require high energy consumption during production. An alternative approach is to add inexpensive inhibitors directly into the electrolyte. However, such additives increase the internal ohmic resistance and hamper cell performance. So far, these methods have not provided satisfactory solutions to the problems within Al-air cells. The operation of alkaline Al-air cells faces further minor problems. One is the formation of aluminium hydroxide in the electrolyte, which decreases the electrolyte's ionic conductivity. Another is the carbonation process within the gas diffusion layer of the cathode, which blocks the porosity for gas diffusion. Both of these hinder cell performance. The present work addresses the above problems by building an Al-air cell operation system consisting of four components. A top tank containing fresh electrolyte is located at a high level so that it can drive the electrolyte flow by gravity.
A mechanically rechargeable Al-air cell is fabricated with low-cost materials, including low-grade Al, carbon paper, and PMMA plates. An electrolyte waste tank with an elaborate channel is designed to separate the hydrogen generated by corrosion, which is collected by a gas collection device. In the first section of the research work, we investigated the performance of the mechanically rechargeable Al-air cell with a constant electrolyte flow rate to ensure repeatability of the experiments. Then the whole system was assembled and the feasibility of its operation demonstrated. During the experiments, pure hydrogen was collected by the collection device, which holds potential for various applications. By collecting this by-product, a high utilization efficiency of aluminium is achieved. Considering both the electricity and the hydrogen generated, an overall utilization efficiency of around 90% or higher is achieved under different working voltages. The fluidic electrolyte removes aluminium hydroxide precipitate and solves the electrolyte deterioration problem. This operation system provides a low-cost strategy for harvesting energy from abundant secondary Al. The system could also be applied to other metal-air cells and is suitable for emergency power supplies, power plants, and other applications. Its low cost implies great potential for commercialization. Further optimization, such as scaling up and refinement of the fabrication, will help turn the technology into practical market offerings.
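The volumetric capacity of 8.05 Ah cm-3 quoted above follows directly from Faraday's law and the three-electron oxidation of aluminium; a short check:

```python
# Verify the quoted volumetric capacity of aluminium (~8.05 Ah cm^-3)
# from Faraday's law: three electrons transferred per Al atom.
F = 96485.332          # C mol^-1, Faraday constant
M_AL = 26.9815         # g mol^-1, molar mass of Al
RHO_AL = 2.70          # g cm^-3, density of Al
N_ELECTRONS = 3        # Al -> Al3+ + 3e-

specific_ah_per_g = N_ELECTRONS * F / 3600 / M_AL   # charge per gram, ~2.98 Ah/g
volumetric_ah_per_cm3 = specific_ah_per_g * RHO_AL  # ~8.05 Ah/cm^3
print(round(volumetric_ah_per_cm3, 2))
```

The trivalence (three electrons per atom, versus one for lithium) is what gives Al its high volumetric figure despite its modest gravimetric capacity.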

Keywords: aluminium-air cell, high efficiency, hydrogen, mechanical recharge

Procedia PDF Downloads 252
114 Micro-Oculi Facades as a Sustainable Urban Facade

Authors: Ok-Kyun Im, Kyoung Hee Kim

Abstract:

We live in an era that faces the global challenges of climate change and resource depletion. With rapid urbanization and growing energy consumption in the built environment, building facades become ever more important in architectural practice and environmental stewardship. Furthermore, building facades undergo complex dynamics of social, cultural, environmental, and technological change. Kinetic facades have drawn the attention of architects, designers, and engineers in the field of adaptable, responsive, and interactive architecture since the 1980s. Materials and building technologies have gradually evolved to address the technical implications of kinetic facades, and the kinetic facade is becoming an independent building system, transforming design methodology toward sustainable building solutions. Accordingly, there is a need for a new design methodology to guide the design of a kinetic facade and evaluate its sustainable performance. The research objectives are two-fold: first, to establish a new design methodology for kinetic facades, and second, to develop a micro-oculi facade system and assess its performance using the established design method. The design approach to the micro-oculi facade comprises 1) facade geometry optimization and 2) dynamic building energy simulation. The facade geometry optimization utilizes a multi-objective optimization process, aiming to balance quantitative and qualitative performance to address the sustainability of the built environment. The dynamic building energy simulation was carried out using the EnergyPlus and Radiance simulation engines with scripted interfaces. The micro-oculi office was compared with an office tower with a glass facade in accordance with ASHRAE 90.1-2013 to understand its energy efficiency. The micro-oculi facade is constructed with an array of circular frames, each attached to a pair of micro-shades called a micro-oculus.
The micro-oculi are encapsulated between two glass panes to protect the kinetic mechanisms and ensure longevity. Each micro-oculus incorporates rotating gears that transmit power to adjacent micro-oculi, minimizing the number of mechanical parts. The micro-oculus rotates around its center axis with a step size of 15° depending on the sun's position, while maximizing daylighting potential and views out. A 2 ft by 2 ft prototype was built to identify the operational challenges and material implications of the micro-oculi facade. In this research, a systematic design methodology was proposed that integrates the multiple objectives of kinetic facade design criteria and whole-building energy performance simulation within a holistic design process. This design methodology is expected to encourage multidisciplinary collaboration in which designers and engineers jointly address energy efficiency, daylighting performance, and user experience during the design phases. The preliminary energy simulation indicated that, compared to a glass facade, the micro-oculi facade showed energy savings due to its improved thermal properties, daylighting attributes, and dynamic solar performance across the day and seasons. It is expected that the micro-oculi facade provides a cost-effective, environmentally friendly, sustainable, and aesthetically pleasing alternative to glass facades. Recommendations for future studies include lab testing to validate the simulated energy and optical properties of the micro-oculi facade. A 1:1 performance mock-up of the micro-oculi facade could offer an in-depth understanding of long-term operability and suggest new development opportunities for urban facade applications.
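The 15° stepped rotation described above amounts to quantizing a continuous sun-tracking angle to discrete positions; a minimal sketch, where the control logic is an assumption for illustration, not the authors' actual mechanism:

```python
def oculus_angle(sun_angle_deg: float, step_deg: float = 15.0) -> float:
    """Snap a continuous sun-tracking angle to the nearest discrete
    rotation step of a micro-oculus (15 deg steps, per the abstract),
    wrapped into [0, 360)."""
    return round(sun_angle_deg / step_deg) * step_deg % 360

print(oculus_angle(37.0))   # -> 30.0
print(oculus_angle(52.6))   # -> 60.0
```

Discrete steps keep the gear train simple at the cost of a small tracking error, at most half a step (7.5°) from the true sun angle.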

Keywords: energy efficiency, kinetic facades, sustainable architecture, urban facades

113 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process

Authors: Reyna Singh, David Lokhat, Milan Carsky

Abstract:

World crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high-value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at comparatively mild operating conditions. A temperature-staged hydrogenation approach was investigated. In a two-reactor lab-scale pilot plant facility, the objectives included maximising thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, and subsequently promoting hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high-grade, pulverized bituminous coal on a moisture-free basis with a size fraction of < 100 μm, mixed with Tetralin in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250℃ and 300℃ and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first-stage liquid product was pumped into the second-stage vertical reactor, which was designed to counter-currently contact the hydrogen-rich gas stream and the incoming liquid flow in the fixed catalyst bed. Two commercial hydrotreating catalysts, Cobalt-Molybdenum (CoMo) and Nickel-Molybdenum (NiMo), were compared in terms of their conversion, selectivity, and HDS performance at temperatures 50℃ higher than the respective first-stage tests. The catalysts were activated at 300°C with a hydrogen flowrate of approximately 10 ml/min prior to testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis using Gas Chromatography-Mass Spectrometry (GC-MS). 
Internal standard quantification methods for sulphur content; BTX (benzene, toluene, and xylene) and alkene quality; and alkane and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products were guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, increased coal-to-liquid conversion was favoured by the lower operating temperature of 250℃, a 60-minute reaction time, and a system catalysed by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long-chain alkanes undecane and dodecane, the alkenes octene and nonene, and PAH compounds such as indene. The second-stage product distribution showed an increase in the BTX quality of the liquid product and in branched-chain alkanes, and a reduction in the sulphur concentration. In terms of HDS performance and selectivity to long and branched-chain alkanes, NiMo outperformed CoMo, while CoMo was more selective towards cyclohexane. Over 16 days on stream each, NiMo showed higher activity than CoMo. The potential of the process to help cover the demand for low-sulphur crude diesel and solvents through the production of high-value hydrocarbon liquids is thus demonstrated.
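The internal standard quantification mentioned above follows a standard GC-MS calculation; a minimal sketch is shown below. The function name and the example numbers are illustrative only; the abstract does not report the actual response factors or standards used.

```python
def internal_standard_conc(area_analyte, area_is, conc_is, rrf):
    """Internal-standard quantification for GC-MS:

        C_analyte = (A_analyte / A_IS) * C_IS / RRF

    where A denotes peak area, C_IS is the known concentration of the
    internal standard spiked into the sample, and RRF is the relative
    response factor determined beforehand from a calibration standard.
    """
    return (area_analyte / area_is) * conc_is / rrf
```

For instance, an analyte peak twice the area of the internal standard peak, with an RRF of 1 and 10 mg/L of internal standard, corresponds to 20 mg/L of analyte.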

Keywords: catalyst, coal, liquefaction, temperature-staged

112 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user, to help in learning how significant features (e.g., animal population densities, topography, and behavior patterns of the criminals within the area) interact with each other, in hopes of abating poaching. This research develops a classification model, using machine learning algorithms, to aid in forecasting future attacks; the model is both easy to train and performs well compared to other models. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and produce significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of the adopted prediction models (Logistic Regression, Support Vector Machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research groups at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. 
This research introduces ensemble methods (Random Forests and Stochastic Gradient Boosting) and applies them to real-world poaching data gathered from Ugandan rainforest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using Stochastic Gradient Boosting to predict observations of non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous season-level prediction schedules.
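Predictive mean matching, one of the imputation methods named above, can be sketched in a few lines. This is a generic single-predictor, single-imputation illustration, not the authors' pipeline: the function name, the choice of a simple least-squares fit, and the donor count k are all assumptions for the sketch (multiple imputation would repeat this with several fitted models).

```python
import random

def pmm_impute(x, y, k=3, seed=0):
    """Predictive mean matching (single-predictor sketch).

    Fit y ~ x by least squares on complete cases, predict for every row,
    and fill each missing y with the observed y of one of the k complete
    cases ("donors") whose predictions are closest to the missing row's
    prediction. Missing values are marked with None.
    """
    rng = random.Random(seed)
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    n = len(obs)
    mx = sum(xi for xi, _ in obs) / n
    my = sum(yi for _, yi in obs) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in obs) / sum(
        (xi - mx) ** 2 for xi, _ in obs
    )
    a = my - b * mx  # intercept of the least-squares line
    filled = []
    for xi, yi in zip(x, y):
        if yi is not None:
            filled.append(yi)
            continue
        pred = a + b * xi
        # donors: complete cases with the closest predicted values
        donors = sorted(obs, key=lambda o: abs((a + b * o[0]) - pred))[:k]
        filled.append(rng.choice(donors)[1])
    return filled
```

Because each imputed value is drawn from actually observed values, PMM avoids implausible fills (e.g., fractional counts) that a plain regression prediction would produce.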

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

111 Understanding Different Facets of Chromosome Abnormalities: A 17-year Cytogenetic Study and Indian Perspectives

Authors: Lakshmi Rao Kandukuri, Mamata Deenadayal, Suma Prasad, Bipin Sethi, Srinadh Buragadda, Lalji Singh

Abstract:

Worldwide, at least 7.6 million children are born annually with severe genetic or congenital malformations, and 90% of these are born in mid- and low-income countries. Precise prevalence data are difficult to collect, especially in developing countries, owing to the great diversity of conditions and also because many cases remain undiagnosed. Genetic and congenital disorders are the second most common cause of infant and childhood mortality and occur with a prevalence of 25-60 per 1000 births. The higher prevalence of genetic diseases in a particular community may, however, be due to social or cultural factors. Such factors include the tradition of consanguineous marriage, which results in a higher rate of autosomal recessive conditions, including congenital malformations, stillbirths, or mental retardation. Genetic diseases vary in severity, from being fatal before birth to requiring continuous management; their onset covers all life stages from infancy to old age. Those presenting at birth are particularly burdensome and may cause early death or life-long chronic morbidity. Genetic testing identifies changes in chromosomes, genes, or proteins. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. Several hundred genetic tests are currently in use, and more are being developed. Chromosomal abnormalities are a major cause of human suffering and are implicated in mental retardation, congenital malformations, dysmorphic features, primary and secondary amenorrhea, reproductive wastage, infertility, and neoplastic diseases. Cytogenetic evaluation of patients is helpful in the counselling and management of affected individuals and families. We present here chromosomal abnormalities in particular, which form a major part of the genetic disease burden in India. 
Different programmes on chromosome research and human reproductive genetics primarily relate to infertility, since this is a major public health problem in our country, affecting 10-15 percent of couples. Prenatal diagnosis of chromosomal abnormalities in high-risk pregnancies helps in detecting chromosomally abnormal foetuses, and such couples are counselled regarding the continuation of pregnancy. In addition to basic research, the team provides chromosome diagnostic services that include conventional and advanced techniques for identifying various genetic defects. Besides routine chromosome diagnosis for infertility, referrals also include patients with short stature, hypogonadism, undescended testis, microcephaly, delayed developmental milestones, familial and isolated mental retardation, and cerebral palsy. Thus, chromosome diagnostics has found its applicability not only in disease prevention and management but also in guiding clinicians in certain aspects of treatment. It would be appropriate to affirm that chromosomes are the images of life, and they unequivocally mirror the states of human health. The importance of genetic counselling is increasing with advances in the field of genetics; genetic counselling can help families cope with the emotional, psychological, and medical consequences of genetic diseases.

Keywords: India, chromosome abnormalities, genetic disorders, cytogenetic study
