Search results for: microwave techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7140

150 Assessment of Rooftop Rainwater Harvesting in Gomti Nagar, Lucknow

Authors: Rajkumar Ghosh

Abstract:

Water scarcity is a pressing issue in urban areas, even in smart cities where efficient resource management is a priority. This scarcity is mainly caused by factors such as lifestyle changes, excessive groundwater extraction, over-usage of water, rapid urbanization, and uncontrolled population growth. In the specific case of Gomti Nagar, Lucknow, Uttar Pradesh, India, the depletion of groundwater resources is particularly severe, leading to a water imbalance and posing a significant challenge for the region's sustainable development. The aim of this study is to address the water shortage in the Gomti Nagar region by focusing on the implementation of artificial groundwater recharge methods. Specifically, the research aims to investigate the effectiveness of rainwater collection through rooftop rainwater harvesting systems (RTRWHs) as a sustainable approach to reduce aquifer depletion and bridge the gap between groundwater recharge and extraction. The research methodology for this study involves the utilization of RTRWHs as the main method for collecting rainwater. This approach is considered effective in managing and conserving water resources in a sustainable manner. The focus is on implementing RTRWHs in residential and commercial buildings to maximize the collection of rainwater and its subsequent utilization for various purposes in the Gomti Nagar region. The study reveals that the installation of RTRWHs in the Gomti Nagar region has a positive impact on addressing the water scarcity issue. Currently, RTRWHs cover only a small percentage (0.04%) of the total rainfall collected in the region. However, when RTRWHs are installed in all buildings, their influence on increasing water availability and reducing aquifer depletion will be significantly greater. The study also highlights the significant water imbalance of 24519 ML/yr in the region, emphasizing the urgent need for sustainable water management practices. This research contributes to the theoretical understanding of sustainable water management systems in smart cities. By highlighting the effectiveness of RTRWHs in reducing aquifer depletion, it emphasizes the importance of implementing such systems in urban areas. The findings of this study can serve as a basis for policymakers, urban planners, and developers to prioritize and incentivize the installation of RTRWHs as a potential solution to the water shortage crisis. The data for this study were collected through various sources such as government reports, surveys, and existing groundwater abstraction patterns. The collected data were then analysed to assess the current water situation, groundwater depletion rate, and the potential impact of implementing RTRWHs. Statistical analysis and modelling techniques were employed to quantify the water imbalance and evaluate the effectiveness of RTRWHs. The findings of this study demonstrate that the implementation of RTRWHs can effectively mitigate the water scarcity crisis in Gomti Nagar. By reducing aquifer depletion and bridging the gap between groundwater recharge and extraction, RTRWHs offer a sustainable solution to the region's water scarcity challenges. The study highlights the need for widespread adoption of RTRWHs in all buildings and emphasizes the importance of integrating such systems into the urban planning and development process. By doing so, smart cities like Gomti Nagar can achieve efficient water management, ensuring a better future with improved water availability for its residents.
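
To make the scale of such estimates concrete, the sketch below applies the standard volumetric formula for rooftop harvesting potential (volume = roof area × rainfall depth × runoff coefficient). All input values are illustrative placeholders rather than figures from the study; only the 24,519 ML/yr imbalance is taken from the abstract.

```python
# Hypothetical sketch: annual rooftop rainwater harvesting (RTRWH) potential
# via V = A * R * C (roof area x rainfall depth x runoff coefficient).
def rtrwh_potential_m3(roof_area_m2, annual_rainfall_mm, runoff_coeff=0.85):
    """Annual harvestable volume in cubic metres."""
    return roof_area_m2 * (annual_rainfall_mm / 1000.0) * runoff_coeff

buildings = 10_000                  # assumed building count (placeholder)
volume_ml = buildings * rtrwh_potential_m3(120.0, 900.0) / 1000.0  # 1 ML = 1,000 m3
imbalance_ml = 24_519.0             # regional water imbalance reported above (ML/yr)
print(f"harvest ~= {volume_ml:.0f} ML/yr "
      f"({100 * volume_ml / imbalance_ml:.1f}% of the imbalance)")
```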

Keywords: rooftop rainwater harvesting, rainwater, water management, aquifer

Procedia PDF Downloads 95
149 On the Influence of Sleep Habits for Predicting Preterm Births: A Machine Learning Approach

Authors: C. Fernandez-Plaza, I. Abad, E. Diaz, I. Diaz

Abstract:

Births occurring before the 37th week of gestation are considered preterm births. A threat of preterm labour is defined as the onset of regular uterine contractions, dilation and cervical effacement between 23 and 36 gestation weeks. To the authors' best knowledge, the factors that determine the beginning of birth are not yet completely defined. In particular, the incidence of sleep habits on preterm births is weakly studied. The aim of this study is to develop a model to predict the factors affecting premature delivery in pregnancy, based on the above potential risk factors, including those derived from sleep habits and light exposure at night (introduced as 12 variables obtained by a telephone survey using two questionnaires previously used by other authors). Thus, three groups of variables were included in the study (maternal, fetal and sleep habits). The study was approved by the Research Ethics Committee of the Principado de Asturias (Spain). An observational, retrospective and descriptive study was performed with 481 births between January 1, 2015 and May 10, 2016 in the University Central Hospital of Asturias (Spain). A statistical analysis using SPSS was carried out to compare qualitative and quantitative variables between preterm and term delivery. Chi-square tests for qualitative variables and t-tests for quantitative variables were applied. Statistically significant differences (p < 0.05) between preterm vs. term births were found for primiparity, multiparity, kind of conception, place of residence, premature rupture of membranes and interruption during nights. In addition to the statistical analysis, machine learning methods were tested in the search for a prediction model. In particular, tree-based models were applied, as their trade-off between performance and interpretability is especially suitable for this study. C5.0, recursive partitioning, random forest and tree bag models were analysed using the caret R package. Cross-validation with 10 folds and parameter tuning to optimize the methods were applied. In addition, different noise reduction methods were applied to the initial data using the NoiseFiltersR package. The best performance was obtained by the C5.0 method, with Accuracy 0.91, Sensitivity 0.93, Specificity 0.89 and Precision 0.91. Some well-known preterm birth factors were identified: cervix dilation, maternal BMI, premature rupture of membranes and nuchal translucency analysis in the first trimester. The model also identifies other new factors related to sleep habits, such as light through the window, bedtime on working days, usage of electronic devices before sleeping from Mondays to Fridays, or a change of sleeping habits reflected in the number of hours, in the depth of sleep or in the lighting of the room. "IF dilation <= 2.95 AND usage of electronic devices before sleeping from Mondays to Fridays = YES AND change of sleeping habits = YES, THEN preterm" is one of the predicting rules obtained by C5.0. In this work, a model for predicting preterm births is developed. It is based on machine learning together with noise reduction techniques. The method maximizing the performance is the one selected. This model shows the influence of variables related to sleep habits on preterm prediction.
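
The abstract's models were built with C5.0 and other tree-based methods in R's caret package; the sketch below shows an analogous tree-based workflow in Python with scikit-learn, using synthetic placeholder data of the same cohort size (481 births) and 10-fold cross-validation with parameter tuning. Feature names only mirror the kinds of variables described above.

```python
# Python analogue of the paper's tree-based approach (the authors used R).
# All data here are synthetic placeholders, not the study's records.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
n = 481  # same cohort size as the study
X = np.column_stack([
    rng.uniform(0, 10, n),    # cervix dilation (cm)
    rng.integers(0, 2, n),    # electronic devices before sleep (yes/no)
    rng.integers(0, 2, n),    # change of sleeping habits (yes/no)
])
y = rng.integers(0, 2, n)     # 1 = preterm, 0 = term (placeholder labels)

# 10-fold cross-validation with parameter tuning, as in the paper.
grid = GridSearchCV(DecisionTreeClassifier(random_state=0),
                    {"max_depth": [3, 5, 7], "min_samples_leaf": [5, 10]},
                    cv=10, scoring="accuracy")
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```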

Keywords: machine learning, noise reduction, preterm birth, sleep habit

Procedia PDF Downloads 147
148 Organisational Mindfulness Case Study: A 6-Week Corporate Mindfulness Programme Significantly Enhances Organisational Well-Being

Authors: Dana Zelicha

Abstract:

A 6-week mindfulness programme was launched to improve the well-being and performance of 20 managers (including the supervisor) of an international corporation in London. A unique assessment methodology was customised to the organisation's needs, measuring four parameters: prioritising skills, listening skills, mindfulness levels and happiness levels. All parameters showed significant improvements (p < 0.01) post intervention, with a remarkable increase in listening skills and mindfulness levels. Although corporate mindfulness programmes have proven to be effective, the challenge remains the low engagement levels at home and the implementation of these tools beyond the scope of the intervention. This study offered an innovative approach to reinforce home engagement levels, which yielded promising results. The programme launched with a 2-day introductory intervention, which was followed by a 6-week training course (1 day a week; 2 hours each). Participants learned all basic principles of mindfulness, such as mindfulness meditations, Mindfulness Based Stress Reduction (MBSR) techniques and Mindfulness Based Cognitive Therapy (MBCT) practices, to incorporate into their professional and personal lives. The programme contained experiential mindfulness meditations and innovative mindfulness tools (OWBA-MT) created by OWBA - The Well Being Agency. Exercises included Mindful Meetings, Unitasking and Mindful Feedback. All sessions concluded with guided discussions and group reflections. One fundamental element of this programme was the engagement level outside of the workshop. In the office, participants connected with a mindfulness buddy - a team member in the group with whom they could find support throughout the programme. At home, participants completed online daily mindfulness forms that varied according to weekly themes. These customised forms gave participants the opportunity to reflect on whether they made time for daily mindfulness practice, and to facilitate a sense of continuity and responsibility. At the end of the programme, the most engaged team member was crowned the 'mindful maven' and received a special gift. The four parameters were measured using online self-reported questionnaires, including the Listening Skills Inventory (LSI), Mindfulness Attention Awareness Scale (MAAS), Time Management Behaviour Scale (TMBS) and a modified version of the Oxford Happiness Questionnaire (OHQ). Pre-intervention questionnaires were collected at the start of the programme, and post-intervention data were collected 4 weeks following completion. Quantitative analysis using paired t-tests of means showed significant improvements, with a 23% increase in listening skills, a 22% improvement in mindfulness levels, a 12% increase in prioritising skills, and an 11% improvement in happiness levels. Participant testimonials exhibited high levels of satisfaction, and the overall results indicate that the mindfulness programme substantially impacted the team. These results suggest that 6-week mindfulness programmes can improve employees' capacities to listen and work well with others, to effectively manage time and to experience enhanced satisfaction both at work and in life. Limitations noteworthy to consider include the afterglow effect and lack of generalisability, as this study was conducted on a small and fairly homogenous sample.
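
The quantitative analysis above rests on paired t-tests of pre- and post-intervention means. A minimal sketch of that comparison, using synthetic placeholder scores for the 20 participants rather than the study's data:

```python
# Paired t-test sketch of the pre/post comparison (scipy's ttest_rel).
# Scores are synthetic placeholders for the 20 managers, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(60, 8, 20)                 # e.g., pre-intervention LSI scores
post = pre * 1.23 + rng.normal(0, 3, 20)    # ~23% improvement, as reported

t_stat, p_value = stats.ttest_rel(post, pre)
pct_change = 100 * (post.mean() - pre.mean()) / pre.mean()
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean change = {pct_change:.1f}%")
```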

Keywords: corporate mindfulness, listening skills, organisational well being, prioritising skills, mindful leadership

Procedia PDF Downloads 270
147 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method

Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez

Abstract:

Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype among the different operating regimes.
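
A minimal sketch of the model-based inverse method described above: a lumped two-volume energy balance whose closure coefficients are calibrated by nonlinear least squares so that the simulated temperatures reproduce measurements. The equations, control volumes and parameter values are simplified illustrations under stated assumptions, not the authors' exact formulation.

```python
# Lumped two-volume sketch (ullage + liquid) with closure coefficients
# h_ul, h_wall calibrated against (synthetic) temperature measurements.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, T, h_ul, h_wall, T_wall=22.0, C_u=2.0e3, C_l=2.0e5):
    """dT/dt for ullage (T[0]) and liquid (T[1]); illustrative constants."""
    T_u, T_l = T
    q_ul = h_ul * (T_u - T_l)        # ullage -> liquid across the interface
    q_wu = h_wall * (T_wall - T_u)   # wall -> ullage
    return [(q_wu - q_ul) / C_u, q_ul / C_l]

def simulate(params, t_eval):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [-180.0, -196.0],
                    t_eval=t_eval, args=tuple(params))
    return sol.y

t = np.linspace(0, 3600, 50)
# Stand-in "measurements": a forward run with known closures plus noise.
T_meas = simulate([8.0, 1.5], t) + np.random.default_rng(2).normal(0, 0.05, (2, 50))

# Inverse step: nonlinear optimization of the closure coefficients.
fit = least_squares(lambda p: (simulate(p, t) - T_meas).ravel(),
                    x0=[1.0, 1.0], bounds=(0, np.inf))
print("calibrated [h_ul, h_wall]:", fit.x)
```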

Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics

Procedia PDF Downloads 92
146 Capturing Healthcare Expert’s Knowledge Digitally: A Scoping Review of Current Approaches

Authors: Sinead Impey, Gaye Stephens, Declan O’Sullivan

Abstract:

Mitigating organisational knowledge loss presents challenges for knowledge managers. Expert knowledge is embodied in people and captured in 'routines, processes, practices and norms' as well as in the paper system. These knowledge stores have limitations insofar as they make knowledge diffusion across geography or over time difficult. However, technology could present a potential solution by facilitating the capture and management of expert knowledge in a codified and sharable format. Before it can be digitised, however, the knowledge of healthcare experts must be captured. Methods: As a first step in a larger project on this topic, a scoping review was conducted to identify how expert healthcare knowledge is captured digitally. The aim of the review was to identify current healthcare knowledge capture practices, identify gaps in the literature, and justify future research. The review followed a scoping review framework. From an initial 3,430 papers retrieved, 22 were deemed relevant and included in the review. Findings: Two broad approaches - direct and indirect - with themes and subthemes emerged. 'Direct' describes a process whereby knowledge is taken directly from subject experts. The themes identified were: 'Researcher mediated capture' and 'Digital mediated capture'. The latter was further distilled into two sub-themes: 'Captured in specified purpose platforms (SPP)' and 'Captured in a virtual community of practice (vCoP)'. 'Indirect' processes rely on extracting new knowledge from previously captured data using artificial intelligence techniques. Using this approach, the theme 'Generated using artificial intelligence methods' was identified. Although presented as distinct themes, some papers retrieved discuss combining more than one approach to capture knowledge. While no approach emerged as superior, two points arose from the literature. Firstly, human input was evident across themes, even with indirect approaches. Secondly, a range of challenges common among approaches was highlighted. These were: (i) 'Capturing an expert's knowledge' - difficulties here surround distinguishing the 'expert' from the merely very experienced, and capturing tacit or difficult-to-articulate knowledge. (ii) 'Confirming quality of knowledge' - once captured, challenges surround how to validate the knowledge captured and, therefore, its quality. (iii) 'Continual knowledge capture' - once knowledge is captured, validated, and used in a system, the process is still not complete. Healthcare is a knowledge-rich environment with new evidence emerging frequently. As such, knowledge needs to be reviewed, updated, or removed (redundancy) as appropriate. Although some methods were proposed to address this, such as plausible reasoning or case-based reasoning, conclusions could not be drawn from the papers retrieved. It was, therefore, highlighted as an area for future research. Conclusion: The results described two broad approaches - direct and indirect. Three themes were identified: 'Researcher mediated capture (Direct)'; 'Digital mediated capture (Direct)' and 'Generated using artificial intelligence methods (Indirect)'. While no single approach was deemed superior, common challenges noted among approaches were: 'capturing an expert's knowledge', 'confirming quality of knowledge', and 'continual knowledge capture'. However, continual knowledge capture was not fully explored in the papers retrieved and was highlighted as an important area for future research.
Acknowledgments: This research is partially funded by the ADAPT Centre under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.

Keywords: expert knowledge, healthcare, knowledge capture, knowledge management

Procedia PDF Downloads 133
145 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all of the above-mentioned issues and helps organizations improve efficiency and deliver faster, without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while reducing repetitive work and manual effort. Implementing scalable CI/CD for development using cloud services such as ECS (Elastic Container Service), AWS Fargate, ECR (to store Docker images with all dependencies), serverless computing (serverless virtual machines), cloud logging (for monitoring errors and logs), security groups (for inside/outside access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments and accommodate dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as it scales based on need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions and performance issues while reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands, allowing teams to scale resources up or down as needed. It optimizes costs by paying only for the resources that are used, and it increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
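
As one concrete illustration of the parallel-testing idea above, the sketch below fans test shards out across Fargate tasks with boto3. The cluster name, task definition, subnet and TEST_SHARD environment variable are hypothetical placeholders, not details from the paper.

```python
# Hedged sketch: launching parallel test-runner containers on AWS Fargate.
# All resource names below are hypothetical placeholders for your own setup.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

def run_test_shard(shard: int, total: int) -> str:
    """Start one Fargate task that runs a slice of the test suite."""
    resp = ecs.run_task(
        cluster="ci-cluster",                     # placeholder cluster
        taskDefinition="automation-tests:1",      # placeholder task definition
        launchType="FARGATE",
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }},
        overrides={"containerOverrides": [{
            "name": "test-runner",
            "environment": [{"name": "TEST_SHARD", "value": f"{shard}/{total}"}],
        }]},
    )
    return resp["tasks"][0]["taskArn"]

# Fan the suite out across many short-lived containers.
arns = [run_test_shard(i, 50) for i in range(50)]
print(f"launched {len(arns)} parallel test tasks")
```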

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 43
144 Assessment and Characterization of Dual-Hardening Adhesion Promoter for Self-Healing Mechanisms in Metal-Plastic Hybrid System

Authors: Anas Hallak, Latifa Seblini, Juergen Wilde

Abstract:

In mechatronics or sensor technology, plastic housings are used to protect sensitive components from harmful environmental influences, such as moisture, media, or reactive substances. Connections through the housing wall, preferably in the form of metallic lead-frame structures, are required for their electrical supply or control. In this system, an insufficient connection between the plastic component, e.g., Polyamide 66, and the metal surface, e.g., copper, due to the incompatibility of the two materials, is the dominating problem. As a result, leakage paths can occur along the plastic-metal interface. Since adhesive bonding has been established as one of the most important joining processes and its use has expanded significantly, driven by the development of improved high-performance adhesives and bonding techniques, this technology has been applied to metal-plastic hybrid structures. In this study, an epoxy bonding agent from DELO (DUALBOND LT2266) has been used to improve the mechanical and chemical bonding between the metal and the polymer. It is an adhesion promoter with two reaction stages. The first stage provides fixation to the lead frame directly after the coating step, which can be achieved by UV exposure for a few seconds. In the second stage, the material is thermally hardened during injection molding. To analyze the two reaction stages of the primer, dynamic DSC experiments were carried out and correlated with Fourier-transform infrared spectroscopy measurements. Furthermore, the number of crosslinking bonds formed in the system in each reaction stage has also been estimated by rheological characterization. Those investigations were performed with different UV exposure times (12 and 96 s) and in an industrially preferred temperature range from -20 to 175°C. The shear viscosity values of the primer were measured as a function of temperature and exposure time. For further interpretation, the storage modulus values were calculated, and the so-called Booij-Palmen plot was sketched. The next aspect of this study is the self-healing mechanism in the hybrid system, in which the primer should flow into micro-damage such as interface cracks, inhibit them from growing, and close them. The ability of the primer to flow in and penetrate defined capillaries made in Ultramid was investigated. Holes with a diameter of 0.3 mm were produced in injection-molded A3EG7 plates with 4 mm thickness. A copper substrate coated with the DUALBOND was placed on the A3EG7 plate and pressed with a defined force. Metallographic analyses were carried out to verify the filling grade, which showed an almost 95% filling ratio of the capillaries. Finally, to assess the self-healing mechanism in metal-plastic hybrid systems, characterizations were done on a simple geometry with a metal inlay developed by the Institute of Polymer Technology at Friedrich-Alexander-University. The specimens were modified with a tungsten wire, which was pulled out after injection molding to create a micro-hole in the specimen at the interface between the primer and the polymer. The capability of the primer to heal those micro-cracks upon heating, pressing, and thermal aging was characterized through metallographic analyses.

Keywords: hybrid structures, self-healing, thermoplastic housing, adhesive

Procedia PDF Downloads 193
143 Neighborhood Relations in a Context of Cultural and Social Diversity - Qualitative Analysis of a Case Study in a Territory in the Inner City of Lisbon

Authors: Madalena Corte-Real, João Pedro Nunes, Bernardo Fernandes, Ana Jorge Correira

Abstract:

This presentation looks, from a sociological perspective, at neighboring practices in the inner city of Lisbon. The capital of Portugal, with half a million inhabitants and set in a metropolitan area of almost 2.9 million people, has been in the international spotlight, seen as an interesting city to live in and to invest in, especially in the real estate market. This promotion emerged in the context of the financial crisis, when local authorities aimed to make Lisbon a more competitive city, calling for visitors and for financial and human capital. Especially in the last decade, Portugal's capital has experienced a significant increase in migration, from creative and entrepreneurial exiles to economic and political expats. In this context, the territory under analysis is a mixed-use area undergoing rapid transformation in recent years, marked by the presence of newcomers and non-nationals as well as social and cultural heterogeneity. It is next to one of the main arteries, considered the most multicultural part of the city, and presented in the press as one of the coolest neighborhoods in Europe. In view of these aspects, this research aims to address key topics in current urban research: the anonymity often associated with big cities, socio-spatial attachment to the neighborhood, and the effects of diversity on the everyday relations of residents and shopkeepers. This case study looks at particularities in local regimes differently affected by growing mobility. Against a backdrop of unidimensional generalizations and a tendency to refer to central countries and global cities, it aims to discuss national and local specificities. In methodological terms, the project comprises an essentially qualitative approach consisting of direct observation techniques and ethnographic methods, as well as semi-structured interviews with residents and local stakeholders whose narratives are subject to content analysis. The paper starts with a characterization of the broader context of the city of Lisbon, followed by territorial specificities regarding socio-spatial development, namely the city's and the inner area's morphology as well as the population's socioeconomic profile. Following the residents' and stakeholders' narratives and practices, it assesses perceptions and behaviors regarding the representation of the area, relationships and experiences, routines, and sociability. Results point to a significant presence of neighborhood relations and different forms of support, in particular among the different groups - e.g., long-time residents, middle-class families, the global creative class, and communities of economic migrants. Fieldwork reveals low levels of place attachment, although some residents presently report high levels of satisfaction. Engagement with living space, this case study suggests, reveals the social construction and lived experience of neighboring by different groups, but also the way different and contrasting visions and desires are articulated with the profound urban, cultural and political changes that permeate the area.

Keywords: diversity, lisbon, neighboring and neighborhood, place-attachment

Procedia PDF Downloads 108
142 Rethinking Urban Voids: An Investigation beneath the Kathipara Flyover, Chennai into a Transit Hub by Adaptive Utilization of Space

Authors: V. Jayanthi

Abstract:

Urbanization and the pace of urbanization have increased tremendously in the last few decades. More towns are now being converted into cities. The urbanization trend is seen all over the world but is most dominant in Asia. Today, the scale of urbanization in India is so huge that Indian cities are among the fastest-growing in the world, including Bangalore, Hyderabad, Pune, Chennai, Delhi, and Mumbai. Urbanization remains the single predominant factor continuously linked to the destruction of urban green spaces. With reference to Chennai as a case study, a city suffering from rapid deterioration of its green spaces, this paper sought to fill this gap by exploring key factors aside from urbanization that are responsible for the destruction of green spaces. The paper relied on triangulated data collection techniques such as interviews, focus group discussions, personal observation and retrieval of archival data. It was observed that apart from urbanization, problems of ownership of green-space lands, low priority given to green spaces, poor maintenance, weak enforcement of development controls, wastage of underpass spaces, and uncooperative attitudes of the general public play a critical role in the destruction of urban green spaces. The paper therefore concludes that for a city to have proper, sustainable urban green space, broader city development plans are essential. Though rapid urbanization is an indicator of positive development, it is also accompanied by a host of challenges. Chennai lost a lot of greenery as the city urbanized rapidly, leading to a steep fall in vegetation cover. Environmental deterioration will be the big price we pay if Chennai continues to grow at the expense of greenery. Soaring skyscrapers, multistoried complexes, gated communities, and villas frame the iconic skyline of today's Chennai, which reveals that we overlook the importance of our green cover, important to balance our urban and lung spaces. Chennai, with a clumped landscape at the center of the city, is predicted to convert 36% of its total area into urban areas by 2026. One major issue is that a city designed and planned in isolation creates underused spaces all around it, which suffer neglect. These urban voids are dead, underused, or unused spaces in cities, formed through inefficient decision-making, poor land management, and poor coordination. Urban voids have huge potential to strengthen the urban fabric if exploited as public gathering spaces, pocket parks or plazas, or simply to enhance the public realm, rather than being left to debris dumping and encroachment. Flyovers need to justify their existence by being more than just traffic and transport solutions. The vast, unused space below the Kathipara flyover is a case in point. This flyover connects three major routes: Tambaram, Koyambedu, and Adyar. This research focuses on the concept of urban voids and on how the voids under flyovers can be used in the placemaking process - how this neglected space beneath flyovers can become part of the urban realm through urban design and landscaping.

Keywords: landscape design, flyovers, public spaces, reclaiming lost spaces, urban voids

Procedia PDF Downloads 280
141 Accurate Energy Assessment Technique for Mine-Water District Heat Network

Authors: B. Philip, J. Littlewood, R. Radford, N. Evans, T. Whyman, D. P. Jones

Abstract:

UK buildings and energy infrastructures are heavily dependent on natural gas, a large proportion of which is used for domestic space heating. However, approximately half of the gas consumed in the UK is imported. Improving energy security and reducing carbon emissions are major government drivers for reducing gas dependency. In order to do so, there needs to be a wholesale shift in the energy provision to householders without impacting thermal comfort levels, convenience or cost of supply to the end user. Heat pumps are seen as a potential alternative in modern, well-insulated homes; however, can the same be said of older homes? A large proportion of the housing stock in Britain was built prior to 1919. The age of the buildings bears testimony to the quality of construction; however, their thermal performance falls far below the minimum currently set by UK building standards. In recent years, significant sums of money have been invested to improve energy efficiency and combat fuel poverty in some of the most deprived areas of Wales. Increasing the energy efficiency of older properties remains a significant challenge, which cannot be achieved through insulation and air-tightness interventions alone, particularly when alterations to historically important architectural features of the building are not permitted. This paper investigates the energy demand of pre-1919 dwellings in a former Welsh mining village, the feasibility of meeting that demand using water from the disused mine workings to supply a district heat network, and potential barriers to the success of the scheme. The use of renewable solar energy generation and storage technologies, both thermal and electrical, to reduce the load and offset increased electricity demand is considered. A holistic surveying approach to provide a more accurate assessment of total household heat demand is proposed. Several surveying techniques, including condition surveys, air permeability, heat loss calculations, and thermography, were employed to provide a clear picture of energy demand. Additional insulation can bring unforeseen consequences that are detrimental to the fabric of the building, potentially leading to accelerated dilapidation of the asset being 'protected'. Increasing ventilation should be considered in parallel, to compensate for the associated reduction in uncontrolled infiltration. The effectiveness of thermal performance improvements is demonstrated, and the detrimental effects of incorrect material choice and poor installation are highlighted. The findings show estimated heat demand to be in close correlation with household energy bills. Major areas of heat loss were identified, such that improvements to building thermal performance could be targeted. The findings demonstrate that the use of heat pumps in older buildings is viable, provided sufficient improvement to thermal performance is possible. The addition of passive solar thermal and photovoltaic generation can help reduce the load and running cost for the householder. The results were used to predict future heat demand following energy efficiency improvements, thereby informing the size of heat pumps required.
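
To illustrate the heat-demand side of this surveying approach, the sketch below combines fabric losses (U-value × area × temperature difference) with ventilation losses (0.33 × air changes × volume × temperature difference). The U-values, areas and air-change rate are illustrative assumptions for a pre-1919 dwelling, not measured values from the study.

```python
# Whole-house design heat-load sketch: fabric losses (U * A * dT) plus
# ventilation losses (0.33 * n * V * dT). All inputs are illustrative.
fabric = {                      # element: (U-value W/m2K, area m2)
    "solid walls": (2.1, 95.0),
    "roof":        (2.3, 45.0),
    "floor":       (0.8, 45.0),
    "windows":     (4.8, 12.0),
}
n_ach, volume = 1.5, 220.0      # air changes per hour, heated volume (m3)
dT = 21.0 - (-1.0)              # indoor/outdoor design temperature difference (K)

fabric_loss = sum(u * a for u, a in fabric.values()) * dT   # W
vent_loss = 0.33 * n_ach * volume * dT                      # W
print(f"design heat load ~= {(fabric_loss + vent_loss) / 1000:.1f} kW")
```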

Keywords: heat demand, heat pump, renewable energy, retrofit

Procedia PDF Downloads 92
140 Developing Early Intervention Tools: Predicting Academic Dishonesty in University Students Using Psychological Traits and Machine Learning

Authors: Pinzhe Zhao

Abstract:

This study focuses on predicting university students' cheating tendencies using psychological traits and machine learning techniques. Academic dishonesty is a significant issue that compromises the integrity and fairness of educational institutions. While much research has been dedicated to detecting cheating behaviors after they have occurred, there is limited work on predicting such tendencies before they manifest. The aim of this research is to develop a model that can identify students who are at higher risk of engaging in academic misconduct, allowing for earlier interventions to prevent such behavior. Psychological factors are known to influence students' likelihood of cheating. Research shows that traits such as test anxiety, moral reasoning, self-efficacy, and achievement motivation are strongly linked to academic dishonesty. High levels of anxiety may lead students to cheat as a way to cope with pressure. Those with lower self-efficacy are less confident in their academic abilities, which can push them toward dishonest behaviors to secure better outcomes. Students with weaker moral judgment may also justify cheating more easily, believing it to be less wrong under certain conditions. Achievement motivation also plays a role, as students driven primarily by external rewards, such as grades, are more likely to cheat than those motivated by intrinsic learning goals. In this study, data on students' psychological traits are collected through validated assessments, including scales for anxiety, moral reasoning, self-efficacy, and motivation. Additional data on academic performance, attendance, and engagement in class are also gathered to create a more comprehensive profile. Using machine learning algorithms such as Random Forest, Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) networks, the research builds models that can predict students' cheating tendencies. These models are trained and evaluated using metrics like accuracy, precision, recall, and F1 scores to ensure they provide reliable predictions. The findings demonstrate that combining psychological traits with machine learning provides a powerful method for identifying students at risk of cheating. This approach allows for early detection and intervention, enabling educational institutions to take proactive steps in promoting academic integrity. The predictive model can be used to inform targeted interventions, such as counseling for students with high test anxiety or workshops aimed at strengthening moral reasoning. By addressing the underlying factors that contribute to cheating behavior, educational institutions can reduce the occurrence of academic dishonesty and foster a culture of integrity. In conclusion, this research contributes to the growing body of literature on predictive analytics in education. It offers an approach that integrates psychological assessments with machine learning to predict cheating tendencies. This method has the potential to significantly improve how academic institutions address academic dishonesty, shifting the focus from punishment after the fact to prevention before it occurs. By identifying high-risk students and providing them with the necessary support, educators can help maintain the fairness and integrity of the academic environment.
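
The evaluation loop described above (train a classifier on trait features, score it with accuracy, precision, recall and F1) can be sketched as follows. The Random Forest variant is shown, and all data are synthetic placeholders rather than the study's assessments.

```python
# Sketch of the train-and-score pipeline on synthetic trait data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([
    rng.normal(50, 10, n),    # test anxiety scale
    rng.normal(50, 10, n),    # moral reasoning scale
    rng.normal(50, 10, n),    # self-efficacy scale
    rng.normal(50, 10, n),    # achievement motivation scale
])
# Toy risk label: high anxiety plus low self-efficacy raises risk.
y = (X[:, 0] - X[:, 2] + rng.normal(0, 10, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
for name, fn in [("accuracy", accuracy_score), ("precision", precision_score),
                 ("recall", recall_score), ("f1", f1_score)]:
    print(name, round(fn(y_te, pred), 3))
```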

Keywords: academic dishonesty, cheating prediction, intervention strategies, machine learning, psychological traits, academic integrity

Procedia PDF Downloads 20
139 Mineralized Nanoparticles as a Contrast Agent for Ultrasound and Magnetic Resonance Imaging

Authors: Jae Won Lee, Kyung Hyun Min, Hong Jae Lee, Sang Cheon Lee

Abstract:

To date, imaging techniques have attracted much attention in medicine because the detection of diseases at an early stage provides greater opportunities for successful treatment. Consequently, over the past few decades, diverse imaging modalities including magnetic resonance (MR), positron emission tomography, computed tomography, and ultrasound (US) have been developed and applied widely in the field of clinical diagnosis. However, each of the above-mentioned imaging modalities possesses unique strengths and intrinsic weaknesses, which limit their ability to provide accurate information. Therefore, multimodal imaging systems may be a solution that can provide improved diagnostic performance. Among the current medical imaging modalities, US is a widely available real-time imaging modality. It has many advantages, including safety, low cost and easy access for patients. However, its low spatial resolution precludes accurate discrimination of diseased regions such as cancer sites. In contrast, MR has no tissue-penetration limit and can provide images possessing exquisite soft-tissue contrast and high spatial resolution. However, it cannot offer real-time images and needs a comparatively long imaging time. The characteristics of these imaging modalities may be considered complementary, and the modalities have frequently been combined for the clinical diagnostic process. Biominerals such as calcium carbonate (CaCO3) and calcium phosphate (CaP) exhibit pH-dependent dissolution behavior. They demonstrate pH-controlled drug release due to the dissolution of minerals under acidic pH conditions. In particular, the application of this mineralization technique to a US contrast agent has been reported recently. The CaCO3 mineral reacts with acids and decomposes to generate carbon dioxide (CO2) gas in an acidic environment. These gas-generating mineralized nanoparticles generated CO2 bubbles in the acidic environment of the tumor, thereby allowing for strongly echogenic US imaging of tumor tissues. On the basis of this previous work, it was hypothesized that the loading of MR contrast agents into the CaCO3 mineralized nanoparticles may be a novel strategy for designing a contrast agent for dual imaging. Herein, CaCO3 mineralized nanoparticles capable of generating CO2 bubbles to trigger the release of entrapped MR contrast agents in response to tumoral acidic pH were developed for the purposes of US and MR dual-modality imaging of tumors. Gd2O3 nanoparticles were selected as an MR contrast agent. A key strategy employed in this study was to prepare Gd2O3 nanoparticle-loaded mineralized nanoparticles (Gd2O3-MNPs) using block copolymer-templated CaCO3 mineralization in the presence of calcium cations (Ca2+), carbonate anions (CO32-) and positively charged Gd2O3 nanoparticles. The CaCO3 core was considered suitable because it may effectively shield Gd2O3 nanoparticles from water molecules in the blood (pH 7.4) before decomposing to generate CO2 gas, triggering the release of Gd2O3 nanoparticles in tumor tissues (pH 6.4-7.4). The kinetics of CaCO3 dissolution and CO2 generation from the Gd2O3-MNPs were examined as a function of pH, together with pH-dependent in vitro magnetic relaxation; additionally, the echogenic properties were estimated to demonstrate the potential of the particles for tumor-specific US and MR imaging.

Keywords: calcium carbonate, mineralization, ultrasound imaging, magnetic resonance imaging

Procedia PDF Downloads 236
138 Heat Transfer Phenomena Identification of a Non-Active Floor in a Stack-Ventilated Building in Summertime: Empirical Study

Authors: Miguel Chen Austin, Denis Bruneau, Alain Sempey, Laurent Mora, Alain Sommier

Abstract:

An experimental study in a Plus Energy House (PEH) prototype was conducted in August 2016. It aimed to highlight the energy charge and discharge of a concrete-slab floor subjected to day-night-cycle heat exchanges in the southwestern part of France, and to identify the heat transfer phenomena that take place in both processes: charge and discharge. The main features of this PEH significant to this study are the following: (i) a non-active slab covering the major part of the entire floor surface of the house, which includes a 68 mm thick concrete layer as its upper layer; (ii) solar window shades located on the north and south facades, along with a large eave facing south; (iii) large double-glazed windows covering the majority of the south facade; (iv) a natural ventilation system (NVS) composed of ten automated openings of different dimensions: four located on the south facade, four on the north facade and two on the shed roof (north-oriented). To highlight the energy charge and discharge processes of the non-active slab, heat flux and temperature measurement techniques were implemented, along with airspeed measurements. Ten 'measurement poles' (MP) were distributed all over the concrete-floor surface. Each MP represented a zone of measurement where air and surface temperatures, and convection and radiation heat fluxes, were measured. The airspeed was measured only at two points over the slab surface, near the south facade. To identify the heat transfer phenomena that take part in the charge and discharge processes, some relevant dimensionless parameters were used, along with statistical analysis; heat transfer phenomena were identified based on this analysis. Experimental data, after processing, showed that two periods could be identified at a glance: charge (heat gain, positive values) and discharge (heat losses, negative values). During the charge period, on the floor surface, radiation heat exchanges were significantly higher than convection. On the other hand, convection heat exchanges were significantly higher than radiation during the discharge period. Spatially, both convection and radiation heat exchanges are higher near the natural ventilation openings and smaller far from them, as expected. Experimental correlations have been determined using a linear regression model, showing the relation of the Nusselt number with relevant parameters: the Peclet, Rayleigh, and Richardson numbers. This has led to the determination of the convective heat transfer coefficient and its comparison with the convective heat transfer coefficient resulting from measurements. Results have shown that forced and natural convection coexist during the discharge period; more accurate correlations were found with the Peclet number than with the Rayleigh number. This may suggest that forced convection is stronger than natural convection. Yet, the airspeed levels encountered suggest that natural convection should take place rather than forced convection; despite this, the Richardson number values encountered indicate otherwise. During the charge period, air-velocity levels might indicate that no air motion occurs, which might lead to heat transfer by diffusion instead of convection.
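
The correlation step described above (relating the Nusselt number to the Peclet number by linear regression) can be sketched as a power-law fit in log space. The (Pe, Nu) pairs below are synthetic stand-ins for the processed measurements, and the air conductivity and length scale are illustrative assumptions.

```python
# Power-law fit Nu = a * Pe^b via linear regression on log-transformed data.
import numpy as np

rng = np.random.default_rng(4)
Pe = np.logspace(3, 5, 30)
Nu = 0.32 * Pe**0.6 * rng.lognormal(0, 0.05, 30)   # toy data with scatter

slope, intercept = np.polyfit(np.log(Pe), np.log(Nu), 1)
a, b = np.exp(intercept), slope
print(f"Nu ~= {a:.3f} * Pe^{b:.3f}")

# The convective coefficient then follows from h = Nu * k / L,
# with k the fluid conductivity and L a characteristic length (illustrative).
k_air, L = 0.026, 0.1   # W/mK, m
h = a * 2.0e4**b * k_air / L
print(f"h at Pe = 2e4: {h:.1f} W/m2K")
```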

Keywords: heat flux measurement, natural ventilation, non-active concrete slab, plus energy house

Procedia PDF Downloads 416
137 Construction Engineering and Cocoa Agriculture: A Synergistic Approach for Improved Livelihoods of Farmers

Authors: Felix Darko-Amoah, Daniel Acquah

Abstract:

In contemporary developing countries like Ghana, the need to explore innovative solutions for sustainable livelihoods of farmers is more important than ever. With Ghana's population growing steadily and the demand for food, fiber and shelter increasing, it is imperative that the construction industry and agriculture come together to address the challenges faced by farmers in the country. In order to enhance the livelihoods of cocoa farmers in Ghana, this paper provides an innovative strategy that aims to integrate the fields of construction engineering and cash crop agriculture. This study focuses on cocoa cultivation in poorer nations, where farmers confront a variety of difficulties, including restricted access to financing, subpar infrastructure, and insufficient support services. We seek to improve farmers' access to financing, improve infrastructure, and provide support services that are essential to their success by combining the fields of construction engineering and cocoa production. The findings of the study are beneficial to cocoa producers, community extension agents, and construction engineers. In order to accomplish our objectives, we conducted field investigations with 307 participants in particular cocoa-growing communities in the Western Region of Ghana. Several studies have shown that there is a lack of adequate infrastructure and financing, leading to low yields, subpar beans, and low farmer profitability in developing nations like Ghana. Our goal is to give farmers access to better infrastructure, better financing, and support services that are crucial to their success through the fusion of construction engineering and cocoa production. Based on data gathered from the field investigations, the results show that the employment of appropriate technology and methods for developing structures, roads, and other infrastructure in rural regions is one of the essential components of this strategy. For instance, we find that using affordable, environmentally friendly materials like bamboo, rammed earth, and mud bricks can help to cut expenditure while also protecting the environment. By applying simple relational techniques to the data gathered, the results also show that construction engineers are crucial in planning and building infrastructure that is appropriate for the local environment and circumstances and resilient to natural disasters such as floods. The convergence of construction engineering and cash crop cultivation is thus another crucial component of the agriculture-construction interplay. For instance, farmers can receive financial assistance to buy essential inputs, such as seeds, fertilizer, and tools, as well as training in proper farming methods. Moreover, extension services can be offered to assist farmers in marketing their crops and enhancing their livelihoods and revenue. In conclusion, our analysis of responses from the 307 participants shows that the combination of construction engineering and cash crop agriculture offers an innovative approach to improving farmers' livelihoods in cocoa farming communities in Ghana. By incorporating these findings into core decision-making, policymakers can help farmers build sustainable and profitable livelihoods by addressing challenges such as limited access to financing, poor infrastructure, and inadequate support services.

Keywords: cocoa agriculture, construction engineering, farm buildings and equipment, improved livelihoods of farmers

Procedia PDF Downloads 90
136 Governance of Climate Adaptation Through Artificial Glacier Technology: Lessons Learnt from Leh (Ladakh, India) in North-West Himalaya

Authors: Ishita Singh

Abstract:

The social dimension of climate change is no longer peripheral to Science, Technology and Innovation (STI). Indeed, STI is being mobilized to address small farmers' vulnerability and adaptation to climate change. The experiences from the cold desert of Leh (Ladakh) in the North-West Himalaya illustrate the potential of STI to address the challenges of climate change and the needs of small farmers through the use of artificial glacier techniques. Small farmers have a unique technique of water harvesting to augment irrigation, called 'artificial glaciers' - an intricate network of water channels and dams along the upper slope of a valley, located closer to villages and at lower altitudes than natural glaciers. An artificial glacier starts to melt much earlier and provides supplemental irrigation to small farmers, improving their livelihoods. Therefore, the issues of vulnerability, adaptive capacity and adaptation strategy need to be analyzed in a local context, in the communities and regions where people live. Leh (Ladakh) in the North-West Himalaya provides a case study for exploring the ways in which adaptation to climate change is taking place at a community scale using artificial glacier technology. With the above backdrop, an attempt has been made to analyze rural poor households' vulnerability and adaptation practices to climate change using this technology, thereby drawing lessons on vulnerability-livelihood interactions in the cold desert of Leh (Ladakh) in the North-West Himalaya, India. The study is based on primary data and information collected from 675 households across 27 villages of Leh (Ladakh) in the North-West Himalaya, India. It reveals that 61.18% of the population derives livelihoods from agriculture and allied activities. With increased irrigation potential due to the use of artificial glaciers, food security has been assured for 77.56% of households, and health vulnerability has been reduced in 31% of households. Seasonal migration as a livelihood diversification mechanism has declined in nearly two-thirds of households, thereby improving livelihood strategies. The use of tactical adaptations by small farmers in response to persistent droughts, such as selling livestock, expanding agricultural lands, and relying on relief cash and food, has declined to 20.44%, 24.74% and 63% of households, respectively. However, these measures are unsustainable on a long-term basis. The role of policymakers and societal stakeholders becomes important in this context. To address livelihood challenges, the role of technology is critical in a multidisciplinary approach involving multilateral collaboration among different stakeholders. The presence of social entrepreneurs and new actors on the adaptation scene is necessary to bring forth adaptation measures. Better linkage between science and technology policies and other policies should be encouraged. Better health care, access to safe drinking water, better sanitary conditions, and improved standards of education and infrastructure are effective measures to enhance a community's adaptive capacity. However, social transfers for supporting climate adaptive capacity require significant additional investment. Developing institutional mechanisms for specific adaptation interventions can be one of the most effective ways of implementing a plan to enhance adaptation and build resilience.

Keywords: climate change, adaptation, livelihood, stakeholders

Procedia PDF Downloads 70
135 Enhancing Seismic Resilience in Urban Environments

Authors: Beatriz González-Rodrigo, Diego Hidalgo-Leiva, Omar Flores, Claudia Germoso, Maribel Jiménez-Martínez, Laura Navas-Sánchez, Belén Orta, Nicola Tarque, Orlando Hernández-Rubio, Miguel Marchamalo, Juan Gregorio Rejas, Belén Benito-Oterino

Abstract:

Cities facing seismic hazard necessitate detailed risk assessments for effective urban planning and vulnerability identification, ensuring the safety and sustainability of urban infrastructure. Comprehensive studies involving seismic hazard, vulnerability, and exposure evaluations are pivotal for estimating potential losses and guiding proactive measures against seismic events. However, broad-scale traditional risk studies limit the consideration of specific local threats and the identification of vulnerable housing within a structural typology. Achieving precise results at neighbourhood level demands higher-resolution seismic hazard, exposure, and vulnerability studies. This research aims to bolster sustainability and safety against seismic disasters in three Central American and Caribbean capitals. It integrates geospatial techniques and artificial intelligence into seismic risk studies, proposing cost-effective methods for exposure data collection and damage prediction. The methodology relies on prior seismic threat studies in pilot zones, utilizing existing exposure and vulnerability data in the region. Emphasizing detailed building attributes enables the consideration of behaviour modifiers affecting seismic response. The approach aims to generate detailed risk scenarios, facilitating the prioritization of preventive actions before, during, and after seismic events, and enhancing decision-making certainty. Detailed risk scenarios necessitate substantial investment in fieldwork, training, research, and methodology development. Regional cooperation becomes crucial given the similar seismic threats, urban planning, and construction systems among the involved countries. The outcomes hold significance for emergency planning and for national and regional construction regulations. The success of this methodology depends on cooperation, investment, and innovative approaches, offering insights and lessons applicable to regions facing moderate seismic threats with vulnerable constructions. Thus, this framework aims to fortify resilience in seismic-prone areas and serves as a reference for global urban planning and disaster management strategies. In conclusion, this research proposes a comprehensive framework for seismic risk assessment in high-risk urban areas, emphasizing detailed studies at finer resolutions for precise vulnerability evaluations. The approach integrates regional cooperation, geospatial technologies, and adaptive fragility curve adjustments to enhance risk assessment accuracy, guiding effective mitigation strategies and emergency management plans.

Keywords: assessment, behaviour modifiers, emergency management, mitigation strategies, resilience, vulnerability

Procedia PDF Downloads 67
134 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets

Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe

Abstract:

Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for the research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high-performance computing resources and analysis techniques currently available or being developed. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution based upon a novel Micro-Service architecture and Big Data technologies. The system serves to demonstrate the applicability of Micro-Service architectures for the development of high-performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e., importing a GWAS dataset) and for the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets, and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
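
To make the relational-versus-NoSQL contrast concrete, below is a minimal sketch of the document-oriented idea using MongoDB via pymongo. The database, collection, and field names are hypothetical and do not reflect SPARK's actual schema or API:

```python
# Minimal sketch of document-oriented genotype storage with MongoDB/pymongo.
# All names (gwas_demo, genotypes, subject_id, calls, rs-IDs) are hypothetical,
# not SPARK's actual schema; a local MongoDB server is assumed to be running.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
genotypes = client["gwas_demo"]["genotypes"]

# One document per subject: genotype calls stored as a single nested map,
# avoiding the one-row-per-SNP explosion of a normalized relational design.
genotypes.insert_one({
    "subject_id": "S0001",
    "chip": "demo-snp-chip",
    "calls": {"rs123": "AA", "rs456": "AG", "rs789": "GG"},
})

# A typical genomics-style query: all subjects carrying a given genotype.
carriers = genotypes.find({"calls.rs456": "AG"}, {"subject_id": 1})
print([doc["subject_id"] for doc in carriers])
```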

Keywords: biomedical research, genomics, information systems, software

Procedia PDF Downloads 270
133 Remote Sensing of Urban Land Cover Change: Trends, Driving Forces, and Indicators

Authors: Wei Ji

Abstract:

This study was conducted in the Kansas City metropolitan area of the United States, which has experienced significant urban sprawl in recent decades. The remote sensing of land cover changes in this area spanned four decades, from 1972 through 2010. The project was implemented in two stages: the first stage focused on the detection of long-term trends of urban land cover change, while the second examined how to detect the coupled effects of human impact and climate change on urban landscapes. For the first-stage study, six Landsat images were used, with a time interval of about five years, covering the period from 1972 through 2001. Four major land cover types, built-up land, forestland, non-forest vegetation land, and surface water, were mapped using supervised image classification techniques. The study found that over the three decades the built-up lands in the study area more than doubled, mainly at the expense of non-forest vegetation lands. Surprisingly and interestingly, the area also saw a significant gain in surface water coverage. This observation raised questions: How have human activities and precipitation variation jointly impacted surface water cover during recent decades? How can we detect such coupled impacts through remote sensing analysis? These questions led to the second stage of the study, in which we designed and developed approaches to detecting fine-scale surface waters and analyzing the coupled effects of human impact and precipitation variation on the waters. To effectively detect urban landscape changes that might be jointly shaped by precipitation variation, our study proposed “urban wetscapes” (loosely-defined urban wetlands) as a new indicator for remote sensing detection. The study examined whether urban wetscape dynamics is a sensitive indicator of the coupled effects of the two driving forces. To better detect this indicator, a rule-based classification algorithm was developed to identify fine-scale, hidden wetlands that could not be appropriately detected, based on their spectral differentiability, by a traditional image classification. Three SPOT images, for the years 1992, 2008, and 2010, were classified with this technique to generate the four types of land cover described above. The spatial analyses of remotely-sensed wetscape changes were implemented at the metropolitan, watershed, and sub-watershed scales, as well as by the size of surface water bodies, in order to accurately reveal urban wetscape change trends in relation to the driving forces. The study identified that urban wetscape dynamics varied in trend and magnitude across the metropolitan, watershed, and sub-watershed scales in response to human impacts at different scales. The study also found that increased precipitation in the region over the past decades swelled larger wetlands in particular, while smaller wetlands generally decreased, mainly due to human development activities. These results confirm that wetscape dynamics can effectively reveal the coupled effects of human impact and climate change on urban landscapes. As such, remote sensing of this indicator provides new insights into the relationships between urban land cover changes and driving forces.
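
The abstract does not give the actual decision rules, so the sketch below only illustrates the flavour of a rule-based land-cover classifier, using hypothetical band-index thresholds to assign the four land cover classes and to catch "hidden" wet pixels that spectral separability alone would miss:

```python
# Illustrative numpy sketch of a rule-based land-cover classifier of the kind
# described in the abstract. The indices, thresholds, and rules are
# hypothetical; the study's actual wetland rules are not given.
import numpy as np

def classify(green, red, nir):
    """Return a label array: 0 water, 1 forest, 2 non-forest veg, 3 built-up."""
    ndwi = (green - nir) / (green + nir + 1e-9)   # water index
    ndvi = (nir - red) / (nir + red + 1e-9)       # vegetation index
    labels = np.full(green.shape, 3, dtype=np.uint8)   # default: built-up
    labels[(ndvi > 0.2) & (ndvi <= 0.6)] = 2           # sparse vegetation
    labels[ndvi > 0.6] = 1                             # dense vegetation -> forest
    labels[ndwi > 0.0] = 0                             # open water
    # Explicit rule for "hidden" wet pixels: moderately wet AND vegetated,
    # which purely spectral classes tend to miss.
    labels[(ndwi > -0.1) & (ndwi <= 0.0) & (ndvi > 0.3)] = 0
    return labels

green, red, nir = (np.random.rand(64, 64) for _ in range(3))  # stand-in bands
print(np.bincount(classify(green, red, nir).ravel(), minlength=4))
```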

Keywords: urban land cover, human impact, climate change, rule-based classification, across-scale analysis

Procedia PDF Downloads 308
132 Plant Regeneration via Somatic Embryogenesis and Agrobacterium-Mediated Transformation in Alfalfa (Medicago sativa L.)

Authors: Sarwan Dhir, Suma Basak, Dipika Parajulee

Abstract:

Alfalfa is renowned for its nutritional and biopharmaceutical value as a perennial forage legume. However, establishing a rapid plant regeneration protocol using somatic embryogenesis and an efficient transformation frequency are crucial prerequisites for gene editing in alfalfa. This study was undertaken to establish and improve the protocol for somatic embryogenesis and subsequent plant regeneration. Experiments were conducted to assess natural sensitivity to various antibiotics, such as cefotaxime, carbenicillin, gentamycin, hygromycin, and kanamycin. Using 3-week-old leaf tissue, somatic embryogenesis was initiated on Gamborg’s B5 basal (B5H) medium supplemented with 3% maltose, 0.9 µM kinetin, and 4.5 µM 2,4-D. Embryogenic callus (EC) obtained from the B5H medium exhibited a high rate of somatic embryo formation (97.9%) after 3 weeks when the cultures were placed in the dark. Somatic embryos at different developmental and cotyledonary stages were then transferred to Murashige and Skoog’s (MS) basal medium under light, resulting in a 94% regeneration rate of plantlets. Our results indicate that leaf segments can tolerate up to 450 mg/L of cefotaxime and 400 mg/L of carbenicillin in the culture medium, while the survival thresholds were 12.5 mg/L for hygromycin, 250 mg/L for kanamycin, 50 mg/L for gentamycin, and 300 mg/L for timentin. An experiment to improve the protocol for achieving efficient transient gene expression in alfalfa through genetic transformation with the Agrobacterium tumefaciens pCAMBIA1304 vector was also conducted. The vector contains two reporter genes, β-glucuronidase (GUS) and green fluorescent protein (GFP), along with a selectable hygromycin B phosphotransferase gene (HPT), all driven by the CaMV 35S promoter. Various transformation parameters were optimized using 3-week-old in vitro-grown plantlets. Parameters such as explant type, leaf age, preculture duration, segment size, wounding type, bacterial concentration, infection period, co-cultivation period, and the concentrations of acetosyringone, silver nitrate, and calcium chloride were optimized for transient gene expression. Transient gene expression was confirmed via histochemical GUS staining and GFP visualization under fluorescence microscopy. The data were analyzed based on semi-quantitative observation of the percentage and number of blue GUS spots on different days after agro-infection. The highest percentage of GUS positivity (76.2%) was observed in 3-week-old leaf segments wounded using a size-11 scalpel blade, after 3 days of post-incubation, at a bacterial concentration of 0.6, with 2 days of preculture, 30 min of bacterial-leaf segment co-cultivation, and the addition of 150 µM acetosyringone, 4 mM calcium chloride, and 75 µM silver nitrate. Our results suggest that various factors influence T-DNA delivery in the Agrobacterium-mediated transformation of alfalfa. Stable gene expression in the putative transgenic tissue was confirmed using PCR amplification of both marker genes, indicating that gene expression in the explants was not due solely to Agrobacterium but also came from transformed cells. The improved protocol could be used for generating transgenic alfalfa plants using genome editing techniques such as CRISPR/Cas9.

Keywords: Medicago sativa L. (alfalfa), Agrobacterium tumefaciens, β-glucuronidase, green fluorescent protein, transient gene expression

Procedia PDF Downloads 10
131 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field producing many implementations that find use in different fields for research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, radical improvement of the channel bandwidth and, thus, decoding accuracy is only possible using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that allow learning representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. The multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that a combination of a minimally invasive neuroimaging technique such as ECoG and advanced machine learning approaches allows decoding motion with high accuracy. Such a setup provides means for the control of devices with a large number of degrees of freedom as well as for exploratory studies of the complex neural processes underlying movement execution.
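
A minimal sketch of the decoding setup (a small 1-D convolutional network mapping a 1-second multichannel ECoG window to a kinematic value, scored by Pearson correlation) is given below. The channel count, sampling rate, and layer sizes are assumptions, not the authors' architecture:

```python
# Minimal PyTorch sketch of the decoding idea described in the abstract.
# N_CHANNELS and FS are assumed values; the layer sizes are illustrative.
import torch
import torch.nn as nn

N_CHANNELS, FS = 32, 1000        # assumed electrode count and sampling rate

model = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 16, kernel_size=25, stride=5),  # temporal features
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=9, stride=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 1),            # predicted acceleration for the window
)

def pearson_r(pred, target):
    """Correlation coefficient r, the accuracy measure used in the study."""
    pred, target = pred - pred.mean(), target - target.mean()
    return (pred * target).sum() / (pred.norm() * target.norm() + 1e-9)

# One training step on a synthetic batch of 1-second causal ECoG windows.
x = torch.randn(8, N_CHANNELS, FS)   # stand-in ECoG segments
y = torch.randn(8, 1)                # stand-in accelerometer targets
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
print("r on this batch:", pearson_r(model(x).detach().squeeze(), y.squeeze()).item())
```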

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 177
130 Exploring Perspectives and Complexities of E-tutoring: Insights from Students Opting out of Online Tutor Services

Authors: Prince Chukwuneme Enwereji, Annelien Van Rooyen

Abstract:

In recent years, technology integration in education has transformed the learning landscape, particularly in online institutions. One technological advancement that has gained popularity is e-tutoring, which offers personalised academic support to students through online platforms. While e-tutoring has become well known and has been adopted to promote collaborative learning, there are still students who do not use these services for various reasons. However, little attention has been given to understanding the perspectives of students who have not utilized these services. The research objectives include identifying the perceived benefits that non-e-tutoring students believe e-tutoring could offer, such as enhanced academic support, personalized learning experiences, and improved performance. Additionally, the study explored the potential drawbacks or concerns that non-e-tutoring students associate with e-tutoring, such as concerns about efficacy, a lack of face-to-face interaction, and platform accessibility. The study adopted a quantitative research approach with a descriptive design to gather and analyze data on non-e-tutoring students' perspectives. Online questionnaires were employed as the primary data collection method, allowing for the efficient collection of data from many participants. The collected data were analyzed using the Statistical Package for the Social Sciences (SPSS). Ethical principles such as informed consent, anonymity of responses, and protection of respondents from harm were maintained. Findings indicate that non-e-tutoring students perceive a sense of control over their own pace of learning, suggesting a preference for self-directed learning and the ability to tailor their educational experience to their individual needs and learning styles. They also exhibit high levels of motivation, believe in their ability to participate effectively in their studies and organize their academic work, and feel comfortable studying on their own without the help of e-tutors. However, non-e-tutoring students feel that e-tutors do not sufficiently address their academic needs and lack engagement. They also perceive a lack of clarity in the roles of e-tutors, leading to uncertainty about their responsibilities. In terms of communication, students feel overwhelmed by the volume of announcements and find repetitive information frustrating. Additionally, some students face challenges with their internet connection and its associated costs, which can hinder their participation in online activities. Furthermore, non-e-tutoring students express a desire for interactions with their peers and a sense of belonging to a group or team. They value opportunities for collaboration and teamwork in their learning experience, underscoring the importance of fostering social interactions and creating a sense of community in online learning environments. This study recommended that students seek alternative support systems by reaching out to professors or academic advisors for guidance and clarification. Developing self-directed learning skills is essential, empowering students to take charge of their own learning by setting objectives, creating their own study plans, and utilising resources. For higher education institutions (HEIs), it was recommended that a variety of support services be available to cater to the needs of all students, including non-e-tutoring students. HEIs should also ensure easy access to online resources, promote a supportive community, and regularly evaluate and adapt their support techniques to meet students' changing requirements.

Keywords: online tutor, student support, online education, educational practices, distance education

Procedia PDF Downloads 82
129 Spatial Variation in Urbanization and Slum Development in India: Issues and Challenges in Urban Planning

Authors: Mala Mukherjee

Abstract:

Background: India is urbanising very fast, and urbanisation in India is treated as one of the most crucial components of economic growth. Though the pace of urbanisation (31.6 per cent in 2011) is slower than the average for Asia, the absolute number of people residing in cities and towns has increased substantially. Rapid urbanisation leads to urban poverty, and it is well represented in slums. Currently, India has four metropolises and 53 million-plus cities. All of them have significant slum populations, but the standard of living and the success of slum development programmes vary across regions. Objectives: The objectives of the paper are to show how urbanisation and slum development vary across space; to show the spatial variation in the standard of living in Indian slums; and to analyse how the implementation of slum development policies like JNNURM and Rajiv Awas Yojana varies across cities and brings different results in different regions, and what factors are responsible for such variation. Data Sources and Methodology: Census 2011 data on urban population and on slum households and amenities have been used for analysing the regional variation of urbanisation in the 53 million-plus cities of India, with special focus on the Kolkata Metropolitan Area. Statistical techniques like the z-score and PCA have been employed to work out a Standard of Living Deprivation score for the slums of the 53 metropolises. ArcGIS software is used for making maps. Standard of living has been measured in terms of access to basic amenities, infrastructure and assets such as drinking water, sanitation, housing condition, bank accounts, and so on. Findings: 1. The first finding reveals that migration and urbanisation are very high in Greater Mumbai, Delhi, Bangaluru, Chennai, Hyderabad and Kolkata, but the slum population is high in Greater Mumbai (50% of the population live in slums), Meerut, Faridabad, Ludhiana, Nagpur, Kolkata etc. Though the rate of urbanisation is high in the southern and western states, the percentage of slum population is high in the northern states (except Greater Mumbai). 2. Standard of living also varies widely. Slums of Greater Mumbai and north Indian cities score fairly high on the index, indicating that the standard of living is high in those slums compared with the slums of eastern India (Dhanbad, Jamshedpur, Kolkata). Therefore, though Kolkata has a relatively smaller percentage of slum population compared with north and south Indian cities, the standard of living in Kolkata’s slums is deplorable. 3. It is interesting to note that even within the Kolkata Metropolitan Area, slums located in the southern and eastern municipal towns like Rajpur-Sonarpur, Pujali, Diamond Harbour, Baduria and Dankuni have a lower standard of living compared with the slums located in the Hooghly Industrial Belt like Titagarh, Rishrah, Srerampore etc. Slums of the Hooghly Industrial Belt are older than the slums located in the eastern and southern parts of the urban agglomeration. 4. Therefore, urban development and the emergence of slums should not be the only issues of urban governance; the standard of living should be the main focus. Slums located in the main cities like Delhi, Mumbai and Kolkata get more attention from urban planners, and similarly, older slums in a city receive greater political attention compared with the slums of smaller cities and newly emerged slums in peripheral areas.
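
The z-score and PCA scoring described above can be sketched as follows; the amenity indicators and their values are hypothetical stand-ins for the Census 2011 variables actually used:

```python
# Sketch of a z-score + PCA composite deprivation score: indicators are
# standardized and the first principal component serves as the score.
# Indicator names and values are hypothetical, not the study's census data.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

slums = pd.DataFrame(
    {"tap_water": [0.8, 0.3, 0.5], "latrine": [0.7, 0.2, 0.4],
     "pucca_house": [0.6, 0.1, 0.5], "bank_account": [0.5, 0.2, 0.3]},
    index=["slum_A", "slum_B", "slum_C"],
)

z = StandardScaler().fit_transform(slums)        # z-scores per indicator
pca = PCA(n_components=1)
score = pca.fit_transform(z).ravel()             # 1st PC = composite score

# Orient the axis so that higher = more deprived (amenity loadings negative).
if pca.components_[0].sum() > 0:
    score = -score
print(pd.Series(score, index=slums.index, name="SLD_score"))
```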

Keywords: urbanisation, slum, spatial variation, India

Procedia PDF Downloads 360
128 Heritage, Cultural Events and Promises for Better Future: Media Strategies for Attracting Tourism during the Arab Spring Uprisings

Authors: Eli Avraham

Abstract:

The Arab Spring was widely covered in the global media, and the number of Western tourists traveling to the area began to fall. The goal of this study was to analyze which media strategies marketers in Middle Eastern countries chose to employ in their attempts to repair the negative image of the area in the wake of the Arab Spring. Several studies have been published concerning image-restoration strategies of destinations during crises around the globe; however, these strategies were not part of an overarching theory, conceptual framework or model from the fields of crisis communication and image repair. The conceptual framework used in the current study was the ‘multi-step model for altering place image’, which offers three types of strategies: source, message and audience. Three research questions were used: 1. What public relations crisis techniques and advertising campaign components were used? 2. What media policies and relationships with the international media were adopted by Arab officials? 3. Which marketing initiatives (such as cultural and sports events) were promoted? This study is based on qualitative content analysis of four types of data: (1) advertising components (slogans, visuals and text); (2) press interviews with Middle Eastern officials and marketers; (3) official media policy adopted by government decision-makers (e.g., boycotting or arresting newspeople); and (4) marketing initiatives (e.g., organizing heritage festivals and cultural events). The data were located in three channels from December 2010, when the events started, to September 30, 2013: (1) Internet and video-sharing websites: YouTube and Middle Eastern countries' national tourism board websites; (2) news reports from two international media outlets, The New York Times and Ha’aretz, which are considered quality newspapers that focus on foreign news and tend to criticize institutions; (3) global tourism news websites: eTurbo news and ‘Cities and countries branding’. Using the ‘multi-step model for altering place image,’ the analysis reveals that Middle Eastern marketers and officials used three kinds of strategies to repair their countries' negative image: 1. source (cooperation and media relations; complying with, threatening and blocking the media; and finding alternatives to the traditional media); 2. message (ignoring, limiting, narrowing or reducing the scale of the crisis; acknowledging the negative effect of an event’s coverage and assuring a better future; promoting multiple facets and exhibitions and softening the ‘hard’ image; hosting spotlight sporting and cultural events; spinning liabilities into assets; geographic dissociation from the Middle East region; ridiculing the existing stereotype); and 3. audience (changing the target audience by addressing others; emphasizing similarities and relevance to specific target audiences). It appears that dealing with their image problems will continue to be a challenge for officials and marketers of Middle Eastern countries until the region stabilizes and its regional conflicts are resolved.

Keywords: Arab spring, cultural events, image repair, Middle East, tourism marketing

Procedia PDF Downloads 285
127 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture

Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski

Abstract:

The aim of the presented investigation is to identify possible mechanical issues of the mini-plate connection used to treat mandible fractures and to check the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation using metal plates connected to the fractured bone parts by a set of screws. Two types of plate application methodology used by maxillofacial surgeons were investigated in this work; these patterns differ in the location and number of plates. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient, taken just after mini-plate application. The solid volume geometry, consisting of cortical and cancellous bone, was created from the acquired point cloud. The temporomandibular joint and the muscle system were simulated to imitate the real behavior of the masticatory system. The finite element mesh and analysis were performed with ANSYS software. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and the bones. The influence of the initial compression of the connected bone parts, or of the gap between them, was analyzed. Nonlinear material properties of the bone tissues and an elastic-plastic model of the titanium alloy were used. Three loading cases were investigated, each assuming a force of magnitude 100 N acting on the left molars, the right molars, or the incisors. The stress distribution within the connecting plate shows that compression of the bone parts in the connection results in high stress concentration in the plate and the screws; however, the maximum stress levels do not exceed the yield limit of the material (titanium). There are no significant differences between the negative-offset (gap) and no-offset conditions. The location of the external force influences the magnitude of stresses around both the plate and the bone parts. The two-plate system generally gives lower von Mises stress under the same loading than the one-plate approach. The von Mises stress distribution within the cortical bone shows a reduction of the high-stress field for the cases without compression (neutral initial contact). For initial prestressing, there is a visible, significant stress increase around the fixing holes of the bottom mini-plate due to the assembly stress. This local stress concentration may be the reason for bone destruction in those regions. The performed calculations prove that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a strong dependency between the mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and within the devices, and relative displacements of the bone parts at the interface) corresponding to the different models of the connection provide a basis for the mechanical optimization of mini-plate connections. The results of the performed numerical simulations were compared to clinical observations. They provide information helpful for better understanding of the load transfer in the mandible with the stabilizer and for improving stabilization techniques.
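
The yield checks described above rest on the von Mises criterion. For reference, here is a minimal sketch of the equivalent-stress computation from the six Cauchy stress components; the stress values and the Ti-6Al-4V yield strength are illustrative, not results from the ANSYS model:

```python
# Standard von Mises equivalent stress from the six Cauchy stress components,
# the criterion the abstract compares against the titanium yield limit.
# The numbers below are illustrative, not outputs of the study's FE model.
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Hypothetical stress state at a screw hole, in MPa:
sigma_vm = von_mises(420.0, 150.0, 30.0, 90.0, 10.0, 25.0)
YIELD_TI6AL4V = 880.0   # typical yield strength of Ti-6Al-4V, MPa (assumed)
print(f"von Mises = {sigma_vm:.0f} MPa; below yield: {sigma_vm < YIELD_TI6AL4V}")
```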

Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis

Procedia PDF Downloads 246
126 Cost-Conscious Treatment of Basal Cell Carcinoma

Authors: Palak V. Patel, Jessica Pixley, Steven R. Feldman

Abstract:

Introduction: Basal cell carcinoma (BCC) is the most common skin cancer worldwide and requires substantial resources to treat. When choosing between indicated therapies, providers consider their associated adverse effects, efficacy, cosmesis, and function preservation. The patient’s tumor burden, infiltrative risk, and risk of tumor recurrence are also considered. Treatment cost is often left out of these discussions. This can lead to financial toxicity, which describes the harm and quality-of-life reductions inflicted by high care costs. Methods: We studied the guidelines set forth by the American Academy of Dermatology for the treatment of BCC. A PubMed literature search was conducted to identify the costs of each recommended therapy. We discuss costs alongside treatment efficacy and side-effect profile. Results: Surgical treatment for BCC can be cost-effective if the appropriate treatment is selected for the presenting tumor. Curettage and electrodesiccation can be used in low-grade, low-recurrence tumors in aesthetically unimportant areas. The benefits of cost-conscious care are not likely to be outweighed by the risks of poor cosmesis or tumor return ($471 for a BCC of the cheek). When tumor burden is limited, Mohs micrographic surgery (MMS) offers better cure rates and lower recurrence rates than surgical excision (SE), at comparable cost (MMS $1263; SE $949). Surgical excision with permanent sections may be indicated when tumor burden is more extensive or if molecular testing is necessary. The utility of surgical excision with frozen sections, which costs substantially more than MMS without comparable outcomes, is less clear (SE with frozen sections $2334-$3085). Less data exists on non-surgical treatments for BCC. These techniques cost less, but recurrence risk is high. Side effects of nonsurgical treatment are limited to local skin reactions, and cosmesis is good. Cryotherapy, 5-FU, and MAL-PDT are all more affordable than surgery, but high recurrence rates increase the risk of secondary financial and psychosocial burden (recurrence rates 21-39%; cost $100-$270). Radiation therapy offers better clearance rates than other nonsurgical treatments but is associated with similar recurrence rates and a significantly larger financial burden ($2591-$3460 for a BCC of the cheek). Treatments for advanced or metastatic BCC are extremely costly, but few patients require their use, and the societal cost burden remains low. Vismodegib and sonidegib have good response rates but substantial side effects, and therapy should be combined with multidisciplinary care and palliative measures. Expert review has found sonidegib to be the less expensive and more efficacious option (vismodegib $128,358; sonidegib $122,579). Platinum therapy, while not FDA-approved, is also effective but expensive (~$91,435). Immunotherapy offers a new line of treatment in patients intolerant of hedgehog inhibitors ($683,061). Conclusion: Dermatologists working within resource-compressed practices and with resource-limited patients must prudently manage the healthcare dollar. Surgical therapies for BCC offer the lowest risk of recurrence at the most reasonable cost. Non-surgical therapies are more affordable, but high recurrence rates increase the risk of secondary financial and psychosocial burdens. Treatments for advanced BCC are incredibly costly, but the low incidence means the overall cost to the system is low.

Keywords: nonmelanoma skin cancer, basal cell skin cancer, squamous cell skin cancer, cost of care

Procedia PDF Downloads 123
125 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion and land-cover changes as well as atmospheric effects that affect people's lives and properties. Generally, the severity of a fire is calculated using the Normalized Burn Ratio (NBR) index. This is conventionally performed manually by comparing pre-fire and post-fire images: the dNBR is calculated as the bitemporal difference of the NBR values of the preprocessed satellite images. The burnt area is then classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using classification levels proposed by the USGS and comprises seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study was carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen to support regular burnt area severity mapping using a medium-spatial-resolution sensor (10-20 m). The tool uses machine learning classification techniques to identify burnt areas using the NBR and to classify their severity over a user-selected extent and period automatically. Cloud coverage is one of the biggest concerns when fire severity mapping is performed. In WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool. The tool includes a Graphical User Interface (GUI) to make it user-friendly. Its advantage is the ability to obtain burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate the performance of this tool. The Blue Mountains National Park forest, affected by the Australian fire season of 2019-2020, is used to describe the workflow of WWSAT. More than 7809 km2 of burnt area was detected at this site using Sentinel-2 data, giving an error below 6.5% when compared with the area mapped in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burned out, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These areas can also be detected through visual inspection of the cloud-free images generated by WWSAT. This tool is cost-effective in calculating the burnt area since satellite images are free and the cost of field surveys is avoided.
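
The core NBR/dNBR logic that WWSAT automates can be sketched against the Earth Engine Python API as follows. The area of interest and date windows are placeholders, and the sketch omits the tool's GUI, seven-class severity mapping, and machine learning steps:

```python
# Sketch of the NBR/dNBR computation the tool automates, using the Earth
# Engine Python API. The geometry and date windows are hypothetical; WWSAT
# itself is a GEE application with a GUI, so this shows only the core logic.
import ee
ee.Initialize()

region = ee.Geometry.Rectangle([150.2, -33.9, 150.8, -33.4])  # placeholder AOI

def nbr_composite(start, end):
    """Median cloud-filtered Sentinel-2 composite, reduced to NBR."""
    col = (ee.ImageCollection("COPERNICUS/S2_SR")
           .filterBounds(region)
           .filterDate(start, end)
           .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)))
    # NBR = (NIR - SWIR) / (NIR + SWIR), Sentinel-2 bands B8 and B12
    return col.median().normalizedDifference(["B8", "B12"]).rename("NBR")

pre = nbr_composite("2019-10-01", "2019-10-31")   # assumed pre-fire window
post = nbr_composite("2020-02-01", "2020-02-29")  # assumed post-fire window

dnbr = pre.subtract(post).rename("dNBR")
burnt = dnbr.gte(0.1)   # dNBR >= 0.1 classed as burnt, per the abstract
```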

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 234
124 Exploring Antimicrobial Resistance in the Lung Microbial Community Using Unsupervised Machine Learning

Authors: Camilo Cerda Sarabia, Fernanda Bravo Cornejo, Diego Santibanez Oyarce, Hugo Osses Prado, Esteban Gómez Terán, Belén Diaz Diaz, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

Antimicrobial resistance (AMR) represents a significant and rapidly escalating global health threat. Projections estimate that by 2050, AMR infections could claim up to 10 million lives annually. Respiratory infections, in particular, pose a severe risk not only to individual patients but also to the broader public health system. Despite the alarming rise in resistant respiratory infections, AMR within the lung microbiome (microbial community) remains underexplored and poorly characterized. The lungs, as a complex and dynamic microbial environment, host diverse communities of microorganisms whose interactions and resistance mechanisms are not fully understood. Unlike studies that focus on individual genomes, analyzing the entire microbiome provides a comprehensive perspective on microbial interactions, resistance gene transfer, and community dynamics, which are crucial for understanding AMR. However, this holistic approach introduces significant computational challenges and exposes the limitations of traditional analytical methods, such as the difficulty of identifying AMR. Machine learning has emerged as a powerful tool to overcome these challenges, offering the ability to analyze complex genomic data and uncover novel insights into AMR that might be overlooked by conventional approaches. This study investigates microbial resistance within the lung microbiome using unsupervised machine learning approaches to uncover resistance patterns and potential clinical associations. We downloaded and selected lung microbiome data from HumanMetagenomeDB based on metadata characteristics such as relevant clinical information, patient demographics, environmental factors, and sample collection methods. The metadata were further complemented by details on antibiotic usage, disease status, and other relevant descriptions. The sequencing data underwent stringent quality control, followed by functional profiling focused on identifying resistance genes through specialized databases such as the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. Subsequent analyses employed unsupervised machine learning techniques to unravel the structure and diversity of resistomes in the microbial community. Clustering methods such as K-Means and hierarchical clustering enabled the identification of sample groups based on their resistance gene profiles. The work was implemented in Python, leveraging a range of libraries such as Biopython for biological sequence manipulation, NumPy for numerical operations, scikit-learn for machine learning, Matplotlib for data visualization, and pandas for data manipulation. The findings from this study provide insights into the distribution and dynamics of antimicrobial resistance within the lung microbiome. By leveraging unsupervised machine learning, we identified novel resistance patterns and potential drivers within the microbial community.
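
A minimal sketch of this clustering step with scikit-learn and SciPy is shown below; the resistance-gene profile matrix is random stand-in data rather than the HumanMetagenomeDB/CARD-derived profiles used in the study:

```python
# Sketch of the unsupervised step described in the abstract: samples are
# represented by resistance-gene abundance profiles and grouped with K-Means
# and hierarchical (Ward) clustering. The data here are random stand-ins.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
profiles = rng.poisson(3.0, size=(40, 120)).astype(float)  # 40 samples x 120 AMR genes
X = StandardScaler().fit_transform(profiles)               # standardize per gene

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
hier_labels = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")

print("K-Means cluster sizes:", np.bincount(kmeans_labels))
print("Hierarchical cluster sizes:", np.bincount(hier_labels)[1:])
```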

Keywords: antibiotic resistance, microbial community, unsupervised machine learning, AMR gene sequences

Procedia PDF Downloads 23
123 Continued Usage of Wearable Fitness Technology: An Extended UTAUT2 Model Perspective

Authors: Rasha Elsawy

Abstract:

Aside from the rapid growth of global information technology and the Internet, another key trend is the swift proliferation of wearable technologies. The future of wearable technologies is very bright, as they represent an emerging revolution in the technological world. Beyond this, individuals' continuance intention toward IT is an important area that has drawn academics' and practitioners' attention. The literature shows that continuance usage is an important concern that needs to be addressed for any technology to be advantageous and for consumers to succeed. However, consumers noticeably abandon their wearable devices soon after purchase, losing all subsequent benefits that can only be achieved through continued usage. Purpose: This thesis aims to develop an integrated model designed to explain and predict consumers' behavioural intention (BI) and continued use (CU) of wearable fitness technology (WFT), identifying the determinants of the CU of technology. The question therefore arises as to whether there are differences between technology adoption and post-adoption (CU) factors. Design/methodology/approach: The study employs the unified theory of acceptance and use of technology 2 (UTAUT2), which has the best explanatory power, as an underpinning framework, extending it with further factors along with user-specific personal characteristics as moderators. All items will be adapted from previous literature and slightly modified for the WFT/smartwatch (SW) context. A longitudinal investigation will be carried out to examine the research model, wherein a survey will include the constructs involved in the conceptual model. A quantitative approach based on a questionnaire survey will collect data from existing wearable technology users. Data will be analysed using the structural equation modelling (SEM) method based on IBM SPSS Statistics and AMOS 28.0. Findings: The research findings will provide unique perspectives on user behaviour, intention, and actual continuance usage when accepting WFT. Originality/value: Unlike previous works, the current thesis comprehensively explores the factors that affect consumers' decisions to continue using wearable technology, which are technological/utilitarian, affective, emotional, psychological, and social, along with the role of the proposed moderators. This novel research framework extends the UTAUT2 model with additional contextual variables classified into Performance Expectancy, Effort Expectancy, Social Influence (societal pressure regarding body image), Facilitating Conditions, Hedonic Motivation (split into two concepts: perceived enjoyment and perceived device annoyance), Price Value, and Habit-forming techniques, adding technology upgradability as a determinant of consumers' behavioural intention and continuance usage of Information Technology (IT). Further, personality traits theory is used to propose relevant user-specific personal characteristics (openness to technological innovativeness, conscientiousness in health, extraversion, neuroticism, and agreeableness) as moderators of the research model. Thus, the present thesis provides a more convincing explanation, expected to provide theoretical foundations for future emerging-IT research (such as wearable fitness devices) from a behavioural perspective.
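
The analysis plan names IBM SPSS and AMOS; for readers who prefer an open-source route, the same covariance-based SEM idea can be sketched in Python with the semopy package. The model string below is a toy two-construct fragment of UTAUT2, not the thesis's full model, and `survey.csv` with items pe1..bi3 is a hypothetical dataset:

```python
# Illustrative SEM sketch using semopy as a Python stand-in for the
# SPSS/AMOS workflow described above. Model and data are hypothetical.
import pandas as pd
from semopy import Model

spec = """
# measurement model
PE =~ pe1 + pe2 + pe3        # Performance Expectancy items
BI =~ bi1 + bi2 + bi3        # Behavioural Intention items
# structural model
BI ~ PE
"""

data = pd.read_csv("survey.csv")   # hypothetical Likert-scale responses
model = Model(spec)
model.fit(data)
print(model.inspect())             # loadings, path coefficients, p-values
```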

Keywords: wearable technology, wearable fitness devices/smartwatches, continuance use, behavioural intention, upgradability, longitudinal study

Procedia PDF Downloads 113
122 Climate Change Implications on Occupational Health and Productivity in Tropical Countries: Study Results from India

Authors: Vidhya Venugopal, Jeremiah Chinnadurai, Rebekah A. I. Lucas, Tord Kjellstrom, Bruno Lemke

Abstract:

Introduction: The effects of climate change (CC) are largely discussed across the globe in terms of impacts on the environment and the general population, but the impacts on workers remain largely unexplored. The predicted rise in temperatures and heat events in the CC scenario has health implications for millions of workers in physically exerting jobs. The current health and productivity risks associated with heat exposures are characterized, future risk estimates as temperatures rise are presented, and recommendations towards developing protective and preventive occupational health and safety guidelines for India are discussed. Methodology: Cross-sectional studies were conducted in several occupational sectors with workers engaged in moderate to heavy labor (n=1580). Quantitative data on heat exposures (WBGT, °C) and physiological heat strain indicators, viz. core body temperature (CBT), urine specific gravity (USG) and sweat rate (SwR), were collected, along with qualitative data on heat-related health symptoms and productivity losses. Data were analyzed for associations between heat exposures and the health and productivity outcomes related to heat stress. Findings: Heat conditions exceeded the Threshold Limit Value (TLV) for safe manual work for 66% of the workers across several sectors (avg. WBGT of 28.7°C ± 3.1°C). Widespread concerns about heat-related health outcomes (86%) were prevalent among workers exposed to high TLVs, with excessive sweating, fatigue and tiredness being commonly reported. The heat strain indicators core temperature (14%), sweat rate (8%) and USG (9%) were above normal levels in the study population. A significant association was found between the rise in core temperature and WBGT exposures (p=0.000179). Elevated USG and SwR in the worker population indicate moderate dehydration, with potential risks of developing heat-related illnesses. In a steel industry with high heat exposures, an alarming 9% prevalence of kidney/urogenital anomalies was observed in a young workforce. Heat exposures above TLVs were associated with significantly increased odds of various adverse health outcomes (OR = 2.43, 95% CI 1.88 to 3.13, p < 0.0001) and productivity losses (OR = 1.79, 95% CI 1.32 to 2.4, p = 0.0002). Rough estimates of the proportion of workers who would be subjected to above-TLV levels in the various RCP scenarios are: RCP2.6, 79%; RCP4.5 and RCP6, 81%; and RCP8.5, 85%. Rising temperatures due to CC can further reduce already compromised health and productivity by subjecting workers to increased heat exposures; the projected exposures under the RCP scenarios are of concern for the country's occupational health and economy. Conclusion: The findings of this study clearly identify that health protection from hot weather will become increasingly necessary in the Indian subcontinent, and understanding the various adaptation techniques needs urgent attention. Further research with a multi-targeted approach to develop strategies for implementing interventions to protect millions of workers is imperative. Approaches to include health aspects of climate change within sectoral and climate-change-specific policies should be encouraged via a number of mechanisms, such as the “Health in All Policies” approach, to avert adverse health and productivity consequences as climate change proceeds.
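
The WBGT values reported above combine natural wet-bulb, globe, and dry-bulb temperatures. For reference, the standard outdoor weighting (ISO 7243) can be computed as in the sketch below; the TLV used in the comparison is an assumed example, since actual ACGIH limits depend on work rate and work-rest regimen:

```python
# Outdoor WBGT (ISO 7243) behind the exposure numbers in the abstract.
# The TLV of 28.0 C for moderate work is an assumed example threshold.
def wbgt_outdoor(t_nwb, t_globe, t_air):
    """WBGT (deg C) for solar-exposed work: natural wet bulb, globe, dry bulb."""
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

TLV_MODERATE_WORK = 28.0   # assumed example threshold, deg C WBGT

reading = wbgt_outdoor(t_nwb=26.5, t_globe=38.0, t_air=34.0)
print(f"WBGT = {reading:.1f} C; exceeds TLV: {reading > TLV_MODERATE_WORK}")
```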

Keywords: heat stress, occupational health, productivity loss, heat strain, adverse health outcomes

Procedia PDF Downloads 323
121 A Computer-Aided System for Tooth Shade Matching

Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan

Abstract:

Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented through dentists' visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers and digital image analysis systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to defects in measurements with these devices. Also, results acquired by devices with different measurement principles may be inconsistent with one another. It is therefore necessary to search for new methods for the dental shade matching process. A computer-aided system device, the digital camera, has developed rapidly up to today. Advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging, a procedure much cheaper than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a new method was recommended to compare the color of shade tabs captured by a digital camera using color features. This method showed that visual and computer-aided shade matching systems should be used in combination. Commonly used feature extraction techniques are based on shape description and do not use color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Since the color descriptor used in combination with a shape descriptor does not need to contain any spatial information, local histograms can be used. This local color histogram method remains reliable under photometric changes, geometrical changes and variations in image quality. Therefore, color-based local feature extraction methods are used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After the combination of these descriptors, the state-of-the-art descriptor known as Color-SIFT is used in this study. Finally, the image feature vectors obtained from a quantization algorithm are fed to classifiers such as k-Nearest Neighbor (KNN), Naive Bayes or Support Vector Machines (SVM) to determine the label(s) of the visual object category or matching. In this study, SVMs are used as classifiers for color determination and shade matching. Experimental results of this method will be compared with other recent studies. It is concluded that the proposed method is a remarkable development in computer-aided tooth shade determination systems.
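
A simplified sketch of the descriptor pipeline (SIFT shape descriptors concatenated with local color histograms, then an SVM) is shown below. It illustrates the idea only: the exact Color-SIFT formulation, the bag-of-words quantization step, and the study's training data are not reproduced:

```python
# Simplified Color-SIFT-style pipeline: per-keypoint SIFT descriptors are
# concatenated with local color histograms, then pooled into one vector per
# image. This is an illustration, not the study's exact formulation.
import cv2
import numpy as np
from sklearn.svm import SVC

def color_sift_features(bgr_image, patch=16):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, shape_desc = sift.detectAndCompute(gray, None)
    if shape_desc is None:                       # no keypoints found
        return np.zeros(128 + 24)
    feats = []
    for kp, d in zip(keypoints, shape_desc):
        x, y = int(kp.pt[0]), int(kp.pt[1])
        region = bgr_image[max(0, y - patch):y + patch, max(0, x - patch):x + patch]
        # Local color histogram (8 bins per BGR channel), no spatial info needed.
        hist = [cv2.calcHist([region], [c], None, [8], [0, 256]).ravel() for c in range(3)]
        color_desc = np.concatenate(hist)
        color_desc /= color_desc.sum() + 1e-9
        feats.append(np.concatenate([d, color_desc]))   # 128 + 24 dims
    return np.mean(feats, axis=0)   # crude pooling (stands in for BoW quantization)

# Hypothetical training: one feature vector per tooth image, shade-tab labels.
# X = np.vstack([color_sift_features(img) for img in images]); y = shade_labels
# clf = SVC(kernel="rbf").fit(X, y); predicted_shade = clf.predict([query_vec])
```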

Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction

Procedia PDF Downloads 444