Search results for: hybrid capture
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2939

509 The Impact of Professional Development in the Area of Technology Enhanced Learning on Higher Education Teaching Practices Across Atlantic Technological University – Research Methodology and Preliminary Findings

Authors: Annette Cosgrove

Abstract:

The objective of this research study is to examine the impact of professional development in Technology Enhanced Learning (TEL) and the digitisation of learning in teaching communities across multiple higher education sites in the ATU (Atlantic Technological University) (2020-2025), including the proposal of an evidence-based digital teaching model for use in a future pandemic. The research strategy undertaken for this PhD study is a multi-site study using mixed methods: qualitative and quantitative methods are being used to collect data. A pilot study was carried out initially, feedback was collected, and the research instrument was revised to reflect this feedback before being administered. The purpose of the staff questionnaire is to evaluate the impact of professional development in the area of TEL and to capture practitioners’ views on its perceived impact on their teaching practice in the higher education sector across ATU (West of Ireland, five higher education locations). The phenomenon being explored is ‘the impact of professional development in the area of technology enhanced learning on teaching practice in a higher education institution.’ The research methodology chosen for this study is action-based research; the researcher has chosen this approach as it is a prime strategy for developing educational theory and enhancing educational practice. This study includes quantitative and qualitative methods to elicit data that will quantify the impact that continuous professional development in digital teaching practice and technologies has on the practitioner’s teaching practice in higher education. The research instruments/data collection tools for this study include a lecturer survey with a targeted TEL practice group (pre- and post-COVID experience) and semi-structured interviews with lecturers.
This research is currently being conducted across the ATU multi-site campus, targeting higher education lecturers who have completed formal CPD in the area of digital teaching. ATU, a west of Ireland university, is the focus of the study. The research questionnaire has been deployed, with 75 respondents to date across the ATU; the primary questionnaire and semi-structured interviews are ongoing, the purpose being to evaluate the impact of formal professional development in the area of TEL and its perceived impact on practitioners’ teaching practice in digital teaching and learning. This paper presents initial findings, reflections and data from this ongoing research study.

Keywords: TEL, DTL, digital teaching, digital assessment

Procedia PDF Downloads 70
508 Do the Health Benefits of Oil-Led Economic Development Outweigh the Potential Health Harms from Environmental Pollution in Nigeria?

Authors: Marian Emmanuel Okon

Abstract:

Introduction: The Niger Delta region of Nigeria has a vast reserve of oil and gas, which has globally positioned the nation as the sixth largest exporter of crude oil. Production rose rapidly following oil discovery. In most oil-producing nations of the world, the wealth generated from oil production and export has propelled economic advancement, enabling the development of industries and other relevant infrastructure. It can therefore be assumed that oil resources such as Nigeria’s have the potential to improve the health of the population via job creation and derived revenues. However, the health benefits of this economic development might be offset by the environmental consequences of oil exploitation and production. Objective: This research aims to evaluate the balance between the health benefits of oil-led economic development and the harmful environmental consequences of crude oil exploitation in Nigeria. Study Design: A pathway has been designed to guide the data search and this study. The model created will assess the relationship between oil-led economic development and population health via job creation, improvement of education, development of infrastructure and other forms of development, as well as through harmful environmental consequences of oil activities. Data/Emerging Findings: Diverse potentially suitable datasets at different geographical scales have been identified, obtained or applied for, and the dataset from the World Bank has been the most thoroughly explored. This large dataset contains information that would enable a longitudinal assessment of both the health benefits and harms of oil exploitation in Nigeria, as well as identification of the disparities that exist between communities, states and regions. However, these data do not extend far enough back in time to capture the start of crude oil production, so the maximum economic benefits and health harms could be missed.
To deal with this shortcoming, the potential for a comparative study with countries such as the United Kingdom, Morocco and Côte d’Ivoire has also been taken into consideration, so as to evaluate the differences between these countries and identify areas for improvement in Nigeria’s environmental and health policies. Notwithstanding, these data have shown differences in each country’s economic, environmental and health state over time, along with corresponding summary statistics. Conclusion: In theory, the beneficial effects of oil exploitation on population health may be substantial, as large swaths of the ‘wider determinants’ of population health are influenced by the wealth of a nation. However, if uncontrolled, the consequences of environmental pollution and degradation may outweigh these benefits. There is thus a need to address this in order to improve environmental and population health in Nigeria.

Keywords: environmental pollution, health benefits, oil-led economic development, petroleum exploitation

Procedia PDF Downloads 339
507 Formulation of Hybrid Nanopowder-Molecular Ink for Fabricating Critical Material-Free Cu₂ZnSnS₄ Thin Film Solar Absorber

Authors: Anies Mutiari, Neha Bansal, Martin Artner, Veronika Mayer, Juergen Roth, Mathias Weil, Rachmat Adhi Wibowo

Abstract:

Cu₂ZnSnS₄ (CZTS, mineral name kesterite) has attracted considerable interest for photovoltaic applications owing to its optoelectrical properties. Moreover, its elemental abundance in the Earth’s crust offers a comparative advantage for envisaged large-scale photovoltaic deployment without material shortage issues. In this contribution, we present an innovative route to prepare a CZTS solar absorber layer for photovoltaic application via a low-cost and up-scalable process. CZTS layers were spin coated on molybdenum-coated glass from two inks based on different solvents: dimethylsulfoxide (DMSO) and ultrapure water. Into each solvent, 0.57 M CuCl₂, 0.39 M ZnCl₂, 0.53 M SnCl₂ and 1.85 M thiourea or Na₂S₂O₃, as well as pre-synthesized CZTS nanopowder, were added as sources of Cu, Zn, Sn and S. Crystallisation of the ink into dense CZTS layers was carried out by first annealing the as-deposited CZTS layer in open air at 300°C for 1 minute, followed by sulfurisation at 560–620°C under atmospheric pressure for 120 minutes. Complementary electron microscopy, grazing incidence X-ray diffraction and Raman spectroscopy investigations suggest that both solvents can be used to prepare high-quality, device-relevant CZTS solar absorber layers. The sulfurisation crystallises the as-deposited layer into a highly polycrystalline CZTS layer with tetragonal structure, demonstrated by the presence of tetrahedrally shaped grains about 1 µm in size. An advancement in CZTS layer preparation was made by gradual substitution of the volatile organic solvent DMSO with ultrapure water. It is revealed that, using the same air annealing and sulfurisation process, dense and compact CZTS layers can also be fabricated from an ink with reduced volatile organic compound content.
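As a rough illustration of the ink recipe above, the precursor masses for a given batch volume follow directly from the stated molarities. The sketch below uses standard molar masses; the 10 mL batch size is an arbitrary choice for illustration, not a value from the study:

```python
# Precursor masses for a CZTS ink batch, from the molarities in the abstract.
MOLAR_MASS = {          # g/mol (standard values)
    "CuCl2": 134.45,
    "ZnCl2": 136.28,
    "SnCl2": 189.61,
    "thiourea": 76.12,
}
MOLARITY = {            # mol/L, from the ink recipe above
    "CuCl2": 0.57,
    "ZnCl2": 0.39,
    "SnCl2": 0.53,
    "thiourea": 1.85,
}

def precursor_masses(volume_ml: float) -> dict:
    """Mass (g) of each precursor to dissolve in `volume_ml` of solvent."""
    v_l = volume_ml / 1000.0
    return {name: MOLARITY[name] * v_l * MOLAR_MASS[name] for name in MOLARITY}

masses = precursor_masses(10.0)   # a hypothetical 10 mL ink batch
# Cation ratio implied by the recipe: Cu/(Zn+Sn), i.e. a Cu-poor composition
cu_ratio = MOLARITY["CuCl2"] / (MOLARITY["ZnCl2"] + MOLARITY["SnCl2"])
```

The same arithmetic scales to any batch volume, which is one reason molarity-based recipes transfer well between labs.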

Keywords: kesterite, solar ink, spin coating, photovoltaics

Procedia PDF Downloads 171
506 Topography Effects on Wind Turbines Wake Flow

Authors: H. Daaou Nedjari, O. Guerri, M. Saighi

Abstract:

A numerical study was conducted to optimize the positioning of wind turbines over complex terrain. A two-dimensional disc model was used to calculate the flow velocity deficit in wind farms for both flat and complex configurations. The wind turbine wake was assessed using a hybrid method that combines CFD (Computational Fluid Dynamics) with the actuator disc model. The wind turbine rotor was represented by a thrust force coupled with the Navier-Stokes equations, which were solved by an open source computational code (Code_Saturne V3.0, developed by EDF). The simulations were conducted under atmospheric boundary layer conditions for a two-dimensional region located in the north of Algeria, at latitude 36.74°N and longitude 02.97°E. Topography elevation values were collected along a longitudinal direction of 1 km downwind. The wind turbine sited over topography was simulated for different elevation variations. The main aim of this study is to determine the effect of topography on the behaviour of wind farm wake flow. For this, the wake model applied in complex terrain first needs to isolate the singularity effects of topography on the vertical wind flow without the rotor disc. This step allows determination of the mixing scales and the friction-force zone near the ground: depending on the ground relief, the wind flow is disturbed by turbulence and significant speed variation. The singularities of the velocity field were thoroughly collected and the thrust coefficient Ct was calculated using the specific speed. In addition, to evaluate the effect of the terrain on the wake shape, the flow field was also simulated for different rotor hub heights. The distance between the ground and the turbine hub (Hhub) was tested in flat terrain for Hhub = 1.125D, Hhub = 1.5D and Hhub = 2D (D is the rotor diameter), with a roughness value of z0 = 0.01 m.
This study has demonstrated that topography induces a significant effect on wind turbine wakes compared to flat terrain.
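For intuition, the actuator-disc representation used in such studies reduces, in uniform flow, to one-dimensional momentum theory. A minimal sketch follows; the thrust coefficient and wake-decay constant are illustrative values, and the simple Jensen top-hat wake model stands in for the full CFD computation:

```python
import math

def axial_induction(ct: float) -> float:
    """Induction factor a from 1-D momentum theory: Ct = 4a(1 - a)."""
    return 0.5 * (1.0 - math.sqrt(1.0 - ct))

def jensen_deficit(ct: float, x: float, d: float, k: float = 0.05) -> float:
    """Fractional velocity deficit at distance x downstream of a rotor of
    diameter d (Jensen top-hat model, wake-decay constant k)."""
    return (1.0 - math.sqrt(1.0 - ct)) * (d / (d + 2.0 * k * x)) ** 2

# Example: Ct = 0.8, 80 m rotor, deficit 5 diameters downstream
deficit_5d = jensen_deficit(0.8, 5 * 80.0, 80.0)
```

In flat terrain the deficit decays monotonically downstream; the point of the study above is that terrain-induced speed-up and turbulence modify this behaviour.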

Keywords: CFD, wind turbine wake, k-epsilon model, turbulence, complex topography

Procedia PDF Downloads 563
505 Effects of Merging Personal and Social Responsibility with Sports Education Model on Students' Game Performance and Responsibility

Authors: Yi-Hsiang Pan, Chen-Hui Huang, Wei-Ting Hsu

Abstract:

The purposes of the study were as follows: 1) to explore the effect of merging teaching personal and social responsibility (TPSR) with the sports education model on students' game performance and responsibility; 2) to explore the effect of the sports education model alone on students' game performance and responsibility; and 3) to compare the difference between "merging TPSR with the sports education model" and the "sports education model" on students' game performance and responsibility. The participants included three high school physical education teachers and six physical education classes; each teacher taught an experimental group and a control group. There were 121 students: 65 in the experimental group and 56 in the control group. The research methods included game performance assessment, questionnaire investigation, interviews and focus group meetings. The research instruments included a personal and social responsibility questionnaire and a game performance assessment instrument. Paired t-tests and MANCOVA were used to test the difference between the two models on students' learning performance. The findings were: 1) "merging TPSR with the sports education model" showed significant improvements in students' game performance and in the responsibilities of self-direction, helping others and cooperation; 2) the "sports education model" also showed significant improvements in students' game performance and in the responsibilities of effort, self-direction and helping others; 3) there was no significant difference in game performance or responsibilities between the two models; 4) "merging TPSR with the sports education model" significantly improved the learning atmosphere and peer relationships, so it may be developed in the physical education curriculum.
The conclusions were as follows: both "merging TPSR with the sports education model" and the "sports education model" can help improve students' responsibility and game performance. However, "merging TPSR with the sports education model" can reduce the competitive atmosphere in highly intensive games between students. The curricular project of the hybrid TPSR-sport education model is a good approach for moral character education.
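A paired t-test of the kind used here compares each student's pre- and post-intervention scores within the same group. A minimal sketch of the statistic (the score lists below are invented for illustration, not the study's data):

```python
import math

def paired_t(pre, post):
    """Paired t statistic: mean of within-subject differences divided by
    the standard error of those differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical pre/post game-performance scores for four students
t = paired_t([1, 2, 3, 4], [2, 3, 5, 5])
```

The statistic is then compared against a t distribution with n - 1 degrees of freedom; the group comparisons in the study additionally used MANCOVA to adjust for covariates.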

Keywords: curriculum and teaching model, sports self-efficacy, sport enthusiasm, character education

Procedia PDF Downloads 313
504 Kinematical Analysis of Normal Children in Different Age Groups during Gait

Authors: Nawaf Al Khashram, Graham Arnold, Weijie Wang

Abstract:

Background: Gait classification allows clinicians to differentiate gait patterns into clinically important categories that help in clinical decision making. Reliable comparison of gait data between normal children and patients requires knowledge of the gait parameters of the relevant age group; however, there is still a lack of gait databases for normal children of different ages. Objectives: The aim of this study is to investigate the kinematics of the lower limb joints during gait for normal children in different age groups. Methods: Fifty-three normal children (34 boys, 19 girls) aged between 5 and 16 years were recruited. Three age groups were defined: young child (5-7 years), child (8-11 years) and adolescent (12-16 years). When a participant agreed to take part in the project, their parents signed a consent form. A Vicon® motion capture system was used to collect gait data. Participants were asked to walk at a comfortable speed along a 10-meter walkway, performing up to 20 trials each. Three good trials per participant were analysed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g. walking speed, cadence and stride length, and joint parameters, e.g. joint angles, forces and moments. Moreover, each gait cycle was divided into 8 phases, and the range of motion (ROM) of the pelvis, hip, knee and ankle joints in three planes of both limbs was calculated using an in-house program. Results: The temporal-spatial variables of the three age groups were compared, and significant differences (p < 0.05) were found between the groups. Step length and walking speed gradually increased from the young child to the adolescent group, while cadence gradually decreased. The mean and standard deviation (SD) of step length for the young child, child and adolescent groups were 0.502 ± 0.067 m, 0.566 ± 0.061 m and 0.672 ± 0.053 m, respectively.
The mean and SD of cadence for the young child, child and adolescent groups were 140.11 ± 15.79 steps/min, 129 ± 11.84 steps/min and 115.96 ± 6.47 steps/min, respectively. Moreover, significant differences were observed in kinematic parameters, both over the whole gait cycle and in each phase. For example, the ROM of the knee angle in the sagittal plane over the whole cycle in the young child group (65.03 ± 0.52 deg) was larger than in the child group (63.47 ± 0.47 deg). Conclusion: Our results show significant differences between the age groups across the gait phases; children's walking performance thus changes with age. It is therefore important for clinicians to consider age group when analysing patients with lower limb disorders before any clinical treatment.
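The reported temporal-spatial variables are linked by simple arithmetic: walking speed is step length multiplied by step rate. A quick sketch using the group means quoted above (assuming speed = step length × cadence / 60, which holds by definition of the quantities):

```python
# Group means (step length in m, cadence in steps/min), from the abstract
GROUP_MEANS = {
    "young child": (0.502, 140.11),
    "child": (0.566, 129.0),
    "adolescent": (0.672, 115.96),
}

def walking_speed(step_length_m: float, cadence_spm: float) -> float:
    """Walking speed (m/s) = step length (m) x steps per second."""
    return step_length_m * cadence_spm / 60.0

speeds = {group: walking_speed(*v) for group, v in GROUP_MEANS.items()}
```

The derived speeds increase from the young child to the adolescent group, consistent with the abstract's statement that walking speed rises with age even as cadence falls.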

Keywords: age group, gait analysis, kinematics, normal children

Procedia PDF Downloads 119
503 Multiaxial Fatigue in Thermal Elastohydrodynamic Lubricated Contacts with Asperities and Slip

Authors: Carl-Magnus Everitt, Bo Alfredsson

Abstract:

Contact mechanics and tribology have been combined with fundamental fatigue and fracture mechanics to form the asperity mechanism, which supplies an explanation for surface-initiated rolling contact fatigue damage, called pitting or spalling. The cracks causing the pits initiate at a surface point and thereafter grow slowly into the material before a piece of material chips off to form the pit. In the current study, the lubrication aspects of fatigue initiation are simulated by passing a single asperity through a thermal elastohydrodynamically lubricated (TEHL) contact. The physics of the lubricant was described with Reynolds' equation, and the lubricant's pressure-viscosity relation was modeled by the Roelands equation, formulated to include temperature dependence. A pressure-dependent shear limit was incorporated. To capture the full phenomena of the sliding contact, the temperature field was resolved through the incorporation of the energy flow. The heat was mainly generated by shearing of the lubricant and by dry friction where metal contact occurred; it was then transported, and conducted, away by the solids and the lubricant. The fatigue damage caused by the asperities was evaluated through Findley's fatigue criterion. The results show that asperities of the size of the surface roughness found in applications may cause surface-initiated fatigue damage and crack initiation. The simulations also show that the asperities broke through the lubricant film in the inlet, causing metal-to-metal contact with high friction. As the asperities thereafter moved through the contact, the sliding supplied them with lubricant, releasing the metal contact. This release was possible due to the high viscosity the lubricant obtained from the high pressure. The metal contact in the inlet caused higher friction, which increased the risk of fatigue damage.
Since the metal contact occurred in the inlet, it increased the fatigue risk more for asperities subjected to negative slip than for those subjected to positive slip. The fatigue evaluations therefore showed that asperities subjected to negative slip yielded higher fatigue stresses than asperities subjected to positive slip of equal magnitude. This is one explanation for why pitting is more common in the dedendum than the addendum of pinion gear teeth. The simulations provide further validation of the asperity mechanism by showing that asperities cause surface-initiated fatigue and crack initiation.
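The Roelands pressure-viscosity relation mentioned above can be sketched as follows. This is the isothermal form; the ambient viscosity and pressure-viscosity index are illustrative values, and the temperature-dependent terms used in the paper are omitted:

```python
import math

def roelands_viscosity(p: float, eta0: float = 0.04, z: float = 0.6,
                       p_r: float = 1.96e8) -> float:
    """Roelands pressure-viscosity relation (isothermal form):
    eta(p) = eta0 * exp((ln(eta0) + 9.67) * ((1 + p/p_r)**z - 1)),
    with p in Pa, eta0 the viscosity at ambient pressure (Pa*s),
    z the pressure-viscosity index, and p_r = 1.96e8 Pa a reference constant.
    """
    return eta0 * math.exp((math.log(eta0) + 9.67) * ((1.0 + p / p_r) ** z - 1.0))

# Viscosity at a typical EHL contact pressure of 0.5 GPa
eta_contact = roelands_viscosity(0.5e9)
```

The steep rise of viscosity with pressure is what allows the lubricant film to separate the surfaces again once an asperity has passed the inlet, as described in the abstract.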

Keywords: fatigue, rolling, sliding, thermal elastohydrodynamic

Procedia PDF Downloads 121
502 Construction of Graph Signal Modulations via Graph Fourier Transform and Its Applications

Authors: Xianwei Zheng, Yuan Yan Tang

Abstract:

The classical windowed Fourier transform has been widely used in signal processing, image processing, machine learning and pattern recognition, and the related Gabor transform is powerful enough to capture the texture information of a given dataset. Recently, in the emerging field of graph signal processing, researchers have been developing a theory to handle so-called graph signals. Within this developing theory, the windowed graph Fourier transform has been constructed to establish a time-frequency analysis framework for graph signals. The windowed graph Fourier transform is defined using translation and modulation operators for graph signals, following calculations similar to those of the classical windowed Fourier transform. Specifically, the translation and modulation operators for graph signals are defined using the Laplacian eigenvectors. For a given graph signal, translation is defined in a manner analogous to classical signal processing: where the classical translation operator can be expressed through the Fourier atoms, graph signal translation is expressed through the Laplacian eigenvectors. Modulation on the graph can likewise be established using the Laplacian eigenvectors, and the windowed graph Fourier transform based on these two operators has been applied to obtain time-frequency representations of graph signals. Fundamentally, the existing modulation operator mimics classical modulation by multiplying a graph signal with the entries of each Fourier atom; however, a single Laplacian eigenvector entry cannot play the same role as a Fourier atom, and this definition ignores the relationship between the translation and modulation operators. In this paper, a new definition of the modulation operator is proposed, and thus another time-frequency framework for graph signals is constructed.
Specifically, the relationship between the translation and modulation operations can be established through the Fourier transform: for any signal, the Fourier transform of its translation is the modulation of its Fourier transform. Thus, the modulation of any signal can be defined as the inverse Fourier transform of the translation of its Fourier transform. Analogously, the graph modulation of any graph signal can be defined as the inverse graph Fourier transform of the translation of its graph Fourier transform. This novel definition of the graph modulation operator establishes a relationship between the translation and modulation operations. The new modulation operation and the original translation operation are applied to construct a new framework for graph signal time-frequency analysis. Furthermore, a windowed graph Fourier frame theory is developed: necessary and sufficient conditions for constructing windowed graph Fourier frames, tight frames and dual frames are presented. The novel time-frequency analysis framework is applied to signals defined on well-known graphs, e.g. the Minnesota road graph and random graphs. Experimental results show that the novel framework captures new features of graph signals.
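A toy numerical sketch of this construction on a small path graph follows. The graph Fourier transform (GFT) is expansion in Laplacian eigenvectors; the cyclic shift of spectral coefficients below is a simplistic stand-in for the paper's graph translation operator applied in the spectral domain, used only to make the "modulation = inverse GFT of a translated spectrum" idea concrete:

```python
import numpy as np

n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0      # path graph adjacency
L = np.diag(A.sum(axis=1)) - A           # combinatorial graph Laplacian
lam, U = np.linalg.eigh(L)               # eigenvectors = graph Fourier basis

def gft(f):
    """Graph Fourier transform: coefficients in the Laplacian eigenbasis."""
    return U.T @ f

def igft(fh):
    """Inverse graph Fourier transform."""
    return U @ fh

def modulate(f, k):
    """Modulation as inverse GFT of a translated spectrum (here: a cyclic
    shift of spectral indices, a stand-in for graph translation)."""
    return igft(np.roll(gft(f), k))

f = np.random.default_rng(0).standard_normal(n)
g = modulate(f, 2)
```

Because U is orthonormal, any such modulation is norm-preserving, mirroring the classical fact that modulation is a unitary operation.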

Keywords: graph signals, windowed graph Fourier transform, windowed graph Fourier frames, vertex frequency analysis

Procedia PDF Downloads 341
501 A Multi-Criteria Decision Method for the Recruitment of Academic Personnel Based on the Analytical Hierarchy Process and the Delphi Method in a Neutrosophic Environment

Authors: Antonios Paraskevas, Michael Madas

Abstract:

For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and to innovative, high-quality teaching and learning practices. In this vein, the appropriate selection of academic staff is a very important factor in the competitiveness, efficiency and reputation of an academic institute. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision makers could utilize our approach in order to reach an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable given the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model is the first to apply the Neutrosophic Delphi Method to the selection of academic staff. As a case study, we apply our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous scholars' work and thus addressing the inherent ineffectiveness that becomes apparent in traditional multi-criteria decision-making methods when dealing with such situations.
As a further result, we show that our method demonstrates greater applicability and reliability when compared to other decision models.
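For readers unfamiliar with the AHP core that N-AHP generalizes, the following is a crisp (non-neutrosophic) sketch of the priority-vector and consistency computation. The 3×3 pairwise comparison matrix and the three criteria are invented for illustration:

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical criteria
# (teaching, research, service) on Saaty's 1-9 scale.
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # priority vector (criterion weights)

n = M.shape[0]
RI = {3: 0.58}                    # Saaty's random index for n = 3
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / RI[n]                   # consistency ratio; < 0.10 is acceptable
```

In the neutrosophic extension, each comparison becomes a triple of truth, indeterminacy and falsity memberships rather than a single crisp ratio, but the priority-extraction step has the same shape.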

Keywords: analytical hierarchy process, delphi method, multi-criteria decision making method, neutrosophic set theory, personnel recruitment

Procedia PDF Downloads 200
500 Understanding the Fundamental Driver of Semiconductor Radiation Tolerance with Experiment and Theory

Authors: Julie V. Logan, Preston T. Webster, Kevin B. Woller, Christian P. Morath, Michael P. Short

Abstract:

Semiconductors, as the base of critical electronic systems, are exposed to damaging radiation while operating in space, nuclear reactor, and particle accelerator environments. What innate property allows some semiconductors to sustain little damage while others accumulate defects rapidly with dose is, at present, poorly understood. This limits the extent to which radiation tolerance can be implemented as a design criterion. To address this problem of determining the driver of semiconductor radiation tolerance, the first step is to generate a dataset of the relative radiation tolerance of a large range of semiconductors (exposed to the same radiation damage and characterized in the same way). To accomplish this, Rutherford backscatter channeling experiments are used to compare the displaced-lattice-atom buildup in InAs, InP, GaP, GaN, ZnO, MgO, and Si as a function of step-wise alpha particle dose. With this experimental information on radiation-induced incorporation of interstitial defects in hand, hybrid density functional theory electron densities (and their derived quantities) are calculated, and their gradient and Laplacian are evaluated to obtain key fundamental information about the interactions in each material. It is shown that simple, undifferentiated values (which are typically used to describe bond strength) are insufficient to predict radiation tolerance. Instead, the curvature of the electron density at bond critical points provides a measure of radiation tolerance consistent with the experimental results obtained. This curvature and the associated forces surrounding bond critical points disfavor localization of displaced lattice atoms at these points, favoring their diffusion toward perfect lattice positions. With this criterion to predict radiation tolerance, simple density functional theory simulations can be conducted on potential new materials to gain insight into how they may operate in demanding high-radiation environments.
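The quantity at the heart of the argument, the curvature (second derivative) of the electron density at a bond critical point, can be illustrated with a deliberately simple one-dimensional stand-in: a density modeled as two Gaussian "atoms", with the critical point at the midpoint where the gradient vanishes. The Gaussian centers and width are arbitrary; the paper of course uses full hybrid-DFT densities in three dimensions:

```python
import math

def rho(x: float, centers=(-1.0, 1.0), width: float = 0.8) -> float:
    """Toy 1-D electron density: a sum of two Gaussian 'atoms'."""
    return sum(math.exp(-((x - c) / width) ** 2) for c in centers)

def second_derivative(f, x: float, h: float = 1e-4) -> float:
    """Central finite-difference estimate of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

# By symmetry the bond critical point sits at x = 0 (zero density gradient);
# the curvature there is what the tolerance criterion above evaluates.
curvature = second_derivative(rho, 0.0)
```

In this toy model the curvature along the "bond" axis is positive at the critical point, the one-dimensional analogue of the density curvature the study correlates with radiation tolerance.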

Keywords: density functional theory, GaN, GaP, InAs, InP, MgO, radiation tolerance, rutherford backscatter channeling

Procedia PDF Downloads 174
499 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka

Authors: Sakshi Dhumale, Madhushree C., Amba Shetty

Abstract:

The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. 
These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
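The ordinary kriging workflow described above can be sketched in a few lines. The well coordinates and water levels below are synthetic, and the exponential variogram with invented sill and range stands in for a model that would be fitted to the Berambadi data; the 12-point set mirrors the study's smallest (15%) subset:

```python
import numpy as np

def variogram(h, sill=1.0, length=500.0, nugget=0.0):
    """Exponential variogram model (h in the same units as the coordinates)."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / length))

def ordinary_kriging(xy, z, xy0, **vg):
    """Ordinary kriging estimate at location xy0 from observations (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))          # kriging system with the
    A[:n, :n] = variogram(d, **vg)       # unbiasedness (sum-to-one) constraint
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - xy0, axis=1), **vg)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)

rng = np.random.default_rng(1)
xy = rng.uniform(0, 1000, size=(12, 2))                 # 12 synthetic wells
z = 5.0 + 0.002 * xy[:, 0] + rng.normal(0, 0.1, 12)    # synthetic water levels
z0 = ordinary_kriging(xy, z, np.array([500.0, 500.0]))
```

With a zero nugget the estimator honours the data exactly at observed wells; the study's station-density experiment amounts to repeating such predictions with progressively thinned subsets of the 121 wells and scoring them against held-out observations.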

Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability

Procedia PDF Downloads 58
498 Living Together Apart: Gender Differences in Transnational Couple Living Perceptions in the Ghanaian Context

Authors: Rodlyn Remina Hines

Abstract:

Males and females respond differently to life situations, including transnational living. Being in a transnational marriage may put a strain on the relationship, requiring partners to adjust their behaviors and expectations of each other to accommodate disruptions in the relationship. When one partner is an immigrant in a new geographic location while the other remains in the native country, these disruptions may be intensified. This qualitative study examined gender differences in how married Ghanaian couples respond to making a life together while living across international borders. The study asked two questions: (1) What are the perceptions of males and females on transnational living? and (2) How do married males and females respond to transnational living situations? To answer these questions, semi-structured interviews were conducted with 24 married couples, each with one partner living in the United States (U.S.) and the other spouse in Ghana, recruited via purposive and snowball sampling techniques. Participants were aged 26 to 59 years, with an average age of 40; the average length of the relationship was 10.41 years, and the average period of living apart was 6.7 years. A hybrid inductive-deductive analysis strategy was used to derive emerging themes. The results highlight significant gender differences in responses to transnational living status and practices. The data indicate that transnational couples with the male spouse residing in the U.S. experience more relationship strain than when the female partner is the immigrant. Three couples who were in divorce proceedings at the time of the interview had the male partner residing in the U.S. and the female spouse in Ghana. These gender differences were also reflected in spousal visitation frequency, duration of spousal reunification, the amount and frequency of spousal remittances, and immigration processing procedures.
Finally, the data show that female immigrant partners manage the stresses and strains of transnational living better than their male counterparts. Findings from this study have implications for marriage and family practitioners and immigration policy makers.

Keywords: gender differences, Ghanaian couples, Ghanaian immigrants, transnational living

Procedia PDF Downloads 84
497 Constructing Practices for Lifestyle Journalism Education

Authors: Lucia Vodanovic, Bryan Pirolli

Abstract:

The London College of Communication is one of the only universities in the world to offer a lifestyle journalism master’s degree. A hybrid originally constructed largely out of a generic journalism program crossed with numerous cultural studies approaches, the degree has developed into a leading lifestyle journalism education attracting students worldwide. This research project seeks to present a framework for structuring the degree as well as to understand how students in this emerging field of study value the program. While some researchers have addressed questions about journalism and higher education, none have looked specifically at the increasingly important genre of lifestyle journalism, which Folker Hanusch defines as including notions of consumerism and critique, among other identifying traits. Lifestyle journalism, itself poorly researched by scholars, can relate to topics including travel, fitness, and entertainment, and as such, a lifestyle journalism degree should arguably prepare students to engage with these topics. This research uses the existing Master of Arts in Lifestyle Journalism at the London College of Communication as a case study to examine the school’s approach. Furthering Hanusch’s original definition, this master’s program attempts to characterize lifestyle journalism by a specific voice or approach, as reflected in the diversity of students’ final projects. This framework echoes the ethos and ideas of the university, which focuses on creativity, design, and experimentation. By analyzing the current degree as well as student feedback, this research aims to assist future educators in pursuing the often neglected field of lifestyle journalism.
Through an examination of the unique mix of practical coursework, theoretical lessons, and the broad scope of student work presented in this degree program, the researchers strive to develop a framework for lifestyle journalism education, referring to Mark Deuze’s ten questions for journalism education development. While Hanusch began the discussion to legitimize the study of lifestyle journalism, this project strives to go one step further and open up a discussion about the teaching of lifestyle journalism at the university level.

Keywords: education, journalism, lifestyle, university

Procedia PDF Downloads 307
496 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism

Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape

Abstract:

Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modeling was applied using machine learning algorithms to analyze motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15% compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism.
These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.
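The group-level contrasts reported above can be summarised with simple descriptive statistics. The sketch below is a minimal illustration, not the study's actual pipeline: the function name and the per-stride numbers are hypothetical, chosen only to mirror the reported differences (cycle time about 25% longer, stride length about 15% shorter in the ASD group), with the coefficient of variation as one common gait-variability measure:

```python
import numpy as np

def gait_metrics(cycle_times_s, stride_lengths_m):
    """Summarise gait from per-stride measurements (hypothetical wearable output)."""
    ct = np.asarray(cycle_times_s, dtype=float)
    sl = np.asarray(stride_lengths_m, dtype=float)
    return {
        "mean_cycle_time": ct.mean(),
        "cycle_time_cv": ct.std(ddof=1) / ct.mean(),  # coefficient of variation = variability
        "mean_stride_length": sl.mean(),
    }

# Illustrative numbers only, shaped to mirror the reported group differences
control = gait_metrics([1.00, 1.02, 0.98, 1.00], [1.30, 1.32, 1.28, 1.30])
asd = gait_metrics([1.25, 1.35, 1.15, 1.25], [1.10, 1.12, 1.08, 1.12])

print(f"cycle time +{(asd['mean_cycle_time'] / control['mean_cycle_time'] - 1):.0%}")
print(f"stride length {(asd['mean_stride_length'] / control['mean_stride_length'] - 1):.0%}")
```

In a real pipeline the per-stride values would be segmented from the raw accelerometer and joint-angle streams rather than entered by hand.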

Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders

Procedia PDF Downloads 23
495 Pandemic-Era WIC Participation in Delaware, U.S.: Participants' Experiences and Challenges

Authors: McKenna Halverson, Allison Karpyn

Abstract:

Introduction: The COVID-19 pandemic posed unprecedented challenges for families with young children in the United States. The Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), a federal nutrition assistance program that provides low-income mothers and young children with access to healthy foods (e.g., infant formula, milk, and peanut butter), mitigated some financial challenges for families. However, the U.S. experienced a national infant formula shortage and rising inflation rates during the pandemic, which likely impacted WIC participants’ shopping experiences and well-being. As such, this study aimed to characterize how the COVID-19 pandemic and related events impacted Delaware WIC participants’ in-store benefit redemption experiences and overall well-being. Method: The authors conducted semi-structured interviews with 51 WIC participants in Wilmington, Delaware. Survey measures included demographic questions and open-ended questions regarding participants’ experiences with WIC benefit redemption during the COVID-19 pandemic. Data were analyzed using a hybrid inductive and deductive coding approach. Findings: The COVID-19 pandemic significantly impacted WIC participants’ shopping experiences and well-being. Specifically, participants were forced to alter their shopping behaviors to account for rising food prices (e.g., used coupons, bought less food, used food banks). Additionally, WIC participants experienced significant distress during the national infant formula shortage resulting from difficulty finding formula to feed their children. Participants also struggled with in-store benefit redemption due to inconsistencies in shelf labelling, the WIC app, and low stock of WIC foods. 
These findings highlight the need to reexamine WIC operations and emergency food response policy in the United States during times of crisis to optimize public health and ensure that federal nutrition assistance programs meet the needs of low-income families with young children.

Keywords: benefit redemption, COVID-19 pandemic, infant formula shortage, inflation, shopping, WIC

Procedia PDF Downloads 75
494 Organisational Change: The Impact on Employees and Organisational Development

Authors: Maureen Royce, Joshi Jariwala, Sally Kah

Abstract:

Change is inevitable, but the change process is progressive. Organisational change is the process by which an organisation changes its strategies, operational methods, systems, culture, and structure to bring about something different in the organisation. This process can be continuous or developed over a period and driven by internal and external factors. Organisational change is essential if organisations are to survive in dynamic and uncertain environments. However, evidence from research shows that many change initiatives fail, leading to severe consequences for organisations and their resources. The complex models of third-sector organisations, i.e., social enterprises, compound the levels of change in these organisations. Interestingly, innovation is associated with change in social enterprises due to the hybridity of product and service development. Furthermore, the creation of social interventions has offered a new process and outcomes to the lifecycle of change. Therefore, different forms of organisational innovation are developed, i.e., total, evolutionary, expansionary, and developmental, which affect the interventions of social enterprises. This raises both theoretical and business concerns about how the competing hybrid nature of social enterprises changes, how change is managed, and the impact on these organisations. These perspectives present critical questions for further investigation. In this study, we investigate the impact of organisational change on employees and organisational development at DaDaFest, a disability arts organisation with a social focus based in Liverpool. The three main objectives are to explore the drivers of change and the implementation process; to examine the impact of organisational change on employees; and to identify barriers to organisational change and development. To address the preceding research objectives, a qualitative research design is adopted using semi-structured interviews.
Data is analysed using a six-step thematic analysis framework, which enables the study to develop themes depicting the impact of change on employees and organisational development. This study presents theoretical and practical contributions for academics and practitioners. The knowledge contributions encapsulate the evolution of change and the change cycle in a social enterprise. However, practical implications provide critical insights into the change management process and the impact of change on employees and organisational development.

Keywords: organisational change, change management, organisational change system, social enterprise

Procedia PDF Downloads 126
493 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach

Authors: Nina Ponikvar, Katja Zajc Kejžar

Abstract:

While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied based on one of two mainstream approaches: the multiplier approach or the value chain approach. Consequently, there is limited comparability of the empirical results due to the use of different methodological approaches in the literature. Furthermore, it is often unclear on what criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by the two most frequent methodological approaches to the valuation of economic effects of cultural heritage at the macroeconomic level, i.e., the multiplier approach and the value chain approach. We show that the multiplier approach provides a systematic, theory-based view of economic impacts but requires more data and analysis, whereas the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage, since there is a risk that the direct effects are overestimated and double counted, while not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complementary.
Consequently, a direct comparison of the estimated impacts is not possible and should not be done due to the different scope. To illustrate the difference of the impact assessment of the cultural heritage, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of cultural heritage sector in terms of turnover, gross value added and employment. The empirical results clearly show that the estimation of the economic impact of a sector using the multiplier approach is more conservative, while the estimates based on value added capture a much broader range of impacts. According to the multiplier approach, each euro in cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value-added approach, the indirect economic effect of the “narrow” heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
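As a back-of-the-envelope illustration of how the two scopes differ, the figures quoted above can be turned into simple totals. This is a hedged sketch, not the authors' model: the constant and function names are invented, and it assumes the value-chain figure of 6 euros refers to sales generated in other sectors on top of each direct euro:

```python
# Figures reported in the abstract (Slovenia, 2015-2022)
INDIRECT_PER_EURO = 0.14   # multiplier approach: indirect effect per euro of direct output
INDUCED_PER_EURO = 0.44    # multiplier approach: induced effect per euro of direct output
VALUE_CHAIN_SALES = 6.0    # value-chain approach: euros of sales in other sectors per heritage euro

def multiplier_total(direct_euros):
    """Total impact under the multiplier approach: direct + indirect + induced."""
    return direct_euros * (1 + INDIRECT_PER_EURO + INDUCED_PER_EURO)

def value_chain_total(direct_euros):
    """Total sales footprint under the value-chain reading: direct + linked sectors."""
    return direct_euros * (1 + VALUE_CHAIN_SALES)

print(multiplier_total(1.0))   # 1.58 euros per euro of heritage output
print(value_chain_total(1.0))  # 7.0 euros per euro of heritage sales
```

The gap between 1.58 and 7.0 per direct euro makes concrete why the paper argues the two estimates must not be compared directly.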

Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia

Procedia PDF Downloads 75
492 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People

Authors: James Boylan, Denny Meyer, Won Sun Chen

Abstract:

Background: Globally, there is an increasing trend in road traffic deaths, which reached 1.35 million in 2016 compared to 1.3 million a decade earlier; overall, road traffic injuries are ranked as the eighth leading cause of death across all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice the rate reported for older drivers aged 25 and above, at 3.5 road traffic fatalities per annum for every 10,000 licenses held. Telematics refers to a system with the ability to capture real-time data about vehicle usage. The data collected from telematics can be used to better assess a driver's risk. It is typically used to measure acceleration, turning, braking, and speed, as well as to provide locational information. Since the Australian government created the National Telematics Framework, there has been an increased government focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour relate to improvements in road safety attitudes as measured by the Driving Behaviour Questionnaire (DBQ). Methodology: 28 participants were recruited and given a telematics device to insert into their vehicles for the duration of the study. Each participant's driving behaviour over the course of the first month will be compared to their driving behaviour in the second month to determine whether feedback from telematics devices improves driving behaviour. Participants completed the DBQ, evaluated using a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning, after the first month, and after the second month of the study. This is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with the changes in the DBQ using regression models in SAS.
Results: The DBQ has provided a reliable measure (alpha = .823) of driving behaviour based on a sample of 23 participants, with an average of 50.5, a standard deviation of 11.36, and a range of 29 to 76; higher scores indicate worse driving behaviours. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that in the next six weeks, a larger sample of around 40 will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing a comparison with baseline levels. The trends in the telematics data over the first 30 days will be compared with the changes observed in the DBQ. Conclusions: It is expected that there will be a significant relationship between improvements in the DBQ and trends of reduced telematics-measured aggressive driving behaviours, supporting the hypothesis.
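The reliability figure quoted above (alpha = .823) is a Cronbach's alpha. For readers unfamiliar with the statistic, a minimal computation over a respondents-by-items score matrix looks like the sketch below; this is the standard formula, not the authors' SAS code, and the example scores are made up:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)      # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Three perfectly consistent items -> alpha = 1.0
scores = np.array([[0, 0, 0],
                   [2, 2, 2],
                   [4, 4, 4],
                   [5, 5, 5]])
print(round(cronbach_alpha(scores), 3))  # 1.0
```

Values near 1 indicate high internal consistency; the study's .823 is comfortably above the conventional .7 threshold.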

Keywords: telematics, driving behaviour, young drivers, driving behaviour questionnaire

Procedia PDF Downloads 106
491 Ribotaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties

Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret

Abstract:

Metagenomic classifiers are widely used for the taxonomic profiling of metagenomic data and the estimation of taxa relative abundance. Small subunit (SSU) rRNA genes are nowadays a gold standard for the phylogenetic resolution of complex microbial communities, although the full power of this marker is only realised when it is used at full length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers, and taxonomic classifiers. We then built a pipeline called RiboTaxa to generate a highly sensitive and specific metataxonomic approach. Using metagenomics data, RiboTaxa gave the best results compared to other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)), with precise taxonomic identification and relative abundance description and no false-positive detections. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analyses, opening new biological perspectives in human and environmental health. In a study focused on coral health involving 20 metagenomic samples (4), the affiliation of prokaryotes was limited to the family level, with Endozoicomonadaceae characterising healthy octocoral tissue. RiboTaxa highlighted 2 species of uncultured Endozoicomonas that were dominant in the healthy tissue. Both species belonged to a genus not yet described, opening new research perspectives on coral health. Applied to metagenomics data from a study on the human gut and extreme longevity (5), RiboTaxa detected the presence of an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years), highlighting an archaeal genus not yet described, and 3 uncultured species belonging to the genus Enorma that could be species of interest participating in the longevity process.
RiboTaxa is user-friendly and rapid, allows microbiota structure description from any environment, and produces results that are easily interpreted. The software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.
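The relative abundance description mentioned above is, at its simplest, a normalisation of per-taxon read counts. The sketch below illustrates only that generic post-processing step; it is not RiboTaxa's internal code, and the taxon names and counts are hypothetical:

```python
import numpy as np

def relative_abundance(read_counts):
    """Convert per-taxon read counts to relative abundances (fractions summing to 1)."""
    counts = np.asarray(read_counts, dtype=float)
    return counts / counts.sum()

# Hypothetical SSU rRNA read counts for three taxa in one sample
counts = {"Endozoicomonas sp. A": 600, "Endozoicomonas sp. B": 300, "other": 100}
abund = {taxon: float(f) for taxon, f in
         zip(counts, relative_abundance(list(counts.values())))}
print(abund)  # {'Endozoicomonas sp. A': 0.6, 'Endozoicomonas sp. B': 0.3, 'other': 0.1}
```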

Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker

Procedia PDF Downloads 121
490 The Usage of Bridge Estimator for Hegy Seasonal Unit Root Tests

Authors: Huseyin Guler, Cigdem Kosar

Abstract:

The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important factor for many economic time series. Some variables may contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to eliminate seasonality in seasonal macroeconomic data. There are several methods to eliminate the impacts of seasonality in time series. One of them is filtering the data. However, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method to eliminate seasonality is using seasonal dummy variables. Some seasonal patterns may result from stationary seasonal processes, which can be modelled using seasonal dummies. However, if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it, and it is not suitable to use them for modeling such seasonally non-stationary series. Instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different alternative methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary to choose the lag length and determine any deterministic components (i.e., a constant and trend) first, and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests.
Recent studies show that Bridge estimators perform well in selecting the optimal lag length while differentiating nonstationary from stationary models for nonseasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, and this leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare the size and power of this method with the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.
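For concreteness, the quarterly HEGY auxiliary regression to which the proposed Bridge estimator would be applied can be sketched as follows. This is a minimal version with a constant only, no augmentation lags, and plain OLS in place of the penalized Bridge estimation; the function name and the simulated series are illustrative assumptions:

```python
import numpy as np

def hegy_regressors(y):
    """Build the quarterly HEGY auxiliary regression (constant only, no lags)."""
    y = np.asarray(y, dtype=float)
    t = np.arange(5, len(y))  # leave room for the L^4 difference and lagged transforms
    d4 = y[t] - y[t - 4]                           # (1 - L^4) y_t, the dependent variable
    z1 = y[t-1] + y[t-2] + y[t-3] + y[t-4]         # (1+L+L^2+L^3) y_{t-1}: zero frequency
    z2 = -(y[t-1] - y[t-2] + y[t-3] - y[t-4])      # -(1-L+L^2-L^3) y_{t-1}: semiannual frequency
    z3_1 = -(y[t-1] - y[t-3])                      # -(1-L^2) y_{t-1}: annual frequency
    z3_2 = -(y[t-2] - y[t-4])                      # -(1-L^2) y_{t-2}: annual frequency
    X = np.column_stack([np.ones_like(d4), z1, z2, z3_1, z3_2])
    return d4, X

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200))        # toy series with a zero-frequency unit root
d4, X = hegy_regressors(y)
beta, *_ = np.linalg.lstsq(X, d4, rcond=None)  # OLS; t-tests on beta[1:] are the HEGY tests
print(beta.shape)
```

A Bridge version would replace the `lstsq` step with penalized estimation that can shrink the coefficients on `z1`, `z2`, and the `z3` pair (and on augmentation lags) exactly to zero, performing the unit-root decision and lag selection in one step.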

Keywords: bridge estimators, HEGY test, model selection, seasonal unit root

Procedia PDF Downloads 340
489 Data Confidentiality in Public Cloud: A Method for Inclusion of ID-PKC Schemes in OpenStack Cloud

Authors: N. Nalini, Bhanu Prakash Gopularam

Abstract:

The term data security refers to the degree of resistance or protection given to information from unintended or unauthorized access. The core principles of information security are confidentiality, integrity, and availability, also referred to as the CIA triad. Cloud computing services are classified as SaaS, IaaS, and PaaS services. With cloud adoption, confidential enterprise data are moved from organization premises to an untrusted public network, and due to this the attack surface has increased manifold. Several cloud computing platforms, like OpenStack, Eucalyptus, and Amazon EC2, allow users to build and configure public, hybrid, and private clouds. While traditional encryption based on PKI infrastructure still works in the cloud scenario, the management of public-private keys and trust certificates is difficult. Identity-based Public Key Cryptography (also referred to as ID-PKC) overcomes this problem by using publicly identifiable information for generating the keys and works well with decentralized systems. Users can exchange information securely without having to manage any trust information. Another advantage is that access control information (role-based access control policy) can be embedded into the data itself, unlike in PKI, where it is handled by a separate component or system. In the OpenStack cloud platform, the Keystone service acts as the identity service for authentication and authorization and has support for public key infrastructure for its services. In this paper, we explain the OpenStack security architecture and evaluate the PKI infrastructure piece for data confidentiality. We provide a method to integrate ID-PKC schemes for securing data both in transit and at rest and explain the key measures for safeguarding data against security attacks. The proposed approach uses the JPBC crypto library for key-pair generation based on the IEEE P1363.3 standard and secure communication to other cloud services.
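The key idea of ID-PKC, that a trusted private key generator (PKG) derives each user's private key from publicly identifiable information, can be illustrated with a deliberately simplified toy. The sketch below shows only the extraction step: an HMAC over the identity string is NOT a real pairing-based IBE scheme such as those standardized in IEEE P1363.3 or implemented with JPBC, and the class and identity names are invented for illustration:

```python
import hashlib
import hmac
import secrets

class ToyPKG:
    """Toy private key generator: derives a per-identity secret from a master secret.

    Illustrates the ID-PKC extraction step only; this HMAC construction is not a
    pairing-based IBE scheme and must not be used as one.
    """

    def __init__(self):
        self.master_secret = secrets.token_bytes(32)  # held only by the PKG

    def extract(self, identity: str) -> bytes:
        # Any publicly known identity (email, service name) maps to a private key,
        # so senders never need to look up or verify a certificate for the recipient.
        return hmac.new(self.master_secret, identity.encode(), hashlib.sha256).digest()

pkg = ToyPKG()
k1 = pkg.extract("alice@example.com")
k2 = pkg.extract("alice@example.com")  # deterministic: same identity, same key
k3 = pkg.extract("bob@example.com")
print(k1 == k2, k1 == k3)  # True False
```

In a real deployment the extraction would produce a key in a pairing group, and the PKG role (and its inherent key escrow) would be played by a hardened service analogous to Keystone.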

Keywords: data confidentiality, identity-based cryptography, secure communication, OpenStack Keystone, token scoping

Procedia PDF Downloads 384
488 Application of Vector Representation for Revealing the Richness of Meaning of Facial Expressions

Authors: Carmel Sofer, Dan Vilenchik, Ron Dotsch, Galia Avidan

Abstract:

Studies investigating emotional facial expressions typically reveal consensus among observers regarding the meaning of basic expressions, whose number ranges between 6 and 15 emotional states. Given this limited number of discrete expressions, how is it that the human vocabulary of emotional states is so rich? The present study argues that perceivers use sequences of these discrete expressions as the basis for a much richer vocabulary of emotional states. Such mechanisms, in which a relatively small number of basic components is expanded into a much larger number of possible combinations of meanings, exist in other human communication modalities, such as spoken language and music. In these modalities, letters and notes, which serve as the basic components of spoken language and music respectively, are temporally linked, resulting in the richness of expression. In the current study, in each trial participants were presented with sequences of two images containing facial expressions in different combinations sampled from the eight static basic expressions (64 in total; 8x8). In each trial, participants were required to judge, using a single word, the 'state of mind' portrayed by the person whose face was presented. Utilizing word embedding methods (Global Vectors for Word Representation), employed in the field of Natural Language Processing, and relying on machine learning computational methods, it was found that the perceived meanings of the sequences of facial expressions were a weighted average of the single expressions comprising them, resulting in 22 new emotional states in addition to the eight classic basic expressions. An interaction between the first and the second expression in each sequence indicated that each facial expression modulated the effect of the other, leading to a different interpretation ascribed to the sequence as a whole.
These findings suggest that the vocabulary of emotional states conveyed by facial expressions is not restricted to the (small) number of discrete facial expressions. Rather, the vocabulary is rich, as it results from combinations of these expressions. In addition, the present research suggests that using word embeddings in social perception studies can be a powerful, accurate, and efficient tool to capture explicit and implicit perceptions and intentions. Acknowledgment: The study was supported by a grant from the Ministry of Defense in Israel to GA and CS. CS is also supported by the ABC initiative at Ben-Gurion University of the Negev.
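The weighted-average account described above can be illustrated with toy vectors. The sketch below is an illustrative assumption, not the study's GloVe pipeline: the 3-dimensional "embeddings", the vocabulary, and the equal weights are all made up, and a nearest-neighbour lookup by cosine similarity stands in for the reported mapping from blended vectors to emotion words:

```python
import numpy as np

# Toy 3-d "embeddings"; real GloVe vectors are 50-300 dimensional
vocab = {
    "happy": np.array([1.0, 0.0, 0.0]),
    "surprised": np.array([0.0, 1.0, 0.0]),
    "afraid": np.array([0.0, 0.8, 0.6]),
    "delighted": np.array([0.9, 0.4, 0.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def sequence_meaning(expr1, expr2, w1=0.5, w2=0.5):
    """Interpret a two-expression sequence as the weighted average of its embeddings."""
    blend = w1 * vocab[expr1] + w2 * vocab[expr2]
    # Nearest remaining vocabulary word approximates the perceived 'state of mind'
    candidates = {w: v for w, v in vocab.items() if w not in (expr1, expr2)}
    return max(candidates, key=lambda w: cosine(blend, candidates[w]))

print(sequence_meaning("happy", "surprised"))  # -> 'delighted'
```

Unequal weights `w1` and `w2` would model the reported interaction, in which one expression in the sequence modulates the contribution of the other.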

Keywords: GloVe, face perception, facial expression perception, facial expression production, machine learning, word embedding, word2vec

Procedia PDF Downloads 176
487 Informal Green Infrastructure as Mobility Enabler in Informal Settlements of Quito

Authors: Ignacio W. Loor

Abstract:

In the context of informal settlements in Quito, this paper provides evidence that the slopes and deep ravines typical of Andean cities, around which marginalized urban communities sit, constitute a platform for green infrastructure that supports pedestrian mobility in an incremental fashion. This informally shaped green infrastructure provides connectivity to other mobility infrastructures such as roads and public transport, which permits relegated dwellers to reach their daily destinations and reclaim their rights to the city. This is relevant in that walking has been increasingly neglected as a viable means of transport in Latin American cities in favor of motorized means, for which the mobility benefits of green infrastructure have remained invisible to policymakers, contributing to the progressive isolation of informal settlements. This research draws heavily on an ecological rejuvenation programme led by the municipality of Quito and the Andean Corporation for Development (CAN) intended to rehabilitate the ecological functionalities of ravines. Accordingly, four ravines in different stages of rejuvenation were chosen in order to capture, through ethnographic methods, the practices they support for dwellers of informal settlements across different stages, particularly issues of mobility. Then, by presenting fragments of interviews, descriptions of observed phenomena, photographs, and narratives published in institutional reports and media, the paper explains the production process of mobility infrastructure over unoccupied slopes and ravines and the roles this infrastructure plays in the mobility of dwellers and their quotidian practices. For informal settlements, which normally feature scant urban infrastructure, poor mobility constrains the possibilities of dwellers to actively participate in the social, economic, and political dimensions of the city, for which their rights to the city are widely neglected.
Nevertheless, informal green infrastructure for mobility provides some alleviation. This infrastructure is incremental, since its features and usability gradually evolve as users put into it knowledge, labour, devices, and connectivity to other infrastructures, which increases its dependability. This is evidenced in the diffusion of knowledge of trails and footpath routes among users, the implementation of linking stairs and bridges, improved access through the production of public spaces adjacent to the ravines, the illumination of surrounding roads, and ultimately, the restoration of the ecological functions of the ravines. However, the perpetuity of this type of infrastructure is also fragile and vulnerable to the course of urbanisation, densification, and the expansion of gated, privatised spaces.

Keywords: green infrastructure, informal settlements, urban mobility, walkability

Procedia PDF Downloads 164
486 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data

Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton

Abstract:

The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. In this phase of the industrial revolution, traditional manufacturing processes are combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, its processes must first be digitized: the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and the type of contribution to the research area. Upon analyzing the 54 papers identified in this area, it was noted that 23 originated in Germany. This is an unsurprising finding, as Industry 4.0 is originally a German strategy supported by strong policy instruments in Germany. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area: an unresolved divide between data science experts and manufacturing process experts in industry. Data analytics expertise is of limited use unless manufacturing process information is brought to bear.
A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. A gap exists between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test whether this gap exists, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory. However, only three papers contributed a methodology. This provides further evidence of the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.
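The contribution-type tally reported above can be reproduced with a short sketch. The category labels and counts below are taken from the study's stated results; the remaining papers of the 54 are simply left out here, as their classification is not given.

```python
from collections import Counter

# One label per classified paper, using the counts reported in the study
# (12 framework, 12 case study, 11 theory, 3 methodology).
contributions = (["framework"] * 12 + ["case study"] * 12
                 + ["theory"] * 11 + ["methodology"] * 3)

tally = Counter(contributions)
# Rank contribution types from most to least frequent
ranking = tally.most_common()
```

`ranking` places "methodology" last with a count of three, which is the evidence the authors cite for the missing industry-focused methodology.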

Keywords: analytics, digitization, industry 4.0, manufacturing

Procedia PDF Downloads 111
485 District 10 in Tehran: Urban Transformation and the Survey Evidence of Loss in Place Attachment in High Rises

Authors: Roya Morad, W. Eirik Heintz

Abstract:

The identity of a neighborhood is inevitably shaped by the architecture and the people of that place. Conventionally, the streets within each neighborhood served as a semi-public/private extension of the private living spaces. The street as a design element formed a hybrid condition that was neither totally public nor private, and it encouraged social interaction. By creating a sense of community, it helped meet one of the most basic human needs: belonging. Like other major global cities, Tehran has undergone serious urbanization. Developing into a capital city of high rises has resulted in an increase in urban density. Although allocating more residential units to each neighborhood was a critical response to the population boom and the limited land area of the city, it also created a crisis in terms of social communication and place attachment. District 10 in Tehran has undergone the most urban transformation among the city's 22 districts and currently has the highest population density. This paper explores how the active streets of District 10 have changed into their current condition of high rises lacking meaningful social interaction amongst their inhabitants. A residential building can be thought of as a large group of people. One would expect that as the number of people increases, the opportunities for social communication would increase as well. However, according to the survey, the relationship is inverse: as the number of people in a residential building increases, the quality of each acquaintance is reduced, and the depth of relationships between people tends to decrease. This stems from the anonymity of being part of a crowd and the lack of social spaces that characterizes most high-rise apartment buildings. Without a sense of community, attachment to a neighborhood is diminished.
The paper further explores how the neighborhood can help fulfill one's need for social interaction, and focuses on the qualitative aspects of alternative spaces that can redevelop the sense of place attachment within the community.

Keywords: high density, place attachment, social communication, street life, urban transformation

Procedia PDF Downloads 127
484 A New Co(II) Metal Complex Templated with the 4-Dimethylaminopyridine Organic Cation: Structural, Hirshfeld Surface, Phase Transition, Electrical Study and Dielectric Behavior

Authors: Mohamed Dammak

Abstract:

Great attention has been paid to the design and synthesis of novel organic-inorganic compounds in recent decades because of their structural variety and the large diversity of atomic arrangements. In this work, the structure of the novel dimethylaminopyridine tetrachlorocobaltate (C₇H₁₁N₂)₂CoCl₄, prepared by the slow evaporation method at room temperature, is discussed. The X-ray diffraction results indicate that the hybrid material has a triclinic structure with a P space group and features a 0D structure containing isolated, distorted [CoCl₄]²⁻ tetrahedra interposed between [C₇H₁₁N₂]⁺ cations, forming planes perpendicular to the c axis at z = 0 and z = ½. The structure reflects the synthesis conditions and the reactants used; the interactions between the cationic planes and the isolated [CoCl₄]²⁻ tetrahedra occur through N–H⋯Cl and C–H⋯Cl hydrogen-bonding contacts. Hirshfeld surface analysis helps to assess the strength of the hydrogen bonds and to quantify the intermolecular contacts. A phase transition was discovered by thermal analysis at 390 K, and comprehensive dielectric research was carried out, showing good agreement with the thermal data. Impedance spectroscopy measurements were used to study the electrical and dielectric characteristics over a wide range of frequencies and temperatures, 40 Hz–10 MHz and 313–483 K, respectively. The Nyquist plot (Z″ versus Z′) from the complex impedance spectrum revealed semicircular arcs described by a Cole-Cole model. An equivalent electrical circuit consisting of linked grain and grain-boundary elements is employed. The real and imaginary parts of the dielectric permittivity, as well as tan(δ), of (C₇H₁₁N₂)₂CoCl₄ at different frequencies reveal a distribution of relaxation times. The presence of grains and grain boundaries is confirmed by the modulus investigations. Electric and dielectric analyses highlight the good protonic conduction of this material.
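The Cole-Cole description of the semicircular Nyquist arcs can be sketched numerically. The parameter values below (R₀, R∞, τ, α) are illustrative assumptions, not the fitted values from this study.

```python
import numpy as np

def cole_cole_impedance(omega, r_inf, r0, tau, alpha):
    """Cole-Cole impedance model: Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)^alpha).

    alpha = 1 recovers the ideal Debye semicircle; alpha < 1 depresses the arc,
    reflecting a distribution of relaxation times.
    """
    return r_inf + (r0 - r_inf) / (1.0 + (1j * omega * tau) ** alpha)

# Illustrative angular-frequency sweep and parameters (not fitted values)
omega = np.logspace(1, 8, 200)
z = cole_cole_impedance(omega, r_inf=50.0, r0=5000.0, tau=1e-4, alpha=0.85)
# Nyquist plot data: Z' on the x axis, -Z'' on the y axis
nyquist = np.column_stack([z.real, -z.imag])
```

At low frequency the impedance approaches R₀ and at high frequency it approaches R∞, tracing the depressed semicircle described in the abstract.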

Keywords: organic-inorganic, phase transitions, complex impedance, protonic conduction, dielectric analysis

Procedia PDF Downloads 85
483 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data

Authors: Huinan Zhang, Wenjie Jiang

Abstract:

Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and the study and utilization of the inner thermal core structure of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse- and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. First, the thermal core information along the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. In the first stage, these images provide a coarse-grained wind speed estimate. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network, and are fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage in place of traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021.
Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major classes: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with this data.
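Two ingredients of the pipeline above, the maximal intensity projection and the focal loss, can be sketched briefly. The array shapes, parameter values, and the binary form of the focal loss below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Collapse a 3-D brightness-temperature volume to a 2-D image by taking
    the maximum along the pressure (vertical) axis."""
    return np.max(volume, axis=axis)

def focal_loss(probs, targets, gamma=2.0):
    """Binary focal loss, which down-weights easy examples to cope with
    long-tailed class distributions: FL = -(1 - p_t)^gamma * log(p_t)."""
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

# Hypothetical ATMS-like volume: 22 pressure levels over a 64x64 grid
rng = np.random.default_rng(0)
volume = rng.normal(loc=250.0, scale=10.0, size=(22, 64, 64))
mip_image = max_intensity_projection(volume)  # shape (64, 64)

probs = np.array([0.9, 0.6, 0.2])   # predicted P(class = 1), illustrative
targets = np.array([1, 1, 0])
loss = focal_loss(probs, targets)
```

With gamma = 0 the focal loss reduces to ordinary cross-entropy; raising gamma shrinks the contribution of well-classified (high p_t) examples, which is why it suits the long-tail intensity distribution the paper describes.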

Keywords: artificial intelligence, deep learning, data mining, remote sensing

Procedia PDF Downloads 63
482 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) to detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite enabled prediction of specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques, and CNN models with ensemble modelling techniques, did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges, such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
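The idea of path-context extraction from an AST can be illustrated with Python's own ast module. This is a simplified sketch of the representation (Code2Vec itself operates on Java/C++ ASTs and hashes its path tokens); the function names and the path-joining scheme below are chosen for illustration.

```python
import ast
from itertools import combinations

def leaf_paths(tree):
    """Collect root-to-leaf paths of AST node-type names."""
    paths = []
    def walk(node, prefix):
        label = type(node).__name__
        children = list(ast.iter_child_nodes(node))
        if not children:
            paths.append(prefix + [label])
        for child in children:
            walk(child, prefix + [label])
    walk(tree, [])
    return paths

def path_contexts(source):
    """Build (leaf, path, leaf) triples for every pair of AST leaves,
    mimicking the path-context representation used by Code2Vec-style models."""
    paths = leaf_paths(ast.parse(source))
    triples = []
    for a, b in combinations(paths, 2):
        # The connecting path goes up from leaf a and back down to leaf b.
        path = "^".join(reversed(a)) + "_" + "v".join(b)
        triples.append((a[-1], path, b[-1]))
    return triples
```

For the snippet `"x = 1"` the tree has two leaves (the Store context of `x` and the Constant `1`), giving a single path-context; richer expressions yield one triple per leaf pair, and these triples are what a Code2Vec-style model embeds and aggregates.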

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 106
481 Perceptions of Pregnant Women on the Transitional Use of Traditional Medicine in the Transitional District Western Uganda

Authors: Demmiele Matu Kiiza, Constantine Steven Labongo Loum, Julaina Obika Asinasi

Abstract:

Background: The use of traditional medicine in Uganda forms the preliminary therapeutic approach for many people. Traditional medicines have been used in Uganda for many years, not only for the management of pregnancy-related complications but also for the management of other physical and psychological illnesses, and they are often considered the first line of treatment by a considerable number of people. This study therefore sought to explore the lived experiences of pregnant women by assessing their perceptions of the transitional use of traditional medicine. Methods: Ethnography was used to capture data from an emic perspective. The ethnographic approach involved visiting a few selected pregnant women to observe and participate in the identification of traditional medicines. The ethnographic fieldwork was carried out over a period of three months. In-depth interviews were conducted, audio recorded, and later transcribed verbatim. Data were then analyzed thematically: the audio transcripts and field notes were read through, statements made by research participants were identified and coded, and themes were generated from commonly mentioned experiences of using traditional medicine. Results: The findings revealed that women performed a ritual of ‘cutting the cord’ by making a small horizontal incision on the belly across the linea nigra (also known as the pregnancy line) at around six months of pregnancy, to avoid delivering a baby with the umbilical cord tied around its neck.
They also used crushed egg shells, crushed snail shells, and herbs such as pawpaw roots, Entarahompo (Crassocephalum vitelline), and Ekyoganyanja (Erlangea tomentose) to manage Omushohokye (a term used by the study participants for a condition in which women pass too much water when giving birth, produce a child with mold, or ooze a milky liquid through the breasts before giving birth), to prepare for safe delivery, and to manage pregnancy-related complications. The study recommends implementing a traditional medicine use policy through a bottom-up approach, designing and implementing culturally sensitive maternal healthcare intervention programs, and involving village health teams and the elderly in health education.

Keywords: traditional medicine, pregnant women, Uganda, perceptions

Procedia PDF Downloads 96
480 Integration of Agile Philosophy and Scrum Framework to Missile System Design Processes

Authors: Misra Ayse Adsiz, Selim Selvi

Abstract:

In today's world, technology is racing against time. To catch up with the world's leading companies and adapt quickly to change, it is necessary to speed up processes and keep pace with the rate of technological change. Missile system design processes handled with classical methods fall behind in this race, because customer requirements are unclear and demands change again and again during design. Therefore, a methodology suitable for the dynamics of missile system design was investigated, and the processes used to keep up with the era were examined. An analysis of commonly used design processes showed that none of them is dynamic enough for today's conditions, so a hybrid design process was established. After a detailed review of the existing processes, it was decided to focus on the Scrum framework and the agile philosophy. Scrum is a process framework focused on developing software and handling change with rapid methods; the agile philosophy, likewise, is intended to respond quickly to change. This study aims to integrate the Scrum framework and agile philosophy, the most appropriate approaches for rapid production and adaptation to change, into the missile system design process. With this approach, the design team involved in the system design process stays in communication with the customer and follows an iterative approach to change management. These methods, currently used in the software industry, have been integrated with the product design process. A team was created for the system design process, and the Scrum roles were filled, with the customer included. A Scrum team consists of the product owner, the development team, and the Scrum master. Scrum events, which are short, purposeful, and time-limited, are organized to serve coordination rather than long meetings.
Instead of the classic system design methods used in product development studies, a missile design was carried out with this blended method. With the help of this design approach, it becomes easier to anticipate changing customer demands, produce quick solutions to those demands, and combat uncertainties in the product development process. With feedback from the customer included in the process, the work progresses toward marketing, design, and financial optimization.

Keywords: agile, design, missile, scrum

Procedia PDF Downloads 168