Search results for: NSGA-II Constraints handling.
296 Collaborative and Experimental Cultures in Virtual Reality Journalism: From the Perspective of Content Creators
Authors: Radwa Mabrook
Abstract:
Virtual Reality (VR) content creation is a complex and expensive process, which requires multi-disciplinary teams of content creators. Grant schemes from technology companies help media organisations explore the potential of VR in journalism and factual storytelling. Media organisations try to do as much as they can in-house, but they may outsource due to time constraints and skill availability. Journalists, game developers, sound designers and creative artists work together and bring in new cultures of work. This study explores the collaborative, experimental nature of VR content creation by tracing every actor involved in the process and examining their perceptions of the VR work. The study builds on Actor Network Theory (ANT), which decomposes phenomena into their basic elements and traces the interrelations among them. Accordingly, the researcher conducted 22 semi-structured interviews with VR content creators between November 2017 and April 2018. Purposive and snowball sampling techniques allowed the researcher to recruit fact-based VR content creators from production studios and media organisations, as well as freelancers. Interviews lasted up to three hours, and they were a mix of Skype calls and in-person interviews. Participants consented to their interviews being recorded and to their names being revealed in the study. The researcher coded the interview transcripts in NVivo software, looking for key themes that correspond with the research questions. The study revealed that VR content creators must be adaptive to change, open to learning and comfortable with mistakes. The VR content creation process is very iterative because VR has no established workflow or visual grammar. Multi-disciplinary VR team members often speak different languages, making it hard to communicate. However, adaptive content creators perceive VR work as a fun experience and an opportunity to learn. The traditional sense of competition and the drive for information exclusivity are now replaced by a strong drive for knowledge sharing. VR content creators are open to sharing their methods of work and their experiences. They aim to build a collaborative network that harnesses VR technology for journalism and factual storytelling. Indeed, VR is instilling collaborative and experimental cultures in journalism.
Keywords: collaborative culture, content creation, experimental culture, virtual reality
Procedia PDF Downloads 127
295 Clostridium thermocellum DBT-IOC-C19, A Potential CBP Isolate for Ethanol Production
Authors: Nisha Singh, Munish Puri, Collin Barrow, Deepak Tuli, Anshu S. Mathur
Abstract:
The biological conversion of lignocellulosic biomass to ethanol is a promising strategy to address the present global crisis of dwindling fossil fuels. The existing bioethanol production technologies have cost constraints due to the mandatory pretreatment and extensive enzyme production steps involved. A unique process configuration known as consolidated bioprocessing (CBP) is believed to be a potentially cost-effective process due to its efficient integration of enzyme production, saccharification, and fermentation into one step. Due to several favourable features, such as single-step conversion, no need to add exogenous enzymes and facilitated product recovery, CBP has gained the attention of researchers worldwide. However, there are several technical and economic barriers which need to be overcome to make consolidated bioprocessing a commercially viable process. Finding a natural candidate CBP organism is critically important, and thermophilic anaerobes are the preferred microorganisms. The thermophilic anaerobes that can support CBP mainly belong to the genera Clostridium, Caldicellulosiruptor, Thermoanaerobacter, Thermoanaerobacterium, and Geobacillus. Amongst them, Clostridium thermocellum has received increased attention as a high-utility CBP candidate due to its high growth rate on crystalline cellulose, the presence of a highly efficient cellulosome system and its ability to produce ethanol directly from cellulose. Recently, the availability of genetic and molecular tools aiding the metabolic engineering of Clostridium thermocellum has further improved the viability of a commercial CBP process. With this view, we have specifically screened cellulolytic and xylanolytic thermophilic anaerobic ethanol-producing bacteria from unexplored hot springs in India. One of the isolates is a potential CBP organism identified as a new strain of Clostridium thermocellum. This strain has shown superior Avicel and xylan degradation under unoptimized conditions compared to reported wild-type strains of Clostridium thermocellum and produced more than 50 mM ethanol in 72 hours from 1% Avicel at 60°C. Besides, this strain shows good ethanol tolerance and growth on both hexose and pentose sugars. Hence, with further optimization, this new strain could be developed into a potential CBP microbe.
Keywords: Clostridium thermocellum, consolidated bioprocessing, ethanol, thermophilic anaerobes
Procedia PDF Downloads 400
294 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network and generate enormous volumes of real-time, high-speed data to help organizations and companies make intelligent decisions. Integrating this enormous amount of data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained, and to provide energy-efficient communication, they sleep and wake up periodically and aperiodically depending on the traffic load to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of the device from the sink node becomes greater than required, the connection is lost. After such a disconnection, other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data. Because of this dynamic nature of IoT devices, we do not know the actual reason for abnormal data. If data are of poor quality, decisions are likely to be unsound. It is highly important to process data and estimate data quality before bringing them into use in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic and statistical methods to perform analysis on stored data in the data processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and how they impact data quality. This research provides a comprehensive review of the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce good-quality data. The data quality model is developed for sensors monitoring water quality, using DBSCAN clustering and weather sensors. An extensive study has been done on finding the relationship between the data of weather sensors and sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented, describing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. This model encompasses five dimensions of data quality: it performs outlier detection and removal, assesses completeness, identifies patterns of missing values, and checks the accuracy of the data with the help of cluster positions. Finally, statistical analysis is performed on the clusters formed as a result of DBSCAN, and consistency is evaluated through the Coefficient of Variation (CoV).
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
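To make the clustering-based quality check concrete, the sketch below shows how DBSCAN can flag outlying sensor readings and how a coefficient of variation can summarise the consistency of the remaining data. It is a minimal illustration only; the readings, columns and parameter values are assumptions, not data from the study.
```python
# Minimal sketch (not the authors' implementation): DBSCAN outlier flagging plus
# a coefficient-of-variation (CoV) consistency check on water-quality readings.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

readings = np.array([
    [22.1, 7.2], [22.3, 7.1], [22.0, 7.3],   # columns: temperature (degC), pH
    [22.4, 7.2], [35.0, 9.8],                # last row: likely faulty sensor
])

X = StandardScaler().fit_transform(readings)
labels = DBSCAN(eps=0.8, min_samples=3).fit_predict(X)

clean = readings[labels != -1]               # label -1 marks DBSCAN outliers
cov = clean.std(axis=0) / clean.mean(axis=0) # per-variable coefficient of variation
print("outlier rows:", np.where(labels == -1)[0], "CoV:", cov.round(3))
```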
Procedia PDF Downloads 139
293 Understanding Complexity at Pre-Construction Stage in Project Planning of Construction Projects
Authors: Mehran Barani Shikhrobat, Roger Flanagan
Abstract:
Construction planning and scheduling based on current tools and techniques is either deterministic in nature (Gantt chart, CPM) or applies only a very small probability of completion (PERT) to each task. However, every project embodies assumptions and influences and should start with a complete set of clearly defined goals and constraints that remain constant throughout the duration of the project. Construction planners continue to apply the traditional methods and tools of “hard” project management that were developed for “ideal projects,” neglecting the potential influence of complexity on the design and construction process. The aim of this research is to investigate the emergence and growth of complexity in project planning and to provide a model that considers the influence of complexity on the total project duration at the post-contract award, pre-construction stage of a project. The literature review showed that complexity originates from different sources: environmental, technical, and workflow interactions. These can be divided into two categories of complexity factors: first, project tasks, and second, project organisation and management. Complexity in project tasks may originate from performance, lack of resources, or environmental changes affecting a specific task. Complexity factors that relate to organisation and management refer to workflow and the interdependence of different parts. The literature review highlighted the ineffectiveness of traditional tools and techniques in planning for complexity. In this research, the fundamental causes of complexity in construction projects were therefore investigated through a questionnaire with industry experts. The results were used to develop a model that considers the core complexity factors and their interactions. System dynamics was used to investigate the model and to consider the influence of complexity on project planning. Feedback from experts revealed 20 major complexity factors that impact project planning. The factors are divided into five categories known as core complexity factors. To understand the relative weight of each factor, the Analytical Hierarchy Process (AHP) method is used. The comparison showed that externalities are ranked as the biggest influence across the complexity factors. The research underlines that there are many internal and external factors that impact project activities and the project overall. This research shows the importance of considering the influence of complexity on the project master plan undertaken at the post-contract award, pre-construction phase of a project.
Keywords: project planning, project complexity measurement, planning uncertainty management, project risk management, strategic project scheduling
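As a side note on the weighting step, the sketch below illustrates how an AHP priority vector and consistency ratio can be computed from a pairwise comparison matrix. The factor names and judgement values are assumptions chosen for illustration and are not the study's survey data.
```python
# Minimal AHP sketch: principal-eigenvector weights and consistency ratio.
import numpy as np

factors = ["externalities", "tasks", "organisation", "workflow", "environment"]
# Saaty-scale pairwise comparison matrix A, where A[i, j] = importance of i over j.
A = np.array([
    [1,   3,   4,   5,   3],
    [1/3, 1,   2,   3,   2],
    [1/4, 1/2, 1,   2,   1],
    [1/5, 1/3, 1/2, 1,   1/2],
    [1/3, 1/2, 1,   2,   1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()                      # normalised priority vector

# Consistency ratio (CR < 0.10 is conventionally acceptable); random index for n=5 is 1.12.
ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)
cr = ci / 1.12
for f, w in zip(factors, weights):
    print(f"{f:>13}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}")
```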
Procedia PDF Downloads 138
292 Qualitative Modeling of Transforming Growth Factor Beta-Associated Biological Regulatory Network: Insight into Renal Fibrosis
Authors: Ayesha Waqar Khan, Mariam Altaf, Jamil Ahmad, Shaheen Shahzad
Abstract:
Kidney fibrosis is an anticipated outcome of possibly all types of progressive chronic kidney disease (CKD). The epithelial-mesenchymal transition (EMT) signaling pathway is responsible for the production of matrix-producing fibroblasts and myofibroblasts in the diseased kidney. In this study, a discrete model of TGF-beta (transforming growth factor) and CTGF (connective tissue growth factor) was constructed using the René Thomas formalism to investigate renal fibrosis turnover. The kinetic logic proposed by René Thomas is a renowned approach for the modeling of Biological Regulatory Networks (BRNs). This modeling approach uses a set of constraints which represent the dynamics of the BRN, thus allowing the pathway to be analyzed and critical trajectories that lead to a normal or diseased state to be predicted. The molecular connection between TGF-beta, Smad2/3 (transcription factor) phosphorylation and CTGF is modeled using GenoTech. The ordering of entities in the BRN is CTGF, TGF-B, and SMAD3, respectively. The predicted cycle depicts activation of TGF-B (TGF-β) via cleavage of its own pro-domain (0,1,0) and presentation to the TGFR-II receptor, phosphorylating SMAD3 (Smad2/3) in the state (0,1,1). Later, TGF-B is turned off (0,0,1) while SMAD3 remains active; SMAD3 further stimulates the expression of CTGF in the state (1,0,1) and then itself turns off in (1,0,0). Elevated CTGF expression reactivates TGF-B (1,1,0), and the cycle continues. The predicted model generated one cycle and two steady states. The cyclic behavior in this study represents the diseased state, in which all three proteins contribute to renal fibrosis. The proposed model is in accordance with the experimental findings for the existing diseased state. An extended cycle results in enhanced CTGF expression through Smad2/3 and Smad4 translocation into the nucleus. The results suggest that the system converges towards organ fibrogenesis if CTGF remains constitutively active along with Smad2/3 and Smad4, which play an important role in kidney fibrosis. Therefore, modeling the regulatory pathways of kidney fibrosis will guide the development of therapeutic tools and real-world applications such as predictive and preventive medicine.
Keywords: CTGF, renal fibrosis signaling pathway, system biology, qualitative modeling
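For readers unfamiliar with the discrete formalism, the sketch below simply walks through the qualitative cycle described above, with states ordered as (CTGF, TGF-B, SMAD3). It is an illustrative re-encoding of the reported trajectory, not the GenoTech model itself; the closure of the cycle back to (0,1,0) is an assumption.
```python
# Minimal sketch: stepping through the qualitative cycle reported in the abstract.
cycle = {
    (0, 1, 0): (0, 1, 1),  # active TGF-B phosphorylates SMAD3
    (0, 1, 1): (0, 0, 1),  # TGF-B switches off, SMAD3 stays active
    (0, 0, 1): (1, 0, 1),  # SMAD3 stimulates CTGF expression
    (1, 0, 1): (1, 0, 0),  # SMAD3 switches off
    (1, 0, 0): (1, 1, 0),  # elevated CTGF reactivates TGF-B
    (1, 1, 0): (0, 1, 0),  # assumed closure: CTGF switches off and the cycle repeats
}

state = (0, 1, 0)
for step in range(7):
    print(f"step {step}: CTGF={state[0]} TGF-B={state[1]} SMAD3={state[2]}")
    state = cycle[state]
```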
Procedia PDF Downloads 179
291 Approximate Spring Balancing for Swimming Pool Lift Mechanism to Reduce Actuator Torque
Authors: Apurva Patil, Sujatha Srinivasan
Abstract:
Reducing actuator loads is important for applications in which human effort is required for actuation. The potential benefit of applying spring balancing to rehabilitation devices which work against gravity on a non-horizontal plane is well recognized, but practical applications have been elusive. Although existing methods provide exact spring balance, they require additional masses or auxiliary links, or all the springs used originate from the ground, which makes the resulting device bulky and space-inefficient. This paper uses a method of static balancing of mechanisms with conservative loads, such as gravity and spring loads, using non-zero-free-length springs and no auxiliary links. The application of this method to a manually operated swimming pool lift mechanism, which lowers and raises physically challenged users into and out of the swimming pool, is presented here. Various possible configurations using extension and compression springs as well as gas springs in the mechanism are compared. This work involves approximate spring balancing of the mechanism through minimization of the potential energy variance. It uses the approach of flattening the potential energy distribution over the workspace and fuses it with numerical optimization. The results show a considerable reduction in the actuator torque requirement with a practical spring design and arrangement. Although the method provides only an approximate balancing, it is versatile, flexible in choosing appropriate control variables that are relevant to the design problem, and easy to implement. The true potential of this technique lies in the fact that it uses a very simple optimization to find the spring constant, the free length of the spring and the optimal attachment points subject to the optimization constraints. Also, it uses physically realizable non-zero-free-length springs directly, thereby reducing the complexity involved in simulating zero-free-length springs from non-zero-free-length springs. This method allows springs to be attached inside the mechanism, which makes the implementation of spring balancing practical. Because auxiliary linkages can be avoided, the resultant swimming pool lift mechanism is compact. The cost benefits and reduced complexity can be significant advantages in the development of this user-actuated swimming pool lift for developing countries.
Keywords: gas spring, rehabilitation device, spring balancing, swimming pool lift
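To illustrate the flavour of the optimization, the sketch below minimises the variance of the total potential energy of a single spring-assisted link over its workspace. The link geometry, payload, initial guess and bounds are assumptions chosen for illustration and do not correspond to the actual lift design.
```python
# Minimal sketch of approximate spring balancing by minimising potential-energy variance.
import numpy as np
from scipy.optimize import minimize

m, g, r = 120.0, 9.81, 0.6          # payload mass (kg), gravity, its lever arm (m)
theta = np.linspace(np.deg2rad(-30), np.deg2rad(60), 60)   # workspace sweep

def total_pe(x):
    k, L0, b, h = x                  # spring rate, free length, attachment points
    anchor = np.array([0.0, h])      # fixed anchor above the pivot
    tip = np.stack([b * np.cos(theta), b * np.sin(theta)], axis=1)
    s = np.linalg.norm(tip - anchor, axis=1)          # spring length vs. link angle
    return m * g * r * np.sin(theta) + 0.5 * k * (s - L0) ** 2

def pe_variance(x):
    return np.var(total_pe(x))

res = minimize(pe_variance, x0=[5000.0, 0.3, 0.4, 0.5],
               bounds=[(100, 2e4), (0.05, 0.6), (0.1, 0.6), (0.1, 0.8)])
k, L0, b, h = res.x
print(f"k={k:.0f} N/m, free length={L0:.2f} m, attachment b={b:.2f} m, h={h:.2f} m")
print(f"potential-energy variance reduced to {res.fun:.1f} J^2")
```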
Procedia PDF Downloads 241
290 Experimental Setup of Corona Discharge on Dye Degradation for Science Education
Authors: Shivam Dubey, Vinit Srivastava, Abhay Singh Thakur, Rahul Vaish
Abstract:
The presence of organic dyes in water is a critical issue that poses a significant threat to the environment and human health. We have investigated the use of corona discharge as a potential method for degrading organic dyes in water. Methylene blue dye was exposed to corona discharge, and its photo-absorbance was measured over time to determine the extent of degradation. The results showed a decreased absorbance for the dye and the loss of the characteristic colour of methylene blue. The effects of various parameters, including current, voltage, gas phase, salinity, and electrode spacing, on the reaction rates were investigated. The highest reaction rates were observed at the highest current and voltage (up to 10 kV), the lowest salinity, the smallest electrode spacing, and in an environment containing enhanced levels of oxygen. These findings have possible applications in the science education curriculum. By investigating the use of corona discharge for destroying organic dyes, we can provide students with a practical application of scientific principles that they can apply to real-world problems. This research can demonstrate the importance of understanding the chemical and physical properties of organic dyes and the effects of corona discharge on their degradation, and provide a holistic understanding of the applications of scientific research. Moreover, our study also emphasizes the importance of considering the various parameters that can affect reaction rates. By investigating the effects of current, voltage, gas phase, salinity, and electrode spacing, we can provide students with an opportunity to learn about the importance of experimental design and how to avoid constraints that can limit meaningful results. In conclusion, this study has the potential to provide valuable insights into the use of corona discharge for destroying organic dyes in water and has significant implications for science education. By highlighting the practical applications of scientific principles, experimental design, and the importance of considering various parameters, this research can help students develop critical thinking skills and prepare them for future careers in science and engineering.
Keywords: dye degradation, corona discharge, science education, hands-on learning, chemical education
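As a classroom-friendly aside, the reaction-rate idea can be illustrated by fitting a pseudo-first-order decay to absorbance readings, as in the sketch below. The data points and fitted rate constant are invented for illustration and are not measurements from this study.
```python
# Minimal sketch: pseudo-first-order decay fit to hypothetical absorbance readings.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 5, 10, 15, 20, 30], dtype=float)            # treatment time (min)
absorbance = np.array([1.00, 0.72, 0.53, 0.38, 0.27, 0.15])  # made-up values

def first_order(t, a0, k):
    return a0 * np.exp(-k * t)

(a0, k), _ = curve_fit(first_order, t, absorbance, p0=[1.0, 0.05])
print(f"estimated rate constant k = {k:.3f} 1/min, half-life = {np.log(2)/k:.1f} min")
```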
Procedia PDF Downloads 69
289 Exploring Augmented Reality Applications for UNESCO World Heritage Sites in Greece: Addressing Purpose, Scenarios, Platforms, and Visitor Impact
Authors: A. Georgiou, A. Galani, A. Karatza, G. E. Bampasidis
Abstract:
Augmented Reality (AR) technology has become integral to enhancing visitor experiences at Greece's UNESCO World Heritage Sites. This research meticulously investigates various facets of the AR applications/games associated with these revered sites. Cultural heritage represents the identity of each nation in the world, and technology can breathe life into this identity. Through Augmented Reality (AR), individuals can travel back in time, visit places they cannot access in real life, discover the history of these places, and live unique experiences. The study examines the objectives and intended goals behind the development and deployment of each augmented reality application/game pertaining to the UNESCO World Heritage Sites in Greece. It thoroughly analyzes the scenarios presented within these AR games/applications, examining how historical narratives, interactive elements, and cultural context are incorporated to engage users. Furthermore, the research identifies and assesses the technological platforms utilized for the development and implementation of these AR experiences, encompassing mobile devices, AR headsets, or specific software frameworks. It classifies and examines the types of augmented reality employed within these applications/games, including marker-based, markerless, location-based, or immersive AR experiences. Evaluation of the benefits accrued by visitors engaging with these AR applications/games, such as enhanced learning experiences, improved cultural understanding, and heightened engagement with the heritage sites, forms a crucial aspect of this study. Additionally, the research scrutinizes potential drawbacks or limitations associated with the AR applications/games, considering technological barriers, user accessibility issues, or constraints affecting user experience. By thoroughly investigating these pivotal aspects, this research aims to provide a comprehensive overview and analysis of the landscape of augmented reality applications/games linked to the UNESCO World Heritage Sites in Greece. The findings seek to contribute nuanced insights into the effectiveness, challenges, and opportunities associated with leveraging AR technology for heritage site preservation, visitor engagement, and cultural enrichment.
Keywords: augmented reality, AR applications, UNESCO sites, cultural heritage, Greece, visitor engagement, historical narratives
Procedia PDF Downloads 63
288 Triploid Rainbow Trout (Oncorhynchus mykiss) for Better Aquaculture and Ecological Risk Management
Authors: N. N. Pandey, Raghvendra Singh, Biju S. Kamlam, Bipin K. Vishwakarma, Preetam Kala
Abstract:
The rainbow trout (Oncorhynchus mykiss) is an exotic salmonid fish, well known in Europe and other countries for its fast growth, tremendous ability to thrive in diverse conditions, delicious flesh and hard-fighting nature. Rainbow trout farming has great potential to contribute to the mainstream economy of the Himalayan states of India and other temperate countries. These characteristics establish it as one of the most widely introduced and cultured fish across the globe, and its farming is also prominent in the cold-water regions of India. Nevertheless, genetic fatigue, slow growth, early maturity, and low productivity are limiting the expansion of trout production. Moreover, farms adjacent to natural streams or other water sources are subject to the escape of domesticated rainbow trout into the wild, which is a serious environmental concern, as the escaped fish can contaminate and disrupt the receiving ecosystem. A decline in production traits due to early maturity prolongs the culture duration and affects the profit margin of rainbow trout farms in India. A viable strategy that could overcome these farming constraints in large-scale operations is the production of triploid fish that are sterile and more heterozygous. For a better triploidy induction rate (TR), a heat shock at 28°C for 10 minutes and a pressure shock of 9,500 psi for 5 minutes are applied to green eggs, with 90-100% triploidy success and 72-80% survival up to the swim-up fry stage. Triploid rainbow trout show 20% better growth in aquaculture than diploids. Compared to wild diploid fish, larger and fitter triploid rainbow trout in natural waters attract trout anglers and support the development of recreational fisheries by state fisheries departments without the risk of contaminating existing gene pools and disrupting local fish diversity. Overall, enhancement of productivity in rainbow trout farms and of trout production in cold-water regions, development of lucrative trout angling and better ecological management are feasible with triploid rainbow trout.
Keywords: rainbow trout, triploid fish, heat shock, pressure shock, trout angling
Procedia PDF Downloads 124
287 A Stepwise Approach for Piezoresistive Microcantilever Biosensor Optimization
Authors: Amal E. Ahmed, Levent Trabzon
Abstract:
Due to the low concentration of analytes in biological samples, the use of Biological Microelectromechanical System (Bio-MEMS) biosensors for biomolecule detection results in a minuscule output signal that is not good enough for practical applications. In response to this, a need has arisen for an optimized biosensor capable of giving a high output signal in response to the detection of a few analytes in the sample; the ultimate goal is to be able to convert the attachment of a single biomolecule into a measurable quantity. For this purpose, MEMS microcantilever-based biosensors have emerged as a promising sensing solution because they are simple, cheap, very sensitive and, more importantly, do not need optical labeling of the analytes (label-free). Among the different microcantilever transducing techniques, piezoresistive microcantilever biosensors have become more prominent because they work well in liquid environments and have an integrated readout system. However, the design of piezoresistive microcantilevers is not a straightforward problem due to the coupling between the design parameters, constraints, process conditions, and performance. The parameters that can be optimized to enhance the sensitivity of piezoresistive microcantilever-based sensors are: cantilever dimensions, cantilever material, cantilever shape, piezoresistor material, piezoresistor doping level, piezoresistor dimensions, piezoresistor position, and the shape and position of the Stress Concentration Region (SCR). After a systematic analysis of the effect of each design and process parameter on the sensitivity, a stepwise optimization approach was developed in which almost all of these parameters were varied one at a time while fixing the others, to obtain the maximum possible sensitivity at the end. At each step, the goal was to optimize the parameter in a way that maximizes and concentrates the stress in the piezoresistor region for the same applied force and thus gives higher sensitivity. Using this approach, an optimized sensor with 73.5 times higher electrical sensitivity (ΔR⁄R) than the starting sensor was obtained. In addition, this piezoresistive microcantilever biosensor is more sensitive than other similar sensors previously reported in the open literature. The mechanical sensitivity of the final sensor is -1.5×10⁻⁸ (Ω/Ω)/pN, which means that for each 1 pN (10⁻¹⁰ g) of biomolecules attached to this biosensor, the relative resistance (ΔR/R) of the piezoresistor will decrease by 1.5×10⁻⁸. Throughout this work, COMSOL Multiphysics 5.0, a commercial Finite Element Analysis (FEA) tool, has been used to simulate the sensor performance.
Keywords: biosensor, microcantilever, piezoresistive, stress concentration region (SCR)
Procedia PDF Downloads 571
286 Understanding the Cultural Landscape of Kuttanad: Life within the Constraints of Nature
Authors: K. Nikilsha, Lakshmi Manohar, Debayan Chatterjee
Abstract:
Landscape is a setting that informs the way of life of a set of people and is the repository of intangible values and human meanings that nurture our very existence. Along with the linkage that it forms with our lives, it can be argued that landscape and memory cannot be separated, as landscape is the nucleus of our memories. In this context, this paper studies the landscape evolution of a region with a unique geographic setting, where the dependency of the inhabitants on its resources led to the formation of certain peculiar beliefs and taboos that formed the basis of a set of unwritten rules and guidelines which they still follow as a part of their lifestyle. One such example is Kuttanad, a low-lying region in Kerala which is a complex mosaic of fragmented agricultural landscape incorporating coastal backwaters, rivers, marshes, paddy fields and water channels. The greater the physical involvement with the resources, the stronger was the inhabitants' attachment towards them. This attachment of the inhabitants to the place is very strong because the creation of this land was the result of the toil of the low-caste labourers who strove day and night to create Kuttanad, which was reclaimed from water with the help of finance supplied by their landlords. However, the greatest challenge they face is posed by the forces of water in the form of floods. As this land is fed by five rivers, even a slight variation in rainfall in its watershed area can cause a large imbalance in the water level, causing the reclaimed land to be inundated. The effects of climate change, including increased rainfall, sea level rise and changing seasons, can act as a catalyst for this damage. Hasty urbanization has led to the conversion of paddy fields to housing plots and coconut/plantain fields, giving no regard to the traditional systems which had once respected nature and combated floods and droughts through the various cultural practices and taboos practiced by the people. Thus, it is essential to look back at the landscape evolution of Kuttanad, to recognise the methods used traditionally in the region to establish a cultural landscape, and to understand how climate change and urbanisation will pose a challenge to the existing landscape and lifestyle. This research also explores the possibilities of alternative and sustainable approaches for resilient urban development learned from Kuttanad as a case study.
Keywords: ecological conservation, landscape and ecological engineering, landscape evolution, man-made landscapes
Procedia PDF Downloads 266
285 Influence of Counter-Face Roughness on the Friction of Bionic Microstructures
Authors: Haytam Kasem
Abstract:
The problem of quick and easily reversible attachment has become of great importance in different fields of technology. For this reason, a new field of adhesion science has emerged during the last decade, essentially inspired by animals and insects which, during their natural evolution, have developed fantastic biological attachment systems allowing them to adhere to and run on walls and ceilings of uneven surfaces. Potential applications of engineering bio-inspired solutions include climbing robots, handling systems for wafers in nanofabrication facilities, and mobile sensor platforms, to name a few. However, despite the efforts made to apply bio-inspired patterned adhesive surfaces to the biomedical field, they are still in the early stages compared with their conventional uses in the other industries mentioned above. In fact, there are some critical issues that still need to be addressed for the wide usage of bio-inspired patterned surfaces as advanced biomedical platforms. For example, the surface durability and long-term stability of surfaces with high adhesive capacity should be improved, as should the friction and adhesion capacities of these bio-inspired microstructures when contacting rough surfaces. One of the well-known prototypes for bio-inspired attachment systems is the biomimetic wall-shaped hierarchical microstructure for gecko-like attachment. Although the physical background of these attachment systems is widely understood, the influence of counter-face roughness and its relationship with the friction force generated when sliding against a wall-shaped hierarchical microstructure have yet to be fully analyzed and understood. To elucidate the effect of counter-face roughness on the friction of the biomimetic wall-shaped hierarchical microstructure, we replicated the isotropic topography of 12 different surfaces using replicas made of the same epoxy material. The different counter-faces were fully characterized under a 3D optical profilometer to measure roughness parameters. The friction forces generated by the spatula-shaped microstructure in contact with the tested counter-faces were measured on a home-made tribometer and compared with the friction forces generated by the spatulae in contact with a smooth reference. It was found that classical roughness parameters, such as the average roughness Ra and others, could not be utilized to explain the topography-related variation in friction force. This led us to the development of an integrated roughness parameter obtained by combining different parameters, namely the mean asperity radius of curvature (R), the asperity density (η), the deviation of asperity heights (σ) and the mean asperity angle (SDQ). This new integrated parameter is capable of explaining the variation in the friction measurement results. Based on the experimental results, we developed and validated an analytical model to predict the variation of the friction force as a function of the roughness parameters of the counter-face and the applied normal load.
Keywords: friction, bio-mimetic micro-structure, counter-face roughness, analytical model
Procedia PDF Downloads 239
284 Screening Maize for Compatibility with F. Oxysporum to Enhance Striga asiatica (L.) Kuntze Resistance
Authors: Admire Isaac Tichafa Shayanowako, Mark Laing, Hussein Shimelis
Abstract:
Striga asiatica is among the leading biotic constraints to maize production in small-holder farming communities in southern Africa. However, confirmed sources of resistance to the parasitic weed are still limited. Conventional breeding programmes have been progressing slowly due to the complex nature of the inheritance of Striga resistance; hence there is a need for more innovative approaches. This study aimed to achieve partial resistance as well as to breed for compatibility with Fusarium oxysporum f.sp. strigae, a soil fungus that is highly specific in its pathogenicity. Agar gel and paper roll assays, in conjunction with a glasshouse pot trial, were done to select genotypes based on their potential to stimulate germination of Striga and to test the efficacy of Fusarium oxysporum as a biocontrol agent. Results from the agar gel assays showed a moderate to high potential for the release of strigolactones among the 33 OPVs. Maximum Striga germination distances from the host root of 1.38 cm and up to 46% germination were observed in most of the populations. Considerable resistance was observed in a landrace, ‘8lines’, which had the lowest Striga germination percentage (19%) with a maximum distance of 0.93 cm, compared to the resistant check Z-DPLO-DTC1, which had 23% germination at a distance of 1.4 cm. The number of Fusarium colony-forming units differed significantly (P < 0.05) amongst the genotypes growing between germination papers. The number of crown roots, the length of the primary root and the fresh weight of shoot and roots were highly correlated with the Fusarium macrospore counts. Pot trials showed significant differences between the Fusarium-coated and uncoated treatments in terms of plant height, leaf counts, anthesis-silking intervals, Striga counts, Striga damage rating and Striga vigour. Striga emergence counts and Striga flowers were low in Fusarium-treated pots. Plants in Fusarium-treated pots showed no significant difference in height from the control treatment. This suggests that Foxy 2 reduces the severity of Striga damage. Variability within Fusarium-treated genotypes with respect to the traits under evaluation indicates varying degrees of compatibility with the biocontrol agent.
Keywords: maize, Striga asiatica, resistance, compatibility, F. oxysporum
Procedia PDF Downloads 250
283 A Decision-Support Tool for Humanitarian Distribution Planners in the Face of Congestion at Security Checkpoints: A Real-World Case Study
Authors: Mohanad Rezeq, Tarik Aouam, Frederik Gailly
Abstract:
In times of armed conflict, various security checkpoints are placed by authorities to control the flow of merchandise into and within areas of conflict. The flow of humanitarian trucks, added to the regular flow of commercial trucks and combined with complex security procedures, creates congestion and long waiting times at the security checkpoints. This causes distribution costs to increase and shortages of relief aid for the affected people to occur. Our research proposes a decision-support tool to assist planners and policymakers in building efficient plans for the distribution of relief aid, taking into account congestion at security checkpoints. The proposed tool is built around a multi-item humanitarian distribution planning model, developed following a multi-phase design science methodology, whose objective is to minimize distribution and backordering costs subject to capacity constraints that reflect congestion effects using nonlinear clearing functions. Using the 2014 Gaza War as a case study, we illustrate the application of the proposed tool, model the underlying relief-aid humanitarian supply chain, estimate clearing functions at different security checkpoints, and conduct computational experiments. The decision-support tool generated a shipment plan that was compared to two benchmarks in terms of total distribution cost, average lead time and work in progress (WIP) at security checkpoints, and average inventory and backorders at distribution centers. The first benchmark is the shipment plan generated by the fixed-capacity model, and the second is the actual shipment plan implemented by the planners during the armed conflict. According to our findings, modeling and optimizing supply chain flows reduces total distribution costs, average truck wait times at security checkpoints, and average backorders when compared to the executed plan and the fixed-capacity model. Finally, scenario analysis concludes that increasing capacity at security checkpoints can lower total operations costs by reducing the average lead time.
Keywords: humanitarian distribution planning, relief-aid distribution, congestion, clearing functions
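To give a feel for how a nonlinear clearing function enters such a plan, the sketch below chooses truck releases against a saturating checkpoint throughput f(W) = C·W/(k + W). The capacity, demand and cost values are assumptions for illustration only, not estimates from the Gaza case study, and the model is far simpler than the multi-item formulation described above.
```python
# Minimal sketch: planning releases through one checkpoint with a clearing function.
import numpy as np
from scipy.optimize import minimize

C, k = 40.0, 25.0                               # max clearance (trucks/day), congestion parameter
demand = np.array([30.0, 35.0, 45.0, 50.0])     # relief-aid demand per period (truckloads)
h, b = 1.0, 10.0                                # holding cost vs. backorder cost per truckload

def cost(releases):
    wip, backlog, total = 0.0, 0.0, 0.0
    for t, r in enumerate(releases):
        wip += r                                # trucks sent to the checkpoint
        cleared = C * wip / (k + wip)           # nonlinear clearing function
        wip -= cleared
        backlog = max(0.0, backlog + demand[t] - cleared)
        total += h * wip + b * backlog
    return total

res = minimize(cost, x0=demand.copy(), bounds=[(0, 80)] * len(demand))
print("planned releases per period:", np.round(res.x, 1), "cost:", round(res.fun, 1))
```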
Procedia PDF Downloads 82
282 Fostering Creativity in Education: Exploring Leadership Perspectives on Systemic Barriers to Innovative Pedagogy
Authors: David Crighton, Kelly Smith
Abstract:
The ability to adopt creative pedagogical approaches is increasingly vital in today’s educational landscape. This study examines the institutional barriers that hinder educators in the UK from embracing such innovation, focusing specifically on the experiences and perspectives of educational leaders. The current literature primarily focuses on the challenges that academics and teachers encounter, particularly highlighting how management culture and audit processes negatively affect their ability to be creative in classrooms and lecture theatres. However, this focus leaves a gap in understanding management perspectives, which is crucial for providing a more holistic insight into the challenges encountered in educational settings. To explore this gap, we are conducting semi-structured interviews with senior leaders across various educational contexts, including universities, schools, and further education colleges. This qualitative methodology, combined with thematic analysis, aims to uncover the managerial, financial, and administrative pressures these leaders face in fostering creativity in teaching and supporting professional learning opportunities. Preliminary insights indicate that educational leaders face significant barriers, such as institutional policies, resource limitations, and external performance indicators. These challenges create a restrictive environment that stifles educators' creativity and innovation. Addressing these barriers is essential for empowering staff to adopt more creative pedagogical approaches, ultimately enhancing student engagement and learning outcomes. By alleviating these constraints, educational leaders can cultivate a culture that fosters creativity and flexibility in the classroom. These insights will inform practical recommendations to support institutional change and enhance professional learning opportunities, contributing to a more dynamic educational environment. In conclusion, this study offers a timely exploration of how leadership can influence the pedagogical landscape in a rapidly evolving educational context. The research seeks to highlight the crucial role that educational leaders play in shaping a culture of creativity and adaptability, ensuring that institutions are better equipped to respond to the challenges of contemporary education.
Keywords: educational leadership, professional learning, creative pedagogy, marketisation
Procedia PDF Downloads 12
281 Efficient Field-Oriented Motor Control on Resource-Constrained Microcontrollers for Optimal Performance without Specialized Hardware
Authors: Nishita Jaiswal, Apoorv Mohan Satpute
Abstract:
The increasing demand for efficient, cost-effective motor control systems in the automotive industry has driven the need for advanced, highly optimized control algorithms. Field-Oriented Control (FOC) has established itself as the leading approach for motor control, offering precise and dynamic regulation of torque, speed, and position. However, as energy efficiency becomes more critical in modern applications, implementing FOC on low-power, cost-sensitive microcontrollers poses significant challenges due to the limited availability of computational and hardware resources. Currently, most solutions rely on high-performance 32-bit microcontrollers or Application-Specific Integrated Circuits (ASICs) equipped with Floating Point Units (FPUs) and Hardware Accelerated Units (HAUs). These advanced platforms enable rapid computation and simplify the execution of complex control algorithms like FOC. However, these benefits come at the expense of higher costs, increased power consumption, and added system complexity. These drawbacks limit their suitability for embedded systems with strict power and budget constraints, where achieving energy and execution efficiency without compromising performance is essential. In this paper, we present an alternative approach that utilizes optimized data representation and computation techniques on a 16-bit microcontroller without FPUs or HAUs. By carefully optimizing data formats and employing fixed-point arithmetic, we demonstrate how the precision and computational efficiency required for FOC can be maintained in resource-constrained environments. This approach eliminates the performance overhead associated with floating-point operations and hardware acceleration, providing a more practical solution in terms of cost, scalability and execution-time efficiency, allowing faster response in motor control applications. Furthermore, it enhances system design flexibility, making it particularly well suited for applications that demand stringent control over power consumption and costs.
Keywords: field-oriented control, fixed-point arithmetic, floating point unit, hardware accelerator unit, motor control systems
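To make the fixed-point idea concrete, the sketch below emulates Q15 arithmetic and a Clarke transform in Python. It is an illustrative sketch of the general technique, not the paper's firmware; the scaling convention and example current values are assumptions.
```python
# Minimal sketch of Q15 fixed-point arithmetic of the kind used to run FOC math
# without an FPU (Python emulation of 16-bit fixed-point operations).
Q = 15
ONE = 1 << Q                       # 1.0 in Q15

def to_q15(x: float) -> int:
    return int(round(x * ONE))

def q15_mul(a: int, b: int) -> int:
    return (a * b) >> Q            # on a 16-bit MCU: a 16x16 -> 32-bit multiply, then shift

# Clarke transform (alpha-beta frame) from two measured phase currents, all in Q15.
INV_SQRT3 = to_q15(1 / 3 ** 0.5)

def clarke(ia: int, ib: int) -> tuple[int, int]:
    i_alpha = ia
    i_beta = q15_mul(ia + 2 * ib, INV_SQRT3)
    return i_alpha, i_beta

ia, ib = to_q15(0.40), to_q15(-0.15)          # example phase currents (per-unit)
i_alpha, i_beta = clarke(ia, ib)
print(i_alpha / ONE, round(i_beta / ONE, 4))  # convert back to floats only for display
```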
Procedia PDF Downloads 15
280 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy
Authors: Paul R Armstrong
Abstract:
Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool where the remaining 90% are hybrid siblings. This introduces time constraints on DH production, and manual sorting is often not accurate. Automated sorting based on the chemical composition of the kernel can be effective, but devices, namely NMR, have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform to accurately identify DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel from maize crosses intentionally developed to create larger-than-normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares regression (PLS) model for oil and a categorical reference model of 1 (DH kernel) or 2 (hybrid kernel), and these were then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model developed from all crosses to predict oil content, which was then used for sorting each induction cross; the second was the development of a specific model from a single induction cross, for which approximately fifty DH and one hundred hybrid kernels were used. This second approach used a categorical reference value of 1 or 2, instead of oil content, for the PLS model, and kernels selected for the calibration set were manually referenced based on traditional commercial methods using the coloration of the tip cap and germ areas. The generalized PLS oil model statistics were R2 = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting by this model resulted in extracting 55% to 85% of haploid kernels from the four induction crosses. Using the second method of generating a model for each cross yielded model statistics ranging from R2 = 0.96 to 0.98 and RMSE from 0.08 to 0.10. Sorting in this case resulted in 100% correct classification but required models that were cross-specific. In summary, the first, generalized oil model method could be used to sort a significant number of kernels from a kernel pool but did not come close to the accuracy of developing a sorting model from a single cross. The penalty of the second method is that a PLS model would need to be developed for each individual cross. In conclusion, both methods could find useful application in the sorting of DH from hybrid kernels.
Keywords: NIR, haploids, maize, sorting
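As an illustration of the categorical-reference idea, the sketch below fits a PLS model to synthetic "spectra" labeled 1 (DH) or 2 (hybrid) and thresholds the prediction. The spectra, class sizes and threshold are assumptions, not the study's skNIR data.
```python
# Minimal sketch of categorical-reference PLS sorting on synthetic spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_bands = 100
# Synthetic "spectra": hybrids (class 2) get a slightly stronger oil-related band.
dh = rng.normal(0.0, 1.0, (50, n_bands))
hybrid = rng.normal(0.0, 1.0, (100, n_bands))
hybrid[:, 40:45] += 1.5
X = np.vstack([dh, hybrid])
y = np.array([1] * 50 + [2] * 100)          # 1 = DH kernel, 2 = hybrid sibling

pls = PLSRegression(n_components=5).fit(X, y)
pred = pls.predict(X).ravel()
label = np.where(pred < 1.5, 1, 2)          # threshold halfway between the two classes
print("training classification accuracy:", (label == y).mean())
```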
Procedia PDF Downloads 302
279 A Retrospective Cohort Study on an Outbreak of Gastroenteritis Linked to a Buffet Lunch Served during a Conference in Accra
Authors: Benjamin Osei Tutu, Sharon Annison
Abstract:
On 21st November, 2016, an outbreak of foodborne illness occurred after a buffet lunch served during a stakeholders’ consultation meeting held in Accra. An investigation was conducted to characterise the affected people, determine the etiologic food, the source of contamination and the etiologic agent, and to implement appropriate public health measures to prevent future occurrences. A retrospective cohort study was conducted via telephone interviews, using a structured questionnaire developed from the buffet menu. A case was defined as any person suffering from symptoms of foodborne illness, e.g. diarrhoea and/or abdominal cramps, after eating food served during the stakeholder consultation meeting in Accra on 21st November, 2016. The exposure status of all members of the cohort was assessed by taking the food history of each respondent during the telephone interview. The data obtained were analysed using Epi Info 7. An environmental risk assessment was conducted to ascertain the source of the food contamination. Risks of foodborne infection from the foods eaten were determined using attack rates and odds ratios. Data were obtained from 54 people who consumed food served during the stakeholders’ meeting. Out of this population, 44 people reported symptoms of food poisoning, giving an overall attack rate of 81.5%. The peak incubation period was seven hours, with minimum and maximum incubation periods of four and 17 hours, respectively. The commonly reported symptoms were diarrhoea (97.73%, 43/44), vomiting (84.09%, 37/44) and abdominal cramps (75.00%, 33/44). From the incubation period, duration of illness and the symptoms, toxin-mediated food poisoning was suspected. The environmental risk assessment of the implicated catering facility indicated a lack of time/temperature control, inadequate knowledge of food safety among workers and sanitation issues. A limited number of food samples was received for microbiological analysis. Multivariate analysis indicated that illness was significantly associated with the consumption of the snacks served (OR 14.78, P < 0.001). No stool, blood or etiologic food samples were available for organism isolation; however, the suspected etiologic agent was Staphylococcus aureus or Clostridium perfringens. The outbreak was probably due to the consumption of an unwholesome snack (tuna sandwich or chicken). The contamination and/or growth of the etiologic agent in the snack may be due to a breakdown in cleanliness, time/temperature control and good food handling practices. Training of food handlers in basic food hygiene and safety is recommended.
Keywords: Accra, buffet, conference, C. perfringens, cohort study, food poisoning, gastroenteritis, office workers, Staphylococcus aureus
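For readers unfamiliar with the measures used, the short sketch below shows the attack-rate and odds-ratio arithmetic applied in such cohort studies. Apart from the overall figure of 44 ill among 54 respondents quoted above, the 2x2 exposure counts are hypothetical and do not reproduce the study's food-specific tables.
```python
# Minimal sketch: attack rate and odds ratio for one hypothetical food item.
ill, total = 44, 54
print(f"overall attack rate = {ill / total:.1%}")

# Hypothetical 2x2 exposure table (e.g. for the snack):
ill_exposed, well_exposed = 42, 6
ill_unexposed, well_unexposed = 2, 4

ar_exposed = ill_exposed / (ill_exposed + well_exposed)
ar_unexposed = ill_unexposed / (ill_unexposed + well_unexposed)
odds_ratio = (ill_exposed * well_unexposed) / (well_exposed * ill_unexposed)
print(f"attack rate exposed = {ar_exposed:.1%}, unexposed = {ar_unexposed:.1%}")
print(f"odds ratio = {odds_ratio:.2f}")
```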
Procedia PDF Downloads 230
278 Assessment of Nuclear Medicine Radiation Protection Practices Among Radiographers and Nurses at a Small Nuclear Medicine Department in a Tertiary Hospital
Authors: Nyathi Mpumelelo; Moeng Thabiso Maria
Abstract:
BACKGROUND AND OBJECTIVES: Radiopharmaceuticals are used for the diagnosis, treatment, staging and follow-up of various diseases. However, there is concern that the ionizing radiation (gamma rays, α and β particles) emitted by radiopharmaceuticals may result in the exposure of radiographers and nurses with limited knowledge of the principles of radiation protection and safety, raising the risk of cancer induction. This study aimed at investigating radiation safety awareness levels among radiographers and nurses at a small tertiary hospital in South Africa. METHODS: An analytical cross-sectional study. A validated two-part questionnaire was administered to consenting radiographers and nurses working in a Nuclear Medicine Department. Part 1 gathered demographic information (age, gender, work experience, attendance at or passing of ionizing radiation protection courses). Part 2 covered questions related to knowledge and awareness of radiation protection principles. RESULTS: Six radiographers and five nurses participated (27% males and 73% females). The mean age was 45 years (age range 20-60 years). The study revealed that neither professional development courses nor radiation protection courses are offered at the Nuclear Medicine Department under study. However, 6/6 (100%) radiographers exhibited a high level of awareness of radiation safety principles in handling and working with radiopharmaceuticals, which correlated with their years of experience. As for the nurses, 4/5 (80%) showed limited knowledge and awareness of radiation protection principles, irrespective of the number of years in the profession. CONCLUSION: Despite their major role in caring for patients undergoing diagnostic and therapeutic treatments, the nurses showed limited knowledge of ionizing radiation and its associated side effects. This was not surprising, since they never received any formal basic radiation safety course. These findings are not unique to this Centre. A study conducted in a Kuwaiti Radiology Department also established that the vast majority of nurses did not understand the risks of working with ionizing radiation. Similarly, nurses in an Australian hospital exhibited knowledge limitations; however, nursing managers did provide the necessary radiation safety training when requested. In Guatemala and Saudi Arabia, where there was a shortage of professional radiographers, nurses underwent radiography training, a course that equipped them with basic radiation safety principles. The radiographers in the Centre under study, unlike others in various parts of the world, demonstrated substantial knowledge and awareness of radiation protection. Radiation safety courses attended when an opportunity arose played a critical role in their awareness. The knowledge and awareness levels of these radiographers were comparable to those of their counterparts in Sudan; however, they were much above those of their counterparts in Jordan, Nigeria, Nepal and Iran, who were found to have limited awareness and inadequate knowledge of radiation dose. Formal radiation safety and awareness courses and workshops can play a crucial role in raising the awareness of nurses and radiographers on radiation safety, for their personal benefit and that of their patients.
Keywords: radiation safety, radiation awareness, training, nuclear medicine
Procedia PDF Downloads 79
277 Scalable Performance Testing: Facilitating The Assessment Of Application Performance Under Substantial Loads And Mitigating The Risk Of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (virtual machines), cloud logs (for monitoring errors and logs), EFS (file storage), and security groups offers several key benefits for organizations. Creating a performance testing framework using this approach helps optimize resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance. The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master. The master then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.
Keywords: identifying crashes of applications under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services
Procedia PDF Downloads 27
276 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data set and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
Procedia PDF Downloads 89275 Nondecoupling Signatures of Supersymmetry and an Lμ-Lτ Gauge Boson at Belle-II
Authors: Heerak Banerjee, Sourov Roy
Abstract:
Supersymmetry, one of the most celebrated fields of study for explaining experimental observations where the standard model (SM) falls short, is reeling from the lack of experimental vindication. At the same time, the idea of additional gauge symmetry, in particular gauged Lμ-Lτ symmetric models, has also generated significant interest. These models have been extensively proposed in order to explain the tantalizing discrepancy between the predicted and measured values of the muon anomalous magnetic moment, alongside several other issues plaguing the SM. While very little parameter space within these models remains unconstrained, this work finds that the γ + Missing Energy (ME) signal at the Belle-II detector will be a smoking gun for supersymmetry (SUSY) in the presence of a gauged U(1)Lμ-Lτ symmetry. A remarkable consequence of breaking the enhanced symmetry appearing in the limit of degenerate (s)leptons is the nondecoupling of the radiative contribution of heavy charged sleptons to the γ-Z΄ kinetic mixing. The signal process, e⁺e⁻ → γZ΄ → γ + ME, is an outcome of this ubiquitous feature. Taking into account the severe constraints placed on gauged Lμ-Lτ models by several low energy observables, it is shown that any significant excess in all but the highest photon energy bin would be an undeniable signature of such heavy scalar fields in SUSY coupling to the additional gauge boson Z΄. In the presence of SUSY, the number of signal events depends crucially on the logarithm of the ratio of stau to smuon mass. In addition, the number is inversely proportional to the e⁺e⁻ collision energy, making a low-energy, high-luminosity collider like Belle-II an ideal testing ground for this channel. This process can probe large swathes of the hitherto free slepton mass ratio vs. additional gauge coupling (gₓ) parameter space. More importantly, it can explore the narrow slice of Z΄ mass (MZ΄) vs. gₓ parameter space still allowed in gauged U(1)Lμ-Lτ models for superheavy sparticles. The finding that the signal significance is independent of the individual slepton masses is an exciting prospect, and the prospect that signatures of even superheavy SUSY particles that may have escaped detection at the LHC may show up at the Belle-II detector is an invigorating revelation.Keywords: additional gauge symmetry, electron-positron collider, kinetic mixing, nondecoupling radiative effect, supersymmetry
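The nondecoupling effect referred to above can be summarised schematically; the expression below is only an illustrative form with loop factors and charge assignments suppressed, not the paper's exact result:

\epsilon_{\gamma Z'} \propto \frac{e\, g_X}{16\pi^{2}} \, \ln\!\frac{m_{\tilde{\tau}}^{2}}{m_{\tilde{\mu}}^{2}}

In this schematic form the slepton contribution to the γ-Z΄ kinetic mixing vanishes for degenerate smuons and staus, depends only on their mass ratio rather than on the individual masses, and does not decouple as the sleptons become heavy, which is consistent with the abstract's statements about the logarithmic dependence and the insensitivity of the signal significance to individual slepton masses.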
Procedia PDF Downloads 127274 Cooperation of Unmanned Vehicles for Accomplishing Missions
Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin
Abstract:
The use of unmanned systems for different purposes has become very popular over the past decade. Expectations from these systems have shown an incredible increase in parallel. However, meeting the demands of a task is often not possible with a single unmanned vehicle in a mission, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using vehicles of the same type together as a swarm helps, in particular, to satisfy the time constraints of missions effectively; in other words, it allows the workload to be shared among a number of homogeneous platforms. Besides, many kinds of problems require the different capabilities of heterogeneous platforms to be used together cooperatively to achieve successful results. In this case, cooperative working brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle, which lacks position information, is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission by itself without losing its way in the unknown labyrinth. Thus, in order to assist the ground vehicle, an autonomous aerial drone is also used to solve the problem cooperatively. The drone likewise has limited sensors, such as a downward-looking camera and an IMU, and it also cannot compute its global position. In this context, the aim is to solve the problem effectively without any additional support or input from the outside, relying only on the capabilities of the two autonomous vehicles. To manage point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in a coordinated manner. In this paper, the cooperative work of heterogeneous unmanned systems is handled in an applied sample scenario, and it is shown how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of their platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented for problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion.Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning
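A minimal sketch of this kind of cooperation, written in Python and purely illustrative (the grid, function names and message passing are hypothetical rather than taken from the paper), is a breadth-first-search planner run over an occupancy grid of the labyrinth assembled from the drone's downward-looking camera, with the resulting waypoints relayed to the ground vehicle:

from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over the drone's occupancy grid of the labyrinth;
    grid[r][c] == 1 marks a wall, 0 a free cell."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return None  # no route found in the mapped part of the labyrinth

def relay_waypoints(path, send):
    """The drone streams waypoints one by one to the ground vehicle, which
    follows them and uses its ultrasonic sensors to confirm each cell is free."""
    for waypoint in path or []:
        send(waypoint)

# Hypothetical 4x4 labyrinth mapped by the drone's camera.
labyrinth = [[0, 0, 1, 0],
             [1, 0, 1, 0],
             [0, 0, 0, 0],
             [0, 1, 1, 0]]
relay_waypoints(plan_path(labyrinth, (0, 0), (3, 3)), send=print)

In such a scheme the ground vehicle's ultrasonic readings can be fed back to update cells of the grid, closing the sensor-fusion loop between the two platforms.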
Procedia PDF Downloads 128273 Using a Mobile App to Foster Children Active Travel to School in Spain
Authors: P. Pérez-Martín, G. Pedrós, P. Martínez-Jiménez, M. Varo-Martínez
Abstract:
In recent decades, family habits related to children's travel to school have changed, increasing motorized trips at the expense of active modes. This has a major negative impact on the urban environment, road safety in cities and the physical and psychological development of children. One of the more common actions used to reverse this trend is the Walking School Bus (WSB), which consists of a predefined adult-escorted pedestrian route to school with several stops along the path where schoolchildren are collected. At Tirso de Molina School in Cordoba (Spain), a new ICT-based methodology to deploy a WSB has been tested. A mobile app that allows geopositioning of the group, arrival notifications and real-time communication between the WSB participants was presented to the families in order to organize and register daily participation. After an initial survey to determine the travel mode and spatial distribution of the interested families, three WSB routes were established and the families were trained in the use of the app. Over nine weeks, 33 children joined the WSB and their parents accompanied the groups in turns. A high recurrence in attendance was registered. Through a final survey, participants valued the tool and the methodology highly, emphasizing the notification system, the chat and real-time monitoring as the most useful features of the mobile app. It was also found that the tool had a major impact on parents' confidence regarding their children's autonomous travel to school on foot. Moreover, 37.9% of the participating families reported a total or partial modal shift from car to walking, and the most frequently reported benefits were an increase in parents' available time and fewer problems in organizing the daily trip to school. As a consequence, the effectiveness of this user-centric, innovative ICT-based methodology has been demonstrated in reducing private car drop-offs and minimizing barriers related to time constraints, volunteer recruitment and parents' safety concerns, while at the same time increasing convenience and time savings for families. This pilot study can offer guidance for community-coordinated actions and local authority interventions to support sustainable school travel outcomes.Keywords: active travel, mobile app, sustainable mobility, urban transportation planning, walking school bus
Procedia PDF Downloads 336272 Emerging Barriers and Enablers of Digital Inclusion for Students with Disabilities in Ethiopian Education
Authors: Merih Welay Welesilassie
Abstract:
This research investigated the factors influencing digital inclusion for young students with disabilities in Ethiopian schools. In this context, socio-economic, infrastructural, and cultural challenges amplify educational disparities. In the era of digital technology's pivotal role in education, it is crucial to ensure equitable access for students with disabilities. Nevertheless, obstacles like inadequate infrastructure, insufficient teacher training, and economic constraints impede the incorporation of digital tools in educational environments, especially for marginalised groups. This study employed an explanatory sequential mixed-methods approach involving data collection through a survey administered to 300 students. Subsequently, in-depth interviews were conducted with 30 participants to provide comprehensive insights into their experiences. The quantitative analysis uncovered that students with disabilities have limited support for digital readiness, find digital technologies less accessible, and perceive digital tools as less easy to use. The study revealed that economic barriers, such as the high cost of devices and limited internet access, prevent students from fully utilising digital resources. Furthermore, infrastructural challenges, such as unreliable electricity and poor internet connectivity, exacerbate the issue. The qualitative data provided a more profound understanding by emphasising social and attitudinal obstacles, including a lack of empathy from both peers and educators, exclusion from participatory digital tasks, and enduring negative stereotypes regarding disabilities. The research highlights the importance of implementing interventions to enhance digital accessibility for students with disabilities. Essential suggestions encompass refining teacher training programs to effectively facilitate inclusive education, improving digital infrastructure, and offering financial assistance to procure digital tools. Furthermore, implementing policy reforms and public awareness campaigns is crucial to cultivate a cultural shift and nurture a more inclusive societal atmosphere. This study yields valuable perspectives on the digital inclusion scenario in Ethiopia, laying the groundwork for prospective research endeavours to narrow the digital gap for students with disabilities.Keywords: digital inclusion, students with disabilities, Ethiopian education, barriers and access
Procedia PDF Downloads 20271 Barriers of the Development and Implementation of Health Information Systems in Iran
Authors: Abbas Sheikhtaheri, Nasim Hashemi
Abstract:
Health information systems have great benefits for the clinical and managerial processes of health care organizations. However, identifying and removing the constraints and barriers to implementing and using health information systems before any implementation is essential. Physicians are among the main users of health information systems; therefore, identifying the causes of their resistance and their concerns about the barriers to implementing these systems is very important. The purpose of this study was to determine the barriers to the development and implementation of health information systems from the perspectives of Iranian physicians. In this study, conducted in 8 selected hospitals affiliated with Tehran and Iran Universities of Medical Sciences, Tehran, Iran, in 2014, physicians (GPs, residents, interns, specialists) in these hospitals were surveyed. In order to collect data, a researcher-made questionnaire was used (Cronbach's α = 0.95). The instrument included 25 items about organizational (9), personal (4), moral and legal (3) and technical (9) barriers. Participants were asked to answer the questions using a 5-point Likert scale (completely disagree = 1 to completely agree = 5). Using a simple random sampling method, 200 physicians (from 600) were invited to the study, and eventually 163 questionnaires were returned. We used mean scores, t-tests and ANOVA to analyze the data in SPSS software version 17. 52.1% of respondents were female. The mean age was 30.18 ± 7.29. Most participants had between 1 and 5 years of work experience (80.4 percent). The most important barriers were organizational ones (3.4 ± 0.89), followed by ethical (3.18 ± 0.98), technical (3.06 ± 0.8) and personal (3.04 ± 1.2) barriers. Lack of easy access to fast Internet (3.67 ± 1.91) and the lack of information exchange (3.61 ± 1.2) were the most important technical barriers. Among organizational barriers, the lack of efficient planning for developing and implementing the systems (3.56 ± 1.32) was the most important one. Lack of awareness and knowledge among health care providers about the features of health information systems (3.33 ± 1.28) and the lack of physician participation in the planning phase (3.27 ± 1.2), as well as concerns regarding the security and confidentiality of health information (3.15 ± 1.31), were the most important personal and ethical barriers, respectively. Women (P = 0.02) and those with less experience (P = 0.002) were more concerned about personal barriers. GPs were also more concerned about technical barriers (P = 0.02). According to the study, technical and ethical barriers were considered the most important; however, lack of awareness in the target population is also considered one of the main barriers. Ignoring issues such as personal and ethical barriers, even if the necessary infrastructure and technical requirements are provided, may result in failure. Therefore, along with creating infrastructure and resolving organizational barriers, special attention to the education and awareness of physicians and providing solutions for ethical concerns are necessary.Keywords: barriers, development health information systems, implementation, physicians
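A small Python sketch of the kind of analysis reported here (mean ± SD of 5-point Likert scores, an independent-samples t-test by gender and a one-way ANOVA across physician groups) is given below; the numbers are randomly generated placeholders, not the study data, and the study itself used SPSS version 17.

import numpy as np
from scipy import stats

# Randomly generated placeholder scores on a 5-point Likert scale
# (1 = completely disagree ... 5 = completely agree); not the study data.
rng = np.random.default_rng(0)
female = rng.integers(1, 6, size=85).astype(float)
male = rng.integers(1, 6, size=78).astype(float)

print("female: mean %.2f, SD %.2f" % (female.mean(), female.std(ddof=1)))
print("male:   mean %.2f, SD %.2f" % (male.mean(), male.std(ddof=1)))

# Independent-samples t-test for a gender difference on a barrier sub-scale.
t_stat, p_val = stats.ttest_ind(female, male)
print("t-test: t = %.3f, p = %.3f" % (t_stat, p_val))

# One-way ANOVA across physician groups (GPs, residents, interns, specialists).
groups = [rng.integers(1, 6, size=40).astype(float) for _ in range(4)]
f_stat, p_anova = stats.f_oneway(*groups)
print("ANOVA: F = %.3f, p = %.3f" % (f_stat, p_anova))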
Procedia PDF Downloads 345270 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂
Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine
Abstract:
Rubber waste disposal is an environmental problem. In particular, much research is centered on the management of discarded tires. Despite the different ways of handling used tires, the most common is to deposit them in a landfill, creating stockpiles of tires. These stockpiles can pose a fire hazard and provide habitat for rodents, mosquitoes and other pests, causing health and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a current technological challenge. The technique that can break down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process in which poly-, di-, and mono-sulfidic bonds formed during vulcanization are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) has been proposed as a green devulcanization atmosphere because it is chemically inactive, nontoxic, nonflammable and inexpensive. Its critical point can be easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing the pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin screw extruder under diverse operating conditions. Supercritical CO₂ was added in different quantities to promote the devulcanization. Temperature, screw speed and quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percent and by its crosslink density measured by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase and, as expected, the soluble fraction increases with both parameters. The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases. The values reached were in good correlation (R = 0.96) with the soluble fraction. In order to analyze whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. Results showed that all tests fall on the curve that corresponds to sulfur bond scission, which indicates that the devulcanization happened successfully without degradation of the rubber. In the spectra obtained by FTIR, it was observed that none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected because, due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate the devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer rubber (EPDM) and natural rubber (NR).Keywords: devulcanization, recycling, rubber, waste
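As background on the swelling measurement, the crosslink density obtained from equilibrium swelling in a solvent such as toluene is commonly estimated with the Flory-Rehner relation, quoted here in its standard form (the abstract does not state the exact expression used by the authors):

\nu_e = -\frac{\ln(1 - V_r) + V_r + \chi V_r^{2}}{V_s\left(V_r^{1/3} - \frac{V_r}{2}\right)}

where \nu_e is the crosslink density, V_r the volume fraction of rubber in the swollen gel, V_s the molar volume of the solvent and \chi the polymer-solvent interaction parameter. Horikx's theory then compares the relative decrease in crosslink density against the measured soluble fraction, with separate theoretical curves for random main-chain scission and for crosslink scission, which is the basis on which the authors conclude that sulfur crosslinks were broken without degradation of the main chains.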
Procedia PDF Downloads 385269 Psychosocial Support in Disaster Situations in the Philippines and Indonesia: A Critical Literature Review
Authors: Fuad Hamsyah
Abstract:
Over the last two decades, major disasters have occurred in the Philippines and Indonesia, two countries located in the Pacific Ring of Fire. In Southeast Asian countries, the provision of psychosocial support faces various constraints, such as the limited number of mental health professionals and limited knowledge about providing psychosocial support for disaster survivors. Yet after the tsunami disaster in 2004, many Asian countries began to develop policies on the provision of psychosocial interventions as an effort toward future disaster preparedness. In addition, mental health professionals have to consider local cultural values and beliefs in order to provide people with effective psychosocial support, since cultural values and beliefs play a significant role in the diversity of psychological distress, in symptom formation, and in people's ways of seeking psychological assistance. This study is a critical literature review of 130 relevant selected documents. The IASC MHPSS guideline is used as the research framework for the critical analysis. The purpose of this study is to conduct a critical analysis of mental health and psychosocial support provision in the Philippines and Indonesia with three main objectives: 1) to describe the strengths, weaknesses, and challenges in the psychosocial support given by public and private organizations in emergency disaster settings in the Philippines and Indonesia, 2) to compare psychosocial support practices between the Philippines and Indonesia and to identify good practices in these countries, 3) to learn how cultural values influence the implementation of psychosocial support in emergency disaster settings. This research indicated that almost every function from the IASC MHPSS guidelines has been implemented effectively in the Philippines and Indonesia, yet not in every detail of the guidelines. Several similarities and differences are also identified in this study based on the IASC MHPSS guidelines as the analysis framework. Further, both countries have some good practices that can serve as examples of comprehensive psychosocial support implementation. Apart from the IASC MHPSS guideline, cultural values and beliefs in the Philippines such as the kanya-kanya syndrome, pakikipakapwa, utang na loob, bahala na, and pagkaya are indicated as cultural values that strongly influence people's attitudes and behavior in disaster situations, while in Indonesia, cultural values such as sabar and nrimo become two important attitudes for coping with disaster situations.Keywords: disaster, Indonesia, psychosocial support, Philippines
Procedia PDF Downloads 395268 Assessing Gender Mainstreaming Practices in the Philippine Basic Education System
Authors: Michelle Ablian Mejica
Abstract:
Female drop-outs due to teenage pregnancy and gender-based violence in schools are two of the most contentious current gender-related issues faced by the Department of Education (DepEd) in the Philippines. The country has adopted gender mainstreaming as the main strategy to eliminate gender inequalities in all aspects of society, including education, since 1990. This research examines the extent and magnitude to which gender mainstreaming is implemented in basic education from the national to the school level. It seeks to discover the challenges faced by the central and field offices, particularly by the principals who serve as decision-makers in the schools where teaching and learning take place and where opportunities exist that may aggravate, conform and transform gender inequalities and hierarchies. The author conducted surveys and interviews among 120 elementary and secondary principals in the Division of Zambales as well as selected gender division and regional focal persons within Region III - Central Luzon. The study argues that DepEd needs to review, strengthen and revitalize its gender mainstreaming because the efforts do not penetrate the schools and are not enough to lessen or eliminate gender inequalities within them. The study found that the major challenges in the implementation of gender mainstreaming are as follows: the absence of a national gender-responsive education policy framework, lack of gender-responsive assessment and monitoring tools, poor quality of gender and development (GAD) related training programs, and poor data collection and analysis mechanisms. Furthermore, other constraints include poor coordination mechanisms among implementing agencies, lack of a clear implementation strategy, ineffective or poor utilization of the GAD budget, and lack of teacher- and learner-centered GAD activities. The paper recommends a review of the department's gender mainstreaming efforts to align them with the mandate of the agency and provide a gender-responsive teaching and learning environment. It suggests that the focus must be on the formulation of gender-responsive policies and programs, improvement of existing mechanisms, and the conduct of trainings focused on gender analysis, budgeting and impact assessment, not only for principals and the GAD focal point system but also for parents and other school stakeholders.Keywords: curriculum and instruction, gender analysis, gender budgeting, gender impact assessment
Procedia PDF Downloads 344267 Re-Development and Lost Industrial History: Darling Harbour of Sydney
Authors: Ece Kaya
Abstract:
Urban waterfront re-development has been a well-established phenomenon internationally since the 1960s. In cities throughout the world, old industrial waterfront land is being redeveloped into luxury housing, offices, tourist attractions, cultural amenities and shopping centres. These developments are intended to attract high-income residents, tourists and investors to the city. As urban waterfronts are iconic places for cities and catalysts for further development, they are often referred to as flagship projects. In Sydney, the re-development of the industrial waterfront has been underway since the 1980s with the Darling Harbour Project. The Darling Harbour waterfront used to be the main arrival and landing place for commercial and industrial shipping until the 1970s. Its urban development has continued since the establishment of the city. It was developed as a major industrial and goods-handling precinct in 1812, and this use continued until the mid-1970s. After becoming a redundant industrial waterfront, the area was ripe for re-development in 1984. Darling Harbour is now one of the world's most fascinating waterfront leisure and entertainment destinations, and its transformation has been considered a success story. This paper takes issue with that assessment. Data collection was carried out using extensive archival document analysis. The data were obtained from the Australian Institute of Architects, City of Sydney Council Archive, Parramatta Heritage Office, Historic Houses Trust, National Trust and University of Sydney libraries, the State Archive, the State Library and the Sydney Harbour Foreshore Authority Archives. Public documents, primarily newspaper articles and design plans, were analysed to identify possible differences in motives and to determine the process of implementation of the waterfront redevelopments. It was also important to obtain historical photographs and descriptions to understand how the waterfront had been altered. Site maps from different time periods were identified to understand what kind of changes happened to the urban landscape and how the developments affected the areas. Newspaper articles and editorials were examined in order to discover what aspects of the projects reflected the history and heritage. The thematic analysis of the archival data helped determine that Darling Harbour is a historically important place, as it represented a focal point for Sydney's industrial growth and the cradle of industrial development in European Australia. It was found that the development area was designated to be transformed into a place for tourist, educational, recreational, entertainment, cultural and commercial activities, and as a result little evidence remained of its industrial past. This paper aims to discuss the industrial significance of Darling Harbour and to explain the changes to its industrial landscape. What is absent now is the layer of its history that gives layers of meaning to the place, so its historic industrial identity is effectively lost.Keywords: historical significance, industrial heritage, industrial waterfront, re-development
Procedia PDF Downloads 301