Search results for: traffic noise
134 Slope Instability Study Using Kinematic Analysis and Lineament Density Mapping along a Part of National Highway 58, Uttarakhand, India
Authors: Kush Kumar, Varun Joshi
Abstract:
Slope instability is a major problem of mountainous regions, especially in parts of the Indian Himalayan Region (IHR). Ongoing tectonic activity, rugged topography, steep slopes, heavy precipitation, toe erosion, structural discontinuities, and deformation are the main triggering factors of landslides in this region. Besides the loss of life, property, and infrastructure caused by a landslide, it also results in various environmental problems, e.g., degradation of slopes and land use, deterioration of river quality through increased sediment, and loss of well-established vegetation. The Indian state of Uttarakhand, being part of the active Himalayas, also faces numerous cases of slope instability. Therefore, the vulnerable landslide zones need to be delineated to safeguard against such losses. The study area lies in the Garhwal and Tehri Garhwal districts of Uttarakhand state along National Highway 58, which is a strategic road and also connects the four important sacred pilgrimage sites (Char Dham) of India. The lithology of these areas mainly comprises sandstone and quartzite of the Chakrata formation and phyllites of the Chandpur formation. The greywacke and sandstone of the Saknidhar formation dip northerly and are overlain by phyllite of the Chandpur formation. The present research incorporates lineament density mapping using remote sensing satellite data, supplemented by a detailed field study via kinematic analysis. The DEM data of ALOS PALSAR (12.5 m resolution) is resampled to 10 m resolution and used for preparing various thematic maps such as slope, aspect, drainage, hill shade, lineament, and lineament density using ArcGIS 10.6 software. Furthermore, detailed field mapping, including structural and geomorphological mapping, is integrated for kinematic analysis of the slopes using Rocscience Dips 6.0 software. Kinematic analysis of 40 locations was carried out, among which 15 show planar failure, five show wedge failure, and the remaining 20 show no failure.
The lineament density map is overlaid with the locations of unstable slopes inferred from kinematic analysis to examine the association between the field information and the remote sensing derived information, and significant compatibility was observed. With the help of the present study, location-specific mitigation measures could be suggested. These mitigation measures would help minimize the probability of slope instability, especially during the rainy season, and reduce disruption to road traffic.
Keywords: Indian Himalayan Region, kinematic analysis, lineament density mapping, slope instability
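The planar-failure check used in such kinematic analyses can be sketched in code. The sketch below is illustrative only, not the authors' workflow (the function name, the ±20° lateral limit, and the test values are our assumptions); it encodes the common Markland-type conditions: the discontinuity must dip less steeply than the slope face, more steeply than the friction angle, and in roughly the same direction as the face.

```python
def planar_failure_possible(slope_dip, slope_dip_dir,
                            joint_dip, joint_dip_dir,
                            friction_angle, lateral_limit=20.0):
    """Markland-type test for planar sliding (all angles in degrees).

    Conditions: friction angle < joint dip < slope dip, and the joint
    dip direction within +/- lateral_limit of the slope dip direction.
    """
    daylights = friction_angle < joint_dip < slope_dip
    # Smallest angular difference between the two dip directions (0-180).
    diff = abs(slope_dip_dir - joint_dip_dir) % 360.0
    diff = min(diff, 360.0 - diff)
    return daylights and diff <= lateral_limit

# Joint dipping 40 deg towards 085, slope face 60 deg towards 090,
# friction angle 30 deg: planar failure is kinematically feasible.
print(planar_failure_possible(60, 90, 40, 85, 30))  # → True
```

In practice, each of the 40 field locations would supply its own slope and discontinuity orientations, and wedge failure requires a separate check on the intersection line of two joint sets.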
Procedia PDF Downloads 138
133 Transformation of the Relationship Between Tourism Activities and Residential Environment in the Center of a Historical Suburban City of a Tourism Metropolis: A Case Study of Naka-Uji Area, Uji City, Kyoto Prefecture
Authors: Shuailing Cui, Nakajima Naoto
Abstract:
The tourism industry has experienced significant growth worldwide since the end of World War II. Tourists are drawn to suburban areas during weekends and holidays to explore historical and cultural heritage sites. Since the 1970s, there has been a resurgence in population growth in metropolitan areas, which has fueled the demand for suburban tourism and facilitated its development. The construction of infrastructure, such as railway lines and arterial roads, has also supported the growth of tourism. Tourists engaging in various activities can have a significant impact on the destinations they visit. Tourism has not only affected the local economy but has also begun to alter the social structures, culture, and lifestyle of the destinations visited. In addition, the growing number of tourists has affected the local commercial structure and daily life of suburban residents. Therefore, there is a need to figure out how tourism activities influence the residential environment of the tourist destination and how this influence changes over time. This study aims to analyze the transformation of the relationship between tourism activities and the residential environment in the Naka-Uji area of Uji City, Kyoto Prefecture. Specifically, it investigates how the growth of the tourism industry has influenced the local residential environment and how this influence has changed over time. The findings of the study indicate that the growth of tourism in the Naka-Uji area has had both positive and negative effects on the local residential environment. On the one hand, the tourism industry has created job opportunities and improved local economic conditions. On the other hand, it has also caused environmental degradation, particularly in terms of increased traffic and the construction of parking lots. The study also found that the development of the tourism industry has influenced the social structures, culture, and lifestyle of residents. 
For instance, the increase in the number of tourists has led to changes in the commercial structure and daily life of suburban residents. The study highlights the importance of collaboration and shared benefits among stakeholders in tourism development, particularly in terms of preserving the cultural and natural heritage of tourist destinations while promoting sustainable development. Overall, this study contributes to the growing body of research on the impact of tourism on suburban areas. It provides insights into the complex relationships between tourism, the natural environment, the local economy, and residential life and emphasizes the need for sustainable tourism development in suburban areas. The findings of this study have important implications for policymakers, urban planners, and other stakeholders involved in promoting regional revitalization and sustainable tourism development.
Keywords: tourism, residential environment, suburban area, metropolis
Procedia PDF Downloads 96
131 COVID Prevention and Working Environmental Risk Prevention and Business Continuity among SMEs in Selected Districts in Sri Lanka
Authors: Champika Amarasinghe
Abstract:
Introduction: The COVID-19 pandemic hit the Sri Lankan economy hard during 2021. More than 65% of the Sri Lankan workforce is engaged in small and medium-scale businesses, which undoubtedly had to struggle for survival and business continuity during the pandemic. Objective: To assess the association between adherence to the new norms during the COVID-19 pandemic and the maintenance of healthy working environmental conditions for business continuity. A cross-sectional study was carried out to assess the OSH status and the adequacy of COVID-19 preventive strategies among 200 SMEs in two selected districts in Sri Lanka. These two districts were selected for having the highest concentration of SMEs. The sample size was calculated, and probability proportionate to size sampling was used to select SMEs registered with the Small and Medium Scale Development Authority. An interviewer-administered questionnaire was used to collect the data, and an OSH risk assessment was carried out by a team of experts to assess the OSH status of these industries. Results: According to the findings, more than 90% of the employees in these industries had moderate awareness of COVID-19 and preventive strategies such as the importance of mask use, hand sanitizing practices, and distance maintenance, but only forty percent of them adhered to these practices. Furthermore, only thirty-five percent of the employees and employers in these SMEs knew the reasons behind the new norms, which may explain the reluctance to implement these strategies and to adhere to the new norms in this sector. The OSH risk assessment findings revealed that workplace organization for maintaining distance between two employees was poor due to inadequate space in these entities. More than fifty-five percent of the SMEs had proper ventilation and lighting facilities.
More than eighty-five percent of these SMEs had poor electrical safety measures. Furthermore, eighty-two percent of them had not maintained fire safety measures. Eighty-five percent of workers were exposed to high noise levels and chemicals without using any personal protective equipment, nor had any engineering controls been implemented. Floor conditions were poor, and records of occupational accidents and occupational diseases were not maintained. Conclusions: Based on the findings, awareness sessions were carried out by NIOSH. Six physical training sessions and continuous online training sessions were carried out to overcome these issues, which made a drastic change in the working environments and ended with one hundred percent implementation of the COVID-19 preventive strategies. This in turn improved worker participation in the businesses, reduced absenteeism, improved business opportunities, and allowed the businesses to continue without interruption during the third wave of COVID-19 in Sri Lanka.
Keywords: working environment, Covid 19, occupational diseases, occupational accidents
Procedia PDF Downloads 88
130 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability
Authors: Akshay B. Pawar, Rohit Y. Parasnis
Abstract:
Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals as compared to ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV. They have also compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval-based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in or alternatives to the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method) that has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method), and the tangent-intersection interval method (TI method), were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (consisting of both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through certain digital filters.
After filtering, the following HRV parameters were computed from PPG using each of the five methods and also from ECG using the gold standard method: time domain parameters (SDNN, pNN50, and RMSSD) and frequency domain parameters (very-low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF), and total power (TP)). Besides, Poincaré plots were also plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV, and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot
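The time-domain HRV parameters named above (SDNN, RMSSD, pNN50) have standard definitions and are computed the same way whether the intervals come from ECG RR series or PPG pulse-cycle series. A minimal sketch, not the authors' code, assuming a clean, artifact-free interval series in milliseconds:

```python
import math

def time_domain_hrv(rr_ms):
    """Compute SDNN, RMSSD, and pNN50 from a list of RR intervals (ms).

    SDNN: standard deviation of all intervals.
    RMSSD: root mean square of successive differences.
    pNN50: percentage of successive differences exceeding 50 ms.
    """
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return sdnn, rmssd, pnn50
```

Feeding this function interval series extracted by each of the five PPG peak-detection methods and by the ECG reference reproduces the kind of comparison described in the abstract.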
Procedia PDF Downloads 324
129 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring
Authors: Zheng Wang, Zhenhong Li, Jon Mills
Abstract:
Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring towards various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise from processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access-memory (RAM), the delay of displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and it processes continuous GBSAR images unit by unit. Images within a window form a basic unit. By taking this strategy, the RAM requirement is reduced to only one unit of images and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected as it keeps temporarily-coherent pixels which are present only in some certain units but not in the whole observation period. The chain supports real-time processing of the continuous data and the delay of creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images in a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. 
The temporal-averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially-coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the level of a few sub-millimetres are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring of a wide range of scientific and practical applications.
Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring
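The unit-by-unit strategy of the first chain, bounding RAM to one window of images and emitting a result per unit without waiting for the whole dataset, can be sketched generically. This is our illustration, not the package itself; `process_unit` stands in for the interferometric processing applied to each unit, and non-overlapping units are assumed:

```python
from collections import deque

def stream_in_units(images, unit_size, process_unit):
    """Process an unbounded image stream window by window.

    Only `unit_size` images are held in memory at once, so the chain can
    in principle run over an arbitrarily long acquisition. Each completed
    unit yields a result (e.g. a displacement map) as soon as it is full.
    """
    window = deque(maxlen=unit_size)
    for img in images:
        window.append(img)
        if len(window) == unit_size:
            yield process_unit(list(window))
            window.clear()  # non-overlapping units

# Toy example: "images" are numbers; a unit's result is its mean.
results = list(stream_in_units(range(10), 5, lambda u: sum(u) / len(u)))
print(results)  # → [2.0, 7.0]
```

Temporarily coherent pixels can then be tracked per unit, since a pixel only needs to stay coherent within its own window rather than over the whole observation period.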
Procedia PDF Downloads 161
128 Analysis of Superconducting and Optical Properties in Atomic Layer Deposition and Sputtered Thin Films for Next-Generation Single-Photon Detectors
Authors: Nidhi Choudhary, Silke A. Peeters, Ciaran T. Lennon, Dmytro Besprozvannyy, Harm C. M. Knoops, Robert H. Hadfield
Abstract:
Superconducting Nanowire Single Photon Detectors (SNSPDs) have become leading devices in quantum optics and photonics, known for their exceptional efficiency in detecting single photons from ultraviolet to mid-infrared wavelengths with minimal dark counts, low noise, and reduced timing jitter. Recent advancements in materials science have focused attention on refractory metal thin films such as NbN and NbTiN to enhance the optical properties and superconducting performance of SNSPDs, opening the way for next-generation detectors. These films have been deposited by several different techniques, such as atomic layer deposition (ALD), PlasmaPro advanced plasma processing (ASP), and magnetron sputtering. The fabrication flexibility of these films enables precise control over morphology, crystallinity, stoichiometry, and optical properties, which is crucial for optimising SNSPD performance. Hence, it is imperative to study the optical and superconducting properties of these materials across a wide range of wavelengths. This study provides a comprehensive analysis of the optical and superconducting properties of some important materials in this category (NbN, NbTiN) deposited by different methods. Using variable-angle spectroscopic ellipsometry (VASE), we measured the refractive index, extinction coefficient, and absorption coefficient across a wide wavelength range (200-1700 nm) to enhance light confinement for optical communication devices. The critical temperature and sheet resistance were measured using a four-probe method in a custom-built, cryogen-free cooling system with a Sumitomo RDK-101D cold head and CNA-11C compressor. Our results indicate that ALD-deposited NbN shows a higher refractive index and extinction coefficient in the near-infrared region (~1500 nm) than sputtered NbN of the same thickness. Further, the optical properties of ASP-deposited NbTiN were analysed at different substrate bias voltages and different thicknesses.
The substrate bias analysis indicates that the maximum values of the refractive index and extinction coefficient were observed for substrate biasing of 50-80 V across the investigated range of 0-150 V. The optical properties of sputtered NbN films are also investigated for different substrate temperatures during deposition (100 °C-500 °C). We find that the higher the substrate temperature during deposition, the higher the refractive index and extinction coefficient. Among all our superconducting thin films, ALD-deposited NbN possesses the highest critical temperature (~12 K), compared to sputtered (~8 K) and ASP-deposited films (~5 K).
Keywords: optical communication, thin films, superconductivity, atomic layer deposition (ALD), niobium nitride (NbN), niobium titanium nitride (NbTiN), SNSPD, superconducting detector, photon-counting
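A common way to extract a critical temperature from four-probe resistance-versus-temperature data is the midpoint criterion: Tc is taken where the resistance crosses a set fraction (often 50%) of its normal-state value. The sketch below is ours, not tied to the authors' setup; the midpoint criterion and linear interpolation between measurement points are assumptions:

```python
def critical_temperature(temps_k, resistances, fraction=0.5):
    """Estimate Tc as the temperature where resistance crosses
    `fraction` of its normal-state (highest-temperature) value.

    `temps_k` must be sorted in increasing order; the crossing is
    located by linear interpolation within the transition step.
    """
    r_target = fraction * resistances[-1]
    for i in range(1, len(temps_k)):
        r_lo, r_hi = resistances[i - 1], resistances[i]
        if r_lo <= r_target <= r_hi:
            t_lo, t_hi = temps_k[i - 1], temps_k[i]
            return t_lo + (t_hi - t_lo) * (r_target - r_lo) / (r_hi - r_lo)
    return None  # no transition found in the measured range

# Synthetic R(T) sweep with a sharp transition between 12 K and 16 K.
print(critical_temperature([4, 8, 10, 12, 14, 16, 20],
                           [0, 0, 0, 5, 50, 100, 100]))  # → 14.0
```

Other conventions (onset temperature, 90%/10% transition width) follow the same pattern with different `fraction` values.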
Procedia PDF Downloads 29
127 Temperature Contour Detection of Salt Ice Using Color Thermal Image Segmentation Method
Authors: Azam Fazelpour, Saeed Reza Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
The study uses a novel image analysis based on thermal imaging to detect temperature contours created on a salt ice surface during transient phenomena. Thermal cameras detect objects by using their emissivities and IR radiance. The ice surface temperature is not uniform during transient processes: the temperature starts to increase from the boundary of the ice towards its center. Thermal cameras are able to report temperature changes on the ice surface at every individual moment. Various contours, which show different temperature areas, appear in the ice surface picture captured by a thermal camera. Identifying the exact boundary of these contours is valuable for facilitating ice surface temperature analysis. Image processing techniques are used to extract each contour area precisely. In this study, several pictures are recorded while the temperature is increasing throughout the ice surface. Some pictures are selected to be processed at a specific time interval. An image segmentation method is applied to the images to determine the contour areas. Color thermal images are used to exploit the main information. The red, green, and blue elements of the color images are investigated to find the best contour boundaries. Image enhancement and noise removal algorithms are applied to obtain high-contrast, clear images. A novel edge detection algorithm based on differences in the color of the pixels is established to determine contour boundaries. In this method, the edges of the contours are obtained according to the properties of the red, blue, and green image elements. The color image elements are assessed considering their information content: useful elements are retained for processing, and useless elements are discarded to reduce computation time. Neighboring pixels with close intensities are assigned to one contour, and differences in intensities determine boundaries. The results are then verified by conducting experimental tests.
An experimental setup is built using ice samples and a thermal camera. To observe the ice contours with the thermal camera, the samples, which are initially at -20 °C, are contacted with a warmer surface. Pictures are captured for 20 seconds. The method is applied to five images, which are captured at 5-second intervals. The study shows the green image element carries no useful information; therefore, the boundary detection method is applied to the red and blue image elements. In this case study, the results indicate that the proposed algorithm detects the boundaries more effectively than other edge detection methods such as Sobel and Canny. Comparison between the contour detection of this method and the temperature analysis, which gives the real boundaries, shows good agreement. This color image edge detection method is applicable to other similar cases according to their image properties.
Keywords: color image processing, edge detection, ice contour boundary, salt ice, thermal image
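The neighbor-difference idea (pixels with close intensities join one contour; a large difference to a neighbor marks a boundary) can be sketched on a single color channel. This is our simplified illustration, not the authors' algorithm; the threshold value and the right/lower-neighbor scan are assumptions:

```python
def color_difference_edges(channel, threshold):
    """Mark a pixel as an edge when its intensity difference to the
    right or lower neighbor exceeds `threshold`; neighboring pixels
    with close intensities are treated as one contour area.

    `channel` is a 2-D list of intensities (e.g. the red element of a
    color thermal image); returns a same-shape boolean edge map.
    """
    rows, cols = len(channel), len(channel[0])
    edges = [[False] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if j + 1 < cols and abs(channel[i][j] - channel[i][j + 1]) > threshold:
                edges[i][j] = True
            if i + 1 < rows and abs(channel[i][j] - channel[i + 1][j]) > threshold:
                edges[i][j] = True
    return edges

# Toy "red channel": two flat regions separated by a vertical boundary.
red = [[10, 10, 200, 200],
       [10, 10, 200, 200]]
edge_map = color_difference_edges(red, threshold=50)
```

In the study's setting, this scan would run on the red and blue elements only, since the green element was found to carry no useful information.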
Procedia PDF Downloads 314
126 Application of Acoustic Emissions Related to Drought Can Elicit Antioxidant Responses and Capsaicinoids Content in Chili Pepper Plants
Authors: Laura Helena Caicedo Lopez, Luis Miguel Contreras Medina, Ramon Gerardo Guevara Gonzales, Juan E. Andrade
Abstract:
In this study, we evaluated the effect of three different hydric stress conditions, low (LHS), medium (MHS), and high (HHS), on capsaicinoid content and enzyme regulation of C. annuum plants. Five main peaks were detected using a 2 Hz resolution laser vibrometer (Polytec-B&K). These peaks, or "characteristic frequencies", were used as acoustic emission (AE) treatments, transforming these signals into audible sound with the frequency (Hz) content of each hydric stress. Capsaicinoids (CAPs) are the main secondary metabolites of chili pepper plants and are known to increase during hydric stress conditions or short drought periods. The AE treatments were applied at two plant stages. The first was the pre-anthesis stage, to evaluate the expression of genes encoding enzymes responsible for diverse metabolic activities of C. annuum plants: the antioxidant responses, such as peroxidase (POD) and superoxide dismutase (Mn-SOD); phenylalanine ammonia-lyase (PAL), involved in the biosynthesis of phenylpropanoid compounds; chalcone synthase (CHS), related to natural defense mechanisms; and the species-specific aquaporin (CAPIP-1), which regulates the flow of water into and out of cells. The second stage was at 40 days after flowering (DAF), to evaluate the biochemical effect of AEs related to hydric stress on capsaicinoid production. These two experiments were conducted to identify the molecular responses of C. annuum plants to AE and to determine whether AEs could elicit an increase in capsaicinoid content after one week of exposure to the AE treatments. The results show that all AE treatment signals (LHS, MHS, and HHS) were significantly different compared to the non-acoustic-emission control (NAE). Also, the AEs induced up-regulation of POD (~2.8, 2.9, and 3.6, respectively). The expression of the other genes was treatment-dependent: the HHS induced an overexpression of Mn-SOD (~0.23) and PAL (~0.33).
The MHS, in turn, only induced an up-regulation of the CHS gene (~0.63). On the other hand, the CAPIP-1 gene was down-regulated by all AE treatments, LHS, MHS, and HHS (~ -2.4, -0.43, and -6.4, respectively). Likewise, the down-regulation showed particularities depending on the treatment: LHS and MHS induced down-regulation of the SOD gene (~ -1.26 and -1.20, respectively) and PAL (-4.36 and 2.05, respectively). Correspondingly, the LHS and HHS showed the same tendency in the CHS gene (~ -1.12 and -1.02, respectively). Regarding the elicitation effect of AE on capsaicinoid content, additional treatment controls were included: a white noise treatment (WN) to prove the frequency-selectiveness of the signals, and a hydric-stressed group (HS) to compare the CAPs content. Our findings suggest that WN and NAE did not differ statistically. Conversely, HS and all AE treatments induced a significant increase of capsaicin (Cap) and dihydrocapsaicin (DCap) after one week of treatment. Specifically, the HS plants showed an increase of 8.33 times compared to the NAE and WN treatments, and 1.4 times higher than the MHS, which was the AE treatment with the largest induction of capsaicinoids among treatments (5.88 times relative to the controls).
Keywords: acoustic emission, capsaicinoids, elicitors, hydric stress, plant signaling
Procedia PDF Downloads 171
125 Coupling Strategy for Multi-Scale Simulations in Micro-Channels
Authors: Dahia Chibouti, Benoit Trouette, Eric Chenier
Abstract:
With the development of micro-electro-mechanical systems (MEMS), understanding fluid flow and heat transfer at the micrometer scale is crucial. In the case where the flow characteristic length scale is narrowed to around ten times the mean free path of gas molecules, the classical fluid mechanics and energy equations are still valid in the bulk flow, but particular attention must be paid to the gas/solid interface boundary conditions. Indeed, in the vicinity of the wall, on a thickness of about the mean free path of the molecules, called the Knudsen layer, the gas molecules are no longer in local thermodynamic equilibrium. Therefore, macroscopic models based on the continuity of velocity, temperature and heat flux jump conditions must be applied at the fluid/solid interface to take this non-equilibrium into account. Although these macroscopic models are widely used, the assumptions on which they depend are not necessarily verified in realistic cases. In order to get rid of these assumptions, simulations at the molecular scale are carried out to study how molecule interaction with walls can change the fluid flow and heat transfers at the vicinity of the walls. The developed approach is based on a kind of heterogeneous multi-scale method: micro-domains overlap the continuous domain, and coupling is carried out through exchanges of information between both the molecular and the continuum approaches. In practice, molecular dynamics describes the fluid flow and heat transfers in micro-domains while the Navier-Stokes and energy equations are used at larger scales. In this framework, two kinds of micro-simulation are performed: i) in bulk, to obtain the thermo-physical properties (viscosity, conductivity, ...) as well as the equation of state of the fluid, ii) close to the walls to identify the relationships between the slip velocity and the shear stress or between the temperature jump and the normal temperature gradient. 
The coupling strategy relies on an implicit formulation of the quantities extracted from the micro-domains. Using the results of the molecular simulations, a Bayesian regression is performed in order to build continuous laws giving the behavior of the physical properties, the equation of state, and the slip relationships, as well as their uncertainties. The latter make it possible to set up a learning strategy to optimize the number of micro-simulations. In the present contribution, the first results regarding this coupling and the associated learning strategy are illustrated through parametric studies of convergence criteria, choice of basis functions, and noise of input data. Anisothermal flows of a Lennard-Jones fluid in micro-channels are finally presented.
Keywords: multi-scale, microfluidics, micro-channel, hybrid approach, coupling
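The learning loop described above (fit continuous laws to micro-simulation samples, attach uncertainties, and spend new micro-simulations where the surrogate is least trusted) can be sketched with a deliberately simple surrogate. Ordinary least squares and a distance-to-data uncertainty proxy stand in here for the Bayesian regression of the paper; all names and values are illustrative:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b to the sampled points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def next_sample_point(xs, candidates):
    """Crude uncertainty proxy: the surrogate is least trusted at the
    candidate farthest from any existing micro-simulation sample."""
    return max(candidates, key=lambda c: min(abs(c - x) for x in xs))

# Micro-simulation results, e.g. slip velocity sampled vs. shear stress.
xs, ys = [0.0, 1.0, 4.0], [0.1, 2.1, 8.0]
a, b = fit_line(xs, ys)
# Request a new micro-simulation where the surrogate is most uncertain.
print(next_sample_point(xs, candidates=[0.5, 2.0, 3.0, 4.5]))  # → 2.0
```

A genuine Bayesian regression additionally yields a posterior predictive variance, so the "most uncertain" query point comes from the model itself rather than from a distance heuristic.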
Procedia PDF Downloads 167
124 Stable Diffusion, Context-to-Motion Model to Augmenting Dexterity of Prosthetic Limbs
Authors: André Augusto Ceballos Melo
Abstract:
The context-to-motion model is designed to facilitate the recognition of congruent prosthetic movements: context-to-motion translations are guided by images, verbal prompts, the user's nonverbal communication (facial expressions, gestures, paralinguistics), scene context, and object recognition. Although developed for prosthetic limbs as assistive technology, the approach can also be applied to other tasks, such as walking. The context-to-motion model is a machine learning approach designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can help to improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can improve the naturalness and smoothness of prosthetic limb movements, which can make them more comfortable and easier to use. Second, it can improve the accuracy and precision of prosthetic limb movements, which is particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback.
This means that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary first to collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time: the model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. Hand gestures and body language influence communication and social interaction, so the approach offers users a way to maximize their quality of life, social interaction, and gesture communication.
Keywords: stable diffusion, neural interface, smart prosthetic, augmenting
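The train-then-control loop described above, collect (observation, control) examples, train a model, then drive the actuators in real time, can be sketched as a supervised mapping from sensory observations to control signals. This is a minimal numpy sketch using ridge regression as a stand-in for the learned context-to-motion model; the sensor and actuator dimensions are invented for illustration.

```python
import numpy as np

class MotionModel:
    """Maps sensory observations to control signals, learned from examples.

    A linear ridge model stands in here for the learned context-to-motion
    mapping; a real system would use a richer model.
    """

    def __init__(self, reg=1e-3):
        self.reg = reg
        self.W = None

    def fit(self, observations, controls):
        X = np.asarray(observations)          # (n_samples, n_sensors)
        Y = np.asarray(controls)              # (n_samples, n_actuators)
        A = X.T @ X + self.reg * np.eye(X.shape[1])
        self.W = np.linalg.solve(A, X.T @ Y)  # regularized least squares

    def control(self, observation):
        """Real-time step: one sensory reading -> one actuator command."""
        return np.asarray(observation) @ self.W

# Toy dataset: controls are a fixed linear function of 4 sensor channels.
rng = np.random.default_rng(1)
W_true = rng.normal(size=(4, 2))              # 4 sensors -> 2 actuators
obs = rng.normal(size=(200, 4))
ctrl = obs @ W_true + rng.normal(0.0, 0.01, size=(200, 2))

model = MotionModel()
model.fit(obs, ctrl)
command = model.control(obs[0])
```

In deployment the `control` step would run once per sensor frame, closing the loop between the limb's sensors and its actuators.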
Procedia PDF Downloads 101
123 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada
Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer
Abstract:
Sydney Harbour, Nova Scotia has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada’s largest contaminated sites. Since its closure, there have been several attempts to remediate this former industrial site, and finally, in 2004, the governments of Canada and Nova Scotia committed to remediating the site to reduce potential ecological and human health risks to the environment. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of the remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multiple-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment and, in particular, harbour sediments. Additionally, longer-term natural sediment recovery rates of select contaminants predicted for the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality in addition to remedial efforts were evaluated; these included a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, sediment grab and gravity corer, were also compared to evaluate the detection of subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remains unchanged, although at much lower concentrations than previously reported, due to natural recovery.
Measurements of sediment indicator parameter concentrations confirmed that natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, most measured sediment parameters showed little temporal variability during three years of remediation compared to baseline, even when different sampling methodologies were used, except for significant increases in total PAH concentrations detected during one year of remediation monitoring. The data confirmed the effectiveness of the mitigation measures implemented during construction with respect to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour.
Keywords: contaminated sediment, monitoring, recovery, remediation
Procedia PDF Downloads 236
122 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics
Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez
Abstract:
In recent years, Mexico’s population has seen a rise in various negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, which have an important impact on a person’s physical, mental and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person’s emotional state, can be applied in an effort to decrease these negative effects. Using different non-invasive physiological sensors, such as EEG, together with luminosity and face recognition, we gather information about the subject’s current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained is statistically analyzed to filter only the specific groups of information that relate to the subject’s emotions and the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level and noise. Finally, a neural network based control algorithm is fed this data in order to close the feedback loop and automate the modification of the environment variables and the audiovisual content shown, so that these changes can positively alter the subject’s emotional state. During the research, it was found that the light color was directly related to the type of impact generated by the audiovisual content on the subject’s emotional state. Red illumination increased the impact of violent images, and green illumination, along with relaxing images, decreased the subject’s levels of anxiety. Specific differences between men and women were found as to which types of images generated a greater impact in either gender.
The population sample was mainly constituted of college students, whose data analysis showed a decreased sensibility to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give a better insight into the possibilities of emotional domotics and the applications that can be created to improve performance in people’s lives. The objective of this research is to create a positive impact through the application of technology to everyday activities; nonetheless, an ethical problem arises, since the same techniques could also be applied to control a person’s emotions and shift their decision making.
Keywords: data analysis, emotional domotics, performance improvement, neural network
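The core of the control loop, a neural network mapping filtered sensor features to environment settings, could be prototyped roughly as below. This is a numpy-only sketch trained on synthetic data; the feature names, dimensions and training setup are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Inputs: filtered sensor features (e.g. EEG bands, luminosity, noise level).
# Outputs: environment settings scaled to (0, 1) (e.g. RGB channels, temperature).
# All names and dimensions are illustrative, not those of the actual system.
n_in, n_hidden, n_out = 5, 8, 3

W1 = rng.normal(0.0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)

def forward(X):
    H = np.tanh(X @ W1 + b1)                  # hidden layer
    Y = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))  # sigmoid keeps settings in (0, 1)
    return Y, H

# Synthetic training pairs standing in for the filtered experimental data.
X = rng.normal(size=(300, n_in))
T = 1.0 / (1.0 + np.exp(-(X @ (0.5 * rng.normal(size=(n_in, n_out))))))

Y0, _ = forward(X)
mse_init = float(np.mean((Y0 - T) ** 2))

lr = 0.5
for _ in range(2000):                          # plain batch gradient descent
    Y, H = forward(X)
    dZ = (Y - T) * Y * (1.0 - Y) / len(X)      # gradient at the output pre-activation
    dH = (dZ @ W2.T) * (1.0 - H ** 2)          # back-propagated to the hidden layer
    W2 -= lr * (H.T @ dZ); b2 -= lr * dZ.sum(0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(0)

Y, _ = forward(X)
mse = float(np.mean((Y - T) ** 2))
```

The sigmoid output layer is a convenient design choice here: every predicted setting stays in a bounded range that can be rescaled to real actuator limits.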
Procedia PDF Downloads 140
121 Vertical Village Buildings as Sustainable Strategy to Re-Attract Mega-Cities in Developing Countries
Authors: M. J. Eichner, Y. S. Sarhan
Abstract:
The overall purpose of this study has been the evaluation of ‘Vertical Villages’ as a new sustainable building typology that significantly reduces the negative impacts of rapid urbanization processes in third-world capital cities. Commonly in fast-growing cities, housing and job supply, educational and recreational opportunities, as well as public transportation infrastructure, do not keep pace with rapid population growth, exposing people to noise- and emission-polluted living environments with low-quality neighborhoods and a lack of recreational areas. Like many others, Egypt’s capital city Cairo, which according to the UN faces annual population growth of up to 428,000 people, is struggling to address the general deterioration of urban living conditions. New settlement typologies and urban reconstruction approaches hardly follow sustainable urbanization principles or socio-ecologic urbanization models, with severe effects not only for inhabitants but also for the local environment and the global climate. The authors show that ‘Vertical Village’ buildings can offer a sustainable solution for increasing urban density while at the same time significantly improving living quality and the urban environment. By inserting them within high-density urban fabrics, the ecologic and socio-cultural conditions of low-quality neighborhoods can be transformed into districts that consider all needs of sustainable and social urban life. This study analyzes existing building typologies in Cairo’s «low quality - high density» districts Ard el Lewa, Dokki and Mohandesen according to benchmarks for sustainable residential buildings, identifying major problems and deficits. In three case-study design projects, the sustainable transformation potential of ‘Vertical Village’ buildings is laid out, and comparative studies show the improvement of the urban microclimate, safety, social diversity, sense of community, aesthetics, privacy, efficiency, healthiness and accessibility.
The main result of the paper is that the disadvantages of density and overpopulation in developing countries can be converted into advantages with ‘Vertical Village’ buildings, achieving attractive and environmentally friendly living environments with multiple synergies. The paper documents, based on scientific criteria, that mixed-use vertical building structures, designed according to sustainable principles of low-rise housing, can serve as an alternative for converting «low quality - high density» districts in megacities, opening a pathway for governments to achieve sustainable urban transformation goals. Neglected informal urban districts, home to millions of the poorer population groups, can be converted into healthier living and working environments.
Keywords: sustainable, architecture, urbanization, urban transformation, vertical village
Procedia PDF Downloads 124
120 Fast Detection of Local Fiber Shifts by X-Ray Scattering
Authors: Peter Modregger, Özgül Öztürk
Abstract:
Glass fabric reinforced thermoplastics (GFRT) are composite materials which combine low weight and resilient mechanical properties, rendering them especially suitable for automobile construction. However, defects in the glass fabric as well as in the polymer matrix can occur during manufacturing, which may compromise component lifetime or even safety. One type of defect is local fiber shifts, which can be difficult to detect. Recently, we have experimentally demonstrated the reliable detection of local fiber shifts by X-ray scattering based on the edge-illumination (EI) principle. EI constitutes a novel X-ray imaging technique that utilizes two slit masks, one in front of the sample and one in front of the detector, in order to simultaneously provide absorption, phase, and scattering contrast. The principle of contrast formation is as follows. The incident X-ray beam is split into smaller beamlets by the sample mask. These are distorted by the interaction with the sample, and the distortions are scaled up by the detector mask, rendering them visible to a pixelated detector. In the experiment, the sample mask is laterally scanned, resulting in Gaussian-like intensity distributions in each pixel. The area under the curve represents absorption, the peak offset represents refraction, and the width of the curve represents the scattering occurring in the sample. Here, scattering is caused by the numerous glass fiber/polymer matrix interfaces. In our recent publication, we have shown that the standard deviation of the absorption and scattering values over a selected field of view can be used to distinguish between intact samples and samples with local fiber shift defects. The quantification of defect detection performance was done using p-values (p=0.002 for absorption and p=0.009 for scattering) and contrast-to-noise ratios (CNR=3.0 for absorption and CNR=2.1 for scattering) between the two groups of samples.
This was further improved for the scattering contrast to p=0.0004 and CNR=4.2 by utilizing a harmonic decomposition analysis of the images. Thus, we concluded that local fiber shifts can be reliably detected by the X-ray scattering contrast provided by EI. However, a potential application in, for example, production monitoring requires fast data acquisition. For the results above, the scanning of the sample mask was performed over 50 individual steps, which resulted in long total scan times. In this paper, we will demonstrate that reliable detection of local fiber shift defects is also possible using single images, which implies a speed-up of the total scan time by a factor of 50. Additional performance improvements will also be discussed, which open the possibility of real-time acquisition. This contributes a vital step towards the translation of EI to industrial applications for a wide variety of materials consisting of numerous interfaces on the micrometer scale.
Keywords: defects in composites, X-ray scattering, local fiber shifts, X-ray edge illumination
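The contrast-to-noise ratio used to separate the two sample groups can be computed in a few lines. The sketch below uses one common CNR definition (absolute mean difference over the combined group spread) on synthetic per-sample statistics; the abstract does not state which exact definition the authors used, so this is an assumption.

```python
import numpy as np

def contrast_to_noise(group_a, group_b):
    """Contrast-to-noise ratio between two groups of per-sample statistics.

    One common definition is used here: the absolute difference of the group
    means divided by the combined standard deviation of the two groups.
    """
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    return abs(a.mean() - b.mean()) / np.sqrt(a.std(ddof=1) ** 2 + b.std(ddof=1) ** 2)

# Synthetic per-sample scattering statistics (e.g. the standard deviation of
# the scattering signal over the field of view) for two groups of parts.
rng = np.random.default_rng(3)
intact = rng.normal(1.00, 0.05, 10)     # intact samples
shifted = rng.normal(1.25, 0.05, 10)    # samples with a local fiber shift
cnr = contrast_to_noise(intact, shifted)
```

A CNR well above 1 means the group separation exceeds the combined within-group scatter, which is the practical criterion for reliable defect detection.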
Procedia PDF Downloads 63
119 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis
Authors: Mohamed Ali Abdennadher
Abstract:
Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-Ray Computed Tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo processing through noise reduction, thresholding, and watershed segmentation techniques to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of particle size distribution. The processed particle data then serves as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges in parallelizing FEA to high core counts, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to extend to very high core counts. 
The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments.
Keywords: X-ray computed tomography, finite element analysis, soil compression behavior, particle morphology
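The image-processing part of the pipeline, binarizing the CT volume, isolating particles, and collecting a size distribution, can be sketched on a tiny synthetic volume. The sketch uses `scipy.ndimage.label` as a simplified stand-in for the watershed segmentation step described in the abstract, so it is illustrative rather than the authors' actual script.

```python
import numpy as np
from scipy import ndimage

def particle_sizes(volume, threshold):
    """Binarize a CT volume and return the size (in voxels) of each particle.

    ndimage.label stands in for the watershed step of the actual pipeline;
    a full implementation would additionally split touching particles.
    """
    binary = volume > threshold
    labels, n = ndimage.label(binary)                 # connected components
    return ndimage.sum(binary, labels, index=np.arange(1, n + 1))

# Tiny synthetic "scan": two bright cubes on a dark background.
vol = np.zeros((20, 20, 20))
vol[2:6, 2:6, 2:6] = 1.0         # particle 1: 4 x 4 x 4 = 64 voxels
vol[10:16, 10:16, 10:16] = 1.0   # particle 2: 6 x 6 x 6 = 216 voxels
sizes = particle_sizes(vol, threshold=0.5)
```

The returned voxel counts are the raw material for the statistical particle size distribution that feeds the finite element model.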
Procedia PDF Downloads 31
118 Identifying the Effects of the Rural Demographic Changes in the Northern Netherlands: A Holistic Approach to Create Healthier Environment
Authors: A. R. Shokoohi, E. A. M. Bulder, C. Th. van Alphen, D. F. den Hertog, E. J. Hin
Abstract:
The northern region of the Netherlands has beautiful landscapes, a nice diversity of green and blue areas, and dispersed settlements. However, some recent population changes may become threats to health and wellbeing in these areas. The rural areas in the three northern provinces (Groningen, Friesland, and Drenthe) see youngsters leave the region, for which reason they are aging faster than other regions in the Netherlands. As a result, some villages have faced major population decline, leading to a loss of facilities/amenities and a decrease in accessibility and social cohesion. Those who still live in these villages are relatively old, low educated and on low incomes. To develop a deeper understanding of the health status of the people living in these areas, and to help them improve their living environment, the GO!-method is being applied in this study. This method has been developed by the National Institute for Public Health and the Environment (RIVM) of the Netherlands and is inspired by Machteld Huber's broad definition of health, the ability to adapt and to self-manage in the face of the physical, emotional and social challenges of life, while paying extra attention to vulnerable groups. A healthy living environment is defined as an environment that residents find pleasant and that encourages and supports healthy behavior. The GO!-method integrates six domains that constitute a healthy living environment: health and lifestyle, facilities and development, safety and hygiene, social cohesion and active citizens, green areas, and air and noise pollution. First of all, this method will identify opportunities for a healthier living environment using existing information and the perceptions of residents and other local stakeholders, in order to strengthen social participation and quality of life in these rural areas.
Second, this approach will connect identified opportunities with available and effective evidence-based interventions in order to develop an action plan from the perspective of residents and local authorities, which will help them design their municipalities to be healthier and more resilient. To the best of our knowledge, this method is being used for the first time in rural areas, in close collaboration with the residents and local authorities of the three provinces, to create a sustainable process and stimulate social participation. Our paper will present the outcomes of the first phase of this project, carried out in collaboration with the municipality of Westerkwartier, located in the northwest of the province of Groningen. It will describe the current situation and identify local assets, opportunities, and policies relating to a healthier environment, as well as needs and challenges to achieve goals. The preliminary results show that rural demographic changes in the northern Netherlands have negative impacts on service provision and social cohesion, and there is a need to understand this complicated situation and improve the quality of life in those areas.
Keywords: population decline, rural areas, healthy environment, Netherlands
Procedia PDF Downloads 96
117 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error; it can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory on a given day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g., printing specimen labels or activating a sufficient number of technicians. This would expedite the clinical workload and processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic-light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to see clearly where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error.
Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly; in the case of potentially late samples, it could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes: ‘bots’ would be used to control the flow of specimens through each step of the process. Like existing software-agent technology, these bots would be configurable in order to simulate different situations that may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at each step of the process, for example, validating test results.
Keywords: laboratory process, optimization, pathology, computer simulation, workflow
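The traffic-light colour coding described above reduces to a threshold rule on the fraction of on-time specimens at each stage. A minimal sketch follows, in Python for illustration (the simulator logic itself is planned in JavaScript); the threshold values are assumptions, not those of the actual design.

```python
def stage_status(on_time, total, slow_threshold=0.9, critical_threshold=0.75):
    """Traffic-light status for one workflow stage.

    Thresholds are illustrative: at least 90% of specimens on time counts as
    normal flow (green), below 75% as critical flow (red), and anything in
    between as slow flow (orange).
    """
    if total == 0:
        return "green"                 # an idle stage is not a bottleneck
    fraction = on_time / total
    if fraction >= slow_threshold:
        return "green"
    if fraction >= critical_threshold:
        return "orange"
    return "red"
```

Recomputing this status per stage on every update tick is what keeps the animated flow diagram's colours in sync with the live data.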
Procedia PDF Downloads 286
116 Aerosol Characterization in a Coastal Urban Area in Rimini, Italy
Authors: Dimitri Bacco, Arianna Trentini, Fabiana Scotto, Flavio Rovere, Daniele Foscoli, Cinzia Para, Paolo Veronesi, Silvia Sandrini, Claudia Zigola, Michela Comandini, Marilena Montalti, Marco Zamagni, Vanes Poluzzi
Abstract:
The Po Valley, in the north of Italy, is one of the most polluted areas in Europe. The air quality of the area is linked not only to anthropic activities but also to its geographical characteristics and stagnant weather conditions, with frequent inversions, especially in the cold season. Even the coastal areas present high values of particulate matter (PM10 and PM2.5), because the area enclosed between the Adriatic Sea and the Apennines does not favor the dispersion of air pollutants. The aim of the present work was to identify the main sources of particulate matter in Rimini, a tourist city in northern Italy. Two sampling campaigns were carried out in 2018, one in winter (60 days) and one in summer (30 days), at 4 sites: an urban background, a city hotspot, a suburban background, and a rural background. The samples were characterized by the concentrations of the ionic components of the particulate matter and of the main anhydrosugars, in particular levoglucosan, a marker of biomass burning, because one of the most important anthropogenic sources in the area, both in winter and, surprisingly, in summer, is biomass burning. Furthermore, three sampling points were chosen in order to maximize the contribution of a specific biomass source: a point in a residential area (domestic cooking and domestic heating), a point in the agricultural area (weed fires), and a point in the tourist area (restaurant cooking). At these sites, the analyses were enriched with the quantification of the carbonaceous component (organic and elemental carbon) and with measurements of the particle number concentration and aerosol size distribution (6 - 600 nm). The results showed a very significant impact of biomass combustion due to domestic heating in the winter period, even though many intense peaks were found attributable to episodic wood fires.
In the summer season, an appreciable signal linked to biomass combustion was also measured, although much less intense than in winter, attributable to domestic cooking activities. A further interesting result was the verification of the total absence of a sea salt contribution in the finer particulate fraction (PM2.5), while in PM10 the contribution becomes appreciable only in particular wind conditions (strong wind from the north or north-east). Finally, it is interesting to note that in a small town like Rimini, in summer, the traffic source seems to be even more relevant than that measured in a much larger city (Bologna), due to tourism.
Keywords: aerosol, biomass burning, seacoast, urban area
Procedia PDF Downloads 128
115 Exploring the Success of Live Streaming Commerce in China: A Literature Analysis
Authors: Ming Gao, Matthew Tingchi Liu, Hoi Ngan Loi
Abstract:
Live streaming refers to video content generated by broadcasters and shared with viewers in real time by uploading it to short-video platforms. In recent years, individual KOL broadcasters have successfully used live streams to sell large quantities of goods to consumers. For example, Wei Ya, the number 1 broadcaster on Taobao Live, sold products worth RMB 2.7 billion (USD 0.38 billion) in 2018. Regarding the success of live streaming commerce (LSC) in China, this study explores the elements of the booming LSC industry and attempts to explain the reasons behind its prosperity. A systematic review of industry reports and academic papers was conducted to summarize the latest findings in this field. The results of this investigation showed that a live streaming eco-system has been established by the LSC players, namely the platform, the broadcaster, the product supplier, and the viewer. In this eco-system, all players have complementary advantages and needs, and their close cooperation leads to a win-win situation. For instance, platforms and broadcasters have abundant internet traffic, which needs to be monetized, while product suppliers have mature supply chains and the need to promote their products. In addition, viewers are attached to the LSC platforms to get product information, bargains, and entertainment. This study highlights the importance of the mass-personal hybrid communication nature of live streaming, because its interpersonal communication feature increases consumers’ positive experiences, while its mass media broadcasting feature facilitates product promotion. Another innovative point of this study lies in its inclusion of a special characteristic of Chinese Internet culture: entertainment. The entertaining genres of the live streams created by broadcasters serve as down-to-earth approaches to reach their audiences easily. Further, the nature of video, i.e., a dynamic and salient stimulus, is emphasized in this study.
Since video is more engaging, it can attract viewers in a quick and easy way. Meanwhile, abundant, interesting, high-quality, and free short videos have added “stickiness” to platforms by retaining users and prolonging their staying time on the platforms. In addition, broadcasters’ important characteristics, such as physical attractiveness, humor, sex appeal, kindness, communication skills, and interactivity, are also identified as important factors that influence consumers’ engagement and purchase intention. In conclusion, all players have their own proper places in this live streaming eco-system, in which they work seamlessly to give full play to their respective advantages, with each player taking what it needs and offering what it has. This has contributed to the success of live streaming commerce in China.
Keywords: broadcasters, communication, entertainment, live streaming commerce, viewers
Procedia PDF Downloads 122
114 Application of Industrial Ecology to the INSPIRA Zone: Territory Planification and New Activities
Authors: Mary Hanhoun, Jilla Bamarni, Anne-Sophie Bougard
Abstract:
INSPIR’ECO is an 18-month research and innovation project that aims to specify and develop a tool offering new services to industrial actors and territorial planners/managers based on industrial ecology principles. The project is carried out on the territory of Salaise-Sablons, and the services are designed to be deployed on other territories. The Salaise-Sablons area is located at the boundary of 5 departments on a major European economic axis with multimodal traffic (river, rail and road). The perimeter of 330 ha includes 90 hectares occupied by 20 companies, with a total of 900 jobs, and represents a significant potential basin of development. The project involves five multi-disciplinary partners (Syndicat Mixte INSPIRA, ENGIE, IDEEL, IDEAs Laboratory and TREDI). The INSPIR’ECO project is based on the premise that local stakeholders need services to pool and share their activities, equipment, purchases and materials. These services aim to: 1. initiate and promote exchanges between existing companies, and 2. identify synergies between pre-existing industries and future companies that could be implemented in INSPIRA. These eco-industrial synergies can be related to: the recovery/exchange of industrial flows (industrial wastewater, waste, by-products, etc.); the pooling of business services (collective waste management, stormwater collection and reuse, transport, etc.); the sharing of equipment (boiler, steam production, wastewater treatment unit, etc.) or resources (splitting job costs, etc.); and the creation of new activities (interface activities necessary for by-product recovery, development of products or services from a newly identified resource, etc.). These services are supported by an IT tool intended to allow interested local stakeholders to make decisions.
Thus, this IT tool: - include an economic and environmental assessment of each implantation or pooling/sharing scenarios for existing or further industries; - is meant for industrial and territorial manager/planners - is designed to be used for each new industrial project. - The specification of the IT tool is made through an agile process all along INSPIR’ECO project fed with: - Users expectations thanks to workshop sessions where mock-up interfaces are displayed; - Data availability based on local and industrial data inventory. These input allow to specify the tool not only with technical and methodological constraints (notably the ones from economic and environmental assessments) but also with data availability and users expectations. A feedback on innovative resource management initiatives in port areas has been realized in the beginning of the project to feed the designing services step.Keywords: development opportunities, INSPIR’ECO, INSPIRA, industrial ecology, planification, synergy identification
Procedia PDF Downloads 163
113 Strategies for Public Space Utilization
Authors: Ben Levenger
Abstract:
Social life revolves around a central meeting place or gathering space. It is where the community integrates, where people learn social skills, and where they ultimately become part of the community. Following this premise, public spaces are among the most important assets a downtown offers, providing locations for people to be seen and heard and, most importantly, to integrate seamlessly into the downtown as part of the community. To facilitate this, these spaces must be envisioned and designed to meet the changing needs of a downtown, offering a space and a purpose for everyone. This paper takes a deep look at analyzing, designing, and implementing public space design for small plazas and gathering spaces. These spaces often require a detailed level of study, followed by broad-stroke design implementation that allows for adaptability. This paper will show how to assess needs, define the types of spaces required, outline a program for those spaces, detail design elements that meet the needs, assess the new space, and plan for change. The study provides participants with the necessary framework for conducting a grass-roots-level assessment of public space and programming, covering both short-term and long-term improvements. Participants will also receive assessment tools, worksheets, and visual-representation diagrams. Urbanism for its own sake is an exercise in aesthetic beauty; an economic improvement or benefit must be attained to solidify the purpose of these efforts and justify the infrastructure and construction costs. To ground this work in quantitative terms, we examine case studies that document economic impacts.
These case studies measure the financial impact on an area using the following metrics: rental rates (per square meter), tax revenue generation (sales and property), foot traffic generation, increases in property valuations, currency expenditure by tenure, clustered development improvements, and the cost/valuation benefits of increased housing density. The economic impact results are reported by community size in three tiers: under 10,000 in population, 10,001 to 75,000 in population, and 75,000+ in population. Through this breakdown, participants can gauge the impact in communities similar to those they work in or are responsible for. Finally, specific urbanism enhancements, such as plazas, on-street dining, and pedestrian malls, are analyzed in detail. Metrics documenting the economic impact of each enhancement are presented, aiding the prioritization of improvements for each community. All materials, documents, and information will be available to participants via Google Drive; they are welcome to download the data and use it for their own purposes. Keywords: downtown, economic development, planning, strategic
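The three population tiers used to report the economic results can be captured in a small helper. The thresholds come from the abstract; the function name, labels, and the handling of the 10,000 boundary are hypothetical.

```python
# Hypothetical helper mirroring the abstract's three reporting tiers.
# Thresholds are from the text; exact boundary treatment is assumed.
def community_tier(population: int) -> str:
    if population <= 10_000:
        return "Tier 1 (under 10,000)"
    if population <= 75_000:
        return "Tier 2 (10,001 to 75,000)"
    return "Tier 3 (75,000+)"

print(community_tier(52_000))  # a mid-sized community falls in Tier 2
```

A practitioner could use such a tier label to pick the comparable set of case-study metrics for their own community.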
Procedia PDF Downloads 81
112 Budgetary Performance Model for Managing Pavement Maintenance
Authors: Vivek Hokam, Vishrut Landge
Abstract:
An ideal maintenance program for an industrial road network is one that maintains all sections at a sufficiently high level of functional and structural condition. However, due to constraints such as budget, manpower, and equipment, it is not possible to carry out maintenance on every needy industrial road section within a given planning period. A rational and systematic priority scheme therefore needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking of sections for maintenance based on several factors. In priority setting, difficult decisions must be made: is it more important to repair a section with poor functional condition (e.g., an uncomfortable ride) or one with poor structural condition, i.e., a section in danger of becoming structurally unsound? Any rational priority-setting approach must therefore consider the relative importance of the functional and structural condition of each section. Existing maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need for a model suited to the limited budget provisions available for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision from among a large number of possible decisions. The optimal decision is one that meets a specified management objective, subject to various constraints and restrictions. Here, the objective is primarily minimization of the maintenance cost of roads in an industrial area. To determine the objective function of the distress model, realistic data must be fitted into the formulation.
Each type of repair is quantified over a number of stretches, taking 1000 m as one stretch. The stretch under study is 3750 m long. The quantities are put into an objective function that maximizes the number of repairs in a stretch. The distresses observed in this stretch are potholes, surface cracks, rutting, and ravelling. The distress data are measured manually by recording each distress level over a 1000 m stretch. The maintenance and rehabilitation measures currently followed are based on subjective judgment; hence, a scientific approach is needed in order to use the limited resources effectively. It is also necessary to determine pavement performance and deterioration prediction relationships, with more accurate estimates of the economic benefits to road networks with respect to vehicle operating cost. The road network infrastructure should deliver the best results expected from the available funds. In this paper, the objective function of the distress model is determined by linear programming, and a deterioration model considering overloading is discussed. Keywords: budget, maintenance, deterioration, priority
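The budget-constrained formulation above can be sketched as a small linear program. The four repair types and the 1000 m stretch unit follow the abstract, but the unit costs, the budget, and the stretch cap below are invented for illustration and are not figures from the study.

```python
from scipy.optimize import linprog

# Decision variables x_i = number of 1000 m stretches receiving repair
# type i (pothole patching, crack sealing, rut filling, surface dressing).
# Unit costs and budget are assumed values, not from the study.
unit_cost = [120.0, 80.0, 150.0, 200.0]  # cost per stretch (thousand units)
budget = 400.0

# linprog minimizes, so negate the coefficients to maximize total repairs.
c = [-1.0] * 4
A_ub = [unit_cost]          # total spend must stay within the budget
b_ub = [budget]
bounds = [(0, 4)] * 4       # at most 4 stretches (3750 m ≈ 4 x 1000 m)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)                # stretches treated per repair type
print(-res.fun)             # total repairs achievable within budget
```

With these assumed costs the optimum spends the budget on the cheapest repair first, which is exactly the trade-off a budgetary performance model is meant to expose.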
Procedia PDF Downloads 207
111 Research on Land Use Pattern and Employment-Housing Space of Coastal Industrial Town Based on the Investigation of Liaoning Province, China
Authors: Fei Chen, Wei Lu, Jun Cai
Abstract:
During the Twelfth Five-Year Plan period, China promulgated industrial policies promoting the relocation of energy-intensive industries to coastal areas in order to utilize marine shipping resources. Consequently, some major state-owned steel and gas enterprises have relocated, resulting in large-scale coastal area development. However, some land may have been over-exploited through seamless coastline projects. To balance employment and housing, new industrial coastal towns were constructed to support this industry-led development. In this paper, we adopt a case-study approach to closely examine the development of several new industrial coastal towns in Liaoning Province, situated in the Bohai Bay area, which is currently undergoing rapid economic growth. Our investigations reveal a common pattern of long-distance commuting and a massive number of vacant residences. More specifically, large plant relocations caused daily commutes of hundreds of kilometers, and enterprises had to provide housing subsidies and education incentives to motivate employees to relocate to coastal areas. Nonetheless, many employees still refused to relocate owing to job stability, the diverse needs of family members, and access to convenient services. These employees averaged 4 hours of commuting daily, and some who lived further away had to reside in temporary industrial housing units, subject to long-term family separation. As a result, only a small portion of employees purchased new coastal residences, mostly for investment and retirement purposes, leading to massive vacancy and a ghost-town phenomenon. In contrast to this low demand, coastal areas tend to develop a large stock of residences prior to industrial relocation, which may be directly related to local government finances: some local governments have sold residential land to developers to generate revenue to support the subsequent industrial development.
Given buyers' strong preference for ocean views, residential developers tend to select coastline land for new residential towns, which further reduces the access of major industrial enterprises to marine resources. This violates the original intent of developing industrial coastal towns and drastically limits the availability of marine resources. Lastly, we analyze the coexistence of over-exploited residential areas and massive vacancies with reference to the demand and supply of land, as well as the demand for residential housing units under the choice criteria of enterprise employees. Keywords: coastal industry town, commuter traffic, employment-housing space, outer suburb industrial area
Procedia PDF Downloads 221
110 The Application of Patterned Injuries in Reconstruction of Motorcycle Accidents
Authors: Chun-Liang Wu, Kai-Ping Shaw, Cheng-Ping Yu, Wu-Chien Chien, Hsiao-Ting Chen, Shao-Huang Wu
Abstract:
Objective: This study analyzed three criminal judicial cases. We applied the patterned injuries of the riders to establish the facts of each accident, reconstruct the scenes, and pursue the truth. Methods: Case analysis, a method that collects evidence and reasons toward results in judicial procedures; the importance of patterned injuries as evidence is then compared and evaluated. The patterned-injuries analysis method compares the collision between an object and the injuries on the human body to determine whether the characteristics can reproduce a unique pattern of injury. Results: Case 1: Two motorcycles, A and B, collided head-on; rider A died, and rider B was accused. During the prosecutor’s investigation, the defendant learned that rider A had an 80 mm open wound on his neck. During the court trial, the defendant requested copies of the case file and found that rider A had a large contusion on his chest wall and that the cause of death was traumatic hemothorax and abdominal wall contusion. The defendant compared all the evidence at the scene and determined that the injury was clearly not caused by collision with the body or motorcycle of rider B; rather, rider A lost control and injured himself when he crossed the double yellow line. In this case, the High Court acquitted the defendant in April 2022. Case 2: Motorcycles C and D crashed head-on, and rider C died of massive abdominal bleeding. The prosecutor determined that rider C was driving under the influence (DUI) but that rider D was negligent, and sued rider D. The defendant requested copies of the case file and found the unusual circumstance that the front wheel of motorcycle C was turned to the left. The defendant’s injuries were a left facial bone fracture, a left femur fracture, and other injuries on the left side.
These injuries were consistent with human-vehicle separation and human-vehicle collision, which proved that rider C suddenly turned left as the two motorcycles approached, knocking down motorcycle D, and the defendant flew forward. Case 3: Motorcycles E and F collided rear-end; the front rider, E, was sentenced to 3 months, and the rear rider, F, sued rider E for more than 7 million NTD. The defendant found in the case file copies that rider F's injuries included a left tibial plateau fracture, and then proved that rider F made the collision with his left knee, causing motorcycle E to fall out of control. This evidence was accepted by the court, and the case is still on trial. Conclusion: The application of patterned injuries in the reconstruction of motorcycle accidents can uncover the truth and provide a basis for judicial justice. These cases and methods can serve as a reference for policies aimed at preventing traffic accident casualties. Keywords: judicial evidence, patterned injuries analysis, accident reconstruction, fatal motorcycle injuries
Procedia PDF Downloads 85
109 Next-Generation Disability Management: Diverse and Inclusive Strategies for All
Authors: Nidhi Malshe
Abstract:
Background: Approximately 1.3 billion individuals worldwide currently live with significant disabilities, accounting for 16% of the global population, or about 1 in 6 people. As the global population continues to grow, so does the number of people experiencing disabilities. Traffic accidents alone contribute to millions of injuries and disabilities each year, particularly among young people. Additionally, as life expectancy rises, more individuals are likely to experience disabilities in their later years. In 2022, 27.0% of Canadians aged 15 and over, or 8 million people, had at least one disability, an increase of 4.7 percentage points from 2017. A person with a disability earns 21.4% less on average than a person without a disability. By using innovative and inclusive methods for accommodations, disability management, and employment, we can progress towards inclusive workplaces and potential income parity for this equity-seeking population. Objective: This study embraces innovative and inclusive approaches to disability management, thereby unlocking the advantages of a) fostering equal opportunities for all individuals, b) facilitating streamlined accommodations, making it easier for companies to accommodate people with disabilities, and c) harnessing diverse perspectives to drive innovation and enhance overall productivity. Methodology: Literature review and assessments of specific needs and requirements in the workplace. a) Encourage out-of-the-box thinking about potential workplace accommodations based on individuals' specific needs, e.g., proposing prolonged post-disability integration. b) Perform a cost-benefit analysis of early return-to-work interventions versus duration on disability. c) Expand the scope of vocational assessment/retraining, e.g., retraining a person with permanent physical impairment to become a video game coder.
d) Leverage technology when planning the return to work, e.g., speech-to-text software for persons with voice impairments. Hypothesized results: A gradual, prolonged return-to-work progression increases the potential for sustainable and productive employment. Co-developing a person-centric accommodation plan based on reported functional abilities, and applying pioneering methods for extending accommodations, can prevent secondary disabilities. Facilitate a sense of belonging by providing employees with benefits and initiatives that honor their unique contributions. Engage individuals with disabilities as active members of the planning committee to ensure the development of innovative and inclusive accommodations that address the needs of all. Conclusion: The global pandemic underscored the need for creativity in our daily routines. It is imperative to integrate the lessons learned from the pandemic and embed them within employment and return-to-work processes. These learnings can also be used to develop creative, distinct methods to ensure equal opportunities for everyone. Keywords: disability management, diversity, inclusion, innovation
Procedia PDF Downloads 16
108 An Assessment of Suitable Alternative Public Transport System in Mid-Sized City of India
Authors: Sanjeev Sinha, Samir Saurav
Abstract:
The rapid growth of urban areas in India has led to transportation challenges like traffic congestion and an increase in accidents. Despite efforts by state governments and local administrations to improve urban transport, the surge in private vehicles has worsened the situation. Patna, located in Bihar State, is an example of the trend of increasing reliance on private motor vehicles, resulting in vehicular congestion and emissions. The existing transportation infrastructure is inadequate to meet future travel demands, and there has been a notable increase in the share of private vehicles in the city. Additionally, there has been a surge in economic activities in the region, which has increased the demand for improved travel convenience and connectivity. To address these challenges, a study was conducted to assess the most suitable transit mode for the proposed transit corridor outlined in the Comprehensive Mobility Plan (CMP) for Patna. The study covered four stages: developing screening criteria, evaluating parameters for various alternatives, qualitative and quantitative evaluations of alternatives, and implementation options for the most viable alternative. The study suggests that a mass transit system such as a metro rail is necessary to enhance Patna's urban public transport system. The New Metro Policy 2017 outlines specific prerequisites for submitting a Metro Rail Project Proposal to the Ministry of Housing and Urban Affairs (MoHUA), including the preparation of a CMP, the formation of an Urban Metropolitan Transport Authority (UMTA), the creation of an Alternative Analysis Report, the development of a Detailed Project Report, a Multi-Modal Integration Plan, and a Transit-Oriented Development (TOD) Plan. In 2018, the Comprehensive Mobility Plan for Patna was prepared, setting the stage for the subsequent steps in the metro rail project proposal. 
The results indicated that, from the screening and analysis of qualitative parameters for the different alternative modes in Patna, the Metro Rail and Monorail score 82.25 and 70.50, respectively, on a scale of 100. In the subsequent quantitative evaluation of alternatives, the Metro Rail System significantly outperformed the Monorail system: the Metro Rail System has a positive economic net present value (ENPV) at a 14% discount rate, while the Monorail's is negative. In conclusion, the study recommends choosing metro rail over monorail for the proposed transit corridor in Patna. Moreover, the lack of broad-based technical expertise for monorail could result in implementation delays and increased costs. Keywords: comprehensive mobility plan, alternative analysis, mobility corridors, mass transit system
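The ENPV test used to compare the alternatives amounts to discounting each year's net economic benefit at the stated 14% rate and summing. The cash-flow figures below are invented for illustration; they are not values from the Patna appraisal.

```python
# Sketch of an economic net present value (ENPV) calculation at the
# 14% rate cited above. All cash-flow numbers are illustrative only.
def enpv(net_benefits, rate=0.14):
    """Discounted sum of yearly net benefits; year 0 is undiscounted."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(net_benefits))

# Year 0 capital cost (negative), then 20 years of growing benefits.
metro_flows = [-1000.0] + [180.0 + 15.0 * t for t in range(1, 21)]
print(enpv(metro_flows))  # positive -> economically viable at 14%
```

A positive ENPV at the chosen discount rate is the acceptance criterion; an alternative whose ENPV is negative at 14%, as reported for the monorail, fails it.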
Procedia PDF Downloads 120
107 High Efficiency Double-Band Printed Rectenna Model for Energy Harvesting
Authors: Rakelane A. Mendes, Sandro T. M. Goncalves, Raphaella L. R. Silva
Abstract:
The concepts of energy harvesting and wireless energy transfer have been widely discussed in recent times. There are several ways to create autonomous systems that collect ambient energy, such as solar, vibratory, thermal, electromagnetic, and radiofrequency (RF) energy, among others. In the case of RF, it is possible to collect up to 100 μW/cm². To collect and/or transfer energy in RF systems, a device called a rectenna is used, defined as the junction of an antenna and a rectifier circuit. The rectenna presented in this work is resonant at 1.8 GHz and 2.45 GHz. The 1.8 GHz band is part of the GSM/LTE band. GSM (Global System for Mobile Communications) is a mobile telephony frequency band, also called second-generation (2G) mobile networks; it standardized mobile telephony worldwide and was originally developed for voice traffic. LTE (Long Term Evolution), or fourth generation (4G), emerged to meet the demand for wireless access to services such as Internet access, online games, VoIP, and video conferencing. The 2.45 GHz frequency is part of the ISM (industrial, scientific, and medical) band, which is internationally reserved for industrial, scientific, and medical use with no need for licensing; its only restrictions relate to maximum transmitted power and bandwidth, which must be kept within certain limits (in Brazil the band is 2.4 - 2.4835 GHz). The rectenna presented in this work was designed to exhibit efficiency above 50% for an input power of -15 dBm. In wireless energy-capture systems the signal power is very low and varies greatly, which is why this ultra-low input power was chosen. The rectenna was built on the low-cost FR4 (flame retardant) substrate; the antenna selected is a microstrip antenna consisting of a meandered dipole, optimized using the CST Studio software.
This antenna has high efficiency, high gain, and high directivity. Gain measures how effectively an antenna captures (or radiates) signals relative to a reference antenna; directivity measures how well an antenna concentrates energy in a given direction. The rectifier circuit uses a series topology and was optimized using Keysight's ADS software. The rectifier is the most complex part of the rectenna, since it includes the diode, a non-linear component. The chosen diode is the Schottky diode SMS7630, which presents a low barrier voltage (between 135 and 240 mV) and a wider band compared to other diode types, attributes that make it well suited to this type of application. The rectifier circuit also uses an inductor and a capacitor as parts of its input and output filters. The inductor reduces the effect of dispersion on the rectifier's efficiency, while the capacitor filters out the AC component of the rectified signal, smoothing the ripple in the DC output. Keywords: dipole antenna, double-band, high efficiency, rectenna
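The stated operating point can be made concrete with a quick unit conversion: -15 dBm corresponds to about 32 μW of available RF power. The helper below and its 50% figure simply restate the abstract's efficiency target; the output value is illustrative, not a measured result.

```python
# Convert the rectenna's stated input level from dBm to milliwatts and
# apply the >50% efficiency target from the abstract (illustrative only).
def dbm_to_mw(p_dbm: float) -> float:
    """Convert an RF power level in dBm to milliwatts: P_mW = 10^(dBm/10)."""
    return 10 ** (p_dbm / 10)

p_in = dbm_to_mw(-15)   # ~0.0316 mW (31.6 uW) reaching the rectenna
p_out = 0.5 * p_in      # DC power if 50% conversion efficiency is met
print(p_in, p_out)
```

This scale, tens of microwatts, is why rectifier losses (diode barrier voltage, filter dispersion) dominate the design.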
Procedia PDF Downloads 124
106 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data that help organizations and companies make intelligent decisions. Integrating this enormous quantity of data from multiple sources and transferring it to the appropriate client is fundamental to IoT development, and handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they sleep and wake periodically or aperiodically, depending on the traffic load, to reduce energy consumption. Sometimes these devices become disconnected due to battery depletion, and if a node is not available in the network, the IoT network provides incomplete, missing, or inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile; due to this mobility, if the distance of a device from the sink node becomes greater than allowed, the connection is lost. After such disconnections, other devices join the network to replace the broken-down and departed ones. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability to the IoT network and hence produces data of poor quality. Because of this dynamic behavior, the actual cause of abnormal data is unknown, and if data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate data quality before using it in IoT applications. In the past, many researchers have tried to estimate data quality, providing several machine learning (ML), stochastic, and statistical methods for analyzing stored data in the data-processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and how it impacts data quality.
This research presents a comprehensive review determining the impact of the dynamic nature of IoT devices on data quality and proposes a data quality model that can deal with this challenge and produce good-quality data. The model targets sensors monitoring water quality and is built using DBSCAN clustering and weather sensors. An extensive study was carried out on the relationship between the data of the weather sensors and the sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis is presented on the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness, identifies patterns of missing values, and checks the accuracy of the data with the help of the clusters' positions. Finally, statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the coefficient of variation (CoV). Keywords: clustering, data quality, DBSCAN, Internet of things (IoT)
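The outlier-removal and consistency steps can be sketched with scikit-learn's DBSCAN on synthetic paired readings. The feature choice (air temperature and pH), the `eps`/`min_samples` values, and the injected faults below are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic paired readings: (weather-sensor air temp, water-sensor pH).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[20.0, 7.0], scale=0.3, size=(100, 2))
faults = np.array([[35.0, 2.0], [5.0, 12.0]])   # injected bad readings
readings = np.vstack([normal, faults])

# DBSCAN labels dense points by cluster and marks outliers as -1.
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(readings)
clean = readings[labels != -1]                   # outlier removal step

# Consistency check: coefficient of variation (std / mean) per feature.
cov = clean.std(axis=0) / clean.mean(axis=0)
print(labels[-2:], cov)
```

The same pattern extends to the accuracy check described above: a reading's distance to its cluster's centroid can flag values that are dense enough to cluster yet inconsistent with the correlated weather stream.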
Procedia PDF Downloads 139
105 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used in several applications. Airborne LiDAR is becoming an important technology for acquiring highly accurate, dense point clouds. Classification of an airborne laser scanning (ALS) point cloud is a very important task that still remains a real challenge for many scientists. The support vector machine (SVM) is one of the most widely used kernel-based statistical learning algorithms. SVM is a non-parametric method, recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs robust non-linear classification of samples. Data are rarely linearly separable; SVMs implicitly map them into a higher-dimensional space where they become linearly separable, while the kernel trick allows all computations to be performed in the original space. This is one of the main reasons SVMs are well suited to high-dimensional classification problems. Only a subset of the training samples, the support vectors, determines the decision boundary. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain the method's recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. First, connected component analysis is applied to cluster the point cloud. Second, the resulting clusters are fed into the SVM classifier. The radial basis function (RBF) kernel is used because it has only a few parameters (C and γ) to choose, which decreases computation time. To optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C, γ) that yield the best overall accuracy using grid search and 5-fold cross-validation.
The LiDAR point cloud used is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The results demonstrated that parameter selection can narrow the search to a restricted interval of (C, γ) that can be explored further, but it does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision trees. The comparison showed the superiority of the SVM classifier with parameter selection for LiDAR data over the other classifiers. Keywords: classification, airborne LiDAR, parameters selection, support vector machine
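The grid search with 5-fold cross-validation over (C, γ) described above can be sketched with scikit-learn. The synthetic 4-class data stands in for the per-cluster LiDAR features, and the grid values are assumptions, not the intervals explored in the poster.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for per-cluster ALS features: 4 classes, as in the
# study (ground + three roof-superstructure classes).
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_classes=4, random_state=42)

# Grid over the two RBF hyper-parameters; values are illustrative.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}

# 5-fold cross-validated grid search for the best overall accuracy.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # the (C, gamma) pair with best CV accuracy
print(search.best_score_)   # mean 5-fold overall accuracy at that pair
```

As the abstract notes, the winning cell of such a grid narrows the (C, γ) interval worth refining but does not guarantee the global optimum, so a second, finer grid around `best_params_` is a common follow-up.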
Procedia PDF Downloads 147