Search results for: noise mapping
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2254

274 Process Improvement and Redesign of the Immuno Histology (IHC) Lab at MSKCC: A Lean and Ergonomic Study

Authors: Samantha Meyerholz

Abstract:

MSKCC offers patients cutting-edge cancer care with the highest quality standards. However, many patients and industry members do not realize that the operations of the Immunohistochemistry (IHC) lab are the backbone for carrying out this mission. The IHC lab manufactures blocks and slides containing critical tissue samples that are read by a pathologist to diagnose and dictate a patient’s treatment course. The lab processes 200 requests daily, generating approximately 2,000 slides and 1,100 blocks each day. Lab material is transported through labeling, cutting, staining and sorting stations, while being managed by multiple techs throughout the space. The quality of the stain, as well as the wait times associated with processing requests, is directly associated with patients receiving rapid treatment and having a wider range of care options. This project aims to improve slide request turnaround time for rush and non-rush cases, while increasing the quality of each request filled (no missing slides or poorly stained items). Rush cases are to be filled in less than 24 hours, while standard cases are allotted a 48-hour window. Reducing turnaround times enables patients to communicate sooner with their clinical team regarding their diagnosis, ultimately leading to faster treatment and potentially better outcomes. Additional project goals included streamlining tech and material workflow while reducing waste and increasing efficiency. The project followed a DMAIC structure with emphasis on lean and ergonomic principles that could be integrated into an evolving lab culture. Load times and batching processes were analyzed using process mapping, FMEA, waste analysis, engineering observation, 5S and spaghetti diagramming. Reducing lab technician movement, as well as improving body position at each workstation, was a top concern for pathology leadership. 
With new equipment being brought into the lab to carry out workflow improvements, screen and tool placement was discussed with the techs in focus groups, to reduce variation and increase comfort throughout the workspace. 5S analysis was completed in two phases in the IHC lab, helping to drive solutions that reduced rework and tech motion. The IHC lab plans to continue utilizing these techniques to further reduce the time gap between tissue analysis and cancer care.
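The turnaround-time targets above (rush under 24 hours, standard under 48 hours) amount to a simple service-level check over the request log. The sketch below illustrates the calculation on a hypothetical log; the request data and function name are illustrative, not from the study.

```python
from datetime import datetime, timedelta

# Hypothetical request log: (received, completed, is_rush)
requests = [
    (datetime(2023, 5, 1, 8, 0), datetime(2023, 5, 1, 20, 0), True),   # 12 h rush: on time
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 2, 14, 0), True),   # 29 h rush: late
    (datetime(2023, 5, 1, 10, 0), datetime(2023, 5, 3, 9, 0), False),  # 47 h standard: on time
]

def sla_compliance(requests):
    """Fraction of requests meeting the study's targets: rush < 24 h, standard < 48 h."""
    limits = {True: timedelta(hours=24), False: timedelta(hours=48)}
    met = sum(1 for received, completed, rush in requests
              if completed - received < limits[rush])
    return met / len(requests)

rate = sla_compliance(requests)
print(f"turnaround compliance: {rate:.0%}")
```

Tracked before and after a DMAIC cycle, this single figure makes the improvement measurable.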

Keywords: engineering, ergonomics, healthcare, lean

Procedia PDF Downloads 222
273 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection

Authors: Ali Hamza

Abstract:

Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images, aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and an expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images encompassing diverse cases is employed. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are used to comprehensively assess the segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals. 
In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation showcases a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
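The Dice coefficient and Jaccard index cited above are standard overlap metrics on binary masks. A minimal, generic implementation (not the authors' code; the toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-8):
    """Jaccard (IoU) = |A∩B| / |A∪B| for binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy 4x4 masks standing in for a predicted and a ground-truth mass mask
pred = np.array([[0,1,1,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
gt   = np.array([[0,1,1,0],[0,1,0,0],[0,0,0,0],[0,0,0,0]], dtype=bool)

print(dice_coefficient(pred, gt), jaccard_index(pred, gt))
```

Note that Dice weights the intersection twice, so it is always at least as large as the Jaccard index for the same pair of masks.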

Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network

Procedia PDF Downloads 82
272 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions

Authors: M. Tarik Boyraz, M. Bilge Imer

Abstract:

Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in the manufacturing of industrial gas turbine blades. With a carefully designed microstructure and the presence of alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate the mechanical properties of IN 738 LC alloy solely from simulations, for projected heat treatment or service conditions. The microstructure of IN 738 LC (size, fraction and frequency of gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and grain size) needs to be optimized by heat treatment to improve the high-temperature mechanical properties. This process can be performed at different soaking temperatures, soaking times and cooling rates. In this work, microstructural evolution studies were performed experimentally under various heat treatment conditions, and these findings were used as input for further simulation studies: the operation time, soaking temperature and cooling rate of the experimental heat treatment procedures served as microstructural simulation inputs. The simulation results were compared with the size, fraction and frequency of γ′ and carbide phases, and the grain size, measured by SEM (EDS module and mapping), EPMA (WDS module) and optical microscopy before and after heat treatment. After iterative comparison of experimental findings and simulations, an offset was determined to fit the experimental and theoretical findings. Thereby, it was possible to estimate the final microstructure without any need to carry out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was then used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of the precipitation, solid solution and grain boundary strengthening contributions in the microstructure. 
Creep rate was calculated as a function of stress, temperature and microstructural factors such as dislocation density, precipitate size and inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the best heat treatment conditions that achieve the desired microstructural and mechanical properties was thus developed for IN 738 LC based entirely on simulations.
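The additive yield stress model described above (precipitation + solid solution + grain boundary strengthening) can be sketched as follows. All coefficient values here are hypothetical placeholders for illustration, not fitted IN 738 LC data; the grain boundary term uses the standard Hall-Petch form.

```python
import math

def yield_stress(sigma0, d_grain_um, k_hp, delta_ss, delta_ppt):
    """Additive strengthening estimate (illustrative):
    sigma_y = lattice friction + solid-solution + precipitation
              + Hall-Petch grain-boundary term k_hp / sqrt(d)."""
    d_m = d_grain_um * 1e-6            # grain size in metres
    hall_petch = k_hp / math.sqrt(d_m) # k_hp in MPa·m^0.5
    return sigma0 + delta_ss + delta_ppt + hall_petch

# Hypothetical inputs, NOT measured alloy data
sy = yield_stress(sigma0=100.0, d_grain_um=2000.0, k_hp=0.75,
                  delta_ss=250.0, delta_ppt=400.0)
print(f"estimated yield stress ≈ {sy:.0f} MPa")
```

In a workflow like the one described, the microstructure simulation would supply the precipitate fraction and grain size feeding each term.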

Keywords: heat treatment, IN738LC, simulations, super-alloys

Procedia PDF Downloads 247
271 Impact of Urban Densification on Travel Behaviour: Case of Surat and Udaipur, India

Authors: Darshini Mahadevia, Kanika Gounder, Saumya Lathia

Abstract:

Cities, an outcome of natural growth and migration, are ever-expanding due to urban sprawl. In the Global South, urban areas are experiencing a shift from public transport to private vehicles, coupled with intensified urban agglomeration, leading to frequent, longer commutes by automobile. This increase in travel distance and motorized vehicle kilometres leads to unsustainable cities. To achieve the nationally pledged GHG emission mitigation goal, the government is prioritizing a modal shift to low-carbon transport modes such as mass transit and paratransit. Mixed land use and urban densification are crucial for the economic viability of these projects. Informed by a desktop assessment of mobility plans and in-person primary surveys, the paper explores the challenges around urban densification and travel patterns in two Indian cities of contrasting nature: Surat, a metropolitan industrial city with a 5.9 million population and a very compact urban form, and Udaipur, a heritage city attracting a large international tourist footfall, with limited scope for further densification. Dense, mixed-use urban areas often improve access to basic services and economic opportunities by reducing distances and enabling people who do not own personal vehicles to reach them on foot or by cycle. Yet residents travelling by different modes end up with similar trip lengths, highlighting the non-uniform distribution of land uses and the lack of planned transport infrastructure in the city and in urban-peri-urban networks. Additionally, it is imperative to manage these densities to reduce negative externalities such as congestion, air and noise pollution, lack of public spaces, and loss of livelihood. The study presents a comparison of the relationship between transport systems and the built form in both cities. 
The paper concludes with recommendations for managing densities in urban areas, promoting low-carbon transport choices such as improved non-motorized and public transport infrastructure, and minimizing personal vehicle usage in the Global South.

Keywords: India, low-carbon transport, travel behaviour, trip length, urban densification

Procedia PDF Downloads 216
270 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback

Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu

Abstract:

With the rapid development of computer technology, the design of computers and keyboards has moved towards slimness. This change in mobile input devices directly influences users’ behavior. Although multi-touch applications allow text entry through a virtual keyboard, the performance, feedback, and comfort of that technology are inferior to those of a traditional keyboard; and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfying. Therefore, this study examined the design factors of slim pressure-sensitive keyboards. The factors were evaluated with objective measures (accuracy and speed) and subjective evaluations (operability, recognition, feedback, and difficulty) across key shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g). Moreover, MANOVA and Taguchi methods (using signal-to-noise ratios) were applied to find the optimal level of each design factor. Participants were divided into two groups by typing speed (threshold: 30 words per minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design, and a representative model of the research samples was established for input task testing. The findings showed that participants with low typing speed relied primarily on vision to recognize the keys, whereas those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of design factors likely to yield higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed against a traditional standard keyboard to investigate the influence of user experience on keyboard operation. 
The results indicated that even the optimal combination provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and for application to touch devices and input interfaces with which people interact.
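The Taguchi signal-to-noise ratio used above, in its larger-is-better form (appropriate when higher accuracy or speed is desired), can be computed as follows. The trial scores are hypothetical, purely to show the formula.

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-is-better S/N ratio: -10 * log10(mean(1 / y_i^2))."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)

# Hypothetical typing-accuracy scores (%) for one keyboard configuration
trials = [92.0, 95.0, 90.0]
sn = sn_larger_is_better(trials)
print(f"S/N = {sn:.2f} dB")
```

In a Taguchi analysis, the S/N ratio is computed per factor-level combination and the level with the highest ratio is selected for each factor.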

Keywords: input performance, mobile device, slim keyboard, tactile feedback

Procedia PDF Downloads 298
269 Development of a Multi-Variate Model for Matching Plant Nitrogen Requirements with Supply for Reducing Losses in Dairy Systems

Authors: Iris Vogeler, Rogerio Cichota, Armin Werner

Abstract:

Dairy farms are under pressure to increase productivity while reducing environmental impacts, and effective fertiliser management practices are critical to achieving this. Determining optimum nitrogen (N) fertilisation rates that maximise pasture growth and minimise N losses is challenging due to variability in plant requirements and in the likely near-future supply of N by the soil. Remote sensing can be used to map the N nutrition status of plants and to rapidly assess spatial variability within a field. However, an algorithm relating the N status of the plants to the expected yield response to additions of N is lacking. The aims of this simulation study were (i) development of a multi-variate model for determining the N fertilisation rate for a target percentage of the maximum achievable yield based on the pasture N concentration, (ii) use of an algorithm for guiding fertilisation rates, and (iii) evaluation of the model regarding pasture yield and N losses, including N leaching, denitrification and volatilisation. The simulations were carried out using the Agricultural Production Systems Simulator (APSIM) for an irrigated ryegrass pasture in the Canterbury region of New Zealand. A multi-variate model was developed and used to determine monthly required N fertilisation rates based on pasture N content prior to fertilisation and on targets of 50, 75, 90 and 100% of the potential monthly yield. These monthly optimised fertilisation rules were evaluated by running APSIM for a ten-year period to provide yield and N loss estimates from both non-urine and urine-affected areas. A comparison with typical fertilisation rates of 150 and 400 kg N/ha/year was also made. Assessment of pasture yield and leaching from fertiliser and urine patches indicated a large reduction in N losses when N fertilisation rates were controlled by the multi-variate model. 
However, the reduction in leaching losses was much smaller once the effects of urine patches were taken into account. The proposed approach, based on biophysical modelling to develop a multi-variate model for determining optimum N fertilisation rates from pasture N content, is very promising. Further analysis under different environmental conditions, and validation, are required before the approach can be used to help adjust fertiliser management practices to the temporal and spatial N demand indicated by the nitrogen status of the pasture.
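The core idea of targeting a percentage of maximum achievable yield can be illustrated by inverting a yield-response curve. The diminishing-returns form and the response coefficient below are hypothetical stand-ins, not APSIM outputs; in this toy model, 100% of maximum yield is an asymptote, so only targets strictly below it have a finite rate.

```python
import math

def n_rate_for_target(f_target, k=0.01):
    """Invert an illustrative diminishing-returns response
    Y(N) = Ymax * (1 - exp(-k * N)) for the N rate (kg N/ha) that
    achieves a target fraction f of the maximum yield.
    k (per kg N/ha) is a hypothetical response coefficient."""
    if not 0 <= f_target < 1:
        raise ValueError("target fraction must be in [0, 1)")
    return -math.log(1.0 - f_target) / k

for f in (0.50, 0.75, 0.90):
    print(f"{f:.0%} of max yield -> {n_rate_for_target(f):.0f} kg N/ha")
```

In the study's multi-variate model, the response curve itself would additionally depend on pasture N concentration and month, which this one-variable sketch omits.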

Keywords: APSIM modelling, optimum N fertilization rate, pasture N content, ryegrass pasture, three-dimensional surface response function

Procedia PDF Downloads 128
268 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving

Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco

Abstract:

Augmented reality promises to be present in future driving: its immersive technology can show directions and maps, identifying important places with graphic elements when the car driver requires the information. On the other hand, driving is considered a multitasking activity and, for some people, a complex one, in which situations commonly arise that require the driver's immediate attention and decisions that help avoid accidents. The main aim of the project is therefore the instrumentation of a platform with biometric sensors that allows evaluating vehicle-driving performance under the influence of augmented reality devices and detecting the drivers' level of attention, since it is important to know the effect these devices produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO and EMG MyoWare are combined in a driving test platform with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic and the number of pedestrians within the simulation can be controlled, providing driver interaction in real mode; data acquisition for storage is achieved through an MSP430 microcontroller. The sensors deliver a continuous analog signal that requires conditioning: a signal amplifier is incorporated because the acquired signals have a sensitive range of 1.25 mm/mV, together with filtering, which removes unwanted frequency bands so that the signal is interpretable and noise-free, before conversion from analog to digital for analysis of the drivers' physiological signals; these values are stored in a database. 
Based on this compilation, we extract signal features and implement K-NN (k-nearest neighbor) classification and decision trees, both supervised learning methods, enabling the study of the data to identify patterns and determine, by classification, the different effects of augmented reality on drivers. The expected results of this project include a test platform instrumented with biometric sensors for data acquisition during driving and a database with the variables required to determine the effect caused by augmented reality on people in simulated driving.
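The band-elimination filtering step described above can be sketched with a standard digital filter. This is a generic example, not the authors' pipeline: a synthetic low-frequency physiological component is buried in 50 Hz line noise and a DC offset, then recovered with a Butterworth band-pass; the sampling rate and band edges are assumed values.

```python
import numpy as np
from scipy import signal

fs = 500.0  # hypothetical sampling rate (Hz) after digitization
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic physiological component (1.2 Hz) plus 50 Hz line noise and DC offset
clean = np.sin(2 * np.pi * 1.2 * t)
noisy = clean + 0.8 * np.sin(2 * np.pi * 50.0 * t) + 0.5

# Band-pass 0.5-40 Hz: keeps the physiological band, rejects drift and line noise
b, a = signal.butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
filtered = signal.filtfilt(b, a, noisy)

def power_at(x, f0):
    """Spectral power at the bin nearest frequency f0."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - f0))]

print("50 Hz power before/after:", power_at(noisy, 50.0), power_at(filtered, 50.0))
```

Zero-phase filtering (`filtfilt`) is used so the conditioned signal is not time-shifted relative to the driving events it must be aligned with.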

Keywords: augmented reality, driving, physiological signals, test platform

Procedia PDF Downloads 140
267 Carbon Based Wearable Patch Devices for Real-Time Electrocardiography Monitoring

Authors: Hachul Jung, Ahee Kim, Sanghoon Lee, Dahye Kwon, Songwoo Yoon, Jinhee Moon

Abstract:

We fabricated a wearable patch device, including a novel patch-type flexible dry electrode based on carbon nanofibers (CNFs) and a silicone-based elastomer (MED 6215), for real-time ECG monitoring. There are many ways to make a flexible conductive polymer by mixing in metal or carbon-based nanoparticles. In this study, CNFs were selected as the conductive nanoparticles because carbon nanotubes (CNTs) are difficult to disperse uniformly in elastomer compared with CNFs, and silver nanowires are relatively expensive and easily oxidized in air. The wearable patch is composed of two parts: a dry electrode part for recording biosignals and a sticky patch part for mounting on the skin. The dry electrode part was made by vortex mixing and baking in a prepared mold; to optimize electrical performance and dispersion uniformity, we developed a unique mixing and baking process. The sticky patch part was made by patterning and detaching from a smooth-surface substrate after spin-coating a soft skin adhesive; in this process, the attachment and detachment strengths of the sticky patch were measured and optimized using a monitoring system. The assembled patch is flexible, stretchable, easily mountable on the skin and directly connectable with the system. To evaluate electrical characteristics and ECG (electrocardiography) recording performance, the wearable patch was tested with varying CNF concentrations and dry electrode thicknesses. The results show that CNF concentration and dry electrode thickness are important variables for obtaining high-quality ECG signals without incidental distractions. A cytotoxicity test was conducted to prove biocompatibility, and a long-term wearing test showed no skin reactions such as itching or erythema. To minimize noise from motion artifacts and line interference, we built a customized wireless, lightweight data acquisition system. ECG signals measured with this system are stable and were successfully monitored in real time. 
In summary, the fabricated wearable patch devices can be readily used for real-time ECG monitoring.
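A basic downstream step in real-time ECG monitoring, not detailed in the abstract, is deriving heart rate from detected R peaks. The sketch below runs on a synthetic surrogate signal (a spike train at 75 bpm plus noise); the sampling rate, peak height and refractory distance are assumed values, not the authors' parameters.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250.0  # hypothetical sampling rate (Hz)
t = np.arange(0, 10.0, 1.0 / fs)

# Synthetic ECG surrogate: sharp "R peaks" every 0.8 s (75 bpm) over baseline noise
rng = np.random.default_rng(0)
ecg = 0.05 * rng.standard_normal(t.size)
ecg[(np.arange(t.size) % int(0.8 * fs)) == 0] += 1.0

# Detect R peaks with a minimum height and a ~0.3 s refractory distance
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.3 * fs))
rr = np.diff(peaks) / fs          # R-R intervals in seconds
bpm = 60.0 / rr.mean()
print(f"estimated heart rate ≈ {bpm:.1f} bpm")
```

On real electrode data, a band-pass filter and an adaptive threshold would precede peak detection, since motion artifacts can exceed the R-wave amplitude.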

Keywords: carbon nanofibers, ECG monitoring, flexible dry electrode, wearable patch

Procedia PDF Downloads 184
266 Nude Cosmetic Water-Rich Compositions for Skin Care and Consumer Emotions

Authors: Emmanuelle Merat, Arnaud Aubert, Sophie Cambos, Francis Vial, Patrick Beau

Abstract:

Consumers are sensitive to many stimuli when applying a cream: brand, packaging and, indeed, the formulation composition. Many studies have demonstrated the influence of stimuli such as brand, packaging, formula color and odor (e.g. in make-up applications); those parameters influence the perceived quality of the product. The objective of this work is to further investigate the relationship between nude skincare base compositions with different textures and consumer experience; a tentative final step is to connect consumer feelings with key ingredients in the compositions. A new approach was developed to better understand touch-related subjective experience in consumers, based on a combination of methods: sensory analysis with ten experts, preference mapping with one hundred female consumers, and emotional assessment with thirty consumers (verbal and non-verbal, through prosody and gesture monitoring). Finally, a methodology based on a ‘sensorial trip’ (olfactory, haptic and musical stimuli) was tried on the most interesting textures with 10 consumers. The results showed greater or lesser impact depending on the composition and on key ingredients. Three types of formulation particularly attracted consumers: an aqueous gel, an oil-in-water emulsion, and a patented gel-in-oil formulation. For these three formulas, preferences were revealed through both sensory and emotion tests: one was recognized as the most innovative in the consumer sensory test, whereas the two other formulas were discriminated in the emotion evaluation. Positive emotions were highlighted especially in the prosody criteria. The non-verbal analysis, which examines physical parameters of the voice, showed high pitch and amplitude values linked to positive emotions. Verbatim analysis, the verbal content of responses (i.e., ideas, concepts, mental images), confirmed this first conclusion. 
For the formulas selected for the positive emotions they generated, the ‘sensorial trip’ provided complementary information to characterize each emotional profile. In the second step, dedicated to better understanding the power of ingredients, two types of ingredients demonstrated an obvious influence on consumer preference: rheology modifiers and emollients. In conclusion, nude cosmetic compositions with well-chosen textures and ingredients can positively stimulate consumer emotions, helping to capture their preference. For a complete achievement of the study, a global approach (Asian and American territories, etc.) should be developed.

Keywords: sensory, emotion, cosmetic formulations, ingredients' influence

Procedia PDF Downloads 178
265 Segmented Pupil Phasing with Deep Learning

Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan

Abstract:

Context: The concept of the segmented telescope is unavoidable when building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows a large telescope to fit within a reduced volume of space (JWST) or an even smaller one (standard CubeSat). CubeSats impose tight constraints on the available computational budget and the payload volume allowed. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations; moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. To reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, will be used as a wavefront sensor. In this work, we study a point source, i.e. 
the Point Spread Function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates into a small computational burden. These results motivate further study with higher aberrations and noise.
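The forward model underlying this approach, from segment piston phases to the focal-plane PSF, can be sketched with a Fourier-optics computation. This is a generic illustration under simplifying assumptions (monochromatic light, a circular pupil split into two piston segments), not the authors' simulation code.

```python
import numpy as np

N = 128
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
aperture = (x**2 + y**2) < (N//4)**2          # circular pupil mask

def psf(piston_rad):
    """Focal-plane PSF of a two-segment pupil with a piston step
    of `piston_rad` radians applied to the right half."""
    phase = np.where(x < 0, 0.0, piston_rad)
    pupil = aperture * np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    img = np.abs(field) ** 2
    return img / img.sum()

# Strehl-like metric: on-axis intensity relative to the unaberrated PSF
peak0 = psf(0.0)[N//2, N//2]
strehl = psf(np.pi / 4)[N//2, N//2] / peak0
print(f"relative on-axis intensity with pi/4 piston: {strehl:.3f}")
```

In a learning setup like the one described, PSFs generated this way for random piston draws would form the training pairs (image, phase vector) for the regression network.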

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope

Procedia PDF Downloads 103
264 Wildlife Habitat Corridor Mapping in Urban Environments: A GIS-Based Approach Using Preliminary Category Weightings

Authors: Stefan Peters, Phillip Roetman

Abstract:

The global loss of biodiversity is threatening the benefits nature provides to human populations; it has become a more pressing issue than climate change and requires immediate attention. While there have been successful global agreements for environmental protection, such as the Montreal Protocol, these are rare, and we cannot rely on them alone. Thus, it is crucial to take national and local actions to support biodiversity. Australia is one of the 17 countries in the world with a high level of biodiversity, and its cities are vital habitats for endangered species, with more of them found in urban areas than in non-urban ones. However, the protection of biodiversity in metropolitan Adelaide has been inadequate, with over 130 species disappearing since European colonization in 1836. In this research project, we conceptualized, developed and implemented a framework for wildlife habitat hotspot and habitat corridor modelling in an urban context using geographic data and GIS modelling and analysis. We used detailed topographic and other geographic data provided by a local council, including spatial and attributive properties of trees, parcels, water features, vegetated areas, roads, verges, traffic, and census data. Weighted factors considered in our raster-based Habitat Hotspot model include parcel size, parcel shape, population density, canopy cover, habitat quality and proximity to habitats and water features. Weighted factors considered in our raster-based Habitat Corridor model include habitat potential (resulting from the Habitat Hotspot model), verge size, road hierarchy, road widths, human density, and the presence of remnant indigenous vegetation species. We developed a GIS model, using Python scripting and ArcGIS Pro ModelBuilder, to establish an automated, reproducible and adjustable geoprocessing workflow, adaptable to any study area of interest. 
Our habitat hotspot and corridor modelling framework allows existing habitat hotspots and wildlife habitat corridors to be determined and mapped. The research was applied to the case study of Burnside, a local council area in Adelaide, Australia, which encompasses an area of 30 km2. We applied end-user expertise-based category weightings to refine our models and optimize the use of our habitat map outputs towards informing local strategic decision-making.
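The raster-based weighted-factor modelling described above reduces, per cell, to a weighted sum of normalized factor rasters. The sketch below shows this on toy 3x3 rasters with made-up weights; the factor names follow the abstract, but the values and threshold are hypothetical.

```python
import numpy as np

# Toy 3x3 factor rasters, each already normalized to [0, 1]
canopy_cover    = np.array([[0.9, 0.2, 0.1], [0.7, 0.5, 0.3], [0.8, 0.6, 0.2]])
parcel_size     = np.array([[0.6, 0.4, 0.2], [0.9, 0.5, 0.1], [0.7, 0.3, 0.2]])
water_proximity = np.array([[1.0, 0.5, 0.1], [0.8, 0.4, 0.2], [0.6, 0.3, 0.1]])

# Preliminary category weightings (hypothetical; must sum to 1)
weights = {"canopy": 0.5, "parcel": 0.2, "water": 0.3}

hotspot = (weights["canopy"] * canopy_cover
           + weights["parcel"] * parcel_size
           + weights["water"] * water_proximity)

# Cells scoring above a chosen cutoff become candidate habitat hotspots
threshold = 0.7
print(np.argwhere(hotspot >= threshold))
```

In the actual workflow, the same overlay runs on council-scale rasters inside an ArcGIS Pro geoprocessing model, and the weights are the expert-refined category weightings.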

Keywords: biodiversity, GIS modeling, habitat hotspot, wildlife corridor

Procedia PDF Downloads 111
263 Understanding the Reasons for Flooding in Chennai and Strategies for Making It Flood Resilient

Authors: Nivedhitha Venkatakrishnan

Abstract:

Flooding in urban areas in India has become a recurring phenomenon and a nightmare for most cities, a consequence of man-made disruption resulting in disaster. City planning in India falls short of withstanding hydrologically generated disasters. This has become a barrier and a challenge to the development driven by urbanization: high population density, expanding informal settlements, and environmental degradation from uncollected and untreated waste flowing into natural drains and water bodies have disrupted natural hazard-protection mechanisms such as drainage channels, wetlands and floodplains. The magnitude and impact of the mishap were high because of the failure of the development policies, strategies and plans the city had adopted. In the current scenario, cities are becoming the home of the future, with economic diversification bringing more investment into cities, especially in the domains of urban infrastructure, planning and design. The uncertain urban futures of these low-elevation coastal zones face unprecedented risks and threats. The study focuses on three major pillars of resilience: Recover, Resist and Restore. Getting ready to handle such situations bridges the gap between disaster response management and risk reduction, and requires a paradigm shift. The study involved qualitative research and a system design approach (framework). The initial stage, mapping the urban water morphology against spatial growth, gave insight into the water bodies that have gone missing over the years in the process of urbanization. A major finding of the study was that missing links in the traditional water-harvesting network were a major factor in this man-made disaster. The research conceptualized a sponge-city framework that would guide growth through institutional frameworks at different levels. 
The next stage focused on understanding the implementation process at various scales to ensure the paradigm shift, demonstrating the concepts at a neighborhood level: where and how they apply, and what the functions and benefits of each component are. Design decisions were quantified in terms of rainwater harvested and surface runoff: how much water is collected, and how it can be collected, stored and reused. The study closes with further recommendations for Water Mitigation Spaces that would revive the traditional harvesting network.
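The rainwater-harvest quantification mentioned above commonly starts from a simple volume balance: harvested volume = catchment area x rainfall depth x runoff coefficient. The sketch below uses hypothetical neighborhood values; Chennai's average annual rainfall is roughly 1,400 mm, and 0.85 is a typical runoff coefficient assumed for a hard roof.

```python
def harvest_volume_m3(catchment_m2, annual_rainfall_mm, runoff_coeff):
    """Harvestable volume (m3) = area (m2) x rainfall depth (m) x runoff coefficient."""
    return catchment_m2 * (annual_rainfall_mm / 1000.0) * runoff_coeff

# Hypothetical single-roof estimate for a Chennai neighborhood
roof = harvest_volume_m3(catchment_m2=120.0,
                         annual_rainfall_mm=1400.0,
                         runoff_coeff=0.85)
print(f"≈ {roof:.0f} m3 harvestable per roof per year")
```

Summed over a neighborhood, such estimates size the storage and recharge structures a Water Mitigation Space would need, and indicate how much peak runoff is diverted from the drains.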

Keywords: flooding, man-made disaster, resilient city, traditional harvesting network, water bodies

Procedia PDF Downloads 139
262 CertifHy: Developing a European Framework for the Generation of Guarantees of Origin for Green Hydrogen

Authors: Frederic Barth, Wouter Vanhoudt, Marc Londo, Jaap C. Jansen, Karine Veum, Javier Castro, Klaus Nürnberger, Matthias Altmann

Abstract:

Hydrogen is expected to play a key role in the transition towards a low-carbon economy, especially within the transport sector, the energy sector and the (petro)chemical industry sector. However, the production and use of hydrogen only make sense if the production and transportation are carried out with minimal impact on natural resources, and if greenhouse gas emissions are reduced in comparison to conventional hydrogen or conventional fuels. The CertifHy project, supported by a wide range of key European industry leaders (gas companies, chemical industry, energy utilities, green hydrogen technology developers and automobile manufacturers, as well as other leading industrial players) therefore aims to: 1. Define a widely acceptable definition of green hydrogen. 2. Determine how a robust Guarantee of Origin (GoO) scheme for green hydrogen should be designed and implemented throughout the EU. It is divided into the following work packages (WPs). 1. Generic market outlook for green hydrogen: Evidence of existing industrial markets and the potential development of new energy related markets for green hydrogen in the EU, overview of the segments and their future trends, drivers and market outlook (WP1). 2. Definition of “green” hydrogen: step-by-step consultation approach leading to a consensus on the definition of green hydrogen within the EU (WP2). 3. Review of existing platforms and interactions between existing GoO and green hydrogen: Lessons learnt and mapping of interactions (WP3). 4. Definition of a framework of guarantees of origin for “green” hydrogen: Technical specifications, rules and obligations for the GoO, impact analysis (WP4). 5. Roadmap for the implementation of an EU-wide GoO scheme for green hydrogen: the project implementation plan will be presented to the FCH JU and the European Commission as the key outcome of the project and shared with stakeholders before finalisation (WP5 and 6). 
Definition of Green Hydrogen: CertifHy Green hydrogen is hydrogen from renewable sources that is also CertifHy Low-GHG-emissions hydrogen. Hydrogen from renewable sources is hydrogen belonging to the share of production equal to the share of renewable energy sources (as defined in the EU RES directive) in the energy consumed for hydrogen production, excluding ancillary functions. CertifHy Low-GHG hydrogen is hydrogen whose emissions are lower than the defined CertifHy Low-GHG-emissions threshold, i.e. 36.4 gCO2eq/MJ, produced in a plant where the average emissions intensity of the non-CertifHy Low-GHG hydrogen production (based on an LCA approach), since sign-up or over the past 12 months, does not exceed the emissions intensity of the benchmark process (SMR of natural gas), i.e. 91.0 gCO2eq/MJ.
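The two-threshold rule above can be sketched as a simple check. This is an illustrative reading of the abstract's definition only; the function and variable names are hypothetical and not part of any official CertifHy tooling.

```python
# Hedged sketch of the CertifHy Low-GHG qualification rule as stated above.
# Thresholds are taken from the abstract; everything else is illustrative.

LOW_GHG_THRESHOLD = 36.4  # gCO2eq/MJ, CertifHy Low-GHG-emissions threshold
BENCHMARK_SMR = 91.0      # gCO2eq/MJ, benchmark process (SMR of natural gas)

def qualifies_low_ghg(batch_intensity, plant_avg_non_low_ghg):
    """Return True if a hydrogen batch may count as CertifHy Low-GHG.

    batch_intensity: LCA emissions intensity of the batch (gCO2eq/MJ).
    plant_avg_non_low_ghg: average intensity of the plant's remaining
    (non-Low-GHG) production over the accounting period (gCO2eq/MJ).
    """
    return (batch_intensity < LOW_GHG_THRESHOLD
            and plant_avg_non_low_ghg <= BENCHMARK_SMR)

print(qualifies_low_ghg(30.0, 85.0))  # both conditions met
print(qualifies_low_ghg(30.0, 95.0))  # plant average exceeds the SMR benchmark
```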

Keywords: green hydrogen, cross-cutting, guarantee of origin, certificate, DG energy, bankability

Procedia PDF Downloads 491
261 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging

Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati

Abstract:

Diffuse Optical Tomography (DOT) is an emerging medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in tissue. In this contribution, we present our research on hybrid knowledge-driven/data-driven approaches which exploit well-assessed physical models and build neural networks upon them to integrate the available data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods. The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = argmin_q D(y, ỹ), (1) where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network, or in a classic way via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks. 
This procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = argmin_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1). 3. Discussion and Conclusion. DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and yields improved results, especially with a restricted dataset and in the presence of variable sources of noise.

Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization

Procedia PDF Downloads 74
260 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques

Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang

Abstract:

Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy to overcome this problem is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast spoiled gradient-echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of FIESTA images is related to T1/T2 and differs from that of SSFSE. The advantages and disadvantages of these scanning sequences for fetal imaging have not been clearly demonstrated yet. This study aimed to compare these three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations. The image qualities and influencing factors among the three techniques were explored. A 1.5T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used in this study. Twenty-five pregnant women were recruited to undergo fetal MRI examinations with SSFSE, FIESTA and FSPGR scanning. Multi-oriented and multi-slice images were acquired. Afterwards, MR images were interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA can provide good image quality among these three rapid imaging techniques. Vessel signals on FIESTA images are higher than those on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but it is prone to banding artifacts. FSPGR-T1WI renders a lower Signal-to-Noise Ratio (SNR) because it severely suffers from the impact of maternal and fetal movements. The scan times for the three scanning sequences were 25 sec (T2W-SSFSE), 20 sec (FIESTA) and 18 sec (FSPGR). 
In conclusion, all three rapid MR scanning sequences can produce high-contrast, high-spatial-resolution images. The scan time can be shortened by incorporating parallel imaging techniques, so that the motion artifacts caused by fetal movements can be reduced. A good understanding of the characteristics of these three rapid MRI techniques helps technologists obtain reproducible, high-quality fetal anatomy images for prenatal diagnosis.

Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE

Procedia PDF Downloads 528
259 Conflict around the Brownfield Reconversion of the Canadian Forces Base Rockcliffe in Ottawa: A Clash of Ambitions and Visions in Canadian Urban Sustainability

Authors: Kenza Benali

Abstract:

Over the past decade, a number of remarkable projects in urban brownfield reconversion emerged across Canada, including the reconversion of former military bases owned by the Canada Lands Company (CLC) into sustainable communities. However, unlike other developments, the regeneration project of the former Canadian Forces Base Rockcliffe in Ottawa – which was announced as one of the most ambitious Smart Growth projects in Canada – faced serious obstacles in terms of social acceptance by the local community, particularly urban minorities composed of Francophones, Indigenous people and vulnerable groups who live near or on the Base. This turn of events led to the project being postponed and even reconsidered. Through an analysis of its press coverage, this research aims to understand the causes of this urban conflict, which lasted for nearly ten years. The findings reveal that the conflict is not limited to the “standard” issues common to most conflicts related to urban mega-projects around the world – e.g., proximity issues (threats to the quality of the surrounding neighbourhoods; noise, traffic, pollution, new-build gentrification) often associated with NIMBY phenomena. In this case, the local actors questioned the purpose of the project (for whom and for what types of uses is it conceived?), its local implementation (to what extent are the local history and existing environment taken into account?), and the degree of involvement of the local population in the decision-making process (with whom is the project built?). Moreover, the interests of the local actors have “jumped scales” and transcend the micro-territorial level of their daily life to take on a national and even international dimension. They defined an alternative view of how this project, considered strategic because of its location in the nation’s capital, should serve as a reference as well as an international showcase of Canadian ambition and achievement in terms of urban sustainability. 
This vision actually promoted a territorial and national identity approach – in which certain cultural values are highly significant (respect for social justice, inclusivity, ethnic diversity, cultural heritage, etc.) – as a counterweight to the planners’ vision, which is criticized as a normative/universalist logic that ignores territorial peculiarities.

Keywords: smart growth, brownfield reconversion, sustainable neighborhoods, Canada Lands Company, Canadian Forces Base Rockcliffe, urban conflicts

Procedia PDF Downloads 381
258 A Concept in Addressing the Singularity of the Emerging Universe

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times has been studied, leading to what is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations from this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. 
This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a “neutral state”, with an energy level which is referred to as “base energy” capable of converting into other states. Although it follows the same principles, the unique quanta state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture, the origin of the base energy should also be identified. This matter is the subject of the first study in the series, “A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing”, which is discussed in detail. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation

Procedia PDF Downloads 88
257 Food Strategies in the Mediterranean Basin, Possible for Food Safety and Security

Authors: Lorenza Sganzetta, Nunzia Borrelli

Abstract:

The research intends to reflect on the current mapping of Food Strategies, on the reasons why, within the panorama of planning objectives, such sustainability priorities are located in those geographic areas, and on the evolution of these priorities in Mediterranean planning dispositions. The rapid population growth that is affecting global cities poses an enormous challenge to conventional resource-intensive food production and supply, and creates an urgent need to face food safety, food security and sustainability concerns. Urban or Territorial Food Strategies can provide an interesting path for the development of this new agenda within the imperative principle of sustainability. Specifically, it is relevant to explore what ‘sustainability’ means within these policies. Most of these plans include actions related to four main components and interpretations of sustainability: food security and safety, food equity, environmental sustainability itself, and cultural identity; at the design phase, they differ slightly from each other according to their degree of approximation to one of these dimensions. Moving from these assumptions, the article analyzes some practices and policies representative of different Food Strategies of the world and focuses on the Mediterranean ones, on the problems and negative externalities from which they start, on the first interventions being implemented, and on their main objectives. We mainly use qualitative data from primary and secondary collections. So far, an essential observation could be made about the relationship between these sustainability dimensions and geography. 
In statistical terms, US and Canadian policies tended to devote a large research space to health issues and access to food; the northern European ones showed special attention to environmental issues and the shortening of the supply chain; and finally the policies that, even if limited in number, were being developed in the Mediterranean basin were characterized by a strong territorial and cultural imprint, and their major aim was to preserve local production and the contact between the productive land and the end consumer. Recently, though, Mediterranean food planning strategies have been focusing more on health-related and food accessibility issues, analyzing our diets not just as a matter of culture and territorial branding but as tools for reducing public health costs and improving access to fresh food for everyone. The article then reflects on how food safety, food security and health are entering the new agenda of the Mediterranean Food Strategies. The research hypothesis suggests that the economic crisis that in recent years affected both producers and consumers had a significant impact on nutrition habits and on the redefinition of food poverty, even in the homeland of the healthy Mediterranean diet. This trend and other variables influenced the orientation and the objectives of the food strategies.

Keywords: food security, food strategy, health, sustainability

Procedia PDF Downloads 223
256 Assessment of Urban Infrastructure and Health Using Principal Component Analysis and Geographic Information System: A Case of Ahmedabad, India

Authors: Anusha Vaddiraj Pallapu

Abstract:

Across the globe, there is a steady increase in people residing in urban areas. Due to this increase in urban population, urban health is being affected. Major issues such as overcrowding, air pollution, unhealthy diets, inadequate infrastructure, poor solid waste management systems and insufficient access to health facilities are clearly reflected in urban health statistics, with diseases and deaths increasing rapidly in urban areas. Therefore, the present study aims to assess health statistics and infrastructure services in urban areas in order to understand the cause-and-effect relationship between infrastructure, its management and (water-borne) diseases. Most Indian cities have municipal boundaries authorized by their respective municipal corporations and development authorities. Generally, cities have various zones under which municipal wards exist. The paper focuses on the city of Ahmedabad, in Gujarat state. Ahmedabad Municipal Corporation (AMC) is divided into six zones, namely the Central zone, West zone, New-West zone, East zone, North zone, and South zone. Each zone includes various wards. The incidence of infrastructure-linked diseases in Ahmedabad, such as water-borne diseases, was identified. Subsequently, the occurrence of water-borne diseases was examined at the zone level. The study methodology follows four steps, i.e. 
1) Pre-field literature study: study of sewerage systems in urban areas, their best practices, and public health status globally and in the Indian scenario; 2) Field study: data collection and interviews of stakeholders regarding health status and issues at each zone and ward level; 3) Post-field analysis: data analysis with a qualitative description of each ward in each zone, followed by correlation coefficient analysis between sewerage coverage, diseases and density of each ward using geographic information system (GIS) mapping; 4) Identification of reasons: the health effects in each zone and ward, followed by correlation analysis for each reason. The results reveal that health conditions within the Ahmedabad municipal zones are affected by the slums created by people who have migrated from various rural and urban areas. It is also observed that, due to the increase in population, water supply and sewerage management are strained. The overall effect of inadequate infrastructure is the emergence of health diseases, which is detailed in the paper using GIS for this Indian city.
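The ward-level correlation step (3) can be sketched in a few lines. The figures below are hypothetical, not AMC data: sewerage coverage and water-borne disease incidence for eight illustrative wards.

```python
import numpy as np

# Illustrative sketch of the ward-level correlation analysis in step 3.
# Hypothetical data: sewerage coverage (%) and water-borne disease
# incidence (cases per 1,000 residents) for eight wards.
coverage = np.array([95, 88, 76, 64, 55, 43, 30, 22], dtype=float)
incidence = np.array([2.1, 2.8, 4.0, 5.5, 6.1, 7.9, 9.4, 11.2])

r = np.corrcoef(coverage, incidence)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly negative: better coverage, fewer cases
```

In the study itself, each ward's coefficient would be joined back to the GIS layer so that the spatial pattern of the correlation can be mapped alongside density.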

Keywords: infrastructure, municipal wards, GIS, water supply, sewerage, medical facilities, water borne diseases

Procedia PDF Downloads 210
255 Assessment of Microclimate in Abu Dhabi Neighborhoods: On the Utilization of Native Landscape in Enhancing Thermal Comfort

Authors: Maryam Al Mheiri, Khaled Al Awadi

Abstract:

Urban population is continuously increasing worldwide, and the speed at which cities urbanize creates major challenges, particularly in terms of creating sustainable urban environments. Rapid urbanization often leads to negative environmental impacts and changes in urban microclimates. Moreover, when rapid urbanization is paired with limited landscape elements, the effects on human health (due to increased pollution) and on thermal comfort (due to Urban Heat Island effects) are amplified. Urban Heat Island (UHI) describes the increase of temperatures in urban areas in comparison to their rural surroundings and, as we discuss in this paper, it impacts pedestrian comfort, reducing the number of walking trips and public space use. It is thus very necessary to investigate the quality of outdoor built environments in order to improve the quality of life in cities. The main objective of this paper is to address the morphology of Emirati neighborhoods, setting a quantitative baseline by which to assess and compare spatial characteristics and microclimate performance of existing typologies in Abu Dhabi. This morphological mapping and analysis will help to understand the built landscape of Emirati neighborhoods in this city, whose form has changed and evolved across different periods. This will eventually help to model the use of different design strategies, such as landscaping, to mitigate UHI effects and enhance outdoor urban comfort. Further, the study examines the impact of different native plant types and species in reducing UHI effects and enhancing outdoor urban comfort, allowing for the assessment of the impact of increasing landscaped areas in these neighborhoods. This study uses ENVI-met, an analytical, three-dimensional, high-resolution microclimate modeling software. 
This micro-scale urban climate model will be used to evaluate existing conditions and generate scenarios in different residential areas, with different vegetation surfaces and landscaping, and to examine their impact on surface temperatures during summer and autumn. In parallel to these simulations, field measurements will be included to calibrate the ENVI-met model. This research therefore takes an experimental approach, using simulation software and a case study strategy, for the evaluation of a sample of residential neighborhoods. A comparison of the results of these scenarios is a first step towards making recommendations about what constitutes sustainable landscapes for Abu Dhabi neighborhoods.

Keywords: landscape, microclimate, native plants, sustainable neighborhoods, thermal comfort, urban heat island

Procedia PDF Downloads 309
254 The Study of Intangible Assets at Various Firm States

Authors: Gulnara Galeeva, Yulia Kasperskaya

Abstract:

The study deals with a relevant problem related to the formation of an efficient investment portfolio for an enterprise. The structure of the investment portfolio is connected to the degree of influence of intangible assets on the enterprise’s income. This determines the importance of research on the content of intangible assets. However, studies of intangible assets do not take into consideration how the enterprise's state can affect the content and the importance of intangible assets for the enterprise's income. This affects the accuracy of the calculations. In order to study this problem, the research was divided into several stages. In the first stage, intangible assets were classified, based on their synergies, as underlying intangibles and additional intangibles. In the second stage, this classification was applied. It showed that the lifecycle model and the theory of abrupt development of the enterprise, which are taken into account while designing investment projects, constitute limit cases of a more general theory of bifurcations. The research identified that the qualitative content of intangible assets significantly depends on how close the enterprise is to crisis. In the third stage, the author developed and applied the Wide Pairwise Comparison Matrix method. This allowed the authors to establish that using the ratio of the standard deviation to the mean value of the elements of the priority vector of intangible assets makes it possible to estimate the probability of a full-blown crisis of the enterprise. The author identified a criterion which allows fundamental decisions on investment feasibility to be made. The study also developed an additional rapid method of assessing the enterprise's overall status, based on a questionnaire survey of its Director. The questionnaire consists of only two questions. 
The research specifically focused on the fundamental role of stochastic resonance in the emergence of bifurcation (crisis) in the economic development of the enterprise. The synergetic approach made it possible to describe the mechanism of the onset of crisis in detail and also to identify a range of universal ways of overcoming the crisis. It was outlined that the structure of intangible assets transforms into a more organized state, with strengthened synchronization of all processes, as a result of the impact of sporadic (white) noise. The obtained results offer managers and business owners a simple and affordable method of investment portfolio optimization, which takes into account how close the enterprise is to a state of full-blown crisis.
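The dispersion-based crisis indicator described above (standard deviation over mean of the priority vector) can be sketched with the standard AHP principal-eigenvector construction. The 3x3 matrix below is illustrative only; the paper's "Wide Pairwise Comparison Matrix" is not specified in the abstract.

```python
import numpy as np

# Hedged sketch: derive a priority vector from a pairwise comparison matrix
# (AHP principal-eigenvector method), then compute the coefficient of
# variation (std / mean) of its elements as the proposed crisis indicator.
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # illustrative judgments for 3 intangibles

# Power iteration for the principal (Perron) eigenvector, normalized to sum 1
w = np.ones(M.shape[0])
for _ in range(100):
    w = M @ w
    w /= w.sum()

cv = w.std() / w.mean()   # higher dispersion -> hypothesized crisis signal
print("priorities:", np.round(w, 3), "CV:", round(cv, 3))
```

The interpretation of particular CV values as crisis probabilities belongs to the paper; the code only reproduces the ratio itself.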

Keywords: analytic hierarchy process, bifurcation, investment portfolio, intangible assets, wide matrix

Procedia PDF Downloads 206
253 Cassava Plant Architecture: Insights from Genome-Wide Association Studies

Authors: Abiodun Olayinka, Daniel Dzidzienyo, Pangirayi Tongoona, Samuel Offei, Edwige Gaby Nkouaya Mbanjo, Chiedozie Egesi, Ismail Yusuf Rabbi

Abstract:

Cassava (Manihot esculenta Crantz) is a major source of starch for various industrial applications. However, the traditional cultivation and harvesting methods of cassava are labour-intensive and inefficient, limiting the supply of fresh cassava roots for industrial starch production. To achieve improved productivity and quality of fresh cassava roots through mechanized cultivation, cassava cultivars with compact plant architecture and moderate plant height are needed. Plant architecture-related traits, such as plant height, harvest index, stem diameter, branching angle, and lodging tolerance, are critical for crop productivity and suitability for mechanized cultivation. However, the genetics of cassava plant architecture remain poorly understood. This study aimed to identify the genetic bases of the relationships between plant architecture traits and productivity-related traits, particularly starch content. A panel of 453 clones developed at the International Institute of Tropical Agriculture, Nigeria, was genotyped and phenotyped for 18 plant architecture and productivity-related traits at four locations in Nigeria. A genome-wide association study (GWAS) was conducted using the phenotypic data from a panel of 453 clones and 61,238 high-quality Diversity Arrays Technology sequencing (DArTseq) derived Single Nucleotide Polymorphism (SNP) markers that are evenly distributed across the cassava genome. Five significant associations between ten SNPs and three plant architecture component traits were identified through GWAS. We found five SNPs on chromosomes 6 and 16 that were significantly associated with shoot weight, harvest index, and total yield through genome-wide association mapping. We also discovered an essential candidate gene that is co-located with peak SNPs linked to these traits in M. esculenta. 
A review of the cassava reference genome v7.1 revealed that the SNP on chromosome 6 is in proximity to Manes.06G101600.1, a gene that regulates endodermal differentiation and root development in plants. The findings of this study provide insights into the genetic basis of plant architecture and yield in cassava. Cassava breeders could leverage this knowledge to optimize plant architecture and yield in cassava through marker-assisted selection and targeted manipulation of the candidate gene.
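A single-marker association scan of the kind described above can be sketched as follows. The data are simulated (not the IITA panel), and real GWAS pipelines additionally correct for kinship and population structure; the per-SNP regression here is only the core statistic.

```python
import numpy as np

# Hedged sketch of a single-marker GWAS scan: regress phenotype on genotype
# dosage (0/1/2) for each SNP and keep the |t|-statistic. Simulated data.
rng = np.random.default_rng(1)
n_clones, n_snps = 453, 200
G = rng.integers(0, 3, size=(n_clones, n_snps)).astype(float)  # dosages
beta_true = np.zeros(n_snps)
beta_true[7] = 1.5                                # one simulated causal SNP
y = G @ beta_true + rng.standard_normal(n_clones) # phenotype with noise

stats = np.empty(n_snps)
for j in range(n_snps):
    g = G[:, j] - G[:, j].mean()
    yc = y - y.mean()
    b = (g @ yc) / (g @ g)                        # simple-regression slope
    resid = yc - b * g
    se = np.sqrt(resid @ resid / (n_clones - 2) / (g @ g))
    stats[j] = abs(b / se)                        # |t|-statistic per SNP

print("top hit at SNP index:", int(stats.argmax()))  # → 7
```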

Keywords: Manihot esculenta Crantz, plant architecture, DArtseq, SNP markers, genome-wide association study

Procedia PDF Downloads 68
252 Assessment of the Landscaped Biodiversity in the National Park of Tlemcen (Algeria) Using Per-Object Analysis of Landsat Imagery

Authors: Bencherif Kada

Abstract:

In forest management practice, the landscape and the Mediterranean forest are never treated as linked objects. But sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscape units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classifications, the landscape biodiversity of the National Park of Tlemcen (Algeria). The methodology used is based on ground data and on the basic processing units of object-oriented classification, i.e., segments, so-called image-objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper plus (ETM+) imagery is performed on image objects and not on pixels. The advantages of object-oriented classification are that it makes full use of meaningful statistics and texture calculations, uncorrelated shape information (e.g., length-to-width ratio, direction, and area of an object), and topological features (neighbour, super-object, etc.), as well as the close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbours method is more efficient than per-pixel classification. It simplifies the content of the image while preserving spectrally and spatially homogeneous types of land cover, such as Aleppo pine stands, cork oak groves, mixed groves of cork oak, holm oak, and zen oak, mixed groves of holm oak and thuja, water plane, dense and open shrub-lands of oaks, vegetable crops or orchards, herbaceous plants, and bare soils. Texture attributes seem to provide no useful information, while spatial attributes of shape and compactness perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscape sub-units are individualized while conserving the spatial information. 
Continuously dominant dense stands over a large area were grouped into a single class, as were dense stands fragmented with clear stands. Low shrubland formations and high wooded shrublands are well individualized, although with some confusion with enclaves for the former. Overall, a visual evaluation shows that the classification reflects the actual spatial state of the study area at the landscape level.
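The per-object k-nearest-neighbours step can be sketched as follows: each segment (image-object) is reduced to a feature vector (e.g., spectral means plus shape metrics such as length-to-width ratio and compactness) and assigned the majority class of its k closest labelled training objects. The features and labels below are simulated, not the Tlemcen dataset.

```python
import numpy as np

# Hedged sketch of per-object k-NN classification on simulated segment features.
rng = np.random.default_rng(2)

def make_objects(center, n, label):
    # n image-objects drawn around a class centre in a 3-feature space
    X = center + 0.3 * rng.standard_normal((n, 3))
    return X, np.full(n, label)

# Two toy cover classes, e.g. 0 = Aleppo pine stand, 1 = bare soil
X0, y0 = make_objects(np.array([0.2, 0.4, 0.6]), 30, 0)
X1, y1 = make_objects(np.array([0.8, 0.7, 0.2]), 30, 1)
X_train = np.vstack([X0, X1])
y_train = np.concatenate([y0, y1])

def knn_predict(x, k=5):
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training objects
    votes = y_train[np.argsort(d)[:k]]        # classes of the k nearest
    return np.bincount(votes).argmax()        # majority vote

print(knn_predict(np.array([0.25, 0.45, 0.55])))  # query near the class-0 centre
```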

Keywords: forest, oaks, remote sensing, diversity, shrublands

Procedia PDF Downloads 119
251 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems

Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue

Abstract:

Recent technological advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which digital precoders alone are sufficient to accomplish precoding, MIMO technology at mmWave is different because of digital precoding limitations. Moreover, fully digital precoding requires a large number of radio frequency (RF) chains, each supporting its own signal mixers and analog-to-digital converters. As RF chain cost and power consumption are increasing, we need to resort to another alternative. Although the hybrid precoding architecture, based on a combination of a baseband precoder and an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders remains open. According to the mapping strategies from RF chains to the different antenna elements, there are two main categories of hybrid precoding architecture. As a hybrid precoding sub-array architecture, the partially-connected structure reduces hardware complexity by using fewer phase shifters, at the cost of some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a matrix factorization problem. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. 
Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces computational complexity. Furthermore, valuable design insights are provided when the proposed algorithm is used to compare the partially-connected and fully-connected hybrid precoding structures in simulation.
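The iterative-hard-thresholding core can be illustrated on a generic sparse recovery problem y = A x with a k-sparse x: alternate a gradient step with a projection that keeps only the k largest-magnitude entries. This is a hedged stand-in for the precoder factorization, which applies the same principle inside an alternating-minimization loop over the analog and digital precoder factors.

```python
import numpy as np

# Hedged sketch of iterative hard thresholding (IHT) on sparse recovery.
rng = np.random.default_rng(3)
m, n, k = 50, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix, ~unit-norm columns
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = 3.0                          # k-sparse ground truth
y = A @ x_true                                 # noiseless measurements

def hard_threshold(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]           # keep k largest-magnitude entries
    out[idx] = v[idx]
    return out

mu = 1.0 / np.linalg.norm(A, 2) ** 2           # step size for stable iteration
x = np.zeros(n)
for _ in range(500):
    x = hard_threshold(x + mu * A.T @ (y - A @ x), k)  # gradient step + projection

print("relative residual:", np.linalg.norm(y - A @ x) / np.linalg.norm(y))
```

In the partially-connected precoder design, the thresholding step instead enforces the structural constraint imposed by the phase-shifter network rather than plain sparsity.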

Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure

Procedia PDF Downloads 321
250 Urban Waste Management for Health and Well-Being in Lagos, Nigeria

Authors: Bolawole F. Ogunbodede, Mokolade Johnson, Adetunji Adejumo

Abstract:

A high population growth rate, reactive infrastructure provision, and the inability of physical planning to keep pace with development are responsible for the waste-water crisis in the Lagos Metropolis. The septic tank is still the most prevalent waste-water holding system; unfortunately, there is a dearth of septage treatment infrastructure. Public waste-water treatment statistics relative to the 23 million people in Lagos State are worrisome: 1.85 billion cubic meters of wastewater are generated daily, yet only 5% of the 26 million population is connected to the public sewerage system. This is compounded by inadequate budgetary allocation and erratic power supply over the last two decades. This paper explored a community participatory waste-water management alternative in the Oworonshoki Municipality of Lagos. The study is underpinned by decentralized waste-water management systems in built-up areas. The initiative addresses five waste-water stages - generation, storage, collection, processing and disposal - through participatory decision making in two Oworonshoki Community Development Association (CDA) areas. Drone-assisted mapping highlighted building footprints. Structured interviews and focus group discussions with landlord associations in the CDA areas provided a collaborative platform for decision making. Water stagnation in primary open drainage channels and in the natural retention ponds of fringing wetlands is traceable to the more frequent climate-change-induced tidal influences of recent decades. A rising water table, resulting in septic-tank leakage and water pollution, is reported to be responsible for the increase in water-borne illnesses documented in primary health centers, in addition to the unhealthy dumping of solid waste into the drainage channels. Uncontrolled disposal renders surface waters and underground water systems unsafe for human and recreational use, destroys biotic life, and poisons the fragile sand barrier-lagoon urban ecosystems.
A clustered decentralized system was conceptualized to serve 255 households. Stakeholders agreed on a public-private partnership initiative for efficient waste-water service delivery.

Keywords: health, infrastructure, management, septage, well-being

Procedia PDF Downloads 173
249 Cumulative Pressure Hotspot Assessment in the Red Sea and Arabian Gulf

Authors: Schröde C., Rodriguez D., Sánchez A., Abdul Malak, Churchill J., Boksmati T., Alharbi, Alsulmi H., Maghrabi S., Mowalad, Mutwalli R., Abualnaja Y.

Abstract:

Formulating a strategy for the sustainable development of the Kingdom of Saudi Arabia’s coastal and marine environment is at the core of the “Marine and Coastal Protection Assessment Study for the Kingdom of Saudi Arabia Coastline (MCEP)”, which was set up in the context of Vision 2030 by the Saudi Arabian government and aimed at providing a first comprehensive ‘Status Quo Assessment’ of the Kingdom’s marine environment to inform a sustainable development strategy and to serve as a baseline for future monitoring activities. This baseline assessment relied on scientific evidence of the drivers, pressures, and their impact on the environments of the Red Sea and Arabian Gulf. A key element of the assessment was the cumulative pressure hotspot analysis developed for the Kingdom’s national waters in both seas, following the principles of the Driver-Pressure-State-Impact-Response (DPSIR) framework and using the cumulative pressure and impact assessment (CPIA) methodology. The ultimate goals of the analysis were to map and assess the main hotspots of environmental pressure and to identify priority areas for further field surveillance and urgent management action. The study identified maritime transport, fisheries, aquaculture, oil, gas, energy, coastal industry, coastal and maritime tourism, and urban development as the main drivers of pollution in Saudi Arabian marine waters. For each of these drivers, pressure indicators were defined to spatially assess the driver’s potential influence on the coastal and marine environment. The assessment identified 90 hotspot locations, which spatial grouping reduced to 10 hotspot areas: two in the Arabian Gulf and eight in the Red Sea. The hotspot mapping revealed clear spatial patterns of drivers, pressures and hotspots within the waters under KSA’s maritime jurisdiction in the Red Sea and Arabian Gulf.
The cascading assessment approach based on the DPSIR framework ensured that the root causes of the hotspot patterns, i.e. the human activities and other drivers, could be identified. The adapted CPIA methodology allowed the available data to be combined into a consistent spatial assessment of cumulative pressure, and the most critical hotspots to be identified by overlapping cumulative pressure with areas of sensitive biodiversity. Further improvements are expected from enhancing the data sources for drivers and pressure indicators, fine-tuning the decay factors and distances of the pressure indicators, and including trans-boundary pressures across the regional seas.
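The cumulative pressure step described above can be sketched numerically: each pressure layer is rescaled, the layers are summed, and the total is weighted by biodiversity sensitivity to flag candidate hotspot cells. The layer names, weighting scheme, and percentile threshold below are illustrative, not the study's actual indicator set:

```python
import numpy as np

def cumulative_pressure(pressures, sensitivity):
    """CPIA-style score: rescale each pressure layer to [0, 1], sum the
    layers, and weight the total by habitat sensitivity (also in [0, 1])
    so that high pressure over sensitive biodiversity scores highest."""
    total = np.zeros_like(np.asarray(sensitivity, dtype=float))
    for layer in pressures.values():
        layer = np.asarray(layer, dtype=float)
        span = layer.max() - layer.min()
        if span > 0:
            total += (layer - layer.min()) / span
    return total * np.asarray(sensitivity, dtype=float)

def hotspot_mask(score, pct=90):
    """Flag grid cells at or above the pct-th percentile of the score."""
    return score >= np.percentile(score, pct)
```

Grouping adjacent flagged cells (e.g. by connected components) would then reduce the raw hotspot locations to a smaller set of hotspot areas, as in the 90-to-10 reduction reported above.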

Keywords: Arabian Gulf, DPSIR, hotspot, Red Sea

Procedia PDF Downloads 138
248 Environmental Related Mortality Rates through Artificial Intelligence Tools

Authors: Stamatis Zoras, Vasilis Evagelopoulos, Theodoros Staurakas

Abstract:

The association of elevated air pollution levels and extreme climate conditions (temperature, particulate matter, ozone levels, etc.) with mental consequences has recently been the focus of a significant number of studies. The impact varies depending on the time of year, whether during hot or cold periods, and is strongest when extreme air pollution and weather events are observed, e.g. air pollution episodes and persistent heatwaves. It also varies spatially, because air quality and climate extremes affect human health differently in metropolitan and rural areas: a given air pollutant concentration or climate extreme takes a different form of impact depending on whether the focus area is the countryside or the urban environment. In the built environment, the effects of climate extremes are driven through the resulting microclimate, which must be studied more thoroughly. Variables such as biological factors and age groups may be implicated by environmental factors such as increased air pollution and noise levels and the overheating of buildings, in contrast to rural areas. Gridded air quality and climate variables derived from the land surface observation network of West Macedonia in Greece will be analysed against mortality data in a spatial format for the region. Artificial intelligence (AI) tools will be used for data correction and for predicting health deterioration under given climatic conditions and air pollution levels at the local scale. This would reveal the implications of the built environment relative to the countryside. The air pollution and climatic data were collected from meteorological stations and span the period from 2000 to 2009. These will be projected against mortality rates in daily, monthly, seasonal and annual grids.
The grids will operate as AI-based warning models for decision makers, mapping health conditions in rural and urban areas to improve awareness in the healthcare system by taking predicted changes in climate conditions into account. Gridded climate conditions and air quality levels against mortality rates will be presented as AI-analysed gridded indicators of the implicated variables. An AI-based gridded warning platform at the local scale is then developed as a future awareness platform at the regional level.
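The per-cell modelling such a warning grid implies can be illustrated with a toy least-squares sketch. The predictors (temperature, PM10), function names, and alert threshold are illustrative assumptions, not the study's actual model:

```python
import numpy as np

def fit_mortality_model(temp, pm10, deaths):
    """Least-squares fit of daily deaths against temperature and PM10
    for one grid cell: deaths ~ b0 + b1*temp + b2*pm10."""
    X = np.column_stack([np.ones_like(temp), temp, pm10])
    beta, *_ = np.linalg.lstsq(X, deaths, rcond=None)
    return beta

def warning_days(beta, temp, pm10, threshold):
    """Flag days whose predicted mortality exceeds an alert threshold."""
    pred = beta[0] + beta[1] * temp + beta[2] * pm10
    return pred > threshold
```

Fitting one such model per grid cell, then evaluating it on forecast climate and air-quality fields, yields exactly the kind of gridded warning indicator the abstract describes.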

Keywords: air quality, artificial intelligence, climatic conditions, mortality

Procedia PDF Downloads 112
247 Systematic Identification of Noncoding Cancer Driver Somatic Mutations

Authors: Zohar Manber, Ran Elkon

Abstract:

Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in a tumor's genome are functionally neutral; however, some damage critical processes and provide the tumor with a selective growth advantage (these are termed cancer driver mutations). Current research on the functional significance of SMs is mainly focused on finding alterations in protein-coding sequences. However, the exome comprises only 3% of the human genome, and thus SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored, mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines, for each gene, its set of regulating enhancers (termed its "set of regulatory elements" (SRE)). Considering the multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach depends greatly on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and by statistical analyses that often considered each regulatory element separately.
In this study, we analyzed more than 2,500 whole-genome sequence (WGS) samples provided by The Cancer Genome Atlas (TCGA) and The International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took into account the combinatorial aspect of gene regulation by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomics resources. The identification of candidate driver noncoding SMs is based on their recurrence. We searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected - in seven different cancer types - numerous "hotspots" for SMs. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene (using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS).
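The recurrence test sketched above can be illustrated as a Poisson tail test of an SRE's pooled mutation count against a background mutation rate. This is a simplification of the global/local background model described in the abstract, and the lengths and rates used below are illustrative:

```python
from math import exp

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via the lower-tail sum."""
    if k <= 0:
        return 1.0
    term = cdf = exp(-lam)        # P(X = 0)
    for i in range(1, k):
        term *= lam / i           # P(X = i)
        cdf += term
    return max(0.0, 1.0 - cdf)    # 1 - P(X <= k - 1)

def sre_hotspot_pvalue(observed, sre_length_bp, n_samples, bg_rate):
    """Recurrence test for one gene's set of regulatory elements (SRE):
    all enhancers of the gene are pooled into one unit of total length
    sre_length_bp, and the observed somatic mutation count across
    n_samples is compared with a per-bp, per-sample background rate."""
    expected = sre_length_bp * n_samples * bg_rate
    return poisson_sf(observed, expected)
```

Pooling the enhancers before testing is the key move: mutations scattered thinly across several enhancers of one gene, none individually significant, can still produce a significant pooled count.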

Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements

Procedia PDF Downloads 102
246 Applications of Multi-Path Futures Analyses for Homeland Security Assessments

Authors: John Hardy

Abstract:

A range of future-oriented intelligence techniques is commonly used by states to assess their national security and to develop strategies to detect and manage threats, to develop and sustain capabilities, and to recover from attacks and disasters. Although homeland security organizations use futures intelligence tools to generate scenarios and simulations that inform their planning, there have been relatively few studies of the methods available or of their applications for homeland security purposes. This study presents an assessment of one category of strategic intelligence techniques, termed Multi-Path Futures Analyses (MPFA), and how it can be applied to three distinct tasks in analyzing homeland security issues. Within this study, MPFA are categorized as a suite of analytic techniques which can include effects-based operations principles, general morphological analysis, multi-path mapping, and multi-criteria decision analysis techniques. These techniques generate multiple pathways to potential futures and thereby provide insight into the relative influence of individual drivers of change, the desirability of particular combinations of pathways, and the kinds of capabilities which may be required to influence or mitigate certain outcomes. The study assessed eighteen uses of MPFA for homeland security purposes and found five key applications which add significant value to analysis. The first application is generating measures of success and associated progress indicators for strategic planning. The second application is identifying homeland security vulnerabilities and the relationships between individual drivers of vulnerability which may amplify or dampen their effects. The third application is selecting appropriate resources and methods of action to influence individual drivers. The fourth application is prioritizing and optimizing path selection preferences and decisions.
The fifth application is informing capability development and procurement decisions to build and sustain homeland security organizations. Each of these applications provides a unique perspective of a homeland security issue by comparing a range of potential future outcomes at a set number of intervals and by contrasting the relative resource requirements, opportunity costs, and effectiveness measures of alternative courses of action. These findings indicate that MPFA enhances analysts’ ability to generate tangible measures of success, identify vulnerabilities, select effective courses of action, prioritize future pathway preferences, and contribute to ongoing capability development in homeland security assessments.
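The multi-criteria decision analysis component of such a suite can be sketched as a weighted-sum ranking of courses of action. The criteria, weights, and option names below are hypothetical:

```python
def rank_courses_of_action(options, weights):
    """Weighted-sum multi-criteria scoring: every criterion is scored in
    [0, 1] with higher = better (cost-type criteria pre-inverted), and
    options are ranked by their weighted total, best first."""
    def total(name):
        return sum(weights[c] * options[name][c] for c in weights)
    return sorted(options, key=total, reverse=True)
```

In an MPFA setting, each "option" would be one candidate pathway through the multi-path map, scored on measures such as effectiveness, resource requirements, and opportunity cost, so that pathway preferences can be prioritized as described above.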

Keywords: homeland security, intelligence, national security, operational design, strategic intelligence, strategic planning

Procedia PDF Downloads 137
245 Ecological Crisis: A Buddhist Approach

Authors: Jaharlal Debbarma

Abstract:

The ecological crisis has become a threat to the earth’s well-being. Man’s ambitious desire for wealth, pleasure, fame, longevity and happiness has extracted natural resources so vastly that the earth can no longer sustain a healthy life. Man’s greed for wealth and power has led to the setting up of vast factories, which in turn created the problems of air, water and noise pollution that have adversely affected both fauna and flora. It is no secret that man uses his inherent powers of reason, intelligence and creativity to change his environment to his advantage. But man is not aware that the moral force he himself creates brings about corresponding changes in his environment, to his weal or woe, whether he likes it or not. As we face global warming, and as nature’s gifts of air and water have been so drastically polluted, with disastrous consequences, man seeks ways and means to overcome this pollution, for his health and the sustainability of his life are threatened; and that is where man begins to question moral ethics and values. It is here that Buddhist philosophy offers hope of overcoming all these problems, for the Buddha himself emphasized the eradication of human suffering, and Buddhism is the strongest form of humanism we have. It helps us learn to live with responsibility, compassion and loving kindness. It teaches us to be mindful in our actions and thoughts, for the environment unites every human being. If we fail to save it, we will perish; if we can rise to meet the need of all that ecology binds us to - humans, other species, everything - we will survive together. My paper will look into the theory of Dependent Origination (Pratītyasamutpāda), the Buddhist understanding of suffering (collective suffering), and non-violence (Ahimsa), and an effort will be made to provide a new vision of the Buddhist ecological perspective. The above Buddhist philosophy will be applied to the ethical values and belief systems of modern society.
The challenge will be substantially to transform modern individualistic and consumeristic values. Stress will be laid on the interconnectedness of nature and on the relation between human and planetary sustainability. In a way, the environmental crisis will be referred to as a “spiritual crisis”, as A. Gore (1992) has pointed out. The paper will also give importance to global consciousness, as well as to self-actualization and self-fulfillment. In the words of Melvin McLeod, “Only when we combine environmentalism with spiritual practice, will we find the tools to make the profound personal transformations needed to address the planetary crisis?”

Keywords: dependent arising, collective ecological suffering, remediation, Buddhist approach

Procedia PDF Downloads 266