Search results for: spatial batch normalization with dropout
1350 Using Serious Games to Integrate the Potential of Mass Customization into the Fuzzy Front-End of New Product Development
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Mass customization is the idea of offering custom products or services to satisfy the needs of each individual customer while maintaining the efficiency of mass production. Technologies like 3D printing and artificial intelligence have many start-ups hoping to capitalize on this dream of creating personalized products at an affordable price, and well-established companies scrambling to innovate and maintain their market share. However, the majority of them are failing as they struggle to understand one key question – where does customization make sense? Customization and personalization only make sense where the value of the perceived benefit outweighs the cost to implement it. In other words, will people pay for it? Looking at the Kano Model makes it clear that it depends on the product. In products where customization is an inherent need, like prosthetics, mass customization technologies can be highly beneficial. However, for products that already sell as a standard, like headphones, offering customization is likely only an added bonus, and so the product development team must figure out whether the customers’ perception of the added value of this feature will outweigh its premium price tag. This can be done through the use of a ‘serious game,’ whereby potential customers are given a limited budget to collaboratively buy and bid on potential features of the product before it is developed. If the group chooses to buy customization over other features, then the product development team should implement it into their design. If not, the team should prioritize the features on which the customers have spent their budget. The level of customization purchased can also be translated to an appropriate production method; for example, the most expensive type of customization would likely be free-form design and could be achieved through digital fabrication, while a lower level could be achieved through short batch production. Twenty-five teams of final year students from design, engineering, construction and technology tested this methodology when bringing a product from concept through to production specification, and found that it allowed them to confidently decide what level of customization, if any, would be worth offering for their product, and what would be the best method of producing it. They also found that the discussion and negotiations between players during the game led to invaluable insights, and they often decided to play a second game where they offered customers the option to buy the various customization ideas that had been discussed during the first game.
Keywords: Kano model, mass customization, new product development, serious game
Procedia PDF Downloads 137

1349 Algae for Wastewater Treatment and CO₂ Sequestration along with Recovery of Bio-Oil and Value Added Products
Authors: P. Kiran Kumar, S. Vijaya Krishna, Kavita Verma, V. Himabindu
Abstract:
Concern about global warming and energy security has led to increased biomass utilization as an alternative feedstock to fossil fuels. Biomass is a promising feedstock since it is abundant and cheap and can be transformed into fuels and chemical products. Microalgae biofuels are likely to have a much lower impact on the environment. Microalgae cultivation using sewage with industrial flue gases is a promising concept for integrated biodiesel production, CO₂ sequestration, and nutrient recovery. Autotrophic, mixotrophic, and heterotrophic are the three modes of cultivation for microalgae biomass. Several mechanical and chemical processes are available for the extraction of lipids/oily components from microalgae biomass. In organic solvent extraction methods, prior drying of the biomass and recovery of the solvent are required, both of which are energy-intensive. The hydrothermal process thus overcomes the drawbacks of conventional solvent extraction methods. In the hydrothermal process, the biomass is converted into oily components by processing in a hot, pressurized water environment. In this process, in addition to the lipid fraction of microalgae, other value-added products such as proteins, carbohydrates, and nutrients can also be recovered. In the present study, Scenedesmus quadricauda was isolated and cultivated autotrophically, heterotrophically, and mixotrophically using sewage wastewater and industrial flue gas in batch and continuous modes. The harvested biomass from S. quadricauda was used for the recovery of lipids and bio-oil. The lipids were extracted from the algal biomass using sonication as a cell disruption method followed by solvent (hexane) extraction, and the lipid yield obtained was 8.3 wt% with palmitic acid, oleic acid, and octadecanoic acid as fatty acids. The hydrothermal process was also carried out for the extraction of bio-oil, and the yield obtained was 18 wt%. Bio-oil compounds such as nitrogenous compounds, organic acids and esters, phenolics, hydrocarbons, and alkanes were obtained by hydrothermal processing of the algal biomass. Nutrients such as NO₃⁻ (68%) and PO₄³⁻ (15%) were also recovered along with the bio-oil in the hydrothermal process.
Keywords: flue gas, hydrothermal process, microalgae, sewage wastewater, sonication
Procedia PDF Downloads 141

1348 Multi-Agent System Based Solution for Operating Agile and Customizable Micro Manufacturing Systems
Authors: Dylan Santos De Pinho, Arnaud Gay De Combes, Matthieu Steuhlet, Claude Jeannerat, Nabil Ouerhani
Abstract:
The Industry 4.0 initiative has been launched to address huge challenges related to ever-smaller batch sizes. The end-user need for highly customized products requires highly adaptive production systems in order to keep shop floors at the same efficiency. Most of the classical software solutions that operate manufacturing processes on a shop floor are based on rigid Manufacturing Execution Systems (MES), which are not capable of adapting the production order on the fly to changing demands and/or conditions. In this paper, we present a highly modular and flexible solution to orchestrate a set of production systems composed of a micro-milling machine-tool, a polishing station, a cleaning station, a part inspection station, and a rough material store. The different stations are installed according to a novel matrix configuration of a 3x3 vertical shelf. The cells of the shelf are connected through horizontal and vertical rails on which a set of shuttles circulate to transport the machined parts from one station to another. Our software solution for orchestrating the tasks of each station is based on a multi-agent system. Each station and each shuttle is operated by an autonomous agent. All agents communicate with a central agent that holds all the information about the manufacturing order. The core innovation of this paper lies in the path planning of the different shuttles, with two major objectives: 1) reduce the waiting time of stations and thus the cycle time of the entire part, and 2) reduce disturbances, like the vibration generated by the shuttles, which strongly impact the manufacturing process and thus the quality of the final part. Simulation results show that the cycle time of the parts is reduced by up to 50% compared with MES-operated linear production lines, while disturbance is systematically avoided for critical stations like the milling machine-tool.
Keywords: multi-agent systems, micro-manufacturing, flexible manufacturing, transfer systems
Procedia PDF Downloads 131

1347 Modeling and Stability Analysis of Viral Propagation in Wireless Mesh Networking
Authors: Haowei Chen, Kaiqi Xiong
Abstract:
This paper aims to answer how malware propagates in Wireless Mesh Networks (WMNs) and how the communication radius and distribution density of nodes affect the spreading process. The above analysis is essential for devising network-wide strategies to counter malware. We answer these questions by developing an improved dynamical system that models malware propagation in an area where nodes are uniformly distributed. The proposed model captures both the spatial and temporal dynamics of the malware spreading process. Equilibrium and stability are also discussed based on the threshold of the system: if the threshold is less than one, the infected nodes disappear, and if the threshold is greater than one, the infected nodes asymptotically stabilize at the endemic equilibrium. Numerical simulations investigate the effects of the communication radius and distribution density of nodes in WMNs, which allows us to draw various insights that can be used to guide security defense.
Keywords: Bluetooth security, malware propagation, wireless mesh networks, stability analysis
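The threshold dichotomy described above can be illustrated with a minimal SIS-type sketch (this is not the authors' spatial WMN model; the contact rate β and recovery rate γ are assumed values). When the threshold R₀ = β/γ is below one, the infected fraction dies out; when it is above one, it settles at the endemic equilibrium 1 − 1/R₀:

```python
# Minimal SIS sketch of the threshold behaviour (assumed rates, not the paper's model).
import numpy as np
from scipy.integrate import odeint

def sis(y, t, beta, gamma):
    s, i = y
    return [-beta * s * i + gamma * i, beta * s * i - gamma * i]

gamma = 0.2                              # patching/recovery rate (assumed)
t = np.linspace(0.0, 200.0, 2000)
for beta in (0.1, 0.5):                  # R0 = 0.5 (dies out) vs R0 = 2.5 (endemic)
    s, i = odeint(sis, [0.99, 0.01], t, args=(beta, gamma)).T
    print(f"R0 = {beta/gamma:.1f}: final infected fraction = {i[-1]:.3f}")
```

For R₀ = 2.5 the printed value approaches 0.6, i.e., the endemic equilibrium 1 − 1/R₀.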
Procedia PDF Downloads 103

1346 Efficient Layout-Aware Pretraining for Multimodal Form Understanding
Authors: Armineh Nourbakhsh, Sameena Shah, Carolyn Rose
Abstract:
Layout-aware language models have been used to create multimodal representations for documents that are in image form, achieving relatively high accuracy in document understanding tasks. However, the large number of parameters in the resulting models makes building and using them prohibitive without access to high-performing processing units with large memory capacity. We propose an alternative approach that can create efficient representations without the need for a neural visual backbone. This leads to an 80% reduction in the number of parameters compared to the smallest SOTA model, widely expanding applicability. In addition, our layout embeddings are pre-trained on spatial and visual cues alone and only fused with text embeddings in downstream tasks, which can facilitate applicability to low-resource or multi-lingual domains. Despite using 2.5% of the training data, we show competitive performance on two form understanding tasks: semantic labeling and link prediction.
Keywords: layout understanding, form understanding, multimodal document understanding, bias-augmented attention
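Since the abstract's key design point is pre-training layout embeddings separately and fusing them with text embeddings only downstream, a minimal late-fusion sketch is given below. The dimensions, the bounding-box featurization, and the module names are illustrative assumptions, not the authors' architecture:

```python
# Hedged sketch of late fusion of layout and text embeddings (assumed sizes).
import torch
import torch.nn as nn

class LayoutTextFusion(nn.Module):
    def __init__(self, d_text=768, d_layout=128):
        super().__init__()
        self.bbox_emb = nn.Linear(4, d_layout)       # (x0, y0, x1, y1) -> layout space
        self.proj = nn.Linear(d_text + d_layout, d_text)

    def forward(self, text_emb, bboxes):
        layout = self.bbox_emb(bboxes)               # pre-trained separately in the paper
        return self.proj(torch.cat([text_emb, layout], dim=-1))

fusion = LayoutTextFusion()
tokens = torch.randn(2, 50, 768)                     # text embeddings for 50 tokens
boxes = torch.rand(2, 50, 4)                         # normalized word bounding boxes
print(fusion(tokens, boxes).shape)                   # torch.Size([2, 50, 768])
```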
Procedia PDF Downloads 153

1345 Cellular Architecture of Future Wireless Communication Networks
Authors: Mohammad Yahaghifar
Abstract:
Nowadays, wireless system designers face a continuously increasing demand for the high data rates and mobility required by new wireless applications. Future-generation cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user coverage in hot-spots and crowded areas with lower latency, energy consumption and cost per information transfer. In this paper, we propose a potential cellular architecture that separates indoor and outdoor scenarios, discuss various promising technologies for future wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications, and discuss 5G, the next generation of wireless networks.
Keywords: future challenges in networks, cellular architecture, visible light communication, 5G wireless technologies, spatial modulation, massive MIMO, cognitive radio network, green communications
Procedia PDF Downloads 490

1344 Digital Watermarking Using Fractional Transform and (k,n) Halftone Visual Cryptography (HVC)
Authors: R. Rama Kishore, Sunesh Malik
Abstract:
The growing use of the internet for different purposes in recent times creates a great threat to the copyright protection of digital images. Digital watermarking is the best way to address this problem. This paper presents a detailed review of the different watermarking techniques and the latest trends in the field, categorized as spatial and transform domain, blind and non-blind methods, visible and non-visible techniques, etc. It also discusses the different optimization techniques used in the field of watermarking in order to improve the robustness and imperceptibility of the methods. Different measures for evaluating the performance of a watermarking algorithm are discussed. Finally, this paper proposes a watermarking algorithm using (k,n) shares of halftone visual cryptography (HVC) instead of (2,2) share cryptography. (k,n)-share visual cryptography improves the security of the watermark. As halftoning is a reprographic method, it helps improve the visual quality of the watermark image. The proposed method uses a fractional transform to improve the robustness of the copyright protection scheme.
Keywords: digital watermarking, fractional transform, halftone, visual cryptography
Procedia PDF Downloads 358

1343 Universe at Zero Second and the Creation Process of the First Particle from the Absolute Void
Authors: Shivan Sirdy
Abstract:
In this study, we discuss the properties of absolute void space, or the universe at zero seconds, and how these properties play a vital role in creating a mechanism by which the very first particle is created simultaneously everywhere. We find the limit which, once the absolute void volume reaches it, leads to the collapse that creates the first particle. This discussion follows the elementary dimensions theory study that was peer-reviewed at the end of 2020: everything in the universe is made from four elementary dimensions; these dimensions are the three spatial dimensions (X, Y, and Z) and the Void resistance as the factor of change among the four. Time itself was not considered the fourth dimension. Rather, time corresponds to a factor of change, and during the research it was found that the Void resistance is the factor of change in absolute void space, where time is a hypothetical concept that represents changes during certain events compared to a constant change-rate event. Therefore, time does exist, but as a factor of change, namely the Void resistance: Time = factor of change = Void resistance.
Keywords: elementary dimensions, absolute void, time alternative, early universe, universe at zero second, Void resistance, hydrogen atom, hadron field, lepton field
Procedia PDF Downloads 204

1342 Different Cognitive Processes in Selecting Spatial Demonstratives: A Cross-Linguistic Experimental Survey
Authors: Yusuke Sugaya
Abstract:
Our research conducts a cross-linguistic experimental investigation into the cognitive processes involved in the distance judgments necessary for selecting demonstratives in deictic usage. Speakers may consider the addressee's judgment or apply certain criteria for distance judgment when they produce demonstratives. While it can be assumed that there are language and cultural differences, it remains unclear how these differences manifest across languages. This research conducted online experiments involving speakers of six languages—Japanese, Spanish, Irish, English, Italian, and French—in which a wide variety of drawings were presented on a screen, with conditions varied from three perspectives: addressee, comparisons, and standard. The results of the experiments revealed various distinct features associated with demonstratives in each language, highlighting differences from a comparative standpoint. For one thing, there was an influence of a specific reference point (i.e., the standard) on selection in Japanese and Spanish, whereas there was a comparatively stronger influence of competitors in English and Italian.
Keywords: demonstratives, cross-linguistic experiment, distance judgment, social cognition
Procedia PDF Downloads 54

1341 Effect of Sulphur Concentration on Microbial Population and Performance of a Methane Biofilter
Authors: Sonya Barzgar, J. Patrick, A. Hettiaratchi
Abstract:
Methane (CH₄) is reputed to be the second largest contributor to the greenhouse effect, with a global warming potential (GWP) of 34 relative to carbon dioxide (CO₂) over the 100-year horizon, so there is a growing interest in reducing the emissions of this gas. Methane biofiltration (MBF) is a cost-effective technology for reducing low-volume point-source emissions of methane. In this technique, microbial oxidation of methane is carried out by methane-oxidizing bacteria (methanotrophs), which use methane as their carbon and energy source. MBF uses a granular medium, such as soil or compost, to support the growth of the methanotrophic bacteria responsible for converting methane to carbon dioxide (CO₂) and water (H₂O). Even though the biofiltration technique has been shown to be an efficient, practical and viable technology, the design and operational parameters, as well as the relevant microbial processes, have not been investigated in depth. In particular, limited research has been done on the effects of sulphur on methane bio-oxidation. Since bacteria require a variety of nutrients for growth, to improve the performance of methane biofiltration it is important to establish the input quantities of nutrients to be provided to the biofilter, to ensure that nutrients are available to sustain the process. The study described in this paper was conducted with the aim of determining the influence of sulphur on methane elimination in a biofilter. In this study, a set of experimental measurements was carried out to explore how the conversion of elemental sulphur affects methane oxidation in terms of methanotroph growth and system pH. Batch experiments with different concentrations of sulphur were performed while keeping the other parameters, i.e. moisture content, methane concentration, oxygen level and also compost, at their optimum levels. The study revealed the tolerable limit of sulphur without any interference with methane oxidation, as well as the particular sulphur concentration leading to the greatest methane elimination capacity. Due to sulphur oxidation, pH varies in a transient way, which affects microbial growth behavior. All methanotrophs are incapable of growth at pH values below 5.0 and thus are apparently unable to oxidize methane. Herein, the pH for optimal growth of methanotrophic bacteria is obtained. Finally, methane concentration over time in the presence of sulphur is also monitored for laboratory-scale biofilters.
Keywords: global warming, methane biofiltration (MBF), methane oxidation, methanotrophs, pH, sulphur
Procedia PDF Downloads 241

1340 The Study on the Platform Strategy of Taipei City Urban Regeneration Station
Authors: Chao Jen-Chih, Kuo-Wei Hsu
Abstract:
Many venues and spaces in cities gradually become old and decayed as time goes by. Urban regeneration is a critical strategy for promoting local development, but the method of spatial reconstruction emphasized in urban regeneration is questioned for the cultural, social and economic impacts it brings to old city areas. The idea of the “Urban Regeneration Station (URS)” was proposed by the Taipei City Government to introduce the entry and engagement of communities and related groups under the concept of the creative city. This study explored how a URS promotes local development again through the strength of communities and the energy of local residents, and it established a platform strategy for URS. The research results are as follows: a URS, promoted by government agencies, experts, scholars and the third sector, selects different types of business units to be stationed; the stationed units explore local development issues through exhibitions, seminars and other activities; the execution efficiency of each stationed unit is vetted; and the different stationed units together establish the overall URS network platform strategy.
Keywords: urban regeneration, platform strategy, creative city, Taipei city
Procedia PDF Downloads 459

1339 Mitigating Nitrous Oxide Production from Nitritation/Denitritation: Treatment of Centrate from Pig Manure Co-Digestion as a Model
Authors: Lai Peng, Cristina Pintucci, Dries Seuntjens, José Carvajal-Arroyo, Siegfried Vlaeminck
Abstract:
Economic incentives drive the implementation of short-cut nitrogen removal processes such as nitritation/denitritation (Nit/DNit) to manage nitrogen in waste streams devoid of biodegradable organic carbon. However, as in any biological nitrogen removal process, the potent greenhouse gas nitrous oxide (N₂O) can be emitted from Nit/DNit. Challenges remain in understanding the fundamental mechanisms and developing engineered mitigation strategies for N₂O production. To provide answers, this work focuses on manure as a model, the biggest wasted nitrogen mass flow through our economies. A sequencing batch reactor (SBR; 4.5 L) was used to treat the centrate (centrifuge supernatant; 2.0 ± 0.11 g N/L of ammonium) from an anaerobic digester processing mainly pig manure supplemented with a co-substrate. Glycerin, a by-product of vegetable oil production, was used as the external carbon source. Out-selection of nitrite-oxidizing bacteria (NOB) was targeted using a combination of low dissolved oxygen (DO) levels (down to 0.5 mg O₂/L), high temperature (35°C) and relatively high free ammonia (FA) (initially 10 mg NH₃-N/L). After reaching steady state, the process was able to remove 100% of the ammonium with minimal nitrite and nitrate in the effluent, at a reasonably high nitrogen loading rate (0.4 g N/L/d). Substantial N₂O emissions (over 15% of the nitrogen loading) were observed under the baseline operational conditions, and these increased further under nitrite accumulation and a low organic carbon-to-nitrogen ratio. Yet higher DO (~2.2 mg O₂/L) lowered aerobic N₂O emissions and weakened the dependency of N₂O on nitrite concentration, suggesting a shift in the N₂O production pathway at elevated DO levels. The greenhouse gas emissions (environmental protection) from such a system could be substantially minimized by increasing the external carbon dosage (a cost factor), but also through the implementation of an intermittent aeration and feeding strategy. Promising steps forward have been presented in this abstract, and at the conference insights from ongoing experiments will also be shared.
Keywords: mitigation, nitrous oxide, nitritation/denitritation, pig manure
Procedia PDF Downloads 252

1338 Environmental Quality in Urban Areas: Legal Aspect and Institutional Dimension: A Case Study of Algeria
Authors: Youcef Lakhdar Hamina
Abstract:
In order to tame the specificity of ecological damage, it is imperative to assert both the procedural and the objective aspects of liability, which leads us to analyse current trends based on the development of preventive civil liability grounded in the precautionary principle. Our research focuses on the instruments of environmental protection in urban areas, based on two complementary aspects that appear contradictory and that refer directly to the institutional dimension: - The preventive aspect: considered the main objective of environmental policy, it highlights the different legal mechanisms for environmental protection and the role of the administration in their implementation (environmental planning, tax incentives, modes of participation of all actors, etc.). - The healing-repressive aspect: considered an approach to the identification of ecological damage and the forms of reparation (spatial and temporal responsibility), given the impossibility of predicting, with rigor and precision, the appearance of ecological damage, which cannot be avoided.
Keywords: environmental law, environmental taxes, environmental damage, eco responsibility, precautionary principle, environmental management
Procedia PDF Downloads 418

1337 Numerical Solutions of an Option Pricing Rainfall Derivatives Model
Authors: Clarinda Vitorino Nhangumbe, Ercília Sousa
Abstract:
Weather derivatives are financial products used to cover non-catastrophic weather events, with a weather index as the underlying asset. The rainfall derivative pricing model is built on the assumption that the rainfall dynamics follow an Ornstein-Uhlenbeck process, and the partial differential equation approach is used to derive a two-dimensional, time-dependent convection-diffusion partial differential equation whose spatial variables are the rainfall index and rainfall depth. To compute approximate solutions of the partial differential equation, appropriate boundary conditions are suggested, and an explicit numerical method is proposed in order to deal efficiently with the different choices of the coefficients involved in the equation. Being explicit, the numerical method is conditionally stable, so the stability region of the method and its order of convergence are discussed. The model is tested on real precipitation data.
Keywords: finite differences method, Ornstein-Uhlenbeck process, partial differential equations approach, rainfall derivatives
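As a concrete illustration of the conditional stability of an explicit scheme, the sketch below advances a one-dimensional convection-diffusion equation u_t = a·u_xx − b·u_x with a forward-Euler step. It is a simplified stand-in for the authors' two-dimensional pricing equation, and all coefficients, the initial condition, and the boundary conditions are assumptions:

```python
# Explicit finite-difference stepping for u_t = a*u_xx - b*u_x (illustrative values).
import numpy as np

a, b = 0.5, 1.0                      # diffusion and convection coefficients (assumed)
nx = 101
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / a                 # respects the parabolic limit a*dt/dx**2 <= 1/2
x = np.linspace(0.0, 1.0, nx)
u = np.exp(-200.0 * (x - 0.3) ** 2)  # initial condition (assumed)

for _ in range(500):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    grad = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    u = u + dt * (a * lap - b * grad)
    u[0] = u[-1] = 0.0               # Dirichlet boundaries (assumed)

print("peak value after 500 steps:", u.max())
```

Doubling dt beyond the stability limit makes the solution blow up, which is exactly the conditional-stability region the abstract refers to.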
Procedia PDF Downloads 112

1336 Transient Voltage Distribution on the Single Phase Transmission Line under Short Circuit Fault Effect
Authors: A. Kojah, A. Nacaroğlu
Abstract:
Single-phase transmission lines are used to transfer data or energy between two users. Transient conditions such as switching operations and short-circuit faults cause fluctuations in the waveform to be transmitted. The spatial voltage distribution on a single-phase transmission line may change owing to the position and duration of a short-circuit fault in the system. In this paper, the state-space representation of the single-phase transmission line under a short-circuit fault and for various types of termination is given. Since the transmission line is modeled in the time domain using distributed parametric elements, the mathematical representation of the event is given in state-space (time-domain) differential equation form. This also makes the problem easier to solve, given the time- and space-dependent characteristics of the voltage variations on the distributed parametrically modeled transmission line.
Keywords: energy transmission, transient effects, transmission line, transient voltage, RLC short circuit, single phase
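A common way to realize such a distributed-parameter, time-domain model is to approximate the line by a ladder of small LC sections and write the result as dx/dt = Ax + Bu. The sketch below builds that state matrix and adds a short-circuit fault as a small shunt resistance at one node; the section count, per-section parameters, and fault values are assumptions, not the paper's data:

```python
# Lumped LC-ladder state-space sketch of a single-phase line with a shunt fault.
import numpy as np

n = 30                               # number of line sections (assumed)
Lp, Cp = 1.0e-6, 1.0e-9              # per-section inductance (H) and capacitance (F)
A = np.zeros((2 * n, 2 * n))         # state vector: [i_1..i_n, v_1..v_n]
for k in range(n):
    if k > 0:
        A[k, n + k - 1] = 1.0 / Lp   # di_k/dt = (v_{k-1} - v_k) / Lp
    A[k, n + k] = -1.0 / Lp
    A[n + k, k] = 1.0 / Cp           # dv_k/dt = (i_k - i_{k+1}) / Cp
    if k < n - 1:
        A[n + k, k + 1] = -1.0 / Cp

m, R_fault = n // 2, 1e-3            # fault node and fault resistance (assumed)
A[n + m, n + m] -= 1.0 / (R_fault * Cp)   # the short circuit drains the faulted node

B = np.zeros((2 * n, 1))
B[0, 0] = 1.0 / Lp                   # source voltage drives the first inductor
print("A:", A.shape, "B:", B.shape)  # integrate dx/dt = Ax + Bu for the transient
```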
Procedia PDF Downloads 225

1335 An Effective Noise Resistant Frequency Modulation Continuous-Wave Radar Vital Sign Signal Detection Method
Authors: Lu Yang, Meiyang Song, Xiang Yu, Wenhao Zhou, Chuntao Feng
Abstract:
To address the problem that human vital sign signals extracted by frequency-modulated continuous-wave (FMCW) radar are susceptible to noise interference and suffer from low reconstruction accuracy, a new detection scheme for these signals is proposed. Firstly, an improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) algorithm is applied to decompose the radar-extracted thoracic signals into several intrinsic mode functions (IMFs) with different spatial scales, and then the IMF components are optimized by a BP neural network improved by an immune genetic algorithm (IGA). The simulation results show that this scheme can effectively separate the noise, accurately extract the respiratory and heartbeat signals, and improve the reconstruction accuracy and signal-to-noise ratio of the vital sign signals.
Keywords: frequency modulated continuous wave radar, ICEEMDAN, BP neural network, vital signs signal
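The decomposition stage can be sketched on a synthetic chest-motion signal (respiration near 0.3 Hz plus heartbeat near 1.2 Hz plus noise). Note that the PyEMD package provides CEEMDAN rather than the improved ICEEMDAN variant named in the abstract, so CEEMDAN stands in here, and the signal parameters are assumptions:

```python
# Decomposition sketch with CEEMDAN standing in for ICEEMDAN (synthetic signal).
import numpy as np
from PyEMD import CEEMDAN

fs = 20.0                                         # sampling rate (Hz, assumed)
t = np.arange(0.0, 30.0, 1.0 / fs)
chest = (5.0 * np.sin(2 * np.pi * 0.3 * t)        # respiration component
         + 0.5 * np.sin(2 * np.pi * 1.2 * t)      # heartbeat component
         + 0.3 * np.random.randn(t.size))         # measurement noise

imfs = CEEMDAN()(chest)                           # intrinsic mode functions
print(f"{imfs.shape[0]} IMFs extracted; select/denoise before reconstruction")
```

In the paper's scheme, the subsequent IMF selection/optimization step is performed by a BP neural network tuned with an immune genetic algorithm, which is not reproduced here.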
Procedia PDF Downloads 169

1334 Techniques to Characterize Subpopulations among Hearing Impaired Patients and Its Impact for Hearing Aid Fitting
Authors: Vijaya K. Narne, Gerard Loquet, Tobias Piechowiak, Dorte Hammershoi, Jesper H. Schmidt
Abstract:
BEAR, which stands for better hearing rehabilitation, is a large-scale project in Denmark designed and executed by three national universities, three hospitals, and the hearing aid industry with the aim of improving hearing aid fitting. A total of 1963 hearing-impaired people were included and segmented into subgroups based on hearing loss, demographics, and audiological and questionnaire data (i.e., the Speech, Spatial and Qualities of Hearing scale [SSQ-12] and the International Outcome Inventory for Hearing Aids [IOI-HA]). With the aim of providing a better hearing-aid fit to individual patients, we applied modern machine learning techniques alongside traditional audiogram rule-based systems. Results show that age, speech discrimination scores, and audiogram configuration emerged as important parameters for characterizing sub-populations in the data set. The attempt to characterize sub-populations reveals a clearer picture of the individual hearing difficulties encountered and the benefits derived from more individualized hearing aids.
Keywords: hearing loss, audiological data, machine learning, hearing aids
Procedia PDF Downloads 156

1333 Assessment of Social Vulnerability of Urban Population to Floods – a Case Study of Mumbai
Authors: Sherly M. A., Varsha Vijaykumar, Subhankar Karmakar, Terence Chan, Christian Rau
Abstract:
This study proposes an indicator-based framework for assessing the social vulnerability of any coastal megacity to floods. The final set of social vulnerability indicators is chosen from a set of feasible and available indicators prepared within a Geographic Information System (GIS) framework at a fine scale, considering a 1-km grid cell, to provide insight into the spatial variability of vulnerability. The optimal weight for each individual indicator is assigned using data envelopment analysis (DEA), as this avoids subjective weights and improves confidence in the results obtained. In order to de-correlate and reduce the dimension of the multivariate data, principal component analysis (PCA) has been applied. The proposed methodology is demonstrated on twenty-four wards of Mumbai under the jurisdiction of the Municipal Corporation of Greater Mumbai (MCGM). This vulnerability assessment framework is not limited to the present study area and may be applied to other urban damage centers.
Keywords: urban floods, vulnerability, data envelopment analysis, principal component analysis
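The DEA weighting step can be made concrete with a standard input-oriented CCR envelopment program: each unit's efficiency score θ is obtained from a small linear program. The sketch below uses made-up data, not the Mumbai indicators, and the paper's exact DEA variant may differ:

```python
# Input-oriented CCR DEA efficiency via linear programming (illustrative data).
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of unit o. X: (units, inputs), Y: (units, outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]          # minimize theta over [theta, lambda_1..n]
    A_in = np.c_[-X[o], X.T]             # sum_j lam_j * x_ij <= theta * x_io
    A_out = np.c_[np.zeros(s), -Y.T]     # sum_j lam_j * y_rj >= y_ro
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])   # two inputs per unit (assumed)
Y = np.array([[1.0], [2.0], [1.5]])                  # one output per unit (assumed)
for o in range(len(X)):
    print(f"unit {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```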
Procedia PDF Downloads 363

1332 Selection of Solid Waste Landfill Site Using Geographical Information System (GIS)
Authors: Fatih Iscan, Ceren Yagci
Abstract:
Rapid population growth, urbanization and industrialization are known as the most important drivers of environmental problems. The elimination and management of solid wastes are also among the most important environmental problems. One of the main problems in solid waste management is the selection of the best site for the disposal of solid wastes. Lately, Geographical Information Systems (GIS) have been used to ease the selection of landfill areas, as GIS can incorporate the necessary economic, environmental and political constraints. They play an important role as a decision support tool in the site selection of landfill areas. In this study, map layers are analysed to minimize the effect of environmental, social and cultural factors and to maximize the effect of engineering/economic factors in the site selection of landfill areas, and the use of GIS as a decision support mechanism for solid waste landfill site selection is presented in a practical application for the Güzelyurt district of Aksaray, Turkey.
Keywords: GIS, landfill, solid waste, spatial analysis
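A minimal version of the GIS overlay this abstract describes is a weighted sum of normalized criterion rasters with an exclusion mask. The layers, weights, and buffer below are illustrative assumptions, not the study's actual criteria:

```python
# Illustrative weighted-overlay suitability analysis on synthetic rasters.
import numpy as np

rng = np.random.default_rng(0)
slope, dist_city, dist_road = (rng.random((100, 100)) for _ in range(3))  # normalized layers

w = {"slope": 0.3, "dist_city": 0.5, "dist_road": 0.2}    # assumed weights
suitability = (w["slope"] * (1 - slope)                   # flatter terrain is better
               + w["dist_city"] * dist_city               # farther from the city is better
               + w["dist_road"] * (1 - dist_road))        # closer to roads is cheaper

mask = dist_city > 0.1                                    # exclusion buffer (assumed)
scores = np.where(mask, suitability, -np.inf)
best = np.unravel_index(np.argmax(scores), scores.shape)
print("most suitable grid cell:", best)
```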
Procedia PDF Downloads 361

1331 Dosimetry in Interventional Radiology Examinations for Occupational Exposure Monitoring
Authors: Ava Zarif Sanayei, Sedigheh Sina
Abstract:
Interventional radiology (IR) uses imaging guidance, including X-rays and CT scans, to deliver therapy precisely. Most IR procedures are performed under local anesthesia and start with a small needle being inserted through the skin, which is why they may be called pinhole surgery or image-guided surgery. There is increasing concern about radiation exposure during interventional radiology procedures due to their complexity. The basic aim of optimizing radiation protection, as outlined in ICRP 139, is to strike a balance between image quality and radiation dose while maximizing benefits, ensuring that diagnostic interpretation remains satisfactory. This study aims to estimate the equivalent doses to the main trunk of the body for the interventional radiologist and the superintendent using LiF:Mg,Ti (TLD-100) chips at the IR department of a hospital in Shiraz, Iran. In the initial stage, the dosimeters were calibrated with the use of various phantoms. Afterward, a group of dosimeters was prepared and then worn for three months. To measure the personal equivalent dose to the body, three TLD chips were put in a tissue-equivalent badge and worn under a protective lead apron. At the end of this period, the TLDs were read out by a TLD reader. The results revealed that these individuals received equivalent doses of 387.39 and 145.11 µSv, respectively. The findings of this investigation revealed that the total radiation exposure of the staff was less than the annual limit for occupational exposure. However, it is imperative to implement appropriate radiation protection measures. The somewhat noticeable dose received by the interventional radiologist may be due to the use of conventional equipment with over-couch X-ray tubes for interventional procedures. It is therefore important to use dedicated equipment and protective means such as glasses and screens whenever compatible with the intervention, when they are available, or to have them fitted to equipment if they are not present. Based on the results, the positioning of staff contributed to the increased dose to the radiologist. The manufacture and installation of movable lead curtains with a thickness of 0.25 millimeters can effectively minimize the radiation dose to the body. Providing adequate training on radiation safety principles, particularly for technologists, can be an optimal approach to further decreasing exposure.
Keywords: interventional radiology, personal monitoring, radiation protection, thermoluminescence dosimetry
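The conversion from TLD readings to a personal equivalent dose is, in essence, a background subtraction and a phantom-derived calibration factor applied to the mean of the chips. The sketch below shows the arithmetic with made-up numbers; the study's actual readings and calibration factor are not reproduced:

```python
# Illustrative TLD reading-to-dose conversion (assumed values throughout).
readings_nC = [12.4, 12.9, 12.1]     # three chips worn under the apron
background_nC = 1.0                  # unirradiated control chip
cal_uSv_per_nC = 11.0                # calibration factor from phantom measurements

net = [r - background_nC for r in readings_nC]
dose_uSv = sum(net) / len(net) * cal_uSv_per_nC
print(f"estimated personal equivalent dose: {dose_uSv:.1f} uSv over the period")
```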
Procedia PDF Downloads 64

1330 Shaabi in the City: On Modernizing Sounds and Exclusion in Egyptian Cities
Authors: Mariam Aref Mahmoud
Abstract:
After centuries of historical development, Egypt is no stranger to frustrations over national identity. What may or may not count as this “national identity” becomes a source of contention. Today, after decades of neoliberal reform, Cairo has become the center of Egypt’s cultural debacle. At its heart, the Egyptian capital serves as Egypt’s extension into global capitalism, its flailing hope to become part of the modernized, cosmopolitan world. Yet, to converge into this image of cosmopolitanism, Cairo must silence the perceived un-modernized sounds, cultures, and spaces that arise from within its alleyways. Currently, the agitation surrounding shaabi music, particularly that of mahraganat, places these contentions at the center of the modernization debates. This paper discusses the process through which the conversations between modernization, space, and culture have taken place, through a historical analysis of national identity formation under Egypt’s neoliberal regimes. From this, the paper concludes that music becomes a spatial force through which public space, identity, and globalization must be contested. With these findings, researchers can analyze Cairo through not only its physical landscapes but also its metaphysical features, such as the soundscape.
Keywords: music, space, globalization, Cairo
Procedia PDF Downloads 116

1329 Experimental Approach for Determining Hemi-Anechoic Characteristics of Engineering Acoustical Test Chambers
Authors: Santiago Montoya-Ospina, Raúl E. Jiménez-Mejía, Rosa Elvira Correa Gutiérrez
Abstract:
An experimental methodology is proposed for determining the hemi-anechoic characteristics of an engineering acoustic room built at the facilities of Universidad Nacional de Colombia, in order to evaluate the free-field conditions inside the chamber. Experimental results were compared with theoretical ones for both the source and the sound propagation inside the chamber. The acoustic source was modeled using the monopole radiation pattern of point sources, and the image method was used to deal with the reflective plane of the room, that is, the uninsulated floor. The finite-difference time-domain (FDTD) method was implemented to calculate the sound pressure value at every spatial point of the chamber. Comparison between theoretical and experimental data yields minimal error, giving satisfactory results for the hemi-anechoic characterization of the chamber.
Keywords: acoustic impedance, finite-difference time-domain, hemi-anechoic characterization
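The image method mentioned above replaces the reflective floor with a mirrored copy of the monopole, so the complex pressure at a receiver is the sum of the direct and image contributions. The sketch below implements that two-source model; the frequency, geometry, and floor reflection coefficient are assumptions:

```python
# Image-method sketch: monopole above a reflective floor at z = 0 (assumed values).
import numpy as np

f, c = 1000.0, 343.0                       # frequency (Hz) and speed of sound (m/s)
k = 2.0 * np.pi * f / c
src = np.array([0.0, 0.0, 1.2])            # source 1.2 m above the floor
img = src * np.array([1.0, 1.0, -1.0])     # image source mirrored below the floor
R = 0.95                                   # floor reflection coefficient (assumed)

def pressure(rcv):
    r1 = np.linalg.norm(rcv - src)
    r2 = np.linalg.norm(rcv - img)
    return np.exp(-1j * k * r1) / r1 + R * np.exp(-1j * k * r2) / r2

for d in (1.0, 2.0, 4.0):                  # deviations from -6 dB per doubling of
    p = pressure(np.array([d, 0.0, 1.2]))  # distance reveal the floor interference
    print(f"r = {d} m: level = {20.0 * np.log10(abs(p)):6.2f} dB")
```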
Procedia PDF Downloads 165

1328 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently prone to artefacts due to their image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on the deep neural network framework, in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied, using residual-driven dropout determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders along with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
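One layer of the stacked architecture the abstract describes (corrupt the input, encode through a non-linear mapping, decode back to the input size, and train on the squared reconstruction error with stochastic gradient descent) can be sketched as follows. Layer sizes, noise level, and learning rate are assumptions:

```python
# One denoising auto-encoder layer of the kind stacked in the paper (assumed sizes).
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_in=4096, n_hidden=1024):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.dec = nn.Linear(n_hidden, n_in)           # reconstruction, input-sized

    def forward(self, x, noise_std=0.1):
        x_noisy = x + noise_std * torch.randn_like(x)  # corrupt, then map deterministically
        return self.dec(self.enc(x_noisy))

model = DenoisingAE()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)     # classical SGD, as in the abstract
x = torch.rand(8, 4096)                                # a batch of flattened image patches
loss = nn.functional.mse_loss(model(x), x)             # squared reconstruction error
loss.backward()
opt.step()
print(float(loss))
```

In the full method, each trained layer's hidden representation feeds the next auto-encoder, and the TV-decomposed intrinsic image, rather than the raw CT slice, is what the stack is trained on.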
Procedia PDF Downloads 190

1327 Hydrodynamic Characterisation of a Hydraulic Flume with Sheared Flow
Authors: Daniel Rowe, Christopher R. Vogel, Richard H. J. Willden
Abstract:
The University of Oxford’s recirculating water flume is a combined wave and current test tank with a 1 m depth, 1.1 m width, and 10 m long working section, and is capable of flow speeds up to 1 m/s. This study documents the hydrodynamic characteristics of the facility in preparation for experimental testing of horizontal-axis tidal stream turbine models. The turbine to be tested has a rotor diameter of 0.6 m and is a modified version of one of two model-scale turbines tested in previous experimental campaigns. An Acoustic Doppler Velocimeter (ADV) was used to measure the flow at high temporal resolution at various locations throughout the flume, enabling the spatial uniformity and turbulence flow parameters to be investigated. The mean velocity profiles exhibited high levels of spatial uniformity at the design speed of the flume, 0.6 m/s, with variations in the three-dimensional velocity components on the order of ±1% at the 95% confidence level, along with a modest streamwise acceleration through the measurement domain, a target 5 m working section of the flume. A high degree of uniformity was also apparent for the turbulence intensity, with values ranging between 1-2% across the intended swept area of the turbine rotor. The integral scales of turbulence exhibited a far higher degree of variation throughout the water column, particularly in the streamwise and vertical scales. This behaviour is believed to be due to the high signal noise content leading to decorrelation in the sampling records. To achieve more realistic levels of vertical velocity shear in the flume, a simple procedure for practically generating target vertical shear profiles in open-channel flows is described. Here, the authors arranged a series of non-uniformly spaced parallel bars placed across the width of the flume and normal to the onset flow. By adjusting the resistance grading across the height of the working section, the downstream profiles could be modified accordingly, characterised by changes in the velocity profile power-law exponent, 1/n. Considering the significant temporal variation in a tidal channel, the choice of the exponent denominators n = 6 and n = 9 effectively provides an achievable range around the much-cited value of n = 7 observed at many tidal sites. The resulting flow profiles, which we intend to use in future turbine tests, have been characterised in detail. The results indicate non-uniform vertical shear across the survey area and reveal substantial corner flows, arising from the differential shear between the target vertical and cross-stream shear profiles throughout the measurement domain. In vertically sheared flow, the rotor-equivalent turbulence intensity ranges between 3.0-3.8% throughout the measurement domain for both bar arrangements, while the streamwise integral length scale grows from a characteristic dimension on the order of the bar width, similar to the flow downstream of a turbulence-generating grid. The experimental tests are well-defined and repeatable and serve as a reference for other researchers who wish to undertake similar investigations.
Keywords: acoustic Doppler velocimeter, experimental hydrodynamics, open-channel flow, shear profiles, tidal stream turbines
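The 1/n power-law profiles referred to above take the form u(z)/u_d = (z/d)^(1/n). The sketch below evaluates the n = 6, 7, and 9 profiles, scaled so that each integrates to the same depth-averaged speed (taken here to be the 0.6 m/s design speed; that scaling choice is an assumption):

```python
# Power-law shear profiles u(z) = U*(n+1)/n * (z/d)**(1/n), depth-average U.
import numpy as np

d, U = 1.0, 0.6                        # water depth (m), depth-averaged speed (m/s)
z = np.linspace(0.01, d, 5)
for n in (6, 7, 9):
    u = U * (n + 1) / n * (z / d) ** (1.0 / n)
    print(f"1/{n}: u(z) =", np.round(u, 3))
```

Smaller n gives a more sheared profile (slower near the bed, faster near the surface), which is how the two bar gradings bracket the commonly cited 1/7th profile.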
Procedia PDF Downloads 93

1326 Anonymous Editing Prevention Technique Using Gradient Method for High-Quality Video
Authors: Jiwon Lee, Chanho Jung, Si-Hwan Jang, Kyung-Ill Kim, Sanghyun Joo, Wook-Ho Son
Abstract:
Since advances in digital imaging technologies have led to the development of high-quality digital devices, there are a lot of illegal copies of copyrighted video content on the internet. Thus, we propose a high-quality (HQ) video watermarking scheme that can prevent these illegal copies from spreading. The proposed scheme applies spatial and temporal gradient methods to improve fidelity and detection performance. The scheme also duplicates the watermark signal temporally to alleviate the signal reduction caused by geometric and signal-processing distortions. Experimental results show that the proposed scheme achieves better performance than previously proposed schemes and has high fidelity. The proposed scheme can be used in broadcast monitoring or traitor-tracking applications, which need a fast detection process to prevent illegally recorded video content from spreading.
Keywords: editing prevention technique, gradient method, luminance change, video watermarking
Procedia PDF Downloads 459

1325 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada
Authors: Stefan W. Kienzle
Abstract:
The increasing number of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses on the public in coping with a changing climate. A climate index breaks daily climate time series down into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends of phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes and trends, and for the quantification of changes and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain threshold temperatures (0, ±10, ±20, +25, +30°C), frost days and the timing of frost days, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, and snow and potential evapotranspiration. The rank-based non-parametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends. The slope of the trends was determined using the non-parametric Sen’s slope test. A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5°C in the south and 6-7°C in the north, summers show the weakest warming during the same period, ranging from about 0.5-1.5°C. New agricultural opportunities exist in central regions, where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20°C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and with cold spells have both doubled to quadrupled during the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes
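Each such index is a simple reduction of a daily series; for instance, frost days count days with Tmin below 0°C, and growing degree days accumulate daily mean temperature above a base. The sketch below uses a synthetic series, and the 5°C base temperature is an assumption (the abstract does not state the base used):

```python
# Two example climate indices computed from a synthetic daily temperature series.
import numpy as np

rng = np.random.default_rng(1)
tmin = rng.normal(2.0, 8.0, 365)             # daily minimum temperature (degC)
tmax = tmin + rng.uniform(4.0, 12.0, 365)    # daily maximum temperature (degC)

frost_days = int((tmin < 0.0).sum())         # annual number of frost days
tmean = (tmin + tmax) / 2.0
gdd = float(np.clip(tmean - 5.0, 0.0, None).sum())   # growing degree days, base 5 degC
print(f"frost days: {frost_days}, growing degree days: {gdd:.0f}")
```

Applied per grid cell and per year over 1951-2017, series of such values are what the Mann-Kendall test and Sen's slope estimator are then run on.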
Procedia PDF Downloads 93

1324 Arsenic (III) Removal by Zerovalent Iron Nanoparticles Synthesized with the Help of Tea Liquor
Authors: Tulika Malviya, Ritesh Chandra Shukla, Praveen Kumar Tandon
Abstract:
Traditional methods of synthesis are hazardous to the environment, and nature-friendly processes are needed for the treatment of industrial effluents and contaminated water. The use of plant parts for synthesis provides an efficient alternative method. In this paper, we report an eco-friendly and non-hazardous bio-based method to prepare zerovalent iron nanoparticles (ZVINPs) using the liquor of commercially available tea. Tea liquor as the reducing agent has many advantages over other polymers. First, unlike other polymers, the polyphenols present in tea extract are non-toxic and water-soluble at room temperature. Second, polyphenols can form complexes with metal ions and thereafter reduce the metals. Third, tea extract contains molecules bearing alcoholic functional groups that can be exploited for the reduction as well as the stabilization of the nanoparticles. Briefly, iron nanoparticles were prepared by adding 2.0 g of montmorillonite K10 (MMT K10) to 5.0 mL of a 0.10 M solution of Fe(NO3)3, to which an equal volume of tea liquor was then added dropwise over 20 min with constant stirring. The color of the mixture changed from whitish yellow to black, indicating the formation of iron nanoparticles. The nanoparticles were adsorbed on montmorillonite K10, which is safe and aids the separation of hazardous arsenic species simply by filtration. Particle sizes of 59.08 ± 7.81 nm were obtained, as confirmed by different instrumental analyses such as IR, XRD, SEM, and surface area studies. Removal of arsenic was done via a batch adsorption method. Solutions of As(III) of different concentrations were prepared by diluting a stock solution of NaAsO2 with doubly distilled water. The required amount of the in situ prepared ZVINPs supported on MMT K10 was added to a solution of the desired strength of As(III). After the solution had been stirred for the preselected time, the solid mass was filtered off. The amount of arsenic [in the form of As(V)] remaining in the filtrate was measured using an ion chromatograph. Stirring contaminated water with zerovalent iron nanoparticles supported on montmorillonite K10 for 30 min resulted in up to 99% removal of arsenic as As(III) from its solution at both low and high pH (2.75 and 11.1). It was also observed that, under similar conditions, montmorillonite K10 alone provided less than 10% removal of As(III) from water. Adsorption at low pH with precipitation at higher pH has been proposed as the As(III) removal mechanism.
Keywords: arsenic removal, montmorillonite K10, tea liquor, zerovalent iron nanoparticles
Procedia PDF Downloads 132

1323 Effect of Non-Regulated pH on the Dynamics of Dark Fermentative Biohydrogen Production with Suspended and Immobilized Cell Culture
Authors: Joelle Penniston, E. B. Gueguim-Kana
Abstract:
Biohydrogen has been identified as a promising alternative to the use of non-renewable fossil reserves, owing to its sustainability and non-polluting nature. pH is considered a key parameter in fermentative biohydrogen production processes due to its effect on hydrogenase activity, metabolic activity, and substrate hydrolysis. The present study assesses the influence of regulating pH on dark fermentative biohydrogen production. Four experimental hydrogen production schemes were evaluated. Two were implemented using suspended cells, one under pH-regulated growth conditions (Sus_R) and one under non-regulated pH (Sus_N). The other two consisted of alginate-immobilized cells under pH-regulated growth conditions (Imm_R) and immobilized cells under non-regulated pH conditions (Imm_N). All experiments were carried out at 37.5°C with glucose as the sole carbon source. Sus_R showed a lag time of 5 hours, a peak hydrogen fraction of 36%, and a glucose degradation of 37%, compared to Sus_N, which showed a peak hydrogen fraction of 44% and complete glucose degradation. Both suspended culture systems showed a higher peak biohydrogen fraction than the immobilized cell systems. Imm_R showed a lag phase of 8 hours and a peak biohydrogen fraction of 35%, while Imm_N showed a lag phase of 5 hours and a peak biohydrogen fraction of 22%. Complete (100%) glucose degradation was observed in both the pH-regulated and non-regulated processes. This study showed that biohydrogen production in batch mode with suspended cells in a non-regulated pH environment results in partial degradation of the substrate, with lower yield. This scheme has been the culture mode of choice for most reported studies in biohydrogen research. The relatively lower slope in the pH trend of the non-regulated experiment with immobilized cells (Imm_N) compared to Sus_N revealed that immobilized systems have a better buffering capacity than suspended systems, which allows for extended production of biohydrogen even under non-regulated pH conditions. However, alginate-immobilized cultures in flask systems showed some drawbacks associated with the high rate of gas production, which leads to increased buoyancy of the immobilization beads. This ultimately impedes the release of gas out of the flask.
Keywords: biohydrogen, sustainability, suspended, immobilized
Procedia PDF Downloads 344

1322 Quantification of Effects of Shape of Basement Topography below the Circular Basin on the Ground Motion Characteristics and Engineering Implications
Authors: Kamal, Dinesh Kumar, J. P. Narayan, Komal Rani
Abstract:
This paper presents the effects of the shape of basement topography on the characteristics of basin-generated surface (BGS) waves and the associated average spectral amplification (ASA) in 3D basins having a circular surface area. Seismic responses were computed using a recently developed 3D fourth-order spatially accurate time-domain finite-difference (FD) algorithm based on a parsimonious staggered-grid approximation of the 3D viscoelastic wave equations. An increase of amplitude amplification and ASA towards the centre of the different considered basins was obtained. Further, it may be concluded that ASA in a basin depends very much on the impedance contrast, the exposure area of the basement to the incident wavefront, the edge slope, the focusing of the BGS waves, and sediment damping. There is an urgent need for the incorporation of a map of differential ground motion (DGM) caused by the BGS waves as one of the output maps of seismic microzonation.
Keywords: 3D viscoelastic simulation, basin-generated surface waves, maximum displacement, average spectral amplification
Procedia PDF Downloads 300

1321 Characterizing the Spatially Distributed Differences in the Operational Performance of Solar Power Plants Considering Input Volatility: Evidence from China
Authors: Bai-Chen Xie, Xian-Peng Chen
Abstract:
China has become the world's largest energy producer and consumer, and its development of renewable energy is of great significance to global energy governance and the fight against climate change. The rapid growth of solar power in China could help achieve its ambitious carbon peak and carbon neutrality targets early. However, the non-technical costs of solar power in China are much higher than international levels, meaning that inefficiencies are rooted in poor management and improper policy design, and that efficiency distortions have become a serious challenge to the sustainable development of the renewable energy industry. Unlike fossil energy generation technologies, the output of solar power is closely related to the volatile solar resource, and the spatial unevenness of solar resource distribution leads to potential spatial differences in efficiency. It is necessary to develop an efficiency evaluation method that considers the volatility of solar resources and to explore the mechanisms by which natural geography and the social environment influence the spatially varying distribution of efficiency, in order to uncover the root causes of managerial inefficiencies. The study sets solar resources as stochastic inputs, introduces a chance-constrained data envelopment analysis model combined with the directional distance function, and measures the solar resource utilization efficiency of 222 solar power plants in representative photovoltaic bases in northwestern China. Through meta-frontier analysis, we measured the characteristics of different power plant clusters and compared the differences among groups, discussed the mechanisms by which environmental factors influence inefficiency, and performed statistical tests using the system generalized method of moments. Rational siting of power plants is a systematic project that requires careful consideration of the full utilization of solar resources, low transmission costs, and guaranteed power consumption. Suitable temperature, precipitation, and wind speed can improve the working performance of photovoltaic modules; a reasonable terrain inclination can reduce land costs; and proximity to cities strongly guarantees the consumption of electricity. The density of electricity demand and of high-tech industries is more important than resource abundance, because they trigger the clustering of power plants, resulting in good demonstration and competition effects. To ensure renewable energy consumption, increased support for rural grids and the encouragement of direct trading between generators and neighboring users will provide solutions. The study provides proposals for improving the full life-cycle operational activities of solar power plants in China in order to reduce high non-technical costs and improve competitiveness against fossil energy sources.
Keywords: solar power plants, environmental factors, data envelopment analysis, efficiency evaluation
Procedia PDF Downloads 95