Search results for: spatial batch normalization with dropout
1470 Segmental Motion of Polymer Chain at Glass Transition Probed by Single Molecule Detection
Authors: Hiroyuki Aoki
Abstract:
The glass transition phenomenon has been extensively studied for a long time. The glass transition of polymer materials is assigned to the transition of the dynamics of the chain backbone segment. However, the detailed mechanism of the transition behavior of the segmental motion is still unclear. In the current work, the single molecule detection technique was employed to reveal the trajectory of the molecular motion of a single polymer chain. The center segment of a poly(butyl methacrylate) chain was labeled with a perylenediimide dye molecule and observed with a highly sensitive fluorescence microscope under defocus conditions. The translational and rotational diffusion of the center segment of a single polymer chain were analyzed near the glass transition temperature. Direct observation of individual polymer chains revealed intermittent segmental motion, indicating spatial inhomogeneity.
Keywords: glass transition, molecular motion, polymer materials, single molecule
Procedia PDF Downloads 341
1469 Treatment of a Galvanization Wastewater in a Fixed-Bed Column Using L. hyperborea and P. canaliculata Macroalgae as Natural Cation Exchangers
Authors: Tatiana A. Pozdniakova, Maria A. P. Cechinel, Luciana P. Mazur, Rui A. R. Boaventura, Vitor J. P. Vilar.
Abstract:
Two brown macroalgae, Laminaria hyperborea and Pelvetia canaliculata, were employed as natural cation exchangers in a fixed-bed column for Zn(II) removal from a galvanization wastewater. The column (4.8 cm internal diameter) was packed with 30-59 g of previously hydrated algae up to a bed height of 17-27 cm. The wastewater or eluent was percolated using a peristaltic pump at a flow rate of 10 mL/min. The effluent used in each experiment presented similar characteristics: pH of 6.7, 55 mg/L of chemical oxygen demand and about 300, 44, 186 and 244 mg/L of sodium, calcium, chloride and sulphate ions, respectively. The main difference was nitrate concentration: 20 mg/L for the effluent used with L. hyperborea and 341 mg/L for the effluent used with P. canaliculata. The inlet zinc concentration also differed slightly: 11.2 mg/L for the L. hyperborea and 8.9 mg/L for the P. canaliculata experiments. The breakthrough time was approximately 22.5 hours for both macroalgae, corresponding to a service capacity of 43 bed volumes. This indicates that 30 g of biomass is able to treat 13.5 L of the galvanization wastewater. The uptake capacities at the saturation point were similar to those obtained in batch studies (unpublished data) for both algae. After column exhaustion, desorption with 0.1 M HNO3 was performed. Desorption using 9 and 8 bed volumes of eluent achieved efficiencies of 100% and 91% for L. hyperborea and P. canaliculata, respectively. After elution with nitric acid, the column was regenerated using different strategies: i) converting all the binding sites to the sodium form by passing a 0.5 M NaCl solution until a final pH of 6.0 was achieved; ii) passing only tap water to raise the solution pH inside the column to 3.0, in which case the second sorption cycle was performed using protonated algae.
In the first approach, distilled water was passed through the column to remove the excess salt, which destroyed the algae structure and caused the column to collapse. Using the second approach, the algae remained intact over three consecutive sorption/desorption cycles without loss of performance.
Keywords: biosorption, zinc, galvanization wastewater, packed-bed column
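The service-capacity figures in the abstract above follow from simple arithmetic; the sketch below (assuming the reported 10 mL/min flow rate, 22.5 h breakthrough time, 4.8 cm internal diameter, and the reported 17-27 cm bed-height range, since the exact bed height per run is not stated) reproduces the 13.5 L treated volume and a bed-volume count consistent with the reported 43:

```python
import math

# Operating figures taken from the abstract
FLOW_ML_MIN = 10        # peristaltic pump flow rate
BREAKTHROUGH_H = 22.5   # breakthrough time, both algae
DIAMETER_CM = 4.8       # column internal diameter

def treated_volume_L(flow_mL_min, hours):
    # Wastewater volume processed before breakthrough
    return flow_mL_min * hours * 60 / 1000.0

def bed_volume_L(diameter_cm, height_cm):
    # Cylindrical packed-bed volume
    return math.pi * (diameter_cm / 2.0) ** 2 * height_cm / 1000.0

treated = treated_volume_L(FLOW_ML_MIN, BREAKTHROUGH_H)   # 13.5 L
bv_short = treated / bed_volume_L(DIAMETER_CM, 17)        # ~44 bed volumes
bv_tall = treated / bed_volume_L(DIAMETER_CM, 27)         # ~28 bed volumes
```

The 17 cm bed gives roughly 44 bed volumes, matching the reported service capacity of 43 within rounding, which suggests the shorter bed corresponds to the 30 g packing.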
Procedia PDF Downloads 314
1468 Magnetic Field Effects on Parabolic Graphene Quantum Dots with Topological Defects
Authors: Defne Akay, Bekir S. Kandemir
Abstract:
In this paper, we investigate the low-lying energy levels of two-dimensional parabolic graphene quantum dots (GQDs) in the presence of topological defects, with a long-range Coulomb impurity, subjected to an external uniform magnetic field. The low-lying energy levels of the system are obtained within the framework of perturbation theory. We theoretically demonstrate that the valley splitting can be controlled by the geometrical parameters of the graphene quantum dots and/or by tuning a uniform magnetic field, as well as by topological defects. It is found that, for parabolic graphene dots, the valley splitting occurs due to the introduction of spatial confinement. The corresponding splitting is enhanced by the introduction of a uniform magnetic field, and it increases with increasing cone angle in the subcritical regime.
Keywords: Coulomb impurity, graphene cones, graphene quantum dots, topological defects
Procedia PDF Downloads 296
1467 A Topological Study of an Urban Street Network and Its Use in Heritage Areas
Authors: Jose L. Oliver, Taras Agryzkov, Leandro Tortosa, Jose F. Vicent, Javier Santacruz
Abstract:
This paper aims to demonstrate how a topological study of an urban street network can be used as a tool in heritage conservation areas of a city. In recent decades, several approaches in the disciplines of Architecture and Urbanism have drawn on the so-called Sciences of Complexity. In this context, this paper uses mathematics from Network Theory. It proposes a methodology based on obtaining information from a graph created from the network of urban streets. An algorithm then establishes a ranking of the importance of the nodes of that network from a topological point of view. The results are applied to a heritage area in a particular city, comparing the data obtained from the mathematical model with data from fieldwork in the case study. From this process, we may conclude which actions need to be implemented in the area and where those actions would be most effective for the whole heritage site.
Keywords: graphs, heritage cities, spatial analysis, urban networks
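The abstract does not name its ranking algorithm; purely as an illustration, the sketch below ranks the nodes (intersections) of a toy street graph with a plain power-iteration PageRank, one common topological importance measure:

```python
# Toy street network: intersections as nodes, street segments as undirected edges.
# On a real case study the graph would be read from GIS data rather than typed in.
edges = [("A", "B"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def pagerank(adj, damping=0.85, iters=100):
    # Plain power-iteration PageRank over an adjacency dict
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in adj}
        for v, out in adj.items():
            share = damping * rank[v] / len(out)
            for w in out:
                new[w] += share
        rank = new
    return rank

scores = pagerank(adj)
ranking = sorted(scores, key=scores.get, reverse=True)  # most "important" node first
```

The best-connected intersections (here B and D) come out on top, which is the kind of node ranking the methodology feeds into the heritage-area analysis.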
Procedia PDF Downloads 398
1466 Learning from TikTok Food Pranks to Promote Food Saving Among Adolescents
Authors: Xuan (Iris) Li, Jenny Zhengye Hou, Greg Hearn
Abstract:
Food waste is a global issue, with an estimated 30% to 50% of food produced never being consumed. It is therefore vital to reduce food waste and convert wasted food into recyclable outputs. TikTok provides a simple way of creating and duetting videos in just a few steps, using templates with the same sound/vision/caption effects to produce personalized content (the so-called duet), which makes the platform a revealing setting in which to study whether TikTok encourages wasting or saving food. The research focuses on examining food-related content on TikTok, with particular attention paid to two distinct themes, food waste pranks and food-saving practices, to understand the potential impacts of these themes on adolescents and their attitudes toward sustainable food consumption practices. Specifically, the analysis explores how TikTok content related to food waste and/or food saving may contribute to the normalization and promotion of either positive or negative food behaviours among young viewers. The research employed content analysis and semi-structured interviews to understand what factors contribute to the difference in popularity between food-prank and food-saving videos, and how insights from the former can be applied to the latter to increase their communication effectiveness. The first category of food content on TikTok under examination pertains to food waste, including videos featuring pranks and mukbang. These forms of content have the potential to normalize or even encourage food waste behaviours among adolescents, exacerbating the already significant food waste problem. The second category of TikTok food content under examination relates to food saving, for example, videos teaching viewers how to maximize the use of food to reduce waste. This type of content can potentially empower adolescents to act against food waste and foster positive and sustainable food practices in their communities.
The initial findings of the study suggest that TikTok content related to pranks is more popular among viewers than content focused on teaching people how to save food. Additionally, these videos are gaining fans at a faster rate than content promoting more sustainable food practices. However, we argue there is great potential for social media platforms like TikTok to play an educative role in promoting positive behaviour change among young people by sharing engaging content suited to target audiences. This research is the first to investigate the potential utility of TikTok in food waste reduction and underscores the important role social media platforms can play in promoting sustainable food practices. The findings will help governments, organizations, and communities promote tailored and effective interventions to reduce food waste and help achieve the United Nations’ sustainable development goal of halving food waste by 2030.
Keywords: food waste reduction, behaviour, social media, TikTok, adolescents
Procedia PDF Downloads 79
1465 Studying the Spatial Aspects of Visual Attention Processing in Global Precedence Paradigm
Authors: Shreya Borthakur, Aastha Vartak
Abstract:
This behavioral experiment aimed to investigate the global precedence phenomenon in a South Asian sample and its correlation with mobile screen time. The global precedence effect refers to the tendency to process overall structure before attending to specific details. Participants completed attention tasks involving global and local stimuli with varying consistencies. The results showed a tendency towards local precedence, but no significant differences in reaction times were found between consistency levels or attention conditions. However, the correlation analysis revealed that participants with higher screen time exhibited a stronger negative correlation with local attention, suggesting that excessive screen usage may impact perceptual organization. Further research is needed to explore this relationship and understand the influence of screen time on cognitive processing.
Keywords: global precedence, visual attention, perceptual organization, screen time, cognition
Procedia PDF Downloads 70
1464 Linking Enhanced Resting-State Brain Connectivity with the Benefit of Desirable Difficulty to Motor Learning: A Functional Magnetic Resonance Imaging Study
Authors: Chien-Ho Lin, Ho-Ching Yang, Barbara Knowlton, Shin-Leh Huang, Ming-Chang Chiang
Abstract:
Practicing motor tasks arranged in an interleaved order (interleaved practice, or IP) generally leads to better learning than practicing tasks in a repetitive order (repetitive practice, or RP), an example of how desirable difficulty during practice benefits learning. Greater difficulty during practice, e.g. IP, is associated with greater brain activity, measured as a higher blood-oxygen-level-dependent (BOLD) signal in functional magnetic resonance imaging (fMRI), in the sensorimotor areas of the brain. In this study, resting-state fMRI was applied to investigate whether an increase in resting-state brain connectivity immediately after practice predicts the benefit of desirable difficulty to motor learning. Twenty-six healthy adults (11 M/15 F, age = 23.3 ± 1.3 years) practiced two sets of three sequences arranged in a repetitive or an interleaved order over two days, followed by a retention test on Day 5 to evaluate learning. On each practice day, fMRI data were acquired in a resting state after practice. The resting-state fMRI data were decomposed using a group-level spatial independent component analysis (ICA), yielding nine independent components (ICs) matched to the precuneus network, the primary visual networks (two ICs, denoted I and II), the sensorimotor networks (two ICs, denoted I and II), the right and left frontoparietal networks, the occipito-temporal network, and the frontal network. A weighted resting-state functional connectivity (wRSFC) was then defined to incorporate information from both within- and between-network brain connectivity. The within-network functional connectivity between a voxel and an IC was gauged by a z-score derived from the Fisher transformation of the IC map. The between-network connectivity was derived from the cross-correlation of time courses across all possible pairs of ICs, leading to a symmetric nc × nc matrix of cross-correlation coefficients, denoted by C = (pᵢⱼ).
Here pᵢⱼ is the extremum of the cross-correlation between ICs i and j, and nc = 9 is the number of ICs. This component-wise cross-correlation matrix C was then projected onto voxel space, with the weight for each voxel set to the z-score representing the within-network functional connectivity described above. The wRSFC map thus incorporates the global characteristics of brain networks, measured by the between-network connectivity, and the spatial information contained in the IC maps, measured by the within-network connectivity. Pearson correlation analysis revealed that a greater IP-minus-RP difference in wRSFC was positively correlated with the RP-minus-IP difference in response time on Day 5, particularly in brain regions crucial for motor learning, such as the right dorsolateral prefrontal cortex (DLPFC) and the right premotor and supplementary motor cortices. This indicates that enhanced resting brain connectivity during the early phase of memory consolidation is associated with enhanced learning following interleaved practice; as such, wRSFC could be applied as a biomarker of the beneficial effects of desirable difficulty on motor sequence learning.
Keywords: desirable difficulty, functional magnetic resonance imaging, independent component analysis, resting-state networks
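The two ingredients of the wRSFC construction can be sketched schematically. The toy code below is illustrative only: the lag range, the reading of "extremum" as the largest-magnitude lagged correlation, and the matrix-product projection Z @ C are assumptions, with random series standing in for IC time courses and invented numbers for IC z-score maps:

```python
import math, random

random.seed(0)

def corr(x, y):
    # Pearson correlation of two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def xcorr_extremum(x, y, max_lag=3):
    # "Extremum of cross-correlation": lagged correlation of largest magnitude
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        c = corr(x[lag:], y[:len(y) - lag]) if lag >= 0 else corr(x[:lag], y[-lag:])
        if abs(c) > abs(best):
            best = c
    return best

# Three toy "IC time courses" in place of the study's nine
T = 50
ics = [[random.gauss(0.0, 1.0) for _ in range(T)] for _ in range(3)]
C = [[xcorr_extremum(ics[i], ics[j]) for j in range(3)] for i in range(3)]

# Toy within-network z-scores (4 "voxels" x 3 ICs); one plausible reading of
# "projected to the voxel space with z-score weights" is the product Z @ C
Z = [[0.5, 1.2, -0.3],
     [2.1, 0.1, 0.4],
     [0.0, 0.9, 1.5],
     [1.1, 1.0, 0.2]]
wrsfc_map = [[sum(Z[v][i] * C[i][j] for i in range(3)) for j in range(3)]
             for v in range(4)]
```

Each row of `wrsfc_map` combines a voxel's within-network z-scores with the between-network profile, which is the mixing of local and global connectivity the abstract describes.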
Procedia PDF Downloads 204
1463 Non-Destructive Testing of Sandstones from Unconventional Reservoir in Poland with Use of Ultrasonic Pulse Velocity Technique and X-Ray Computed Microtomography
Authors: Michał Maksimczuk, Łukasz Kaczmarek, Tomasz Wejrzanowski
Abstract:
This study concerns high-resolution X-ray computed microtomography (µCT) and ultrasonic pulse analysis of Cambrian sandstones from a borehole located on the Baltic Sea coast of northern Poland. µCT and the ultrasonic technique are non-destructive methods commonly used to determine the internal structure of reservoir rock samples. The spatial resolution of the µCT images obtained was 27 µm, which enabled the authors to create accurate 3-D visualizations of the structure geometry and to calculate the ratio of pore volume to total sample volume. A copper X-ray source filter was used to reduce image artifacts. Furthermore, the samples' Young's modulus and Poisson's ratio were obtained with the ultrasonic pulse technique. Together, µCT and the ultrasonic pulse technique provide comprehensive information that can be used for the exploration and characterization of reservoir rocks.
Keywords: elastic parameters, linear absorption coefficient, northern Poland, tight gas
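Once a µCT volume has been binarized, the pore-to-total-volume ratio mentioned above reduces to counting voxels. A minimal sketch on a toy 2 × 2 × 2 volume (the labels and the volume itself are illustrative, not from the study):

```python
def porosity(volume, pore_value=0):
    # Fraction of voxels labelled as pore in a binarized 3-D volume
    voxels = [v for slab in volume for row in slab for v in row]
    return voxels.count(pore_value) / len(voxels)

# Toy 2 x 2 x 2 binarized volume: 0 = pore, 1 = rock (2 pore voxels out of 8)
vol = [[[0, 1],
        [1, 1]],
       [[1, 0],
        [1, 1]]]
ratio = porosity(vol)  # 0.25
```

In practice the binarization threshold, chosen from the µCT grey-level histogram, dominates the accuracy of this ratio far more than the counting step itself.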
Procedia PDF Downloads 252
1462 Knowledge Integration from Concept to Practice: An Exploratory Study of Designing a Flood Resilient Urban Park in Viet Nam
Authors: To Quyen Le, Oswald Devisch, Tu Anh Trinh, Els Hannes
Abstract:
Urban centres worldwide are affected differently by flooding. In Vietnam, this impact is increasingly negative, driven by rapid urbanisation. Traditional spatial planning and flood mitigation planning are not able to deal with this growing threat. This article therefore proposes to focus on increasing the participation of local communities in flood control and management. It explores, on the basis of a design studio exercise, how lay knowledge of flooding can be integrated within planning processes. The article presents a theoretical basis for a structured criterion for site selection for a flood-resilient urban park from a scientific perspective, then discloses the tacit and explicit knowledge of the flood-prone area, and finally integrates this knowledge into the design strategies for flood-resilient urban park design.
Keywords: analytic hierarchy process, AHP, design resilience, flood resilient urban park, knowledge integration
Procedia PDF Downloads 180
1461 The Use of a Miniature Bioreactor as Research Tool for Biotechnology Process Development
Authors: Muhammad Zainuddin Arriafdi, Hamudah Hakimah Abdullah, Mohd Helmi Sani, Wan Azlina Ahmad, Muhd Nazrul Hisham Zainal Alam
Abstract:
Biotechnology process development demands numerous experiments. In a laboratory environment, these are typically carried out on a shake flask platform. This paper presents the design and fabrication of a miniature bioreactor system as an alternative research tool for bioprocessing. The working volume of the reactor is 100 mL, and it is made of plastic. The main features of the reactor include stirring control, temperature control via an electrical heater, an aeration strategy based on a miniature air compressor, and online optical cell density (OD) sensing. All sensors and actuators integrated into the reactor were controlled using an Arduino microcontroller platform. To demonstrate the functionality of the miniature bioreactor concept, a series of batch Saccharomyces cerevisiae fermentation experiments was performed at various glucose concentrations. Results from the fermentation experiments were used to estimate the Monod equation constants, namely the saturation constant, Ks, and the maximum specific growth rate, μmax, to further highlight the usefulness of the device. The mixing capacity of the reactor was also evaluated. The results attained with the miniature bioreactor prototype were comparable to those achieved using a shake flask. A unique feature of the device compared to the shake flask platform is that its mixing conditions are much closer to a lab-scale bioreactor setup. The prototype also integrates an online OD sensor, so no sampling was needed to monitor the progress of the reaction. Operating cost and medium consumption are also low, making it much more economical for biotechnology process development than lab-scale bioreactors.
Keywords: biotechnology, miniature bioreactor, research tools, Saccharomyces cerevisiae
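Estimating the Monod constants from growth data, as the abstract describes, amounts to a least-squares fit of μ = μmax·S/(Ks + S). The sketch below uses synthetic data and a simple grid search purely for illustration, standing in for the study's actual measurements and fitting procedure:

```python
def monod(s, mu_max, ks):
    # Monod growth model: specific growth rate as a function of substrate S
    return mu_max * s / (ks + s)

# Hypothetical (glucose g/L, specific growth rate 1/h) pairs, generated from
# mu_max = 0.5 and Ks = 1.0 purely for illustration
data = [(s, monod(s, 0.5, 1.0)) for s in (0.2, 0.5, 1.0, 2.0, 5.0, 10.0)]

def fit_monod(data):
    # Brute-force least-squares over a coarse (mu_max, Ks) grid
    best = (0.0, 0.0, float("inf"))
    for m in range(1, 101):            # mu_max in 0.01 .. 1.00
        mu = m / 100.0
        for k in range(1, 301):        # Ks in 0.01 .. 3.00
            ks = k / 100.0
            err = sum((mu_obs - monod(s, mu, ks)) ** 2 for s, mu_obs in data)
            if err < best[2]:
                best = (mu, ks, err)
    return best

mu_max_hat, ks_hat, _ = fit_monod(data)  # recovers 0.5 and 1.0
```

With real OD-derived growth rates one would normally use a nonlinear least-squares routine rather than a grid, but the objective being minimized is the same.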
Procedia PDF Downloads 119
1460 Biodegradation of 2,4-Dichlorophenol by Pseudomonas chlororaphis Strain Isolated from Activated Sludge Sample from a Wastewater Treatment Plant in Durban, South Africa
Authors: Boitumelo Setlhare, Mduduzi P. Mokoena, Ademola O. Olaniran
Abstract:
Agricultural and industrial activities have led to increasing production of xenobiotics such as 2,4-dichlorophenol (2,4-DCP), a derivative of 2,4-dichlorophenoxyacetic acid (2,4-D), a widely used herbicide. Bioremediation offers an efficient, cost-effective and environmentally friendly method for degradation of the compound through the activities of the various microbial enzymes involved in the catabolic pathway. The aim of this study was to isolate and characterize a bacterial isolate, indigenous to contaminated sites in Durban, South Africa, for 2,4-DCP degradation. One bacterium capable of utilizing 2,4-DCP as sole carbon source was isolated using a culture enrichment technique and identified as Pseudomonas chlororaphis strain UFB2 via PCR amplification and analysis of the 16S rRNA gene sequence. This isolate was able to degrade up to 75.11% of 2,4-DCP in batch cultures within 10 days, with a degradation rate constant of 0.14 mg/l/d. Phylogenetic analysis revealed the relatedness of this bacterial isolate to other Pseudomonas sp. previously characterized for chlorophenol degradation. PCR amplification of the catabolic genes involved in 2,4-DCP degradation revealed the presence of the correct amplicons for the phenol hydroxylase (600 bp), catechol 1,2-dioxygenase (214 bp), muconate isomerase (851 bp), cis-dienelactone hydrolase (577 bp), and trans-dienelactone hydrolase (491 bp) genes. Enzyme assays revealed activities as high as 21840 mU/mg, 15630 mU/mg, 2340 mU/mg and 1490 mU/mg for phenol hydroxylase, catechol 1,2-dioxygenase, cis-dienelactone hydrolase and trans-dienelactone hydrolase, respectively. The absence of the catechol 2,3-dioxygenase gene and the corresponding enzyme in this isolate suggests that the organism follows the ortho-cleavage pathway for 2,4-DCP degradation. Furthermore, the absence of maleylacetate reductase genes suggests that the bacterium may not be able to completely mineralize 2,4-DCP.
Further studies are required to optimize 2,4-DCP degradation by this isolate, as well as to elucidate the mechanism of 2,4-DCP degradation.
Keywords: biodegradation, catechol 1,2-dioxygenase, 2,4-dichlorophenol, phenol hydroxylase, Pseudomonas chlororaphis
Procedia PDF Downloads 251
1459 Enhancing Archaeological Sites: Interconnecting Physically and Digitally
Authors: Eleni Maistrou, D. Kosmopoulos, Carolina Moretti, Amalia Konidi, Katerina Boulougoura
Abstract:
InterArch is an ongoing research project that has been running since September 2020. It aims to propose the design of a site-based digital application for archaeological sites and outdoor guided tours, supporting virtual and augmented reality technology. The research project is co‐financed by the European Union and Greek national funds, through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH - CREATE – INNOVATE (project code: Τ2ΕΔΚ-01659). It involves mutual collaboration between academic and cultural institutions and the contribution of an IT applications development company. The research will be completed by July 2023 and will run as a pilot project for the city of Ancient Messene, a place of outstanding natural beauty in the west of Peloponnese, which is considered one of the most important archaeological sites in Greece. The applied research project integrates an interactive approach to the natural environment, aiming at a manifold sensory experience. It combines the physical space of the archaeological site with the digital space of archaeological and cultural data while at the same time, it embraces storytelling processes by engaging an interdisciplinary approach that familiarizes the user with multiple semantic interpretations. The mingling of the real-world environment with its digital and cultural components by using augmented reality techniques could potentially transform the visit on-site into an immersive multimodal sensory experience. To this purpose, an extensive spatial analysis along with a detailed evaluation of the existing digital and non-digital archives is proposed in our project, intending to correlate natural landscape morphology (including archaeological material remains and environmental characteristics) with the extensive historical records and cultural digital data. 
On-site research was carried out, during which visitors’ itineraries were monitored and tracked throughout the archaeological visit using GPS locators. The results provide our project with useful insight concerning the way visitors engage and interact with their surroundings, depending on the sequence of their itineraries and the duration of stay at each location.
Keywords: archaeological site, digital space, semantic interpretations, cultural heritage
Procedia PDF Downloads 73
1458 Global Based Histogram for 3D Object Recognition
Authors: Somar Boubou, Tatsuo Narikiyo, Michihiro Kawanishi
Abstract:
In this work, we address the problem of 3D object recognition with depth sensors such as the Kinect or Structure sensor. In contrast with traditional approaches based on local descriptors, which depend on local information around object key points, we propose a descriptor based on global features. The proposed descriptor, which we name the Differential Histogram of Normal Vectors (DHONV), is designed particularly to capture the surface geometric characteristics of 3D objects represented by depth images. We describe the 3D surface of an object in each frame using a 2D spatial histogram capturing the normalized distribution of the differential angles of the surface normal vectors. Object recognition experiments on the benchmark RGB-D object dataset and a self-collected dataset show that our proposed descriptor outperforms two other descriptors based on spin images and histograms of normal vectors with a linear-SVM classifier.
Keywords: vision in control, robotics, histogram, differential histogram of normal vectors
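The histogram construction behind a descriptor of this kind can be sketched on a toy depth image. The paper's exact angle definition and binning are not given here, so the central-difference normals, right-neighbour differential angles, and 8 uniform bins below are illustrative assumptions (and the histogram is 1-D rather than the paper's 2-D spatial version):

```python
import math

def normals(depth):
    # Central-difference surface normals for interior pixels of a depth image
    h, w = len(depth), len(depth[0])
    out = {}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dzdx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0
            dzdy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0
            n = (-dzdx, -dzdy, 1.0)
            mag = math.sqrt(sum(c * c for c in n))
            out[(y, x)] = tuple(c / mag for c in n)
    return out

def diff_angle_histogram(depth, bins=8):
    # Histogram of angles between each normal and its right-hand neighbour
    ns = normals(depth)
    hist = [0] * bins
    for (y, x), n in ns.items():
        if (y, x + 1) in ns:
            dot = sum(a * b for a, b in zip(n, ns[(y, x + 1)]))
            ang = math.acos(max(-1.0, min(1.0, dot)))  # differential angle in [0, pi]
            hist[min(bins - 1, int(ang / math.pi * bins))] += 1
    return hist

flat = [[0.0] * 5 for _ in range(5)]   # planar patch: all normals identical
hist = diff_angle_histogram(flat)      # every pair lands in the zero-angle bin
```

A flat patch concentrates all mass in the zero-angle bin, while curved or edged surfaces spread counts into higher bins, which is the geometric signature such a descriptor exploits.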
Procedia PDF Downloads 281
1457 Attention-Based Spatio-Temporal Approach for Fire and Smoke Detection
Authors: Alireza Mirrashid, Mohammad Khoshbin, Ali Atghaei, Hassan Shahbazi
Abstract:
In various industries, smoke and fire are two of the most important threats in the workplace. One of the common methods for detecting smoke and fire is the use of infrared thermal and smoke sensors, which cannot be used in outdoor applications. Therefore, the use of vision-based methods seems necessary. The problem of smoke and fire detection is spatiotemporal and requires spatiotemporal solutions. This paper presents a method that uses spatial features along with temporal-based features to detect smoke and fire in the scene. It consists of three main parts; the task of each part is to reduce the error of the previous part so that the final model has a robust performance. This method also uses transformer modules to increase the accuracy of the model. The results of our model show the proper performance of the proposed approach in solving the problem of smoke and fire detection and can be used to increase workplace safety.
Keywords: attention, fire detection, smoke detection, spatio-temporal
Procedia PDF Downloads 205
1456 Analysis of Tools for Revitalization and Rehabilitation of Brownfields
Authors: Jiří Kugl
Abstract:
The typology and specific opportunities of brownfield revitalization are already largely described. The challenges and opportunities that brownfields represent have been adequately studied and presented, as have the specific ways in which these areas can be used, or how they are used abroad. In other words, the questions of why (revitalize brownfields) and what (we should do with them) are satisfactorily answered, but the question of how (we can work with them) is not. This work focuses on answering this question, dealing with the tools that enable revitalization and rehabilitation projects in the area. The tools can be divided, for example, in terms of spatial planning and urban design, from an environmental perspective, from the perspective of cultural heritage protection, and from the perspective of investment opportunities. The result is that the issue of brownfields is handled by numerous institutions and instruments. The aim of this paper is to identify, classify and analyze these instruments. The paper studies instruments from countries with long-term experience of this issue (e.g., France, Great Britain, the USA, Germany, Denmark, and the Czech Republic) and analyzes their contribution and the feasibility of their implementation in other countries.
Keywords: brownfields, revitalization, rehabilitation, tools, urban planning
Procedia PDF Downloads 242
1455 Utilizing IoT for Waste Collection: A Review of Technologies for Eco-Friendly Waste Management
Authors: Fatemehsadat Mousaviabarbekouh
Abstract:
Population growth and changing consumption patterns have led to waste management becoming a significant global challenge. With projections indicating that nearly 67% of the Earth's population will live in megacities by 2050, there is a pressing need for smart solutions to address citizens' demands. Waste collection, facilitated by the Internet of Things (IoT), offers an efficient and cost-effective approach. This study aims to review the utilization of IoT for waste collection and explore technologies that promote eco-friendly waste management. The research focuses on information and communication technologies (ICTs), including spatial, identification, acquisition, and data communication technologies. Additionally, the study examines various energy harvesting technologies to further reduce costs. The findings indicate that the application of these technologies can lead to significant cost savings, energy efficiency, and ultimately reshape the future of waste management.
Keywords: waste collection, IoT, smart cities, eco-friendly, information and communication technologies, energy harvesting
Procedia PDF Downloads 115
1454 Hydrogeological Study of the Different Aquifers in the Area of Biskra
Authors: A. Sengouga, Y. Imessaoudene, A. Semar, B. Mouhouche, M. Kadir
Abstract:
Biskra, or the Zibans, is located in a structural transition zone between the Saharan Atlas mountain chain and the Sahara. It is an arid region where surface water resources are scarce, hence the importance of the lithological description and the evaluation of aquifer rock volumes, on which the mobilizable water contained in the various reservoirs (Quaternary, Mio-Pliocene, Eocene and Continental Intercalaire) strongly depends. Through a data synthesis based in particular on the stratigraphic logs of boreholes, it became possible to describe aquifer heterogeneity and to determine the spatial variability of aquifer occurrence using geostatistical analysis, which allowed the mapping of aquifer thicknesses and their spatial variation. The thematic maps produced cover the borehole positions, the shape of the substratum and, finally, the aquifer thicknesses of the region. It is found that the highest density of water points, especially boreholes, is superposed on the hydrologic reservoirs with significant thicknesses.
Keywords: stratigraphic logs, ArcGIS 10, geometry of aquifers, rock reservoir volume, Biskra
Procedia PDF Downloads 461
1453 Hazardous Effects of Metal Ions on the Thermal Stability of Hydroxylammonium Nitrate
Authors: Shweta Hoyani, Charlie Oommen
Abstract:
HAN-based liquid propellants are perceived as a potential substitute for hydrazine in space propulsion. Storage stability over a long service life in orbit is one of the key concerns for HAN-based monopropellants because of their reactivity with metallic and non-metallic impurities, which could be entrained from the surfaces of fuel tanks and tubing. The end result of this reactivity directly affects the handling, performance and storability of the liquid propellant. Gaseous products resulting from decomposition of the propellant can lead to deleterious pressure build-up in storage vessels. The partial loss of an energetic component can change the ignition and combustion behavior and alter the performance of the thruster. In this context, the effects of the most plausible metal contaminants (iron, copper, chromium, nickel, manganese, molybdenum, zinc, titanium and cadmium) on the thermal decomposition mechanism of HAN have been investigated. Studies involving different concentrations of metal ions and HAN at different preheat temperatures have been carried out. The effect of metal ions on the decomposition behavior of HAN has been studied earlier in the context of HAN's use as a gun propellant; the current investigation, however, concerns the decomposition mechanism of HAN in the context of its use as a monopropellant for space propulsion. Decomposition onset temperature, rate of weight loss and heat of reaction were studied using DTA-TGA, and the total pressure rise and rate of pressure rise during decomposition were evaluated using an in-house-built constant-volume batch reactor. In addition, the reaction mechanism and product profile were studied using a TGA-FTIR setup. Iron and copper displayed the strongest effects. Initial results indicate that iron and copper show a sensitizing effect at concentrations as low as 50 ppm with a 60% HAN solution at 80°C.
On the other hand, 50 ppm zinc does not display any effect on the thermal decomposition of even 90% HAN solution at 80°C. Keywords: hydroxylammonium nitrate, monopropellant, reaction mechanism, thermal stability
Procedia PDF Downloads 424
1452 Development Process and Design Methods for Shared Spaces in Europe
Authors: Kazuyasu Yoshino, Keita Yamaguchi, Toshihiko Nishimura, Masashi Kawasaki
Abstract:
Shared Space, the planning and design concept that allows pedestrians and vehicles to coexist in a street space, has been advocated and developed according to the traffic conditions in each European country. Especially in German- and French-speaking countries, the "Meeting Zone," a traffic rule combining speed regulation (20 km/h) with pedestrian priority, is often applied when designing shared spaces at intersections, squares, and streets in the city center. In this study, the establishment and development of the Meeting Zone in Switzerland, France, and Austria were organized chronologically based on the descriptions in the major discourses and guidelines of each country. The characteristics of the spatial design were then extracted by analyzing representative examples of Meeting Zone applications. Finally, the relationships between the different approaches to the design of Meeting Zones and the traffic regulations of the different countries were discussed. Keywords: shared space, traffic calming, meeting zone, street design
Procedia PDF Downloads 95
1451 A Socio-Spatial Analysis of Financialization and the Formation of Oligopolies in Brazilian Basic Education
Authors: Gleyce Assis Da Silva Barbosa
Abstract:
In recent years, we have witnessed the vertiginous growth of large education companies. Daughters of national and global capital, these companies expand both through consolidated physical networks, in the form of branches spread across the territory, and through institutional networks, such as business networks built through mergers, acquisitions, the creation of new companies, and influence. They do this by incorporating small, medium and large schools and universities, teaching systems, and other products and services. They are also able to weave their webs, directly or indirectly, in philanthropic circles, limited partnerships, family businesses, and even in public education, through various mechanisms of outsourcing, privatization and the commercialization of products for the sector. Although the growth of these groups in basic education seems to be a recent phenomenon in peripheral countries such as Brazil, their diffusion is closely linked to higher education conglomerates and other sectors of the economy forming oligopolies, which began to expand in the 1990s with strong state support and through political reforms that redefined the state's role, transforming it into a fundamental agent in the formulation of guidelines that drove the incorporation of neoliberal logic. This expansion occurred through the objectification of education, commodifying it and transforming students into consumer clients. Financial power combined with the neoliberalization of state public policies allowed the profusion of social exclusion, the increase in individuals without access to basic services, deindustrialization, automation, capital volatility and the indetermination of the economy; in addition, this process causes capital to be valued and devalued at rates never seen before, which together generates various impacts, such as the precarization of work. Understanding the connection between these processes, which drive the economy, allows us to see their consequences in labor relations and in the territory.
In this sense, it is necessary to analyze the geographic-economic context and the role of the agents facilitating this process, which can give us clues about the ongoing transformations and the directions of education on the national and even international scene, since this process is linked to the multiple scales of financial globalization. Therefore, the present research has the general objective of analyzing the socio-spatial impacts of financialization and the formation of oligopolies in Brazilian basic education. Methodologically, the work surveys laws, data, and public policies on the subject; draws on data these companies make available on investor-relations websites; gathers information on the global and national companies operating in Brazilian basic education; and maps the expansion of educational oligopolies using public data on school locations. With this, the research intends to provide information about the ongoing commodification process in the country and to discuss the consequences of the oligopolization of education, considering the impacts that financialization can bring to teaching work. Keywords: financialization, oligopolies, education, Brazil
Procedia PDF Downloads 65
1450 South African Breast Cancer Mutation Spectrum: Pitfalls to Copy Number Variation Detection Using Internationally Designed Multiplex Ligation-Dependent Probe Amplification and Next Generation Sequencing Panels
Authors: Jaco Oosthuizen, Nerina C. Van Der Merwe
Abstract:
The National Health Laboratory Services in Bloemfontein has been the diagnostic testing facility for 1830 patients for familial breast cancer since 1997. From this cohort, 540 were comprehensively screened using High-Resolution Melting Analysis or Next Generation Sequencing for the presence of point mutations and/or indels. Approximately 90% of these patients still remain undiagnosed, as they are BRCA1/2 negative. Multiplex ligation-dependent probe amplification was initially added to screen for copy number variation, but with the introduction of next generation sequencing in 2017 it was substituted and is currently used as a confirmation assay. The aim was to investigate the viability of internationally designed copy number variation detection assays, based mostly on European/Caucasian genomic data, for use within a South African context. The multiplex ligation-dependent probe amplification technique is based on the hybridization and subsequent ligation of multiple probes to a targeted exon. The ligated probes are amplified using conventional polymerase chain reaction, followed by fragment analysis by means of capillary electrophoresis. The experimental design of the assay was performed according to the guidelines of MRC-Holland. For BRCA1 (P002-D1) and BRCA2 (P045-B3), both multiplex assays were validated, and results were confirmed using a secondary probe set for each gene. The next generation sequencing technique is based on target amplification via multiplex polymerase chain reaction, after which the amplicons are sequenced in parallel on a semiconductor chip. Amplified read counts are visualized as relative copy numbers to determine the median of the absolute values of all pairwise differences. Various experimental parameters, such as DNA quality, quantity, and signal intensity or read depth, were verified using positive and negative patients previously tested internationally.
DNA quality and quantity proved to be the critical factors during the verification of both assays. Quantity directly influenced the relative copy number frequency, whereas the quality of the DNA and its salt concentration influenced denaturation consistency in both assays. Multiplex ligation-dependent probe amplification produced false positives due to ligation failure when ligation was inhibited by a variant present within the ligation site. Next generation sequencing produced false positives due to read dropout when primer sequences did not meet optimal multiplex binding kinetics because of population variants in the primer binding site. The analytical sensitivity and specificity for the South African population have been proven. Verification resulted in repeatable reactions with regard to the detection of relative copy number differences. Both the multiplex ligation-dependent probe amplification and next generation sequencing multiplex panels need to be optimized to accommodate South African polymorphisms present within the country's genetically diverse ethnic groups, to reduce the false copy number variation positive rate and increase performance efficiency. Keywords: familial breast cancer, multiplex ligation-dependent probe amplification, next generation sequencing, South Africa
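The read-depth logic described in this abstract can be sketched as follows. The per-exon read counts are hypothetical placeholders (not the study's data): each amplicon's count is normalized against a diploid reference pool, a ratio near 0.5 flags a heterozygous deletion, and the noise of the profile is summarized as the median of the absolute pairwise differences between adjacent amplicons.

```python
from statistics import median

def relative_copy_numbers(sample_counts, reference_counts):
    """Normalize per-amplicon read counts against a diploid reference pool.

    A ratio near 1.0 suggests two copies, ~0.5 a heterozygous deletion,
    ~1.5 a duplication.
    """
    s_total = sum(sample_counts)
    r_total = sum(reference_counts)
    return [(s / s_total) / (r / r_total)
            for s, r in zip(sample_counts, reference_counts)]

def mapd(ratios):
    """Median of the absolute pairwise differences between adjacent
    amplicons, a common noise metric for read-depth CNV calling."""
    return median(abs(a - b) for a, b in zip(ratios, ratios[1:]))

# Hypothetical per-exon read counts: exon index 2 shows a heterozygous deletion
sample = [1000, 980, 510, 1020, 990]
reference = [1000, 1000, 1000, 1000, 1000]

ratios = relative_copy_numbers(sample, reference)
deleted = [i for i, r in enumerate(ratios) if r < 0.75]  # → [2]
```

A real pipeline would add the population-specific caveat the abstract raises: a variant under a primer (read dropout) produces exactly this ratio signature without any true copy number change.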
Procedia PDF Downloads 233
1449 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application
Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior
Abstract:
Through speech, which privileges the functional and interactive nature of the text, it is possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, and explicit purposes such as informing, explaining, convincing, etc. These conditions allow bringing interaction between humans closer to human-robot interaction, making it natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of the use of neural networks for feature selection and emotion recognition was verified. For this purpose, the use of neural networks and a comparison of models, such as recurrent neural networks and deep neural networks, is proposed in order to classify emotions from speech signals and verify the quality of recognition. The aim is to enable the deployment of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the domestic environment. Tests were performed using only the Mel-Frequency Cepstral Coefficients, as well as tests with several features: Delta-MFCC, spectral contrast, and the Mel spectrogram. For the training, validation and testing of the neural networks, the eNTERFACE'05 database was used, which has 42 speakers of 14 different nationalities speaking English. The data in the chosen database are videos which, for use in the neural networks, were converted into audio. As a result, a classification accuracy of 51.97% was obtained with the deep neural network, whereas the recurrent neural network achieved an accuracy of 44.09%.
The results are more accurate when only the Mel-Frequency Cepstral Coefficients are used for classification with the deep neural network; in only one case is a greater accuracy observed for the recurrent neural network, which occurs when the various features are used with a batch size of 73 and 100 training epochs. Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks
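The classification setup described above can be sketched in miniature. This is not the authors' network: it is a toy one-hidden-layer classifier in NumPy, trained on synthetic Gaussian stand-ins for 13-coefficient MFCC feature vectors (real MFCCs would come from an audio library), showing the feed-forward "DNN" side of the comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_per_class=200, dim=13):
    """Synthetic stand-in for MFCC vectors: two 'emotion' classes drawn
    from Gaussians with shifted means (13 coefficients, as in MFCC)."""
    x0 = rng.normal(loc=-1.0, size=(n_per_class, dim))
    x1 = rng.normal(loc=+1.0, size=(n_per_class, dim))
    x = np.vstack([x0, x1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return x, y

def train_mlp(x, y, hidden=16, lr=0.5, epochs=200):
    """One-hidden-layer network, sigmoid output, full-batch gradient descent
    on the cross-entropy loss."""
    w1 = rng.normal(scale=0.1, size=(x.shape[1], hidden))
    w2 = rng.normal(scale=0.1, size=(hidden,))
    for _ in range(epochs):
        h = np.tanh(x @ w1)                      # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ w2)))      # class-1 probability
        grad_out = p - y                         # dLoss/dlogit for sigmoid + CE
        w1 -= lr * x.T @ (np.outer(grad_out, w2) * (1 - h ** 2)) / len(y)
        w2 -= lr * h.T @ grad_out / len(y)
    return w1, w2

def accuracy(w1, w2, x, y):
    p = 1.0 / (1.0 + np.exp(-(np.tanh(x @ w1) @ w2)))
    return float(np.mean((p > 0.5) == y))

x_train, y_train = make_data()
x_test, y_test = make_data()
w1, w2 = train_mlp(x_train, y_train)
acc = accuracy(w1, w2, x_test, y_test)
```

On these easily separable synthetic classes the accuracy is near 1.0; the paper's much lower figures reflect how hard real emotional speech is, not the classifier mechanics.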
Procedia PDF Downloads 171
1448 Static vs. Stream Mining Trajectories Similarity Measures
Authors: Musaab Riyadh, Norwati Mustapha, Dina Riyadh
Abstract:
Trajectory similarity can be defined as the cost of transforming one trajectory into another based on a given similarity method. It is the core of numerous mining tasks, such as clustering, classification, and indexing. Various approaches have been suggested to measure similarity based on the geometric and dynamic properties of trajectories, the overlap between trajectory segments, and the confined area between entire trajectories. In this article, these approaches are evaluated in terms of computational cost, memory usage, accuracy, and the amount of data needed in advance, in order to determine their suitability for stream mining applications. The evaluation results show that stream mining applications favor similarity methods that have low computational cost and memory usage, require only a single scan of the data, and are free of mathematical complexity, given the high speed at which stream data are generated. Keywords: global distance measure, local distance measure, semantic trajectory, spatial dimension, stream data mining
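The transformation-cost view of similarity can be illustrated with dynamic time warping, a classic whole-trajectory measure (an illustrative sketch, not necessarily one of the surveyed methods). Its O(n·m) time and memory and its need for both full trajectories up front are exactly the properties that make such measures awkward for stream mining.

```python
import math

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping between two 2-D trajectories (lists of (x, y)
    points): the minimum summed point-to-point cost over all monotone
    alignments. O(n*m) time and memory, and it needs both trajectories in
    full, so it suits static rather than stream mining."""
    n, m = len(traj_a), len(traj_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(traj_a[i - 1], traj_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # skip a point of traj_a
                                 d[i][j - 1],      # skip a point of traj_b
                                 d[i - 1][j - 1])  # match both
    return d[n][m]

a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 0), (1, 0), (1, 0), (2, 0)]  # same path, one point repeated
print(dtw_distance(a, b))  # → 0.0: warping absorbs the repeated sample
```

Unlike the Euclidean distance between corresponding points, DTW tolerates trajectories sampled at different rates, which is why it appears so often in the static-mining literature.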
Procedia PDF Downloads 396
1447 Carbon, Nitrogen Doped TiO2 Macro/Mesoporous Monoliths with High Visible Light Absorption for Photocatalytic Wastewater Treatment
Authors: Paolo Boscaro, Vasile Hulea, François Fajula, Francis Luck, Anne Galarneau
Abstract:
TiO2-based monoliths with hierarchical macropores and mesopores have been synthesized following a novel one-pot sol-gel method. Taking advantage of the spinodal separation that occurs between titanium isopropoxide and an acidic solution in the presence of a polyethylene oxide polymer, monoliths with homogeneous interconnected macropores of 3 μm in diameter and mesopores of ca. 6 nm (surface area 150 m2/g) are obtained. Furthermore, these monoliths contain some carbon and nitrogen (as shown by XPS and elemental analysis), which considerably reduce the titanium oxide energy gap and enable light to be absorbed up to a wavelength of 700 nm. XRD shows that anatase is the dominant phase, with a small amount of brookite. The enhanced light absorption and high porosity of the monoliths are responsible for a remarkable photocatalytic activity. Wastewater treatment was performed in a closed reactor under sunlight using the orange G dye as the target molecule. Glass reactors guarantee that most of the UV radiation (to almost 300 nm) of the solar spectrum is excluded. TiO2 P25 nanoparticles (usually used in photocatalysis under UV) and un-doped TiO2 monoliths with similar porosity were used for comparison. The C,N-doped TiO2 monolith allowed complete colorant degradation in less than 1 hour, whereas 10 hours were necessary for 40% colorant degradation with P25 and the un-doped monolith. An experiment performed in the dark shows that only 3% of the molecules were adsorbed in the C,N-doped TiO2 monolith within 1 hour. The much higher efficiency of the C,N-doped TiO2 monolith in comparison to P25 and the un-doped monolith proves that doping TiO2 is an essential issue and that nitrogen and carbon are effective dopants. Monoliths offer multiple advantages with respect to nanometric powders: the sample can easily be removed from the batch (no need to filter or centrifuge).
Moreover, flow reactions can be set up with cylindrical or flat monoliths by simple sheathing or by securing them with O-rings. Keywords: C-N doped, sunlight photocatalytic activity, TiO2 monolith, visible absorbance
Procedia PDF Downloads 232
1446 Mean Shift-Based Preprocessing Methodology for Improved 3D Buildings Reconstruction
Authors: Nikolaos Vassilas, Theocharis Tsenoglou, Djamchid Ghazanfarpour
Abstract:
In this work, we explore the capability of the mean shift algorithm as a powerful preprocessing tool for improving the quality of spatial data acquired from airborne scanners over densely built urban areas. On the one hand, high-resolution image data corrupted by the noise of lossy compression techniques are appropriately smoothed while preserving the optical edges; on the other, low-resolution LiDAR data in the form of a normalized Digital Surface Map (nDSM) are upsampled through the joint mean shift algorithm. Experiments on both the edge-preserving smoothing and upsampling capabilities using synthetic RGB-z data show that the mean shift algorithm is superior to bilateral filtering as well as to other classical smoothing and upsampling algorithms. Application of the proposed methodology to the 3D reconstruction of buildings in a pilot region of Athens, Greece results in a significant visual improvement of the 3D building block model. Keywords: 3D buildings reconstruction, data fusion, data upsampling, mean shift
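A minimal, value-domain-only sketch of the mean shift idea follows (the joint version in the abstract also puts spatial coordinates into the kernel; the bandwidth and signal here are assumed toy values). It shows why the procedure is edge-preserving: each sample moves toward the kernel-weighted mean of nearby values, so noise within a plateau is averaged out while a large jump contributes essentially zero weight and survives.

```python
import numpy as np

def mean_shift_filter(values, bandwidth=1.0, iterations=5):
    """Edge-preserving 1-D mean shift in the value domain: each sample is
    repeatedly moved to the Gaussian-weighted mean of all samples, with
    weights computed from value differences only. Gradual noise is averaged
    out; large jumps (edges) get near-zero cross-weights and are kept."""
    v = np.asarray(values, dtype=float).copy()
    for _ in range(iterations):
        diff = v[None, :] - v[:, None]
        w = np.exp(-(diff ** 2) / (2.0 * bandwidth ** 2))  # Gaussian kernel
        v = (w @ v) / w.sum(axis=1)                        # shift to weighted mean
    return v

# A noisy step edge: mean shift flattens each plateau but keeps the jump,
# where a plain moving average would blur it.
signal = [0.1, -0.05, 0.02, 0.0, 10.1, 9.9, 10.05, 10.0]
smoothed = mean_shift_filter(signal, bandwidth=0.5)
```

With a bilateral filter the weights are fixed after one pass; iterating the shift, as here, drives each plateau toward its local mode, which is the behavior the paper exploits for both smoothing and joint upsampling.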
Procedia PDF Downloads 317
1445 Modeling and Tracking of Deformable Structures in Medical Images
Authors: Said Ettaieb, Kamel Hamrouni, Su Ruan
Abstract:
This paper presents a new method for tracking deformable structures in medical imaging, based on both the Active Shape Model and a priori knowledge about the spatio-temporal shape variation. The main idea is to exploit the a priori knowledge of shape that exists in the ASM and to introduce new knowledge about the shape variation over time. The aim is to define a new, more stable method allowing the reliable detection of structures whose shape changes considerably over time. This method can also be used for three-dimensional segmentation by replacing the temporal component with the third spatial axis (z). The proposed method is applied to the functional and morphological study of the heart pump. The functional aspect was studied through temporal sequences of scintigraphic images, and the morphology was studied through MRI volumes. The obtained results are encouraging and show the performance of the proposed method. Keywords: active shape model, a priori knowledge, spatiotemporal shape variation, deformable structures, medical images
Procedia PDF Downloads 343
1444 Visuospatial Perspective Taking and Theory of Mind in a Clinical Approach: Development of a Task for Adults
Authors: Britt Erni, Aldara Vazquez Fernandez, Roland Maurer
Abstract:
Visuospatial perspective taking (VSPT) is a process that allows us to integrate spatial information from different points of view and to transform the mental images we have of the environment in order to properly orient our movements and anticipate the location of landmarks during navigation. VSPT is also related to egocentric perspective transformations (imagined rotations or translations of one's point of view) and to inferring the visuospatial experiences of another person (e.g., if and how another person sees objects). This process is deeply related to a wide-ranging capacity called theory of mind (ToM), an essential cognitive function that allows us to regulate our social behaviour by attributing mental representations to individuals in order to make behavioural predictions. VSPT is often considered in the literature as the starting point of the development of theory of mind. VSPT and ToM include several levels of knowledge that have to be assessed by specific tasks. Unfortunately, the lack of tasks assessing these functions in clinical neuropsychology leads to the underestimation, in brain-damaged patients, of deficits in these functions, which are essential in everyday life for regulating our social behaviour (ToM) and for navigating known and unknown environments (VSPT). Therefore, this study aims to create and standardize a VSPT task in order to explore the cognitive requirements of VSPT and ToM, and to specify their relationship in healthy adults and, thereafter, in brain-damaged patients. Two versions of a computerized VSPT task were administered to healthy participants (M = 28.18, SD = 4.8 years). In both versions, the environment was a 3D representation of 10 different geometric shapes placed on a circular base. Two sets of eight pictures were generated from this: of the environment with an avatar somewhere on its periphery (locations), and of what the avatar sees from that place (views).
Two types of questions were asked: a) identify the location from the view, and b) identify the view from the location. Twenty participants completed version 1 of the task and 20 completed the second version, in which the views were offset by ±15° (i.e., clockwise or counterclockwise) and participants were asked to choose the closest location or the closest view. The preliminary findings revealed that version 1 is significantly easier than version 2 in terms of accuracy (with ceiling scores for version 1). In version 2, participants responded significantly more slowly when they had to infer the avatar's view from the latter's location, probably because they spent more time visually exploring the different views (responses). Furthermore, men performed significantly better than women in version 1 but not in version 2. Most importantly, a sensitive task (version 2) has been created for which participants do not seem to compute easily and automatically what someone is looking at, yet which does not rely heavily on other cognitive functions. This study is further completed by analyses including non-clinical participants with low and high degrees of schizotypy, different socio-educational statuses, and a range of older adults, to examine age-related and other differences in VSPT processing. Keywords: mental transformation, spatial cognition, theory of mind, visuospatial perspective taking
Procedia PDF Downloads 205
1443 Investigating the Chemical Structure of Drinking Water in Domestic Areas of Kuwait by Applying GIS Technology
Authors: H. Al-Jabli
Abstract:
The research on the presence of heavy metals and bromate in drinking water is of immense scientific significance due to the potential risks these substances pose to public health. These contaminants are subject to the regulatory limits outlined in the National Primary Drinking Water Regulations. Through a comprehensive analysis involving the compilation of existing data and the collection of new data via water sampling in residential areas of Kuwait, the aim is to create detailed maps illustrating the spatial distribution of these substances. Furthermore, the investigation will utilize GRAPHER software to explore correlations among different chemical parameters. By implementing rigorous scientific methodologies, the research will provide valuable insights for the Ministry of Electricity and Water and the Ministry of Health. These insights can inform evidence-based decision-making, facilitate the implementation of corrective measures, and support strategic planning for future infrastructure activities. Keywords: heavy metals, bromate, ozonation, GIS
Procedia PDF Downloads 88
1442 Optimization of Titanium Leaching Process Using Experimental Design
Authors: Arash Rafiei, Carroll Moore
Abstract:
The leaching process, as the first stage of hydrometallurgy, is a multidisciplinary system involving material properties, chemistry, reactor design, mechanics and fluid dynamics. Optimizing a leaching system by purely scientific methods therefore requires considerable time and expense. In this work, a mixture of two titanium ores and one titanium slag is used to extract titanium in the leaching stage of the TiO2 pigment production process. Optimum titanium extraction can be obtained through the following strategies: i) maximizing titanium extraction without selective digestion; and ii) optimizing selective titanium extraction by balancing maximum titanium extraction against minimum impurity digestion. The main difference between the two strategies lies in the process optimization framework. In the first strategy, the most important stage of the production process is treated as the main stage, and the remaining stages are adapted to it. The second strategy optimizes the performance of more than one stage at once. The second strategy is technically more complex than the first, but it brings more economic and technical advantages to the leaching system. Obviously, each strategy has its own optimum operational zone, which is not the same as the other's, and the best operational zone is chosen according to the complexity and the economic and practical aspects of the leaching system. The experimental design was carried out using the Taguchi method. The most important advantages of this methodology are that it covers the different technical aspects of the leaching process; it minimizes the number of experiments needed, as well as time and expense; and it accounts for the role of parameter interactions, following the principles of multifactor-at-a-time optimization. Leaching tests were done at batch scale in the laboratory with appropriate temperature control. The leaching tank geometry was considered an important factor in providing comparable agitation conditions.
Data analysis was done using reactor design and mass balancing principles. Finally, the optimum zones for the operational parameters are determined for each leaching strategy and discussed with regard to their economic and practical aspects. Keywords: titanium leaching, optimization, experimental design, performance analysis
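The experiment-minimizing idea behind the Taguchi method can be sketched with the smallest orthogonal array, an L4 covering three two-level factors in 4 runs instead of the full 2³ = 8. The factor names and extraction values below are hypothetical placeholders, not the study's data; the point is the main-effects analysis that follows the runs.

```python
# Taguchi L4 orthogonal array: 3 two-level factors in 4 balanced runs.
# Every pair of columns sees each level combination the same number of times.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def main_effects(array, responses, n_factors=3):
    """Average response at each level of each factor (main-effects analysis).
    Balance of the array lets each factor be read off independently."""
    effects = []
    for f in range(n_factors):
        by_level = {}
        for run, y in zip(array, responses):
            by_level.setdefault(run[f], []).append(y)
        effects.append({lvl: sum(ys) / len(ys) for lvl, ys in by_level.items()})
    return effects

# Hypothetical titanium extraction (%) per run, for factors such as
# temperature, acid concentration and stirring speed at low (0) / high (1).
extraction = [62.0, 70.0, 75.0, 81.0]
effects = main_effects(L4, extraction)
best_levels = [max(e, key=e.get) for e in effects]  # → [1, 1, 1]
```

A full Taguchi treatment would add signal-to-noise ratios and, for the selective-digestion strategy, a second response (impurity digestion) to balance against extraction, but the level-averaging step is the same.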
Procedia PDF Downloads 376
1441 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling
Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci
Abstract:
Poor air quality is one of the main environmental causes of premature death worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and land use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful for simulating the physical and chemical processes that govern the dispersion and reaction of chemical species in the atmosphere. However, modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, up-to-date LC is an important parameter to be considered in atmospheric models, since it accounts for changes in the Earth's surface due to natural and anthropic actions and regulates the exchange of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) CLC (Corine Land Cover) plus specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km, 5 km and 1 km horizontal resolution).
In the 33-class LC approach, particular emphasis was placed on Portugal, given the detail and higher LC spatial resolution (100 m x 100 m) compared to the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, particularly during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, better model performance was achieved at rural stations: moderate correlation (0.4-0.7), bias (10-21 µg.m-3) and RMSE (20-30 µg.m-3), where higher average ozone concentrations were estimated. Comparing both simulations, small differences grounded in the Leaf Area Index and air temperature values were found, although the high-resolution LC approach shows a slight improvement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes and stresses the need to combine a high-resolution LC characterization with other detailed model inputs, such as the emission inventory, to improve air quality assessment. Keywords: land use, spatial resolution, WRF-Chem, air quality assessment
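The validation statistics quoted above (bias, RMSE, Pearson correlation between observed and modelled concentrations) can be sketched as follows; the hourly ozone values are hypothetical placeholders, not the study's measurements.

```python
import math

def evaluation_stats(observed, modelled):
    """Bias (mean model-minus-observation), RMSE, and Pearson correlation,
    the standard triplet for validating modelled concentrations against
    station measurements."""
    n = len(observed)
    bias = sum(m - o for o, m in zip(observed, modelled)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for o, m in zip(observed, modelled)) / n)
    mean_o = sum(observed) / n
    mean_m = sum(modelled) / n
    cov = sum((o - mean_o) * (m - mean_m) for o, m in zip(observed, modelled))
    sd_o = math.sqrt(sum((o - mean_o) ** 2 for o in observed))
    sd_m = math.sqrt(sum((m - mean_m) ** 2 for m in modelled))
    r = cov / (sd_o * sd_m)
    return bias, rmse, r

# Hypothetical hourly ozone concentrations (µg/m3) at a rural background
# station: the model follows the observed pattern but overestimates levels.
obs = [60.0, 75.0, 90.0, 80.0, 70.0]
mod = [72.0, 88.0, 105.0, 95.0, 85.0]
bias, rmse, r = evaluation_stats(obs, mod)  # → bias 14.0, high correlation
```

A high correlation with a large positive bias, as in this toy case, is a common diagnosis in ozone validation: the model captures the diurnal pattern but misses the absolute level, which is exactly what improved surface inputs such as LC aim to correct.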
Procedia PDF Downloads 159