250 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research addresses which transit network structure is most suitable for serving demand under an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips; this approach is based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transferring is essential to complete most trips. To determine which of them is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct-trip-based network; and a transfer-based one, the latter two representing the two alternative transit network designs. The model optimizes the network configuration with respect to total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. The dispersion degree is defined in a simple way by considering that only a central area attracts all trips: if this area is small, the mobility pattern is highly concentrated; if this area is very large, the city is highly decentralized. In this first step, we determine the area of applicability of each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, introducing more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. The area of applicability of each network strategy is not constant; it depends on the characteristics of the demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we can identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology identifies the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
Keywords: analytical network design model, network structure, public transport, urban dispersion
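As an illustration of the second step, the sketch below shows one common way to compute the Gini coefficient of trip attraction across city zones. The zone data and the exact formula are illustrative assumptions; the paper's actual dispersion measures are not reproduced here.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative distribution (0 = perfectly even,
    values near 1 = highly concentrated), mean-absolute-difference form."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Equivalent to sum_i sum_j |x_i - x_j| / (2 n^2 mean)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Hypothetical trips attracted by each of 10 city zones
trips = [120, 15, 10, 8, 30, 5, 4, 3, 2, 3]
print(f"Gini coefficient of trip attraction: {gini(trips):.3f}")
```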
Procedia PDF Downloads 231
249 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up to characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to get a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, high-speed particle image velocimetry (HS-PIV) measurements on an ICE3 train model have been realized in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground; the height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when these are applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
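For reference, the integral boundary layer quantities named above follow from a measured velocity profile u(y) roughly as in the sketch below. The synthetic power-law profile and the free-stream velocity are stand-ins for illustration; the actual PIV processing chain is not reproduced.

```python
import numpy as np

# Synthetic near-wall profile: wall-normal positions y [m] and velocities u [m/s]
y = np.linspace(0.0, 0.05, 200)
U_inf = 70.0                        # assumed free-stream velocity in the model frame
u = U_inf * (y / y[-1]) ** (1 / 7)  # 1/7th-power-law profile as a stand-in for PIV data

delta = y[np.argmax(u >= 0.99 * U_inf)]               # boundary layer thickness (99% criterion)
delta_star = np.trapz(1.0 - u / U_inf, y)             # displacement thickness
theta = np.trapz((u / U_inf) * (1.0 - u / U_inf), y)  # momentum thickness
H = delta_star / theta                                # form factor

print(f"delta = {delta*1e3:.1f} mm, delta* = {delta_star*1e3:.2f} mm, "
      f"theta = {theta*1e3:.2f} mm, H = {H:.2f}")
```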
Procedia PDF Downloads 307
248 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites
Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy
Abstract:
The Voronoi diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site) – regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which efficient O(n log n) algorithms exist for n segments. The reduction also includes preprocessing – constructing segments from the polygons' sides – and postprocessing – constructing each polygon's locus by merging the loci of its sides. This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, the interior of the polygon lies on one side of each segment, and the polygon is obviously included in its own locus. Using these properties in a VD construction algorithm is a resource for reducing computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which makes it possible to exploit these properties effectively. The solution is likewise based on a reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons; each site is given an orientation such that the interior of the polygon lies to its left. The proposed algorithm then constructs the VD for the set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved through the following fundamental decisions: 1. The algorithm constructs only those VD edges which lie outside the polygons; the concept of oriented sites avoids constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time, not in logarithmic time as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm is also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case – a set of sites formed by polygons.
Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites
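A minimal sketch of the preprocessing step described above (decomposing each polygon into vertex sites and oriented edge sites) might look as follows; counter-clockwise vertex order is assumed so that the interior lies to the left of each directed edge, and the sweepline machinery itself is omitted.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VertexSite:
    x: float
    y: float

@dataclass(frozen=True)
class EdgeSite:
    # Directed segment (x1, y1) -> (x2, y2); polygon interior lies to its left.
    x1: float
    y1: float
    x2: float
    y2: float

def polygon_to_sites(vertices):
    """Split a simple polygon (CCW vertex list) into vertex and oriented edge sites."""
    sites = [VertexSite(x, y) for x, y in vertices]
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        sites.append(EdgeSite(x1, y1, x2, y2))
    return sites

# Example: a CCW unit square yields 4 vertex sites and 4 oriented edge sites
print(polygon_to_sites([(0, 0), (1, 0), (1, 1), (0, 1)]))
```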
Procedia PDF Downloads 177
247 Effects of Centrifugation, Encapsulation Method and Different Coating Materials on the Total Antioxidant Activity of the Microcapsules of Powdered Cherry Laurels
Authors: B. Cilek Tatar, G. Sumnu, M. Oztop, E. Ayaz
Abstract:
Encapsulation protects sensitive food ingredients against heat, oxygen, moisture and pH until they are released into the system. It can also mask the unwanted taste of nutrients that are added to foods for fortification purposes. Cherry laurels (Prunus laurocerasus) contain phenolic compounds which decrease proneness to several chronic diseases such as certain cancers and cardiovascular diseases. The objective of this research was to study the effects of centrifugation, different coating materials, and homogenization methods on the microencapsulation of powders obtained from cherry laurels. In this study, maltodextrin and a maltodextrin:whey protein mixture at a ratio of 1:3 (w/w) were chosen as coating materials. The total solid content of the coating materials was kept constant at 10% (w/w). Capsules were obtained from powders of freeze-dried cherry laurels through an encapsulation process using either a silent crusher homogenizer or microfluidization. Freeze-dried cherry laurels were the core material, and the core-to-coating ratio was chosen as 1:10 by weight. To homogenize the mixture, a high-speed homogenizer was used at 4000 rpm for 5 min. Then, a silent crusher or microfluidizer was used to complete the encapsulation process. The mixtures were treated either by the silent crusher for 1 min at 75000 rpm or by the microfluidizer at 50 MPa for 3 passes. Freeze drying for 48 hours was applied to the emulsions to obtain capsules in powder form. After these steps, the dry capsules were ground manually into a fine powder. The microcapsules were analyzed for total antioxidant activity with the DPPH (1,1-diphenyl-2-picrylhydrazyl) radical scavenging method. Prior to high-speed homogenization, the samples were centrifuged (4000 rpm, 1 min). Centrifugation was found to have a positive effect on the total antioxidant activity of the capsules. Microcapsules treated by the microfluidizer were found to have higher total antioxidant activities than those treated by the silent crusher. It was found that increasing the whey protein concentration in the coating material (using the maltodextrin:whey protein 1:3 mixture) had a positive effect on total antioxidant activity for both the silent crusher and microfluidization methods. Therefore, microfluidization of centrifuged mixtures can be selected as the best condition for encapsulation of cherry laurel powder, judged by the total antioxidant activity of the resulting capsules. In this study, it was shown that capsules prepared by these methods can be recommended for incorporation into foods in order to enhance their functionality by increasing antioxidant activity.
Keywords: antioxidant activity, cherry laurel, microencapsulation, microfluidization
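DPPH radical scavenging activity is conventionally reported as the percentage decrease in absorbance (typically read at 517 nm) relative to a control. A minimal sketch of that calculation, with hypothetical absorbance readings, is shown below.

```python
def dpph_scavenging_percent(abs_control: float, abs_sample: float) -> float:
    """Percent inhibition of the DPPH radical: (A_control - A_sample) / A_control * 100."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical triplicate readings for one microcapsule formulation
control = 0.812
samples = [0.415, 0.398, 0.422]
activities = [dpph_scavenging_percent(control, a) for a in samples]
print(f"Mean antioxidant activity: {sum(activities) / len(activities):.1f} %")
```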
Procedia PDF Downloads 295
246 Healthy Architecture Applied to Inclusive Design for People with Cognitive Disabilities
Authors: Santiago Quesada-García, María Lozano-Gómez, Pablo Valero-Flores
Abstract:
The recent digital revolution, together with modern technologies, is changing the environment and the way people interact with inhabited space. However, the elderly are a very broad and varied group within society that faces serious difficulties in understanding these modern technologies. Outpatients with cognitive disabilities, such as those suffering from Alzheimer's disease (AD), are a distinct population within this cluster. This population group is in constant growth, and it has specific requirements for its inhabited space. According to architecture, which is one of the health humanities, environments should be designed to promote well-being and improve the quality of life for all. Buildings, as well as the tools and technologies integrated into them, must be accessible, inclusive, and foster health. In this new digital paradigm, artificial intelligence (AI) appears as an innovative resource to help this population group improve its autonomy and quality of life. Some experiences and solutions, such as those that interact with users through chatbots and voicebots, show the potential of AI in practical application. In the design of healthy spaces, the integration of AI in architecture will allow the living environment to become a kind of 'exo-brain' that can compensate for certain cognitive deficiencies in this population. The objective of this paper is to address, from the discipline of neuroarchitecture, how modern technologies can be integrated into everyday environments and become an accessible resource for people with cognitive disabilities. For this, the methodology has a mixed structure. On the one hand, from an empirical point of view, the research carries out a review of the existing literature on the applications of AI to built space, following critical-review foundations. As unconventional architectural research, an experimental analysis is proposed based on people with AD as a source of data, studying how the environment in which they live influences their regular activities. The results presented in this communication are part of the progress achieved in the competitive R&D&I project ALZARQ (PID2020-115790RB-I00). These outcomes are aimed at the specific needs of people with cognitive disabilities, especially those with AD, although, given the comfort and wellness the solutions entail, they can also be extrapolated to society as a whole. As a provisional conclusion, it can be stated that, in the immediate future, AI will be an essential element in the design and construction of healthy new environments. The discipline of architecture has the compositional resources, through this emerging technology, to build an 'exo-brain' capable of becoming a personal assistant for inhabitants, with whom it can interact proactively and contribute to their general well-being. The main objective of this work is to show how this is possible.
Keywords: Alzheimer's disease, artificial intelligence, healthy architecture, neuroarchitecture, architectural design
Procedia PDF Downloads 62
245 Anajaa-Visual Substitution System: A Navigation Assistive Device for the Visually Impaired
Authors: Juan Pablo Botero Torres, Alba Avila, Luis Felipe Giraldo
Abstract:
Independent navigation and mobility through unknown spaces pose a challenge for the autonomy of visually impaired people (VIP), who have relied on traditional assistive tools like the white cane and trained dogs. However, emerging visually assistive technologies (VAT) have proposed several human-machine interfaces (HMIs) that could improve VIP's ability for self-guidance. Here, we introduce the design and implementation of a visually assistive device, Anajaa – Visual Substitution System (AVSS). This system integrates ultrasonic sensors with custom electronics and computer vision models (convolutional neural networks) in order to achieve a robust system that acquires information about the surrounding space and transmits it to the user in an intuitive and efficient manner. AVSS consists of two modules: the sensing and the actuation module, which are fitted to a chest mount and a belt and communicate via Bluetooth. The sensing module was designed for the acquisition and processing of proximity signals provided by an array of ultrasonic sensors. The distribution of these within the chest mount allows an accurate representation of the surrounding space, discretized into three different levels of proximity over a range from 0 to 6 meters. Additionally, this module is fitted with an RGB-D camera used to detect potentially threatening obstacles, like staircases, using a convolutional neural network specifically trained for this purpose. Subsequently, the depth data is used to estimate the distance between the stairs and the user. The information gathered by this module is then sent to the actuation module, which creates an HMI by means of a 3x2 array of vibration motors that make up the tactile display and allow the system to deliver haptic feedback. The actuation module uses vibrational messages (tactones), changing both in amplitude and frequency, to deliver different awareness levels according to the proximity of the obstacle. This enables the system to deliver an intuitive interface. Both modules were tested under lab conditions, and the HMI was additionally tested with a focus group of VIP. The lab testing was conducted to establish the processing speed of the computer vision algorithms. This experimentation determined that the model can process 0.59 frames per second (FPS), which is considered adequate taking into account that the walking speed of VIP is 1.439 m/s. To test the HMI, we conducted a focus group composed of two females and two males between the ages of 35 and 65 years. The subject selection was aided by the Colombian Cooperative of Work and Services for the Sightless (COOTRASIN). We analyzed the learning process of the haptic messages throughout five experimentation sessions using two metrics: message discrimination and localization success. These correspond to the ability of the subjects to recognize different tactones and to locate them within the tactile display, both calculated as the mean across all subjects. Results show that the focus group achieved a message discrimination of 70% and a localization success of 80%, demonstrating how the proposed HMI leads to the appropriation and understanding of the feedback messages, enabling the users' awareness of their surrounding space.
Keywords: computer vision on embedded systems, electronic travel aids, human-machine interface, haptic feedback, visual assistive technologies, vision substitution systems
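A simplified sketch of how the sensing module's 0-6 m range could be discretized into three proximity levels and mapped to tactone parameters is given below. The level boundaries and the amplitude/frequency values are assumptions for illustration; the actual firmware is not reproduced.

```python
def proximity_level(distance_m: float) -> int:
    """Discretize an ultrasonic reading into 3 proximity levels over the 0-6 m range."""
    if distance_m < 2.0:
        return 2  # near: highest urgency
    if distance_m < 4.0:
        return 1  # mid
    return 0      # far (up to the 6 m sensing limit)

# Hypothetical tactone parameters per level: (vibration amplitude %, frequency Hz)
TACTONES = {2: (100, 250), 1: (60, 175), 0: (30, 100)}

for d in (0.8, 3.2, 5.5):
    lvl = proximity_level(d)
    amp, freq = TACTONES[lvl]
    print(f"{d:.1f} m -> level {lvl}: amplitude {amp} %, frequency {freq} Hz")
```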
Procedia PDF Downloads 83
244 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon
Authors: Jeffrey A. Amelse
Abstract:
Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ sequestration and biomass carbon sequestration. Short reviews are provided of technologies previously proposed for reducing CO₂ emissions from fossil fuels or for substitution by renewable energy, to focus on their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact. It will just cost money, scale-up is a huge challenge, and it will not be a complete solution. CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from continued use of fossil fuels; they will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂. Trees are a temporary solution: when they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering biomass carbon. Sequestering tree leaves is proposed as a solution. Unlike wood, leaves have a short Carbon Cycle time constant; they renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world's tree leaves could get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, to discourage instead of encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂. The latter can take hundreds to thousands of years. The first step in anaerobic decomposition is hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping the landfills dry and exploiting known inhibitors of anaerobic bacteria.
Keywords: carbon dioxide, net zero, sequestration, biomass, leaves
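Allometric foliage estimates generally take a power-law form, biomass = a·DBH^b, with species-specific coefficients. The sketch below illustrates only the arithmetic; the coefficients, the example tree, and the 50% carbon fraction are placeholder assumptions, not the USDA values used by the author.

```python
def foliage_biomass_kg(dbh_cm: float, a: float = 0.05, b: float = 2.0) -> float:
    """Power-law allometric estimate of foliage (leaf) biomass from trunk
    diameter at breast height (DBH). Coefficients a, b are placeholders;
    real values are species-specific and come from published allometric tables."""
    return a * dbh_cm ** b

# Dry foliage is often taken as roughly 50% carbon by mass
dbh = 30.0  # cm, hypothetical tree
leaves = foliage_biomass_kg(dbh)
print(f"Estimated foliage: {leaves:.1f} kg -> ~{0.5 * leaves:.1f} kg sequesterable carbon")
```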
Procedia PDF Downloads 130
243 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval
Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle
Abstract:
Information regarding the post-mortem interval (PMI) is vital in criminal investigations to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as 'grave-wax', is formed when post-mortem adipose tissue is converted into a solid material composed heavily of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation is able to slow down the decomposition process. Therefore, analysing the changes in the patterns of fatty acids during the early decomposition process may make it possible to estimate the period of burial, and hence the PMI. The current study investigated the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition are associated with the duration of burial and hence may be used to estimate PMI. Adipose tissue from the abdominal region of domestic pigs (Sus scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9) and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters using a gas chromatography system. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period. Levels rose (116 and 60.2 mg/mL, respectively) and then fell from the second week onward to reach 19.3 and 18.3 mg/mL, respectively, at week 6. Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively), followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in the concentrations, several new fatty acids appeared in the latter weeks, while other fatty acids which were detectable in the time-zero sample were lost in the latter weeks. There are several probable opportunities to utilise fatty acid analysis as a basic technique for approximating PMI: the quantification of marker fatty acids and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could be an additional tool to the techniques and methods already available for estimating the PMI of a corpse.
Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval
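Quantification against external standards, as described above, typically scales the analyte peak area by a calibration factor from the standard. A minimal single-point version of that calculation, with hypothetical peak areas, is sketched below.

```python
def concentration_mg_per_ml(area_sample: float, area_standard: float,
                            conc_standard: float) -> float:
    """Single-point external calibration: C_sample = A_sample / A_std * C_std."""
    return area_sample / area_standard * conc_standard

# Hypothetical GC peak areas for the C16:0 methyl ester at week 2
area_std, conc_std = 1.52e6, 50.0   # standard: area counts, mg/mL
area_week2 = 3.53e6
print(f"C16:0 at week 2: {concentration_mg_per_ml(area_week2, area_std, conc_std):.1f} mg/mL")
```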
Procedia PDF Downloads 132
242 Shift from Distance to In-Person Learning of Indigenous People's Schools during the COVID-19 Pandemic: Gains and Challenges
Authors: May B. Eclar, Romeo M. Alip, Ailyn C. Eay, Jennifer M. Alip, Michelle A. Mejica, Eloy C. Eclar
Abstract:
The COVID-19 pandemic has significantly changed the educational landscape of the Philippines. Among the groups most affected by these changes are the poor and those living in Geographically Isolated and Depressed Areas (GIDA), such as the Indigenous Peoples (IP). This was heavily experienced by the ten IP schools in Zambales, a province of the country. With this in mind, plus other factors relating to safety, the Schools Division of Zambales selected these ten schools to conduct the pilot implementation of in-person classes two (2) years after the country-wide school closures. This study aimed to explore the lived experiences of the school heads of the first ten Indigenous Peoples (IP) schools that shifted from distance learning to limited in-person learning, including the challenges met and the coping mechanisms they developed to overcome those challenges. The study is linked to experiential learning theory, as it focuses on the idea that the best way to learn is through experience. It made use of qualitative research, specifically phenomenology. All ten school heads from the IP schools were chosen as participants in the study. The participants underwent semi-structured interviews, both individual and focus group discussions, for triangulation. Data were analyzed through thematic analysis. As a result, the study found that most IP schools did not struggle to convince parents to send their children back to school, as the parents downplayed the pandemic threat due to their geographical location. The parents struggled the most during modular learning, since many of them are either illiterate, too old to teach their children, busy with their lands, or have too many children to teach. Moreover, there is a meager vaccination rate in the ten barangays where the schools are located because of local beliefs. In terms of financial needs, school heads did not find the shift difficult, even though funding was needed to adjust the schools to the new normal, because of the financial support coming from the central office. Technical assistance was also provided to the schools by division personnel. Teachers also welcomed the idea of shifting back to in-person classes; minor challenges were met but were solved immediately through various mechanisms. Learning losses were evident, since most learners struggled with essential reading, writing, and counting skills. Although the community has positively received the conduct of in-person classes, the challenges these IP schools had been experiencing pre-pandemic were also exacerbated by the school closures. It is therefore recommended that constant monitoring and provision of support continue, to solve the other challenges the ten IP schools are still experiencing with in-person classes.
Keywords: in-person learning, indigenous peoples, phenomenology, Philippines
Procedia PDF Downloads 111
241 Application of Geosynthetics for the Recovery of a Road Located on a Geological Failure
Authors: Rideci Farias, Haroldo Paranhos
Abstract:
The present work deals with the use of a drainage geocomposite as a deep drainage element and a geogrid to reinforce the base of the embankment supporting the road pavement over geological faults in a stretch of the TO-342 highway, between the cities of Miracema and Miranorte in the State of Tocantins, Brazil, which for many years was the main link between TO-010 and BR-153 beyond the city of Palmas, also in the State of Tocantins. For this application, geotechnical and geological studies were carried out by means of SPT percussion drilling and rotary drilling to understand the problem, identifying the type of faults and the filling material and defining the water table. According to the geological and geotechnical studies carried out, the area where the route was defined passes through a fault zone running longitudinally to the roadway, with strong fracturing, the presence of voids, intense alteration with advanced argillization of the rock, and partial filling of the faults by organic, compressible soils leached from other horizons. As a geotechnical aggravating factor, this geology presents a medium with a high hydraulic head and very low penetration resistance. For more than 20 years, the region presented constant excessive deformations in the upper layers of the pavement, despite routine regularization, reshaping, and recompaction of the layers and application of the asphalt coating. The faults quickly propagated to the surface of the asphalt pavement, generating a longitudinal shear and forming steps (unevenness) of close to 40 cm, causing numerous accidents and discomfort to drivers, since the alignment lay on a horizontal curve. Several projects were presented to the region's highway department to solve the problem. Given the need for partial closure of the roadway and the short execution time, the use of geosynthetics was proposed, and the most suitable solution took into account the movement of the existing geological faults and the position of the water table in relation to the pavement layers and the faults. In order to avoid any flow of water into the body of the embankment and into the fault-filling material, a drainage curtain was installed at 4.0 meters depth using the drainage geocomposite, and, as a reinforcement element inhibiting the reflection of possible fault movements, a geogrid with a tensile strength of 200 kN/m was inserted at the base of the reconstituted embankment. Recent evaluations, after 13 years of service of the solution, show the efficiency of the technique used, supported by the geotechnical studies carried out in the area.
Keywords: geosynthetics, geocomposite, geogrid, road, recovery, geological failure
Procedia PDF Downloads 170
240 Innovating Electronics Engineering for Smart Materials Marketing
Authors: Muhammad Awais Kiani
Abstract:
The field of electronics engineering plays a vital role in the marketing of smart materials. Smart materials are innovative, adaptive materials that can respond to external stimuli, such as temperature, light, or pressure, in order to enhance performance or functionality. As the demand for smart materials continues to grow, it is crucial to understand how electronics engineering can contribute to their marketing strategies. This abstract presents an overview of the role of electronics engineering in the marketing of smart materials. It explores the various ways in which electronics engineering enables the development and integration of smart features within materials, enhancing their marketability. Firstly, electronics engineering facilitates the design and development of sensing and actuating systems for smart materials. These systems enable the detection of and response to external stimuli, providing valuable data and feedback to users. By integrating sensors and actuators into materials, their functionality and performance can be significantly enhanced, making them more appealing to potential customers. Secondly, electronics engineering enables the creation of smart materials with wireless communication capabilities. By incorporating wireless technologies such as Bluetooth or Wi-Fi, smart materials can seamlessly interact with other devices, providing real-time data and enabling remote control and monitoring. This connectivity enhances the marketability of smart materials by offering convenience, efficiency, and improved user experience. Furthermore, electronics engineering plays a crucial role in power management for smart materials. Implementing energy-efficient systems and power harvesting techniques ensures that smart materials can operate autonomously for extended periods. This aspect not only increases their market appeal but also reduces the need for constant maintenance or battery replacements, thus enhancing customer satisfaction. Lastly, electronics engineering contributes to the marketing of smart materials through innovative user interfaces and intuitive control mechanisms. By designing user-friendly interfaces and integrating advanced control systems, smart materials become more accessible to a broader range of users. Clear and intuitive controls enhance the user experience and encourage wider adoption of smart materials in various industries. In conclusion, electronics engineering significantly influences the marketing of smart materials by enabling the design of sensing and actuating systems, wireless connectivity, efficient power management, and user-friendly interfaces. The integration of electronics engineering principles enhances the functionality, performance, and marketability of smart materials, making them more adaptable to the growing demand for innovative and connected materials in diverse industries.
Keywords: electronics engineering, smart materials, marketing, power management
Procedia PDF Downloads 59
239 Effect of Accelerated Aging on Antibacterial and Mechanical Properties of SEBS Compounds
Authors: Douglas N. Simoes, Michele Pittol, Vanda F. Ribeiro, Daiane Tomacheski, Ruth M. C. Santana
Abstract:
Thermoplastic elastomer (TPE) compounds are used in a wide range of applications, like home appliances, automotive components, medical devices, footwear, and others. These materials are susceptible to microbial attack, which causes cracking of the polymer chains. Compounds based on SEBS copolymers, poly(styrene-b-(ethylene-co-butylene)-b-styrene), are a class of TPE largely used in domestic appliances, for example in refrigerator seals (gaskets), bath mats and sink squeegees. Moisture present in some areas (such as the shower area and sink), in addition to organic matter, provides favorable conditions for microbial survival and proliferation, contributing to the spread of diseases as well as to the reduction of product life cycle due to the biodegradation process. Zinc oxide (ZnO) has been studied as an alternative antibacterial additive due to its biocidal effect. It is important to know the influence of these additives on the properties of the compounds, both at the beginning of and during the life cycle. In that sense, the aim of this study was to evaluate the effect of accelerated aging in an oven on the antibacterial and mechanical properties of ZnO-loaded SEBS-based TPE compounds. Two different commercial zinc oxides, designated WR and Pe, were used at a proportion of 1%. A compound with no antimicrobial additive (standard) was also tested. The compounds were prepared using a co-rotating twin-screw extruder (L/D ratio of 40/1 and 16 mm screw diameter). The extrusion parameters were kept constant for all materials: the screw rotation rate was set at 226 rpm, with a temperature profile from 150 to 190 ºC. Test specimens were prepared using an injection molding machine at 190 ºC. The Standard Test Method for Rubber Property—Effect of Liquids was applied in order to simulate the exposure of the TPE samples to detergent ingredients during service. For this purpose, the ZnO-loaded TPE samples were immersed in a 3.0% w/v neutral detergent solution and aged in an oven at 70 °C for 7 days. Compounds were characterized by changes in mechanical properties (hardness and tensile properties) and in mass. The Japanese Industrial Standard (JIS) Z 2801:2010 was applied to evaluate the antibacterial properties against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli). The microbiological tests showed a reduction of up to 42% in the E. coli and up to 49% in the S. aureus population in non-aged samples. Variations in elongation and hardness values were observed with the addition of zinc oxide. The changes in tensile strength at rupture and in mass were not significant between non-aged and aged samples.
Keywords: antimicrobial, domestic appliance, SEBS, zinc oxide
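JIS Z 2801 expresses efficacy as the antimicrobial activity value R = log(B) − log(C), where B and C are the viable counts on untreated and treated specimens after 24 h of incubation; the percent reductions quoted above follow from the same counts. A sketch with hypothetical counts:

```python
import math

def jis_z2801(untreated_cfu: float, treated_cfu: float):
    """Antimicrobial activity value R and percent reduction per JIS Z 2801,
    from viable counts (CFU) after 24 h on untreated vs treated specimens."""
    r_value = math.log10(untreated_cfu) - math.log10(treated_cfu)
    pct_reduction = (untreated_cfu - treated_cfu) / untreated_cfu * 100.0
    return r_value, pct_reduction

# Hypothetical E. coli counts on the standard vs a 1% ZnO compound
r, pct = jis_z2801(untreated_cfu=2.0e5, treated_cfu=1.16e5)
print(f"R = {r:.2f}, reduction = {pct:.0f} %")  # ~42%, matching the E. coli result
```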
Procedia PDF Downloads 247
238 Extraction and Electrochemical Behaviors of Au(III) Using Phosphonium-Based Ionic Liquids
Authors: Kyohei Yoshino, Masahiko Matsumiya, Yuji Sasaki
Abstract:
Recently, studies have been conducted on Au(III) extraction using ionic liquids (ILs) as extractants or diluents. ILs based on piperidinium, pyrrolidinium, and pyridinium cations have been studied as extractants for noble metal extraction. Furthermore, the polarity, hydrophobicity, and solvent miscibility of these ILs can be adjusted depending on their intended use; these unique properties make ILs functional extraction media. The extraction mechanism of Au(III) using phosphonium-based ILs and the relevant thermodynamic studies are yet to be reported. In the present work, we focused on the mechanism of Au(III) extraction and the related thermodynamic analyses using phosphonium-based ILs. Triethyl-n-pentyl-, triethyl-n-octyl-, and triethyl-n-dodecylphosphonium bis(trifluoromethylsulfonyl)amide, [P₂₂₂ₓ][NTf₂] (X = 5, 8, and 12), were investigated for Au(III) extraction. The IL–Au complex was identified as [P₂₂₂₅][AuCl₄] using UV–Vis–NIR and Raman spectroscopic analyses. The extraction behavior of Au(III) was investigated as the [P₂₂₂ₓ][NTf₂] concentration was varied from 1.0 × 10⁻⁴ to 1.0 × 10⁻¹ mol dm⁻³. The results indicate that Au(III) can be easily extracted by an anion-exchange reaction in the [P₂₂₂ₓ][NTf₂] IL. The slope of 0.96–1.01 in the plot of log D versus log[P₂₂₂ₓ][NTf₂] indicates the association of one mole of IL with one mole of [AuCl₄]⁻ during extraction. Consequently, [P₂₂₂ₓ][NTf₂] acts as an anion-exchange extractant for the extraction of Au(III) in anionic form from chloride media, and extraction with this type of phosphonium-based IL proceeds via an anion-exchange reaction with Au(III). To evaluate the thermodynamic parameters of the Au(III) extraction, the equilibrium constant (log Kₑₓ') was determined from its temperature dependence. The plot of the natural logarithm of Kₑₓ' versus the inverse of the absolute temperature (T⁻¹) yields a slope proportional to the enthalpy (ΔH). Plotting T⁻¹ versus ln Kₑₓ' gave lines with slopes in the range 1.129–1.421. The results thus indicated that the extraction reaction of Au(III) using the [P₂₂₂ₓ][NTf₂] ILs (X = 5, 8, and 12) is exothermic (ΔH = −9.39 to −11.81 kJ mol⁻¹). The negative values of TΔS (−4.20 to −5.27 kJ mol⁻¹) indicate that microscopic randomness is more favoured in the [P₂₂₂₅][NTf₂] extraction system than in [P₂₂₂₁₂][NTf₂]. The total negative change in Gibbs energy (−5.19 to −6.55 kJ mol⁻¹) for the extraction reaction is thus relatively influenced by the dependence of the TΔS term on the number of carbon atoms in the alkyl side chain, even though ΔH contributes significantly to the overall negative change in Gibbs energy. Electrochemical analysis revealed that the extracted Au(III) can be reduced in two steps: (i) Au(III)/Au(I) and (ii) Au(I)/Au(0). The diffusion coefficients of the extracted Au(III) species in [P₂₂₂ₓ][NTf₂] (X = 5, 8, and 12) were evaluated from 323 to 373 K using semi-integral and semi-differential analyses. Because of the increasing viscosity of the IL medium, the diffusion coefficient of the extracted Au(III) decreases with increasing alkyl chain length. The Au 4f₇/₂ spectrum obtained by X-ray photoelectron spectroscopy revealed that the Au electrodeposits obtained after 10 cycles of continuous extraction and electrodeposition were in the metallic state.
Keywords: Au(III), electrodeposition, phosphonium-based ionic liquids, solvent extraction
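The van 't Hoff treatment described above follows ln K = −ΔH/(RT) + ΔS/R, with ΔG = −RT ln K. A minimal sketch of the regression is shown below; the (K, T) pairs are made up for illustration and are not the study's data.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical equilibrium constants at several temperatures (not the study's data)
T = np.array([298.0, 308.0, 318.0, 328.0])   # K
K = np.array([12.0, 10.4, 9.1, 8.1])         # Kex' (dimensionless)

slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH = -R * slope                    # enthalpy, J/mol (negative => exothermic)
dS = R * intercept                 # entropy, J/(mol K)
dG_298 = -R * 298.0 * np.log(K[0])  # Gibbs energy at 298 K

print(f"dH = {dH/1000:.2f} kJ/mol, dS = {dS:.1f} J/(mol K), "
      f"dG(298 K) = {dG_298/1000:.2f} kJ/mol")
```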
Procedia PDF Downloads 107
237 Labor Welfare and Social Security
Authors: Shoaib Alvi
Abstract:
Mahatma Gandhi said, "Man becomes great exactly in the degree in which he works for the welfare of his fellow-men." Labor welfare is an important facet of industrial relations. With the growth of industrialization, mechanization and computerization, labor welfare measures have received a fillip. The author believes that labor welfare includes the provision of various facilities and amenities in and around the workplace for the better life of the workers. Labor welfare is, thus, one of the major determinants of industrial relations; it comprises all human efforts at the workplace for the better life of the worker. The social and economic aspects of the life of the workers have a direct influence on the social and economic development of the nation. The author suggests that there can be multiple objectives in having a labor welfare programme: concern for improving the lot of the workers, a philosophy of humanitarianism or internal social responsibility, and a feeling of concern and caring expressed by providing some of life's basic amenities besides the basic pay packet. Such caring is supposed to build a sense of loyalty on the part of the employee towards the organization. Social security, in the author's view, is the security that the State furnishes against the risks which an individual of small means cannot today stand up to by himself, even in private combination with his fellows. Social security is one of the pillars on which the structure of a welfare state rests, and it constitutes the hard core of social policy in most countries. It is through social security measures that the state attempts to maintain every citizen at a certain prescribed level below which no one is allowed to fall. According to the author, social assistance is a method by which benefits are given to needy persons, fulfilling the prescribed conditions, by the government out of its own resources. The author analyzes and studies the relationship between labor welfare and social security, along with the various international conventions on the provision of social security adopted by international authorities such as the United Nations, the International Labour Organization, and the European Union. The author has also studied and analyzed the labor welfare and social security schemes of many countries around the globe, e.g., social security in Australia, social security in Switzerland, Social Security in the United States, the Mexican Social Security Institute, welfare in Germany, and the social security schemes of India for labor welfare in both the organized and unorganized sectors. In this research paper, the author studies the conceptual framework of labor welfare. According to the author, labor is highly perishable and needs constant welfare measures for upgrading and for performance in this field. Finally, the author studies the role of trade unions, labor welfare unions, and other institutions working for labor welfare; in this research paper the author also identifies the problems these unions and labor welfare bodies face, tries to find solutions to those problems, and analyzes the various steps taken for labor welfare by the governments of various countries around the globe.
Keywords: labor welfare, internal social responsibility, social security, international conventions
Procedia PDF Downloads 577
236 Assessment of Indoor Air Pollution in Naturally Ventilated Dwellings of Mega-City Kolkata
Authors: Tanya Kaur Bedi, Shankha Pratim Bhattacharya
Abstract:
The US Environmental Protection Agency defines indoor air quality as "the air quality within and around buildings, especially as it relates to the health and comfort of building occupants". According to a 2021 report by the Energy Policy Institute at the University of Chicago, residents of India, a country which is home to some of the highest levels of air pollution in the world, lose about 5.9 years of life expectancy due to poor air quality; yet the country has numerous dwellings dependent on natural ventilation. The urban population currently spends 90% of its time indoors, a scenario that raises concern for occupant health and well-being. This study attempts to demonstrate the causal relationship between indoor air pollution and its determining aspects. Detailed indoor air pollution audits were conducted in residential buildings located in Kolkata, India in the months of December and January 2021. According to the air pollution knowledge assessment city program in India, Kolkata is also the second most polluted mega-city after Delhi. Although air pollution levels are alarming year-round, the winter months are the most crucial due to unfavourable environmental conditions: while emissions typically remain constant throughout the year, cold air is denser and moves more slowly than warm air, trapping pollution in place for much longer, so that it is consequently breathed in at a higher rate than in the summer. The air pollution monitoring period was selected considering these environmental factors and major pollution contributors like traffic and road dust. This study focuses on the relationship between the built environment and the spatial-temporal distribution of air pollutants in and around it. The measured parameters include temperature, relative humidity, air velocity, particulate matter, volatile organic compounds, formaldehyde, and benzene. A total of 56 rooms were audited, selectively targeting the dominant middle-income group in the urban area of the metropolis. Data collection was conducted using a set of instruments positioned in the human breathing zone. The study assesses the relationship between indoor air pollution levels and the factors determining natural ventilation and air pollution dispersion, such as the surrounding environment, dominant wind, openable-window-to-floor-area ratio, windward or leeward side openings, the natural ventilation type in the room (single-sided or cross-ventilation), floor height, residents' cleaning habits, and so on.
Keywords: indoor air quality, occupant health, air pollution, architecture, urban environment
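One simple way to test the relationships described above is to correlate each ventilation factor with the measured pollutant levels across the audited rooms. The sketch below computes a Pearson coefficient on made-up per-room data; the study's own dataset and statistical treatment are not reproduced here.

```python
import numpy as np

# Hypothetical per-room data: openable window-to-floor-area ratio vs indoor PM2.5
wfr  = np.array([0.05, 0.08, 0.10, 0.12, 0.15, 0.18, 0.20, 0.25])  # ratio
pm25 = np.array([185., 172., 160., 158., 140., 150., 128., 118.])  # ug/m3

r = np.corrcoef(wfr, pm25)[0, 1]
print(f"Pearson r (window ratio vs PM2.5): {r:.2f}")  # negative => more opening, lower PM2.5
```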
Procedia PDF Downloads 108
235 Encouraging the Uptake of Entrepreneurship by Graduates of Higher Education Institutions in South Africa
Authors: Chux Gervase Iwu, Simon Nsengimane
Abstract:
Entrepreneurship stimulates socio-economic development in many countries, if not all. It creates jobs and decreases unemployment and inequality. Other benefits also accrue from entrepreneurship, namely the empowerment of women and the promotion of better livelihoods. Innovation has become a weapon for business competition, growth, and sustainability. Paradoxically, it is also a threat to businesses, because products can be duplicated, and new products may decrease the market share of existing ones or remove them from the market entirely. This creates a constantly competitive environment that calls for updates, innovation, and the invention of new products and services. Thus, the importance of higher education in instilling a good entrepreneurial mindset in students has become even more critical. It can be argued that the business environment is under enormous pressure from several factors, including the fourth industrial revolution, which calls for the adoption and use of information and communication technology, the catalyst for many innovations and organisational changes. Therefore, it is crucial that higher education students are equipped with relevant knowledge and skills to respond effectively to the needs of the business environment and to create a vibrant entrepreneurship ecosystem. In South Africa, entrepreneurship education, or some form of it, has been the privilege of economic and management fields of study, leaving other fields behind. Entrepreneurship should not be limited to business faculties but rather extended to other fields of study. This is perhaps one reason for the low levels of entrepreneurship uptake among South African graduates when compared with graduates in other countries. There may be other reasons for the low entrepreneurship uptake. Some of these have been documented in the extant literature: (1) not enough time is spent teaching entrepreneurship in the business faculties, (2) the skills components in the curricula are insufficient, and (3) the overall attitudes/mindsets necessary to establish and run sustainable enterprises seem absent. Therefore, four important areas are recognised as crucial for the effective implementation of entrepreneurship education: policy, private sector engagement, curriculum development, and teacher development. The purpose of this research is to better comprehend the views, aspirations, and expectations of students and faculty members in order to design an entrepreneurial teaching model for higher education institutions. A qualitative method will be used, with purposive interviews conducted among undergraduate and graduate students in selected higher institutions. Members of faculty will also be included in the sample, as well as, where possible, two or more government personnel responsible for higher education policy development. At present, interpretative analysis is proposed for the analysis of the interviews, with the support of Atlas.ti. It is hoped that an entrepreneurship education model for the South African context will be realised through this study.
Keywords: entrepreneurship education, higher education institution, graduate unemployment, curriculum development
Procedia PDF Downloads 79
234 Light-Controlled Gene Expression in Yeast
Authors: Peter M. Kusen, Georg Wandrey, Christopher Probst, Dietrich Kohlheyer, Jochen Büchs, Jörg Pietruszka
Abstract:
Light as a stimulus provides the capability to develop regulation techniques for customizable gene expression. A great advantage is the extremely flexible and accurate dosing that can be performed in a non-invasive and sterile manner, even for high-throughput technologies. Therefore, light regulation in a multiwell microbioreactor system was realized, providing the opportunity to control gene expression with outstanding complexity. A light-regulated gene expression system in Saccharomyces cerevisiae was designed applying the strategy of caged compounds. These are regulator molecules carrying a photo-labile protecting group that renders them biologically inactive until they are reactivated by irradiation with suitable light. The "caging" of a repressor molecule that is consumed after deprotection was essential to create a flexible expression system. Gene expression could thereby be temporarily repressed by irradiation and the resulting release of the active repressor molecule; afterwards, the repressor molecule is consumed by the yeast cells, leading to reactivation of gene expression. A yeast strain harboring a construct with the corresponding repressible promoter in combination with a fluorescent marker protein was used in a Photo-BioLector platform, which allows individual irradiation as well as online fluorescence and growth detection. This device was used to precisely control the repression duration by adjusting the amount of released repressor via different irradiation times. With the presented screening platform, the regulation of complex expression procedures was achieved by combining several repression/derepression intervals. In particular, a stepwise increase of temporally constant expression levels was demonstrated, which could be used to study concentration-dependent effects on cell functions. Linear expression rates with variable slopes could also be shown, representing a possible solution for challenging protein production in which excessive production rates lead to misfolding or intoxication. Finally, the very flexible regulation enabled accurate control over the induction of expression, even though a repressible promoter was used. Summing up, continuous online regulation of gene expression has the potential to synchronize gene expression levels to optimize metabolic flux, artificial enzyme cascades, growth rates for co-cultivations, and many other applications that depend on complex expression regulation. The developed light-regulated expression platform represents an innovative screening approach for finding optimization potential in production processes.
Keywords: caged compounds, gene expression regulation, optogenetics, photo-labile protecting group
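The regulation logic described above (light releases a dose of active repressor, the cells consume it, and expression resumes once it is depleted) can be caricatured by a toy simulation like the one below. All rates, doses, and thresholds are invented for illustration; this is not the authors' model of the system.

```python
# Toy simulation: caged-repressor release and consumption gating expression.
dt = 0.1                                # time step, h
steps = 300                             # 30 h total
consumption_rate = 0.5                  # repressor units consumed per h by the cells
expression_rate = 10.0                  # arbitrary fluorescence units per h when derepressed
threshold = 0.05                        # repressor level below which the promoter is active
light_pulses = {5.0: 2.0, 15.0: 4.0}    # irradiation time (h) -> repressor dose released

repressor, signal = 0.0, 0.0
for step in range(steps):
    t = step * dt
    # Instantaneous uncaging: a light pulse releases a dose of active repressor
    for t_pulse, dose in light_pulses.items():
        if abs(t - t_pulse) < dt / 2:
            repressor += dose
    repressor = max(0.0, repressor - consumption_rate * dt)  # consumption by cells
    if repressor < threshold:
        signal += expression_rate * dt                       # derepressed: expression proceeds

# The two pulses carve repression windows of ~4 h and ~8 h out of the 30 h run
print(f"final reporter signal: {signal:.0f}")
```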
Procedia PDF Downloads 329
233 Groundwater Quality Assessment in the Vicinity of Tannery Industries in Warangal, India
Authors: Mohammed Fathima Shahanaaz, Shaik Fayazuddin, M. Uday Kiran
Abstract:
Groundwater quality is deteriorating day by day in different parts of the world for various reasons: toxic chemicals are discharged without proper treatment into inland water bodies and onto land, which in turn adds pollutants to the groundwater. In this kind of situation, rural communities which do not have municipal drinking water have to rely on groundwater for various uses, even though it is polluted. The tannery industry is one of the major industries providing economic output and employment in India. Since most developed countries have stopped using toxic chemicals, the tanning industry, which uses chromium as its major element, is being shifted towards developing countries. Most of the tanning industries in India are found in clusters concentrated mainly in the states of Tamil Nadu, West Bengal, and Uttar Pradesh, and in a few places in Punjab. Limited work exists on the tanneries of Warangal. There is a group of 18 tanneries in the Desaipet, Enamamula region of Warangal, of which 4 are involved in the dry process and are less responsible for groundwater pollution. These tannery units discharge their effluents, after treatment, into Sai Cheruvu. Although the effluents are treated before discharge, Sai Cheruvu has turned pink, with elevated levels of BOD, COD, chromium, chlorides, total hardness, TDS and sulphates. An attempt was made to analyse groundwater samples around this polluted Sai Cheruvu region, since the literature shows that a single tannery can pollute groundwater to a radius of 7-8 km from the point of disposal. Samples were collected from 6 different locations around Sai Cheruvu. Analysis was performed to determine various constituents in the groundwater, such as pH, EC, TDS, TH, Ca²⁺, Mg²⁺, HCO₃⁻, Na⁺, K⁺, Cl⁻, SO₄²⁻, NO₃⁻, F⁻ and Cr⁶⁺. The analysis gave values greater than permissible limits for these constituents; even chromium is present in the groundwater samples at levels exceeding permissible limits. People in Paidepally and Sardharpeta villages have already stopped using the groundwater. They buy bottled water for drinking purposes. Although they no longer use the groundwater for drinking, complaints are also made about using this water for washing. A simple and efficient treatment process should therefore be adopted for the groundwater. In this study, rice husk silica (RHS) is used to treat the pollutants in the groundwater, with varying RHS dosages and contact times. The rice husk is treated, dried, and placed in a muffle furnace for 6 hours at 650°C. Reductions in total hardness, chloride and chromium levels were observed after the application of RHS. The pollutants reached permissible limits at dosages of 27.5 mg/L and 50 mg/L for a contact time of 130 min at constant pH and temperature.
Keywords: chromium, groundwater, rice husk silica, tanning industries
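Treatment performance of this kind is usually summarized as percent removal relative to the initial concentration. The sketch below illustrates the calculation for chromium at the two reported dosages; the initial and final concentrations are hypothetical values, not the study's measurements.

```python
def percent_removal(c_initial: float, c_final: float) -> float:
    """Removal efficiency of an adsorption treatment: (Ci - Cf) / Ci * 100."""
    return (c_initial - c_final) / c_initial * 100.0

# Hypothetical Cr(VI) concentrations (mg/L) before/after RHS treatment, 130 min contact
for dosage, (ci, cf) in {27.5: (0.42, 0.09), 50.0: (0.42, 0.04)}.items():
    print(f"RHS dosage {dosage} mg/L: removal = {percent_removal(ci, cf):.0f} %")
```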
Procedia PDF Downloads 202
232 Dependence of Densification, Hardness and Wear Behaviors of Ti6Al4V Powders on Sintering Temperature
Authors: Adewale O. Adegbenjo, Elsie Nsiah-Baafi, Mxolisi B. Shongwe, Mercy Ramakokovhu, Peter A. Olubambi
Abstract:
The sintering step in powder metallurgy (P/M) processes is very sensitive, as it determines to a large extent the properties of the final component produced. Spark plasma sintering has been used extensively over the past decade for consolidating a wide range of materials, including metallic alloy powders. This novel, non-conventional sintering method has proven advantageous, offering full densification of materials, high heating rates, low sintering temperatures, and short sintering cycles compared with conventional sintering methods. Ti6Al4V is adjudged the most widely used α+β alloy due to its impressive mechanical performance in service environments, especially in the aerospace and automobile industries, being a light metal alloy with the capacity for the fuel efficiency needed in these industries. The P/M route has been a promising method for the fabrication of parts made from Ti6Al4V alloy due to its reduction of cost and material loss and its ability to produce near-net and intricate shapes. However, the use of this alloy has been largely limited by its relatively poor hardness and wear properties. The effect of sintering temperature on the densification, hardness, and wear behavior of spark plasma sintered Ti6Al4V powders was investigated in the present study. Sintering of the alloy powders was performed over the 650–850°C temperature range at a constant heating rate, applied pressure and holding time of 100°C/min, 50 MPa and 5 min, respectively. Density measurements were carried out according to Archimedes' principle, and microhardness tests were performed on sectioned, as-polished surfaces at a load of 100 gf and a dwell time of 15 s. Dry sliding wear tests were performed at varied sliding loads of 5, 15, 25 and 35 N using the ball-on-disc tribometer configuration with WC as the counterface material. Microstructural characterization of the sintered samples and wear tracks was carried out using SEM and EDX techniques. The density and hardness of the sintered samples increased with increasing sintering temperature. Near-full densification (99.6% of the theoretical density) and a Vickers micro-indentation hardness of 360 HV were attained at 850°C. The coefficient of friction (COF) and wear depth improved significantly with increased sintering temperature under all the loading conditions examined, except at 25 N, indicating better mechanical properties at high sintering temperatures. Worn surface analyses showed that the wear mechanism was a synergy of adhesive and abrasive wear, although the former was prevalent.
Keywords: hardness, powder metallurgy, spark plasma sintering, wear
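Density by Archimedes' principle, as used above, follows from the dry and immersed masses of the sample. A minimal sketch of the relative-density calculation is shown below; the masses are hypothetical, and the theoretical density of Ti6Al4V is taken as a commonly cited value of about 4.43 g/cm³.

```python
def archimedes_density(m_dry_g: float, m_immersed_g: float,
                       rho_fluid: float = 0.9975) -> float:
    """Bulk density from dry and immersed masses (water at ~23 C by default):
    rho = m_dry / (m_dry - m_immersed) * rho_fluid."""
    return m_dry_g / (m_dry_g - m_immersed_g) * rho_fluid

RHO_TI64_THEORETICAL = 4.43  # g/cm^3, commonly cited for Ti6Al4V

rho = archimedes_density(m_dry_g=12.405, m_immersed_g=9.596)
relative = rho / RHO_TI64_THEORETICAL * 100.0
print(f"bulk density = {rho:.3f} g/cm^3 -> {relative:.1f} % of theoretical")
```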
Procedia PDF Downloads 275231 Sprinting Beyond Sexism and Gender Stereotypes: Indian Women Fans' Experiences in the Sports Fandom
Authors: Siddhi Deshpande, Jo Jo Chacko Eapen
Abstract:
Although almost half of India's female population engages in watching sports, women's experiences in sports fandom are concealed by 'traditional masculinity,' leading to potential exclusion and harassment. To explore these experiences in depth, this qualitative study aims to understand what coping strategies Indian women fans employ to sustain their team identification. Employing criterion sampling, participants were screened using the Sports Spectators Identification Scale (SSIS) to assess team identification and a brief sexism questionnaire to confirm participants' experience of sexism, in line with the purpose of the study. The participants were Indian women who had been following a sport for more than eight years, were fluent in English, and were not professionals in sports. Ten highly identified fans with gendered experiences were recruited for one-on-one, semi-structured, in-depth interviews. The data were analyzed using Interpretative Phenomenological Analysis (IPA) to understand the lived experiences of women fans facing sexism and gender stereotypes, revealing the superordinate themes of (1) Ontogenesis and Emotional Investment; (2) Gendered Expectations and Sexism; (3) Coping Strategies and Resilience; (4) Identity, Femininity, Empowerment; and (5) Advocacy for Equality and Inclusivity. The findings show that Indian women fans experience social exclusion, harassment, sexualization, and commodification in both online and offline fandoms, where they are disproportionately targeted with threats, misogynistic comments, and attraction-based assumptions that question their 'authenticity' as fans because of their gender. Women fans alternate between proactive strategies of assertiveness, humor, and knowledge demonstration and defensive strategies of selective engagement, self-regulatory censorship, and desensitization to deal with sexism. In this interplay, the integration of women's 'fan identity' with their self-concept shows how being a sports fan adds meaning to their lives despite constant scrutiny in a male-dominated space, reflecting that femininity and sports fandom should coexist. As a result, they find refuge in female fan communities with similar experiences and advocate for an equal and inclusive environment in which sport stands above gender, and not the other way around. A key practical implication of this research is enabling sports organizations to develop inclusive fan engagement policies that actively encourage female fan participation. This includes sensitizing stadium staff and security personnel, promoting gender-neutral language, and, most importantly, establishing safety protocols to protect female fans from adverse experiences in the fandom.Keywords: coping strategies, female sports fans, femininity, gendered experiences, team identification
Procedia PDF Downloads 60230 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times can be studied; this is the basis of the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment, and its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics and at which the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion, yet highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. The evidence of quantum fluctuations at this stage also provides a means of studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level that is referred to as the "base energy." The governing principles of the base energy are discussed in detail in the second paper of the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The concept proposed in this research series therefore provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution
Procedia PDF Downloads 102229 Conflict Resolution in Fuzzy Rule Base Systems Using Temporal Modalities Inference
Authors: Nasser S. Shebka
Abstract:
Fuzzy logic is used in complex adaptive systems where classical tools of knowledge representation are unproductive. Nevertheless, the incorporation of fuzzy logic, as is the case with all artificial intelligence tools, has raised some inconsistencies and limitations in dealing with systems of increased complexity and with rules that apply to real-life situations; this hinders the inference process of such systems and also creates inconsistencies between the inferences generated by the fuzzy rules of complex or imprecise knowledge-based systems. The use of fuzzy logic enhanced the capability of knowledge representation in applications that require fuzzy representation of truth values or similar multi-valued constant parameters derived from multi-valued logic. This set the basis for the three basic t-norms and the connectives based on them; these t-norms are continuous functions, and any other continuous t-norm can be described as an ordinal sum of these three basic ones. Some attempts to solve this dilemma altered fuzzy logic by means of non-monotonic logic, which is used to deal with the defeasible inference of expert-system reasoning, for example to allow inference retraction upon additional data. However, even the introduction of non-monotonic fuzzy reasoning faces the major issue of conflict resolution, for which many principles have been introduced, such as the specificity principle and the weakest-link principle. The aim of our work is to improve the logical representation and functional modelling of AI systems by presenting a method for resolving existing and potential rule conflicts through the representation of temporal modalities within defeasible inference rule-based systems. Our paper investigates the possibility of resolving fuzzy rule conflicts in a non-monotonic fuzzy reasoning-based system by introducing temporal modalities and Kripke's general weak modal logic operators, in order to expand its knowledge representation capabilities through flexibility in classifying newly generated rules and, hence, resolving potential conflicts between these fuzzy rules. We were able to address this problem by restructuring the inference process of the fuzzy rule-based system. This is achieved by using time-branching temporal logic in combination with restricted first-order logic quantifiers, as well as propositional logic to represent classical temporal modality operators. The resulting findings not only enhance the flexibility of the inference process in complex rule-based systems but also contribute to the fundamental methods of building rule bases in a manner that allows for a wider range of applicable real-life situations from a quantitative and qualitative knowledge representation perspective.Keywords: fuzzy rule-based systems, fuzzy tense inference, intelligent systems, temporal modalities
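The three basic continuous t-norms mentioned in the abstract are standard: minimum (Gödel), product, and Łukasiewicz. A minimal sketch of how they combine fuzzy truth values follows; the sample values are illustrative only.

```python
# Minimal sketch of the three basic continuous t-norms used as fuzzy
# conjunctions; every other continuous t-norm is an ordinal sum of these.
def t_min(a, b):       # Goedel (minimum) t-norm
    return min(a, b)

def t_prod(a, b):      # product t-norm
    return a * b

def t_luk(a, b):       # Lukasiewicz t-norm
    return max(0.0, a + b - 1.0)

for tn in (t_min, t_prod, t_luk):
    print(tn.__name__, tn(0.7, 0.6))   # conjunction of two fuzzy truth values
```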
Procedia PDF Downloads 93228 Convention Refugees in New Zealand: Being Trapped in Immigration Limbo without the Right to Obtain a Visa
Authors: Saska Alexandria Hayes
Abstract:
Multiple Convention Refugees in New Zealand are stuck in a state of immigration limbo due to a lack of defined immigration policies. The Refugee Convention of 1951 does not give refugees the right to be issued permanent residence in the country of asylum. A gap in New Zealand's immigration law and policy has left Convention Refugees without the right to obtain a resident or temporary entry visa. The significant lack of literature on this topic suggests that the lack of visa options for Convention Refugees in New Zealand is a widely unknown or unacknowledged issue. Refugees in New Zealand, whether lawful or unlawful, enjoy the right of non-refoulement contained in Article 33 of the Refugee Convention 1951. However, a number of rights contained in the Refugee Convention 1951, such as the right to gainful employment and social security, are limited to refugees who maintain lawful immigration status. If a Convention Refugee is denied a resident visa, the only temporary entry visa a Convention Refugee can apply for in New Zealand is discretionary. The appeal cases heard at the Immigration and Protection Tribunal establish that Immigration New Zealand has declined resident and discretionary temporary entry visa applications by Convention Refugees for failing to meet the health or character immigration instructions. The inability of a Convention Refugee to gain residency in New Zealand creates a dependence on the issue of discretionary temporary entry visas to maintain lawful status. The appeal cases record that this reliance has led to Convention Refugees' lawful immigration status being placed in question, temporarily depriving them of the rights that the Refugee Convention 1951 grants to lawful refugees. In one case, the process of applying for a discretionary temporary entry visa led to a lawful Convention Refugee being temporarily deprived of the right to social security, breaching Article 24 of the Refugee Convention 1951. The judiciary has stated that constant reliance on the issue of discretionary temporary entry visas for Convention Refugees can lead to a breach of New Zealand's international obligations under Article 7 of the International Covenant on Civil and Political Rights. The appeal cases suggest that, despite successful judicial proceedings, at least three persons have been made to rely on the issue of discretionary temporary entry visas potentially indefinitely. The appeal cases establish that a Convention Refugee can be denied a discretionary temporary entry visa and become unlawful. Unlawful status could ultimately breach New Zealand's obligations under Article 33 of the Refugee Convention 1951, as it would procedurally deny Convention Refugees asylum and force them to choose between the right of non-refoulement and leaving New Zealand to seek access to all the human rights contained in the Universal Declaration of Human Rights elsewhere. This paper discusses how the current system has given rise to these breaches and emphasizes the need to create a designated temporary entry visa category for Convention Refugees.Keywords: domestic policy, immigration, migration, New Zealand
Procedia PDF Downloads 104227 Change of Substrate in Solid State Fermentation Can Produce Proteases and Phytases with Extremely Distinct Biochemical Characteristics and Promising Applications for Animal Nutrition
Authors: Paula K. Novelli, Margarida M. Barros, Luciana F. Flueri
Abstract:
The utilization of agricultural by-products, wheat bran and soybean bran, as substrates for solid state fermentation (SSF) was studied, aiming to obtain enzymes from Aspergillus sp. with distinct biochemical characteristics and to assess their application in, and improvement of, animal nutrition. Aspergillus niger and Aspergillus oryzae were studied, as they showed very high yields of phytase and protease production, respectively. Phytase activity was measured using p-nitrophenylphosphate as the substrate and a standard curve of p-nitrophenol; the unit of enzymatic activity was defined as the quantity of enzyme necessary to release one μmol of p-nitrophenol. Protease activity was measured using azocasein as the substrate. Activity for phytase and protease increased substantially when the different biochemical characteristics were considered in the study. The optimum pH and the stability range of the phytase produced by A. niger with wheat bran as substrate were between 4.0 and 5.0, and the optimum temperature of activity was 37°C. Phytase fermented in soybean bran showed constant optimum-activity and stability values at all pHs studied, but low production. Phytase on both substrates showed stable activity at temperatures above 80°C. Protease from A. niger showed very distinct optimum-pH behavior, acid for wheat bran and basic for soybean bran, respectively, with optimal values of temperature and stability at 50°C. Phytase produced by A. oryzae in wheat bran had an optimum pH and temperature of 9 and 37°C, respectively, but was very unstable. On the other hand, the proteases were stable at high temperatures and at all pHs studied and showed a very high yield when fermented in wheat bran; when fermented in soybean bran, however, production was very low. Subsequently, the scaled-up production of phytase from A. niger and protease from A. oryzae was applied as an enzyme additive in fish feed for digestibility studies. Phytases and proteases were produced with stable enzyme activities of 7,000 U.g-1 and 2,500 U.g-1, respectively. When these enzymes were applied in a plant-protein-based fish diet for digestibility studies, they increased protein, mineral, energy and lipid availability, showing that these new enzymes can improve animal production and performance. In conclusion, the substrate, as well as the microorganism species, can affect the biochemical character of the enzyme produced. Moreover, the production of these enzymes by SSF can be up to 90% cheaper than commercial ones produced with the same fungal species but by submerged fermentation. Added to that, these low-cost enzymes can easily be applied as animal diet additives to improve production and performance.Keywords: agricultural by-products, animal nutrition, enzymes production, solid state fermentation
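As a concrete illustration of the activity definition above (one unit releases one μmol of p-nitrophenol), the sketch below converts an absorbance reading into phytase activity via a p-nitrophenol standard curve. The slope, volumes and incubation time are invented placeholders, not values from the study.

```python
# Hedged sketch: absorbance -> phytase activity via a p-nitrophenol
# standard curve. All numeric parameters below are illustrative assumptions.
def phytase_activity_units(absorbance, slope_abs_per_umol=0.85,
                           enzyme_volume_ml=0.1, incubation_min=30.0):
    umol_released = absorbance / slope_abs_per_umol   # from the standard curve
    # units per ml of enzyme extract per minute of incubation
    return umol_released / (enzyme_volume_ml * incubation_min)

print(round(phytase_activity_units(0.42), 3))
```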
Procedia PDF Downloads 326226 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry; opponents argue, on the other hand, that the official disclosure of simulated results only creates unnecessary disturbance in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016 are used, collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The results also show that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a severe, low-probability flood potential, indicating that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. Classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study deals with the endogeneity and heterogeneity problems together by combining a spatial fixed-effect model with geographically weighted regression (GWR). A series of studies indicates that the hedonic price of certain environmental assets varies spatially when GWR is applied. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, because the effect of flood prevention may vary dramatically by location.Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
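To make the heterogeneity point concrete, here is a minimal sketch of GWR's core step: a weighted least-squares fit at one target location using a Gaussian distance kernel, so the flood-risk coefficient can differ from place to place. All data, coordinates and the bandwidth are synthetic illustrations, not the study's data or model.

```python
# Minimal GWR sketch: local weighted least squares with a Gaussian kernel.
import numpy as np

def gwr_coefficients(X, y, coords, target, bandwidth):
    d = np.linalg.norm(coords - target, axis=1)      # distances to target point
    w = np.exp(-(d / bandwidth) ** 2 / 2.0)          # Gaussian kernel weights
    W = np.diag(w)
    Xd = np.column_stack([np.ones(len(y)), X])       # add intercept column
    # weighted least squares: beta = (X'WX)^-1 X'Wy
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))           # synthetic locations
flood_risk = rng.uniform(0, 1, 200)                  # hypothetical flood variable
# price effect of flood risk grows with the x coordinate -> spatial heterogeneity
price = 50 - 8 * flood_risk * coords[:, 0] / 10 + rng.normal(0, 1, 200)
beta = gwr_coefficients(flood_risk.reshape(-1, 1), price, coords,
                        target=np.array([2.0, 5.0]), bandwidth=2.0)
print(beta)   # local intercept and local flood-risk coefficient
```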
Procedia PDF Downloads 290225 Optimization of the Feedstock Supply of an Oilseeds Conversion Unit for Biofuel Production in West Africa: A Comparative Study of the Supply of Jatropha curcas and Balanites aegyptiaca Seeds
Authors: Linda D. F. Bambara, Marie Sawadogo
Abstract:
Jatropha curcas (jatropha) is the plant that has been studied most for biofuel production in West Africa. There exist, however, other plants, such as Balanites aegyptiaca (balanites), that have been targeted as potential feedstocks for biofuel production. This biomass could be an alternative feedstock for the production of straight vegetable oil (SVO) at costs lower than jatropha-based SVO production costs. This study aims, firstly, to determine, through an MILP model, the optimal organization that minimizes the oilseed supply costs of two biomass conversion units (BCU) exploiting jatropha seeds and balanites seeds, respectively. Secondly, the study carries out a comparative study of the costs obtained for each BCU. The model was implemented on two theoretical case studies built on the basis of common practices in Burkina Faso, and two scenarios were run for each case study. In Scenario 1, three pre-processing locations ("at the harvesting area", "at the gathering points", "at the BCU") are possible. In Scenario 2, only one location ("at the BCU") is possible. For each biomass, the system studied is the upstream supply chain (harvesting, transport and pre-processing (drying, dehulling, depulping)), including cultivation (for jatropha). The model optimizes the area of land to be exploited based on the productivity of the studied plants and the material losses that may occur during harvesting and the supply of the BCU. It then defines the configuration of the logistics network allowing an optimal supply of the BCU, taking into account the most common means of transport in West African rural areas. For the two scenarios, the results of the implementation showed that the total area exploited for balanites (1807 ha) is 4.7 times greater than the total area exploited for jatropha (381 ha). In both case studies, the pre-processing location "at the harvesting area" was always chosen in Scenario 1. As the balanites trees were not planted, and because the first harvest of jatropha seeds took place 4 years after planting, the cost price of the jatropha seeds at the BCU, excluding pre-processing costs, was about 430 XOF/kg. This cost is 3 times higher than that of balanites, which is 140 XOF/kg. After the first year of harvest, i.e. 5 years after planting, and assuming that the yield remains constant, the same cost price is about 200 XOF/kg for jatropha, still 1.4 times greater than that of balanites. The transport cost of the balanites seeds is about 120 XOF/kg, similar to that of the jatropha seeds. However, when the pre-processing is located at the BCU, i.e. in Scenario 2, the transport cost of the balanites seeds is 1200 XOF/kg, 6 times greater than the transport cost of jatropha, which is 200 XOF/kg. These results show that the cost price of balanites seeds at the BCU can be competitive compared with jatropha's if the pre-processing is located at the harvesting area.Keywords: Balanites aegyptiaca, biomass conversion, Jatropha curcas, optimization, post-harvest operations
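The location choice at the heart of such an MILP can be sketched in a few lines with PuLP. The transport figures of 120 and 1200 XOF/kg echo those quoted above, but the gathering-point cost, the processing costs and the tonnage are invented; the study's actual model also optimizes land area, harvesting and losses.

```python
# Hedged MILP sketch: pick the cheapest pre-processing location for a BCU.
# Costs (XOF/kg) and quantity are illustrative, not the study's data.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

locations = ["harvest_area", "gathering_point", "bcu"]
transport = {"harvest_area": 120, "gathering_point": 160, "bcu": 1200}
processing = {"harvest_area": 35, "gathering_point": 30, "bcu": 25}
kg_supplied = 100_000  # kg of balanites seeds to deliver (assumption)

prob = LpProblem("preprocessing_location", LpMinimize)
use = {l: LpVariable(f"use_{l}", cat=LpBinary) for l in locations}
prob += lpSum(use[l] * (transport[l] + processing[l]) * kg_supplied
              for l in locations)                  # total supply cost
prob += lpSum(use[l] for l in locations) == 1      # pick exactly one location

prob.solve()
print([l for l in locations if value(use[l]) > 0.5])
```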
Procedia PDF Downloads 338224 New Teaching Tools for a Modern Representation of Chemical Bond in the Course of Food Science
Authors: Nicola G. G. Cecca
Abstract:
In Italian IPSSEOAs, the high schools that give vocational education to students who will work in the field of enogastronomy and hotel management, the course of Food Science lets students begin to see food as a mixture of substances that they will transform during their profession. These substances are characterized not only by a chemical composition but also by a molecular structure that makes them nutritionally active. But the increasing number of new products proposed by the food industry, the modern techniques of production and transformation, and the innovative preparations required by customers have made much of the information reported in the most widespread Food Science textbooks out of date or too poor for people who will work in the catering sector. Authors often offer information dating back to Bohr's atomic model and to the 'octet rule' proposed by G. N. Lewis to describe the chemical bond, without any reference to newer models such as the atomic orbital model and molecular orbital theory which, in the meantime, have begun to age themselves. Furthermore, this antiquated information precludes an easy understanding of a wide range of properties of nutritive substances and of many reactions in which the food constituents are involved. In this paper, our attention is directed to using GEOMAG™ to represent the dynamics with which the chemical bond is formed during the synthesis of molecules. GEOMAG™ is a toy, produced by the Swiss company Geomagworld S.A., aimed at stimulating fantasy and handling ability in children aged between 6 and 10 years, and consisting of metallic spheres and magnetic metal bars coated with coloured plastic materials. The simulation carried out with GEOMAG™ is based on the similarity between the Coulomb force and the magnetic attraction force, and in particular between the formulae with which they are calculated. The electrostatic force (F, in newtons) that allows the formation of the chemical bond can be calculated by means of Fc = kc·q1·q2/d², where q1 and q2 are the charges of the particles [in coulombs], d is the distance between the particles [in metres] and kc is the Coulomb constant. It is striking to observe that the attraction force (Fm) acting between the magnetic extremities of the GEOMAG™ elements used to simulate the chemical bond can be calculated in the same way, using the formula Fm = km·m1·m2/d², where m1 and m2 represent the strengths of the poles [A·m], d is the distance between them [m] and km = μ/4π, in which μ is the magnetic permeability of the medium [N·A⁻²]. Students can test the magnetic attraction by trying to keep the magnetic elements of GEOMAG™ apart by hand, or measure it by means of an appropriate dynamometric system. Furthermore, by using a dynamometric system to measure the attraction between the GEOMAG™ elements, it is possible to draw a graph F = f(d) and verify that the curve obtained during the simulation is very similar to the one hypothesized around the 1920s by Linus Pauling to describe the formation of H2+ in accordance with molecular orbital theory.Keywords: chemical bond, molecular orbital theory, magnetic attraction force, GEOMAG™
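Since both formulae above are inverse-square laws, their F = f(d) curves have the same shape, which is what the classroom simulation exploits. A short numerical sketch of that comparison follows; the charges and pole strengths are arbitrary illustrative values.

```python
# Sketch comparing the two inverse-square laws quoted above:
# Fc = kc*q1*q2/d^2 and Fm = km*m1*m2/d^2 (km = mu0/4pi in vacuum).
import math

KC = 8.9875e9                 # Coulomb constant, N*m^2/C^2
MU0 = 4 * math.pi * 1e-7      # vacuum magnetic permeability, N/A^2
KM = MU0 / (4 * math.pi)

def coulomb_force(q1, q2, d):
    return KC * q1 * q2 / d**2

def magnetic_pole_force(m1, m2, d):
    return KM * m1 * m2 / d**2

for d in (0.01, 0.02, 0.04):  # distances in metres
    print(d, coulomb_force(1e-9, 1e-9, d), magnetic_pole_force(2.0, 2.0, d))
# Doubling d quarters both forces: the two F = f(d) curves share one shape.
```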
Procedia PDF Downloads 271223 Efficacy of Pooled Sera in Comparison with Commercially Acquired Quality Control Sample for Internal Quality Control at the Nkwen District Hospital Laboratory
Authors: Diom Loreen Ndum, Omarine Njimanted
Abstract:
With increasing automation in clinical laboratories, the requirements for quality control materials have greatly increased in order to monitor daily performance. The constant use of commercial control material is not economically feasible for many developing countries because of non-availability or the high cost of the materials. Therefore, the preparation and use of an in-house quality control serum is a very cost-effective measure with respect to laboratory needs. The objective of this study was to determine the efficacy of in-house prepared pooled sera with respect to a commercially acquired control sample for routine internal quality control at the Nkwen District Hospital Laboratory. This was an analytical study; serum was taken from the leftover samples of 5 healthy adult blood donors at the blood bank of Nkwen District Hospital, screened negative for human immunodeficiency virus (HIV), hepatitis C virus (HCV) and hepatitis B surface antigen (HBsAg), and pooled in a sterile container. From the pooled sera, sixty aliquots of 150 µL each were prepared. Forty aliquots of 150 µL each of the commercially acquired sample were prepared after reconstitution and stored in a deep freezer at −20°C until required for analysis. The study ran from 9th June to 12th August 2022. Every day, alongside the commercial control sample, one aliquot of pooled sera was removed from the deep freezer, allowed to thaw, and analyzed for the following parameters: blood urea, serum creatinine, aspartate aminotransferase (AST), alanine aminotransferase (ALT), potassium and sodium. After obtaining the first 20 values for each parameter of the pooled sera, the mean, standard deviation and coefficient of variation were calculated and a Levey-Jennings (L-J) chart established. The mean and standard deviation for the commercially acquired control sample were provided by the manufacturer. The following results were observed: the pooled sera had a smaller standard deviation for creatinine, urea and AST than the commercially acquired control sample. There was a statistically significant difference (p<0.05) between the mean values of creatinine, urea and AST for the in-house quality control compared with the commercial control. The coefficients of variation of the parameters for both the commercial and in-house control samples were less than 30%, an acceptable difference. The L-J charts revealed shifts and trends (warning signs), so troubleshooting and corrective measures were taken. In conclusion, an in-house quality control sample prepared from pooled serum can be a good control sample for routine internal quality control.Keywords: internal quality control, levey-jennings chart, pooled sera, shifts, trends, westgard rules
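The L-J workflow described above is easy to make concrete: establish the mean, SD and CV from the first 20 runs, then judge each new run against the common 1-2s warning and 1-3s rejection Westgard rules. The values below are invented for illustration.

```python
# Hedged sketch of Levey-Jennings control limits with two Westgard rules.
# Baseline and follow-up values are invented (e.g. urea in mg/dl).
import statistics

baseline = [92, 95, 90, 93, 96, 91, 94, 92, 95, 93,
            90, 94, 96, 92, 91, 93, 95, 94, 92, 93]   # first 20 runs
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
cv = 100 * sd / mean
print(f"mean={mean:.1f} sd={sd:.2f} cv={cv:.1f}%")

for run, value in enumerate([94, 99, 85, 93], start=21):
    z = (value - mean) / sd
    if abs(z) > 3:
        flag = "1-3s rejection"       # outside mean +/- 3 SD
    elif abs(z) > 2:
        flag = "1-2s warning"         # outside mean +/- 2 SD
    else:
        flag = "in control"
    print(run, value, flag)
```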
Procedia PDF Downloads 79222 The Audiovisual Media as a Metacritical Ludicity Gesture in the Musical-Performatic and Scenic Works of Caetano Veloso and David Bowie
Authors: Paulo Da Silva Quadros
Abstract:
This work aims to establish comparative parameters between the artistic production of two exponents of the contemporary popular culture scene: Caetano Veloso (Brazil) and David Bowie (England). Both Caetano Veloso and David Bowie were pioneers in establishing an aesthetic game between various artistic expressions at the service of the musical-visual scene, that is, conceptual interconnections between several forms of aesthetic process, such as fine arts, theatre, cinema, poetry, and literature. There are also correlations in their expressive attitudes toward art, especially regarding the dialogue between the fields of art and politics (concern for human rights, human dignity, racial issues, tolerance, gender issues, and sexuality, among others); the constant tension and cunning game between the market, free expression and critical sense; and sophisticated, playful mechanisms of metalanguage and aesthetic metacritique. In fact, the two almost came to collaborate in the 1970s, when Caetano was in exile in England and both briefly had the same music producer, who noticed similar aesthetic qualities in their work and tried to bring them closer, something later glimpsed by some music critics. Among the most influential issues in Caetano's and Bowie's game of artistic-aesthetic expression are, for example, the ideas conveyed by the sensation of strangeness (Albert Camus), art as transcendence (Friedrich Nietzsche), and the deconstruction and reconstruction of the auratic configuration of artistic signs (Walter Benjamin and Andy Warhol). To deepen the theoretical issues, the following authors will serve as supporting interpretative references: Hans-Georg Gadamer, Immanuel Kant, Friedrich Schiller, and Johan Huizinga. In addition to the aesthetic meanings of the Ars Ludens characteristics of the two artists, the following supporting references are added: the question of technique (Martin Heidegger), the logic of sense (Gilles Deleuze), art as an event and the sense of the gesture of art (Maria Teresa Cruz), the society of the spectacle (Guy Debord), Verarbeitung and Durcharbeitung (Sigmund Freud), and the poetics of interpretation and the sign of relation (Cremilda Medina). The purpose of such interpretative references is to understand, from a cultural reading perspective (cultural semiology), some significant elements in the dynamics of the aesthetic and media interconnections of both artists, which made them some of the most influential interlocutors in contemporary musical-aesthetic thought, taken as a playful, lived experience of life and art.Keywords: Caetano Veloso, David Bowie, music aesthetics, symbolic playfulness, cultural reading
Procedia PDF Downloads 169221 Optimal Uses of Rainwater to Maintain Water Level in Gomti Nagar, Uttar Pradesh, India
Authors: Alok Saini, Rajkumar Ghosh
Abstract:
Water is nature's most important resource for the survival of all living things, yet freshwater scarcity exists in some parts of the world. This study predicts that the Gomti Nagar area (49.2 sq km) could harvest about 91110 ML of rainwater by 2051 (assuming annual rainfall stays at the present level). However, only 17.71 ML of rainwater was harvested from just 53 buildings in the Gomti Nagar area in 2021; the water level in Gomti Nagar would rise by 13 cm from such groundwater recharge. The total annual groundwater abstraction from the Gomti Nagar area was 35332 ML (in 2021). Due to hydrogeological constraints and low annual rainfall, groundwater recharge is less than groundwater abstraction. At present, only 0.07% of rainwater recharges the groundwater through RTRWHs in Gomti Nagar; if RTRWHs were installed in all buildings, 12.39% of rainwater could recharge the groundwater table. Gomti Nagar is situated in 'Zone–A' (a water distribution area), and groundwater is the primary source of freshwater supply. In Gomti Nagar, the difference between groundwater abstraction and recharge will reach 735570 ML over 30 years. Statistically, all buildings in Gomti Nagar (new and renovated) could harvest 3037 ML of rainwater through RTRWHs annually. The most recent monsoonal recharge in Gomti Nagar was 10813 ML/yr. Harvested rainwater collected from RTRWHs can be used for rooftop irrigation and for residential kitchens and gardens (home-grown fruit and vegetables). According to the bylaws, RTRWH installation is required in newly constructed and existing buildings with plot areas of 300 sq m or above. Harvested rainwater is of higher quality than contaminated groundwater, and buildings with RTRWHs can be considered largely water self-sufficient. Rooftop Rainwater Harvesting Systems (RTRWHs) are the least expensive, most eco-friendly and most sustainable alternative water resource for artificial recharge. This study also predicts a water level rise of about 3.9 m in the Gomti Nagar area by 2051, but only if all buildings install RTRWHs and harvest rainwater for groundwater recharge. As a result, this study serves as an impact assessment of RTRWH implementation for the water scarcity problem in the Gomti Nagar area (1.36 sq km). It suggests that common storage tanks (recharge wells) should be built for groups of at least ten (10) households so that an optimal amount of harvested rainwater can be stored annually. Artificial recharge from alternative water sources will be required to reverse the declining water level trend and balance the groundwater table in this area; continued over-exploitation of groundwater may lead to land subsidence and the development of vertical cracks.Keywords: aquifer, aquitard, artificial recharge, bylaws, groundwater, monsoon, rainfall, rooftop rainwater harvesting system, RTRWHs water table, water level
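The arithmetic linking a recharge volume to a water-table rise is the standard relation rise = volume / (area × specific yield); a hedged sketch follows. The specific yield is an assumed illustrative value (the abstract does not state one), and the study's 3.9 m projection also nets recharge against ongoing abstraction, so this sketch will not reproduce that figure.

```python
# Hedged sketch of water-table-rise arithmetic. The specific yield is an
# assumed placeholder; the result is highly sensitive to this parameter
# and ignores the abstraction that the study's own projection accounts for.
AREA_KM2 = 49.2          # Gomti Nagar area, from the abstract
RECHARGE_ML = 91110      # projected rooftop harvest to 2051, in megalitres
SPECIFIC_YIELD = 0.12    # assumed storativity of the unconfined aquifer

area_m2 = AREA_KM2 * 1e6
recharge_m3 = RECHARGE_ML * 1e3          # 1 ML = 1000 m^3
rise_m = recharge_m3 / (area_m2 * SPECIFIC_YIELD)
print(f"gross projected water-table rise: {rise_m:.2f} m")
```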
Procedia PDF Downloads 100