Search results for: product service systems
639 Assessment of Urban Environmental Noise in Urban Habitat: A Spatial Temporal Study
Authors: Neha Pranav Kolhe, Harithapriya Vijaye, Arushi Kamle
Abstract:
Urban regions are the engines of economic growth. As the economy expands, so does the need for peace and quiet, and noise pollution is one of the most important social and environmental issues. Health and wellbeing are at risk from environmental noise pollution. Because of urbanisation, population growth, and the consequent rise in the usage of increasingly potent, diverse, and highly mobile sources of noise, it is now more severe and pervasive than ever before, and it will only become worse. It will continue to expand as long as air, train, and highway traffic increase, as these remain the main contributors to noise pollution. The current study was conducted in two zones of a class I city in central India (population range: 1 million–4 million). A total of 56 measuring points were chosen to assess noise pollution. The first objective evaluates noise pollution in various urban habitats, classified as formal and informal settlements, and compares noise pollution between the settlements using t-test analysis. The second objective assesses noise pollution in silent zones (as defined by the Central Pollution Control Board) in a hierarchical manner. It also assesses noise pollution in the settlements and compares it with the prescribed permissible limits, measured using class I sound level equipment. As appropriate indices, the A-weighted equivalent noise level (Leq), minimum sound pressure level (Lmin), and maximum sound pressure level (Lmax) were computed. The survey was conducted over a period of one week. ArcGIS was used to plot and map the temporal and spatial variability in urban settings. Noise levels at most stations, particularly at heavily trafficked crossroads, squares, and subway stations, were found to be significantly higher than the acceptable limits. The study highlights the vulnerable areas that should be considered in city planning and calls for area-level planning when preparing a development plan.
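The noise indices named above can be computed from sampled A-weighted levels; a minimal sketch, with purely hypothetical readings rather than the study's measurements:

```python
import math

def noise_indices(levels_db):
    """Equivalent continuous level Leq plus Lmin/Lmax from a series of
    A-weighted sound pressure levels (dB(A)) sampled at equal intervals.
    Leq is the logarithmic (energy) average:
        Leq = 10 * log10( (1/N) * sum(10**(Li/10)) )
    """
    energy_mean = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(energy_mean), min(levels_db), max(levels_db)

# Hypothetical readings at a busy crossroads
leq, lmin, lmax = noise_indices([68.0, 72.5, 75.0, 70.2, 66.8])
```

Because Leq is an energy average, loud events dominate it: here Leq (about 71.5 dB(A)) exceeds the arithmetic mean of the readings (70.5 dB(A)).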
It also demands attention to noise pollution from the perspective of residential and silent zones. City planning in urban areas neglects noise pollution assessment at the city level; as a result, irrespective of the noise pollution guidelines, the ground reality is far from compliant with them. The outcome is land use that is incompatible with noise pollution limits at the neighbourhood scale. The study's final results will be useful to policymakers, architects, and administrators in developing countries, supporting the governance of noise pollution in urban habitats through efficient decision making and policy formulation.
Keywords: noise pollution, formal settlements, informal settlements, built environment, silent zone, residential area
Procedia PDF Downloads 119
638 The Impact of Artificial Intelligence on Food Industry
Authors: George Hanna Abdelmelek Henien
Abstract:
Quality and safety issues are common in Ethiopia's food processing industry and can negatively impact consumers' health and livelihoods. The country is known for its varied agricultural products, which are important to the economy. However, food quality and safety policies and management practices in the food processing industry have led to many health problems, foodborne illnesses, and economic losses. This article aims to show the causes and consequences of food safety and quality problems in the food processing industry in Ethiopia and to discuss possible solutions. One of the main reasons for food quality and safety problems in Ethiopia's food processing industry is the lack of adequate regulation and enforcement mechanisms. Inadequate food safety and quality policies have led to inefficiencies in food production. Additionally, the failure to monitor and enforce existing regulations has created an opportunity for unscrupulous companies to engage in harmful practices that endanger the lives of citizens. The impact on food quality and safety is significant, owing to loss of life, high medical costs, and loss of consumer confidence in the food processing industry. Foodborne diseases such as diarrhoea, typhoid, and cholera are common in Ethiopia, and food quality and safety play an important role in their prevalence. Additionally, food recalls due to contamination often cause significant economic losses in the food processing industry. To address these problems, the Ethiopian government has begun taking measures to improve food quality and safety in the food processing industry. One of the most prominent initiatives is the Ethiopian Food and Drug Administration (EFDA), which was established in 2010 to monitor and control the quality and safety of food and beverage products in the country. The EFDA has implemented many measures to improve food safety, such as carrying out routine inspections, monitoring the import of food products, and implementing labeling requirements.
Another solution that can improve food quality and safety in the food processing industry in Ethiopia is the implementation of a food safety management system (FSMS). An FSMS is a set of procedures and policies designed to identify, assess, and control food safety risks during food processing. Implementing an FSMS can help companies in the food processing industry identify and address potential risks before they harm consumers; it can also help companies comply with current food safety regulations. Consequently, improving food safety policy and management systems in Ethiopia's food processing industry is important to protect people's health and improve the country's economy. This requires addressing the root causes of poor food quality and safety and implementing practical solutions, such as establishing regulatory bodies and implementing food safety management systems.
Keywords: food quality, food safety, policy, management system, food processing industry, food traceability, industry 4.0, internet of things, blockchain, best worst method, MARCOS
Procedia PDF Downloads 66
637 Regional Analysis of Freight Movement by Vehicle Classification
Authors: Katerina Koliou, Scott Parr, Evangelos Kaisar
Abstract:
The surface transportation of freight is particularly vulnerable to storm and hurricane disasters, while at the same time it is the primary transportation mode for delivering medical supplies, fuel, water, and other essential goods. To better plan for commercial vehicles during an evacuation, it is necessary to understand how these vehicles travel during an evacuation and whether this travel differs from that of the general public. The research investigation used Florida's statewide continuous-count station traffic volumes, which were then compared between years to identify locations where traffic was moving differently during the evacuation. The data was also used to identify days on which traffic was significantly different between years. While the literature on auto-based evacuations is extensive, the consideration of freight travel is lacking. The goal of this research was to investigate the movement of vehicles by classification, with an emphasis on freight, during two major evacuation events: hurricanes Irma (2017) and Michael (2018). The methodology of the research was divided into three phases: data collection and management, spatial analysis, and temporal comparisons. The data collection and management phase obtained continuous-count station data from the state of Florida for both 2017 and 2018 by vehicle classification; the data was then processed into a manageable format. The second phase used geographic information systems (GIS) to display where and when traffic varied across the state. The third and final phase was a quantitative investigation into which vehicle classifications were statistically different, and on which dates, statewide. This phase used a two-sample, two-tailed t-test to compare sensor volume by classification on similar days between years.
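The third-phase comparison described above can be sketched as a pooled two-sample t-statistic; the volumes below are hypothetical, not the study's data:

```python
import math
import statistics

def two_sample_t(x, y):
    """Two-sample t-statistic with pooled variance, for comparing sensor
    volumes of one vehicle class on similar days between two years.
    For a two-tailed test, compare |t| with the critical value for
    n1 + n2 - 2 degrees of freedom."""
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * statistics.variance(x)
           + (n2 - 1) * statistics.variance(y)) / (n1 + n2 - 2)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Hypothetical daily truck volumes at one station, 2017 vs 2018
t = two_sample_t([1210, 1185, 1220, 1250], [1420, 1380, 1405, 1450])
```

A large |t| (here about 9.9) flags a station and day pairing where volumes differ significantly between years.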
Overall, increases in freight movement between years prevented a more precise paired analysis. This research sought to identify where and when different classes of vehicles were traveling leading up to hurricane landfall and during post-storm reentry. Among the more significant findings, the results showed that commercial-use vehicles may have underutilized rest areas during the evacuation, or perhaps these rest areas were closed. This may suggest that truckers drive longer distances, and possibly longer hours, before hurricanes. Another significant finding was that changes in traffic patterns for commercial-use vehicles occurred earlier and lasted longer than changes for personal-use vehicles, suggesting that commercial vehicles evacuate in a fashion different from personal-use vehicles. This paper may serve as the foundation for future research into commercial travel during evacuations and the additional factors that may influence freight movements during evacuations.
Keywords: evacuation, freight, travel time
Procedia PDF Downloads 70
636 Flexible Design Solutions for Complex Free-Form Geometries Aimed to Optimize Performances and Resources Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools, such as generative design (GD) and digital fabrication (DF), pressing problems of resource optimization (materials, energy, time) can be solved and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type element whose connections produce an adaptive 3D surface, using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms, or swarm intelligence, each of which has limitations that make it applicable only in certain cases. The paper presents the design process stages and the shape-grammar-type algorithm. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic onto different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected.
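Of the generative procedures listed above, a Lindenmayer system is the simplest to sketch. A minimal rewriter follows, using the classic algae example rather than the paper's own column algorithm:

```python
def l_system(axiom, rules, iterations):
    """Minimal Lindenmayer-system rewriter: each pass rewrites every
    symbol in parallel using the production rules, so a short axiom
    grows into a long generated string."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A
generated = l_system("A", {"A": "AB", "B": "A"}, 5)
```

Interpreting the generated symbols geometrically (for example, as turtle-graphics moves or surface subdivisions) is what turns such a string into a candidate 3D form that can then pass through the selection step.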
The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process), and unique geometric models of high performance.
Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
Procedia PDF Downloads 378
635 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050
Authors: Ali Hashemifarzad, Jens Zum Hingst
Abstract:
The structure of the world's energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. Germany aims for complete decarbonization of the energy sector by 2050, according to the federal climate protection plan. The Renewable Energy Sources Act 2017 stipulates that renewable sources cover at least 80% of the electricity requirement in 2050; for gross final energy consumption, the target is at least 60%. This means that by 2050, the energy supply system would have to be almost completely converted to renewable energy. An essential basis for the development of such a sustainable energy supply from 100% renewable energies is to predict the energy requirement in 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for energy efficiency increase and demand reduction are set very ambitiously. To provide a basis for comparison, the second scenario gives results under less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as the predicted population development and economic growth, which have in the past been significant drivers of the increase in energy demand. The potential for energy demand reduction and efficiency increase on the demand side was also investigated. In particular, current and future technological developments in energy consumption sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate) were included.
Here, in addition to the traditional electricity sector, heat and fuel-based consumption in sectors such as households, commerce, industry, and transport are taken into account, supporting the idea that for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a electricity, 229 TWh/a ambient heat for electric heat pumps, and approx. 315 TWh/a non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand rises to 1,682 TWh/a, with a higher electricity share of almost 1,138 TWh/a. It has also been estimated that 50% of the electricity yield must be stored to compensate for fluctuations in the daily and annual flows. Due to conversion and storage losses (about 50%), the electricity requirement for the very ambitious scenario would increase to 1,227 TWh/a.
Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production
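The scenario figures above are internally consistent under one reading of the storage assumption; a small sanity-check sketch (the 50% stored share and roughly 50% round-trip losses are taken from the abstract, the decomposition is ours):

```python
# Ambitious-scenario demand components (TWh/a), as reported
electricity = 818
ambient_heat = 229      # ambient heat for electric heat pumps
non_electric = 315      # approx., non-electrifiable processes
total_demand = electricity + ambient_heat + non_electric  # 1,362 TWh/a

# Assumed reading: half the electricity passes through storage to balance
# daily/annual fluctuations, and with ~50% round-trip efficiency that
# half must be generated twice over.
stored_share = 0.5
round_trip_eff = 0.5
generation_needed = (electricity * (1 - stored_share)
                     + electricity * stored_share / round_trip_eff)  # 1,227 TWh/a
```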
Procedia PDF Downloads 134
634 Patterns of TV Simultaneous Interpreting of Emotive Overtones in Trump’s Victory Speech from English into Arabic
Authors: Hanan Al-Jabri
Abstract:
Simultaneous interpreting is deemed by many scholars to be the most challenging mode of interpreting. The special constraints involved in this task, including time constraints, different linguistic systems, and stress, pose a great challenge to most interpreters. These constraints are likely to be greatest when the interpreting task is done live on TV. The TV interpreter is exposed to a wide variety of audiences with different backgrounds and needs and is mostly asked to interpret high-profile events, which raises stress levels and further complicates the task. Under these constraints, which require fast and efficient performance, the interpreters of four TV channels were asked to render Trump's victory speech into Arabic, while also dealing with the burden of rendering the English emotive overtones employed by the speaker into an entirely different linguistic system. The current study investigates the way TV interpreters, working in the simultaneous mode, handled this task; it explores and evaluates the TV interpreters' linguistic choices and whether the original emotive effect was maintained, upgraded, downgraded, or abandoned in their renditions. It also explores the difficulties and challenges that emerged during this process and might have influenced the interpreters' linguistic choices. To achieve its aims, the study analysed Trump's victory speech, delivered on November 9, 2016, along with four Arabic simultaneous interpretations produced by four TV channels: Al-Jazeera, RT, CBC News, and France 24. The analysis relied on two frameworks: a macro and a micro framework. The former presents an overview of the wider context of the English speech as well as of the speaker and his political background, to help understand the linguistic choices made in the speech; the latter investigates the linguistic tools employed by the speaker to stir people's emotions.
These tools were investigated on the basis of Shamaa's (1978) classification of emotive meaning by linguistic level: phonological, morphological, syntactic, and semantic/lexical. At this level, the study also identifies the patterns of rendition detected in the Arabic deliveries. The results identified several rendition patterns, including parallel rendition, approximation, condensation, elaboration, transformation, expansion, generalisation, explicitation, paraphrase, and omission. The emerging patterns, as suggested by the analysis, were influenced by factors such as the speedy and continuous delivery of some stretches and highly dense segments. The study aims to contribute to a better understanding of TV simultaneous interpreting between English and Arabic, as well as of the practices of TV interpreters when rendering emotiveness, especially as little is known about interpreting practices in the field of TV, particularly between Arabic and English.
Keywords: emotive overtones, interpreting strategies, political speeches, TV interpreting
Procedia PDF Downloads 162
633 Addressing Supply Chain Data Risk with Data Security Assurance
Authors: Anna Fowler
Abstract:
When considering assets that may need protection, the mind turns to homes, cars, and investment funds. In most cases, the protection of those assets can be covered through security systems and insurance. Data is not the first asset that comes to mind as needing protection, even though data is at the core of most supply chain operations: trade secrets, personally identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is a critical element of supply chain success and should be one of the most critical assets to protect. In the supply chain industry, there are two major misconceptions about protecting data: (i) "We do not manage or store confidential or personally identifiable information (PII)." (ii) "Third-party vendor security will protect us." These misconceptions can significantly derail organizational efforts to adequately protect data across environments. The first misconception is dangerous because it implies the organization lacks proper data literacy: enterprise employees zero in on PII while neglecting trade secret theft and the complete breakdown of information sharing. The second misconception forges an ideology that reliance on third-party vendor security absolves the company of security risk. In reality, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that a holistic approach should be taken to protecting data, one that amounts to more than purchasing a Data Loss Prevention (DLP) tool. A tool is not a solution.
To protect supply chain data, start by providing data literacy training to all employees and by negotiating the security components of vendor contracts to require data literacy training for the individuals and teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification, and to ensure that processes effectively incorporate data security principles. DLP solutions should then be evaluated and selected to address specific concerns and use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework looks at all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration between third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry to avoid regulatory implications and third- and fourth-party risk.
Keywords: security by design, data security architecture, cybersecurity framework, data security assurance
Procedia PDF Downloads 92
632 Fabrication of High-Aspect Ratio Vertical Silicon Nanowire Electrode Arrays for Brain-Machine Interfaces
Authors: Su Yin Chiam, Zhipeng Ding, Guang Yang, Danny Jian Hang Tng, Peiyi Song, Geok Ing Ng, Ken-Tye Yong, Qing Xin Zhang
Abstract:
Brain-machine interfaces (BMI) are a field rich in exploration opportunities, in which manipulation of neural activity is used to interconnect with myriad forms of external devices. This research and its intensive development have evolved into various areas, from the medical field to the gaming and entertainment industries to safety and security. The technology has been extended to therapy for neurological disorders such as obsessive-compulsive disorder and Parkinson's disease by introducing current pulses to specific regions of the brain. Nonetheless, developing a BMI system that observes, records, and alters neural signals in real time will require a significant amount of effort to overcome the obstacles to improving such a system without delay in response. To date, the feature size of interface devices and the density of the electrode population remain limitations in achieving seamless BMI performance. Currently, electrode diameters in BMI devices range from 10 to 100 microns. Hence, to accommodate precise single-cell monitoring, smaller and denser nanoscale electrode arrays are vital. In this paper, we showcase the fabrication of high-aspect-ratio vertical silicon nanowire electrode arrays using microelectromechanical systems (MEMS) methods. Nanofabrication of the nanowire electrodes involves deep reactive ion etching, thermal oxide thinning, electron-beam lithography patterning, sputtering of metal targets, and bottom anti-reflection coating (BARC) etching. Metallization of the nanowire electrode tip is a key process for optimizing the nanowire's electrical conductivity, and this step remains a challenge during fabrication. Metal electrodes were lithographically defined, yet these metal contacts define a size scale that is larger than the nanometer-scale building blocks, further limiting the potential advantages.
Therefore, we present an integrated contact solution that overcomes this size constraint through a self-aligned nickel silicidation process on the tips of the vertical silicon nanowire electrodes. A 4 x 4 array of vertical silicon nanowire electrodes with a diameter of 290 nm and a height of 3 µm has been successfully fabricated.
Keywords: brain-machine interfaces, microelectromechanical systems (MEMS), nanowire, nickel silicide
Procedia PDF Downloads 435
631 Effect of Antimony on Microorganisms in Aerobic and Anaerobic Environments
Authors: Barrera C. Monserrat, Sierra-Alvarez Reyes, Pat-Espadas Aurora, Moreno Andrade Ivan
Abstract:
Antimony is a toxic and carcinogenic metalloid considered a pollutant of priority interest by the United States Environmental Protection Agency. It is present in the environment in two oxidation states: Sb(III) and Sb(V). Sb(III) is toxic to several aquatic organisms, but the potential inhibitory effect of Sb species on microorganisms has not been extensively evaluated, and the fate and possible toxic impact of antimony on aerobic and anaerobic wastewater treatment systems are unknown. For this reason, the objective of this study was to evaluate the microbial toxicity of Sb(V) and Sb(III) in aerobic and anaerobic environments. Sb(V) and Sb(III) were supplied as potassium hexahydroxoantimonate(V) and potassium antimony tartrate, respectively (Sigma-Aldrich). The toxic effect of both Sb species in anaerobic environments was evaluated on the methanogenic activity and the hydrogen production of microorganisms from a wastewater treatment bioreactor. For the methanogenic activity, batch experiments were carried out in 160 mL serological bottles; each bottle contained basal mineral medium (100 mL), inoculum (1.5 g VSS/L), acetate (2.56 g/L) as substrate, and variable concentrations of Sb(V) or Sb(III). Duplicate bioassays were incubated at 30 ± 2°C on an orbital shaker (105 rpm) in the dark, and methane production was monitored by gas chromatography. The hydrogen production inhibition tests were carried out in glass bottles with a working volume of 0.36 L, with glucose (50 g/L) as substrate, pretreated inoculum (5 g VSS/L), mineral medium, and varying concentrations of the two antimony species. The bottles were kept under stirring at 35°C in an AMPTS II device that recorded hydrogen production. The toxicity of Sb to aerobic microorganisms (activated sludge from a wastewater treatment plant) was tested with the standardized Microtox toxicity test and by respirometry.
Results showed that Sb(III) is more toxic than Sb(V) to methanogenic microorganisms. Sb(V) caused a 50% decrease in methanogenic activity at 250 mg/L. In contrast, exposure to Sb(III) resulted in 50% inhibition at a concentration of only 11 mg/L, and almost complete inhibition (95%) at 25 mg/L. For hydrogen-producing microorganisms, Sb(III) and Sb(V) caused 50% inhibition at 12.6 mg/L and 87.7 mg/L, respectively. The results for aerobic environments showed that 500 mg/L of Sb(V) does not inhibit Aliivibrio fischeri (Microtox) activity or the specific oxygen uptake rate of activated sludge. Sb(III), in contrast, caused a 50% loss of microbial respiration at concentrations below 40 mg/L. These results indicate that the toxicity of antimony depends on the speciation of the metalloid and that Sb(III) has a significantly higher inhibitory potential than Sb(V). It was also shown that anaerobic microorganisms can reduce Sb(V) to Sb(III). Acknowledgments: This work was funded in part by grants from the UA-CONACYT Binational Consortium for the Regional Scientific Development and Innovation (CAZMEX), the National Institutes of Health (NIH ES-04940), and PAPIIT-DGAPA-UNAM (IN105220).
Keywords: aerobic inhibition, antimony reduction, hydrogen inhibition, methanogenic toxicity
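The 50%-inhibition concentrations reported above are typically read off a dose-response series; a simplified linear-interpolation sketch (the concentrations and activities below are hypothetical, not the study's measurements, and a sigmoidal fit would normally be preferred):

```python
def ic50(concs, activities):
    """Estimate the concentration causing 50% inhibition by linear
    interpolation between the two measured points that bracket 50% of
    the uninhibited (control) activity. Assumes activities decrease
    monotonically with concentration."""
    for i in range(len(concs) - 1):
        a1, a2 = activities[i], activities[i + 1]
        if a1 >= 50 >= a2:
            c1, c2 = concs[i], concs[i + 1]
            return c1 + (a1 - 50) * (c2 - c1) / (a1 - a2)
    return None  # 50% inhibition not bracketed by the data

# Hypothetical methanogenic activity (% of control) vs Sb(III) (mg/L)
est = ic50([0, 5, 10, 15, 25], [100, 80, 55, 40, 5])
```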
Procedia PDF Downloads 168
630 Atypical Retinoid ST1926 Nanoparticle Formulation Development and Therapeutic Potential in Colorectal Cancer
Authors: Sara Assi, Berthe Hayar, Claudio Pisano, Nadine Darwiche, Walid Saad
Abstract:
Nanomedicine, the application of nanotechnology to medicine, is an emerging discipline that has gained significant attention in recent years. Current breakthroughs in nanomedicine have paved the way for effective drug delivery systems that can be used to target cancer. The use of nanotechnology provides effective drug delivery and enhanced stability, bioavailability, and permeability, thereby minimizing drug dosage and toxicity. As such, nanoparticle (NP) formulations have been applied for drug delivery in various cancer models and have been shown to improve the ability of drugs to reach specific targeted sites in a controlled manner. Cancer is one of the major causes of death worldwide; in particular, colorectal cancer (CRC) is the third most commonly diagnosed cancer among men and women and the second leading cause of cancer-related deaths, highlighting the need for novel therapies. Retinoids, consisting of natural and synthetic derivatives, are a class of chemical compounds that have shown promise in preclinical and clinical cancer settings. However, retinoids are limited by their toxicity and by resistance to treatment. To overcome this resistance, various synthetic retinoids have been developed, including the adamantyl retinoid ST1926, a potent anti-cancer agent. Due to its limited bioavailability, however, the development of ST1926 has been restricted in phase I clinical trials. We have previously investigated the preclinical efficacy of ST1926 in CRC models. ST1926 displayed potent inhibitory and apoptotic effects in CRC cell lines, inducing early DNA damage and apoptosis, and significantly reduced the tumor doubling time and tumor burden in a xenograft CRC model. Therefore, we developed ST1926-NPs and assessed their efficacy in CRC models. ST1926-NPs were produced using Flash NanoPrecipitation with the amphiphilic diblock copolymer polystyrene-b-poly(ethylene oxide) and cholesterol as a co-stabilizer.
ST1926 was formulated into NPs with a drug-to-polymer mass ratio of 1:2, providing a formulation that was stable for one week. The ST1926-NP diameter was 100 nm, with a polydispersity index of 0.245. In the MTT cell viability assay, ST1926-NPs exhibited anti-growth activities in HCT116 cells as potent as those of naked ST1926, at pharmacologically achievable concentrations. Future studies will examine the anti-tumor activities and mechanism of action of ST1926-NPs in a xenograft mouse model and detect the compound and its glucuroconjugated form in the plasma of mice. Ultimately, our studies will support the use of ST1926-NP formulations to enhance the stability and bioavailability of ST1926 in CRC.
Keywords: nanoparticles, drug delivery, colorectal cancer, retinoids
Procedia PDF Downloads 101
629 Electret: A Solution of Partial Discharge in High Voltage Applications
Authors: Farhina Haque, Chanyeop Park
Abstract:
The high efficiency, high field, and high power density provided by wide bandgap (WBG) semiconductors and advanced power electronic converter (PEC) topologies have enabled the dynamic control of power in medium and high voltage systems. Although WBG semiconductors outperform conventional silicon-based devices in terms of voltage rating, switching speed, and efficiency, the increased voltage handling, high dv/dt, and compact device packaging increase local electric fields, which are the main causes of partial discharge (PD) in advanced medium and high voltage applications. PD, which occurs actively in voids, triple points, and air gaps, is an inevitable dielectric challenge that causes insulation and device aging. The aging process accelerates over time and eventually leads to the complete failure of the application. Hence, it is critical to mitigate PD. Sharp edges, air gaps, triple points, and bubbles are common defects in any medium or high voltage device. These defects are created during device manufacturing and are prone to high-electric-field-induced PD due to the low permittivity and low breakdown strength of the gaseous medium filling them. This study introduces a contemporary approach to mitigating PD in high power density applications by neutralizing electric fields. To neutralize the locally enhanced electric fields that occur around triple points, air gaps, sharp edges, and bubbles, electrets are developed and incorporated into high voltage applications. Electrets are dielectric materials that emit an electric field because electrical charges are embedded on their surface and in their bulk. In this study, electrets are fabricated by electrically charging polyvinylidene difluoride (PVDF) films using the widely adopted triode corona discharge method.
To investigate the PD mitigation performance of the fabricated electret films, a series of PD experiments is conducted on both charged and uncharged PVDF films under square voltage stimuli that represent PWM waveforms. In addition to single-layer electrets, multiple layers of electrets are also tested to mitigate PD caused by higher system voltages. The electret-based approach shows great promise in mitigating PD by neutralizing the local electric field. The results of the PD measurements suggest that, with further developments in the electret fabrication process, an ultimate solution to this decades-long dielectric challenge would be possible.
Keywords: electrets, high power density, partial discharge, triode corona discharge
Procedia PDF Downloads 203
628 Measuring Oxygen Transfer Coefficients in Multiphase Bioprocesses: The Challenges and the Solution
Authors: Peter G. Hollis, Kim G. Clarke
Abstract:
The overall volumetric oxygen transfer coefficient (KLa) is ubiquitously quantified in bioprocesses by analysing the response of dissolved oxygen (DO) to a step change in the oxygen partial pressure of the sparge gas using a DO probe. Typically, the response lag (τ) of the probe has been ignored in the calculation of KLa when τ is less than the reciprocal of KLa; failing that, a constant τ has invariably been assumed. These conventions have now been reassessed in the context of multiphase bioprocesses, such as hydrocarbon-based systems. Here, significant variation of τ in response to changes in process conditions has been documented. Experiments were conducted in a 5 L baffled stirred tank bioreactor (New Brunswick) in a simulated hydrocarbon-based bioprocess comprising a C14-20 alkane-aqueous dispersion with suspended non-viable Saccharomyces cerevisiae solids. DO was measured with a polarographic DO probe fitted with a Teflon membrane (Mettler Toledo). The DO concentration response to a step change in the sparge gas oxygen partial pressure was recorded, from which KLa was calculated using a first order model (without incorporation of τ) and a second order model (incorporating τ). τ was determined as the time taken to reach 63.2% of the saturation DO after the probe was transferred from a nitrogen-saturated vessel to an oxygen-saturated bioreactor, and is represented as the inverse of the probe constant (KP). The relative effects of the process parameters on KP were quantified using a central composite design with factor levels typical of hydrocarbon bioprocesses, namely 1-10 g/L yeast, 2-20 vol% alkane and 450-1000 rpm. A response surface was fitted to the empirical data, while ANOVA was used to determine the significance of the effects at a 95% confidence interval. KP varied with changes in the system parameters, with the impact of solids loading statistically significant at the 95% confidence level.
Increased solids loading reduced KP consistently, an effect which was magnified at high alkane concentrations, with a minimum KP of 0.024 s⁻¹ observed at the highest solids loading of 10 g/L. This KP was 2.8-fold lower than the maximum of 0.0661 s⁻¹ recorded at 1 g/L solids, demonstrating a substantial increase in τ from 15.1 s to 41.6 s as a result of differing process conditions. Importantly, exclusion of KP from the calculation of KLa was shown to under-predict KLa for all process conditions, with an error of up to 50% at the highest KLa values. Accurate quantification of KLa, and therefore KP, has far-reaching impact on industrial bioprocesses, ensuring these systems are not transport limited during scale-up and operation. This study has shown the incorporation of τ to be essential for KLa measurement accuracy in multiphase bioprocesses. Moreover, since τ has been conclusively shown to vary significantly with process conditions, it is essential for τ to be determined individually for each set of process conditions.
Keywords: effect of process conditions, measuring oxygen transfer coefficients, multiphase bioprocesses, oxygen probe response lag
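The under-prediction described in this abstract can be illustrated with a minimal sketch. If the probe itself responds as a first-order system with constant KP, the measured DO signal is the series combination of two first-order lags; estimating KLa naively from the 63.2% rise time of the measured curve then under-predicts the true value. All numerical values below are illustrative, chosen loosely from the ranges the abstract reports, and the naive estimator is a deliberately simplified stand-in for the first-order model.

```python
import math

def measured_do(t, kla, kp):
    """Normalized DO probe reading after a gas step change: a true
    first-order transfer (KLa) filtered by a first-order probe lag (KP).
    Valid for kla != kp."""
    return 1.0 - (kp * math.exp(-kla * t) - kla * math.exp(-kp * t)) / (kp - kla)

def naive_kla(kla, kp, dt=0.01):
    """Estimate KLa from the 63.2% rise time of the *measured* signal,
    i.e. ignoring the probe lag entirely."""
    t = 0.0
    while measured_do(t, kla, kp) < 1.0 - math.exp(-1.0):  # 63.2% threshold
        t += dt
    return 1.0 / t

kla_true = 0.05   # s^-1, illustrative transfer coefficient
kp = 0.024        # s^-1, slowest probe constant reported (tau ~ 41.6 s)

kla_est = naive_kla(kla_true, kp)
error = (kla_true - kla_est) / kla_true
print(f"true KLa = {kla_true:.3f} s^-1, naive KLa = {kla_est:.3f} s^-1, "
      f"under-prediction = {error:.0%}")
```

For a fixed probe constant, the relative under-prediction grows as KLa rises toward KP, consistent with the abstract's observation that the largest errors occur at the highest KLa values.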
Procedia PDF Downloads 266
627 Possibility of Membrane Filtration to Treatment of Effluent from Digestate
Authors: Marcin Debowski, Marcin Zielinski, Magdalena Zielinska, Paulina Rusanowska
Abstract:
The problem of digestate management is one of the most important factors influencing the development and operation of biogas plants. Turbidity and bacterial contamination negatively affect the growth of algae, which can limit the use of the effluent in the production of algae biomass on a large scale. These problems can be overcome by cultivating algae species resistant to environmental factors, such as Chlorella sp. or Scenedesmus sp., or by reducing the load of organic compounds to prevent bacterial contamination. The effluent requires dilution and/or purification. One method of effluent treatment is the use of a membrane technology such as microfiltration (MF), ultrafiltration (UF), nanofiltration (NF) or reverse osmosis (RO), depending on the membrane pore size and the cut-off point. Membranes are a physical barrier to solids and particles larger than the size of the pores. MF membranes have the largest pores and are used to remove turbidity, suspensions, bacteria and some viruses. UF membranes also remove color, odor and organic compounds with high molecular weight. In the treatment of wastewater or other waste streams, MF and UF can provide a sufficient degree of purification. NF membranes are used to remove natural organic matter from waters, water disinfection products and sulfates. RO membranes are applied to remove monovalent ions such as Na⁺ or K⁺. The UF permeate of the effluent was used as a medium for the cultivation of two microalgae: Chlorella sp. and Phaeodactylum tricornutum. Growth rates of Chlorella sp. and P. tricornutum were similar: 0.216 d⁻¹ and 0.200 d⁻¹ (Chlorella sp.); 0.128 d⁻¹ and 0.126 d⁻¹ (P. tricornutum), on synthetic medium and permeate from UF, respectively. The final biomass composition was also similar, regardless of the medium. Removal of nitrogen was 92% and 71% by Chlorella sp. and P. tricornutum, respectively. The fermentation effluents after UF and dilution were also used for the cultivation of the alga Scenedesmus sp.
that is resistant to environmental conditions. The authors recommended the development of a biorefinery based on the production of algae for biogas production. There are examples of using a multi-stage membrane system to purify the liquid fraction of digestate. After initial UF, RO is used to remove ammonium nitrogen and COD. To obtain a permeate with an ammonium nitrogen concentration allowing its discharge into the environment, it was necessary to apply three-stage RO. The composition of the permeate after two-stage RO was: COD 50–60 mg/dm³, dry solids 0 mg/dm³, ammonium nitrogen 300–320 mg/dm³, total nitrogen 320–340 mg/dm³, total phosphorus 53 mg/dm³. However, the composition of the permeate after three-stage RO was: COD < 5 mg/dm³, dry solids 0 mg/dm³, ammonium nitrogen 0 mg/dm³, total nitrogen 3.5 mg/dm³, total phosphorus < 0.05 mg/dm³. The last stage of RO might be replaced by an ion exchange process. A negative aspect of membrane filtration systems is that the permeate is about 50% of the introduced volume; the remainder is the retentate. Management of the retentate might involve recirculation to a biogas plant.
Keywords: digestate, membrane filtration, microalgae cultivation, Chlorella sp.
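As a quick sanity check on the growth rates quoted above, the exponential growth model X(t) = X0·e^(μt) converts a specific growth rate μ (d⁻¹) into a doubling time ln 2/μ and lets the media be compared directly. The sketch below uses only the μ values reported in the abstract; the initial biomass X0 is an arbitrary illustrative value, not the study's inoculum.

```python
import math

def doubling_time(mu):
    """Doubling time (days) for exponential growth at rate mu (1/day)."""
    return math.log(2.0) / mu

def biomass(x0, mu, t):
    """Biomass after t days of exponential growth: X(t) = X0 * exp(mu * t)."""
    return x0 * math.exp(mu * t)

# Specific growth rates reported in the abstract (1/day):
rates = {
    ("Chlorella sp.", "synthetic medium"): 0.216,
    ("Chlorella sp.", "UF permeate"): 0.200,
    ("P. tricornutum", "synthetic medium"): 0.128,
    ("P. tricornutum", "UF permeate"): 0.126,
}

for (species, medium), mu in rates.items():
    print(f"{species} on {medium}: mu = {mu} 1/d, "
          f"doubling time = {doubling_time(mu):.1f} d")

# Projected biomass after 10 days from the same (arbitrary) 0.1 g/L inoculum:
x10_syn = biomass(0.1, 0.216, 10)  # Chlorella, synthetic medium
x10_uf = biomass(0.1, 0.200, 10)   # Chlorella, UF permeate
print(f"Chlorella after 10 d: {x10_syn:.2f} vs {x10_uf:.2f} g/L")
```

The near-identical doubling times on synthetic medium and UF permeate are what make the abstract's claim of "similar growth" concrete.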
Procedia PDF Downloads 353
626 Phase Synchronization of Skin Blood Flow Oscillations under Deep Controlled Breathing in Human
Authors: Arina V. Tankanag, Gennady V. Krasnikov, Nikolai K. Chemeris
Abstract:
The development of respiration-dependent oscillations in the peripheral blood flow may occur by at least two mechanisms. The first mechanism is related to the change of venous pressure due to the mechanical activity of the lungs. This phenomenon is known as the ‘respiratory pump’ and is one of the mechanisms of venous return of blood from the peripheral vessels to the heart. The second mechanism is related to the vasomotor reflexes controlled by the respiratory modulation of the activity of the centers of the vegetative nervous system. High phase synchronization of respiration-dependent blood flow oscillations in the skin of the left and right forearms of healthy volunteers at rest was shown previously. The aim of this work was to study the effect of deep controlled breathing on the phase synchronization of skin blood flow oscillations. 29 normotensive non-smoking young women (18-25 years old) of normal constitution, without diagnosed pathologies of the skin, cardiovascular or respiratory systems, participated in the study. For each participant six recording sessions were carried out: the first at the spontaneous breathing rate, and the next five in regimes of controlled breathing with fixed breathing depth and different rates of the enforced breathing regime. The following rates of the controlled breathing regime were used: 0.25, 0.16, 0.10, 0.07 and 0.05 Hz. The breathing depth amounted to 40% of the maximal chest excursion. Blood perfusion was registered by a laser flowmeter LAKK-02 (LAZMA, Russia) with two identical channels (wavelength, 0.63 µm; emission power, 0.5 mW). The first probe was fastened to the palmar surface of the distal phalanx of the left forefinger; the second probe was attached to the external surface of the left forearm near the wrist joint. These skin zones were chosen as zones with different dominant mechanisms of vascular tonus regulation. The degree of phase synchronization of the registered signals was estimated from the value of the wavelet phase coherence.
The duration of each recording was 5 min. The sampling frequency of the signals was 16 Hz. An increase in the synchronization of the respiration-dependent skin blood flow oscillations was obtained for all controlled breathing regimes. Since the formation of respiration-dependent oscillations in the peripheral blood flow is mainly caused by the respiratory modulation of systemic blood pressure, the observed effects are most likely dependent on the breathing depth. It should be noted that with spontaneous breathing the depth does not exceed 15% of the maximal chest excursion, while in the present study the breathing depth was 40%. Therefore, it has been suggested that the observed significant increase in the phase synchronization of blood flow oscillations under our conditions is primarily due to an increase in breathing depth. This is due to the enhancement of both potential mechanisms of respiratory oscillation generation: venous pressure and sympathetic modulation of vascular tone.
Keywords: deep controlled breathing, peripheral blood flow oscillations, phase synchronization, wavelet phase coherence
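The synchronization measure used in this study, wavelet phase coherence, can be approximated at a single frequency of interest by extracting each signal's phase in successive windows and averaging the unit phasors of the phase difference: |⟨e^(iΔφ)⟩| is 1 for a fixed phase relation and near 0 for random phases. The sketch below is a simplified single-frequency stand-in for the full wavelet analysis, using synthetic signals in place of the laser flowmetry recordings; the 16 Hz sampling rate, 5 min duration, and 0.1 Hz rhythm mirror the abstract's settings.

```python
import cmath, math, random

def window_phase(signal, f, fs):
    """Phase of `signal` at frequency f (Hz), sampled at fs (Hz),
    via a single-bin discrete Fourier projection."""
    acc = sum(x * cmath.exp(-2j * math.pi * f * n / fs)
              for n, x in enumerate(signal))
    return cmath.phase(acc)

def phase_coherence(sig1, sig2, f, fs, win):
    """|mean of exp(i*(phi1 - phi2))| over non-overlapping windows of `win` samples."""
    phasors = []
    for start in range(0, min(len(sig1), len(sig2)) - win + 1, win):
        p1 = window_phase(sig1[start:start + win], f, fs)
        p2 = window_phase(sig2[start:start + win], f, fs)
        phasors.append(cmath.exp(1j * (p1 - p2)))
    return abs(sum(phasors) / len(phasors))

fs, f, dur = 16.0, 0.1, 300.0  # 16 Hz sampling, 0.1 Hz rhythm, 5 min record
n = int(fs * dur)
t = [k / fs for k in range(n)]

rng = random.Random(0)
locked1 = [math.sin(2 * math.pi * f * tk) for tk in t]
locked2 = [math.sin(2 * math.pi * f * tk + 0.8) for tk in t]  # fixed phase lag
jittered = [math.sin(2 * math.pi * f * tk + rng.uniform(-math.pi, math.pi))
            for tk in t]  # phase scrambled sample by sample

win = int(fs / f)  # one oscillation period per window
print("locked pair:   ", round(phase_coherence(locked1, locked2, f, fs, win), 3))
print("scrambled pair:", round(phase_coherence(locked1, jittered, f, fs, win), 3))
```

A signal pair with a constant phase lag scores near 1, while a phase-scrambled pair scores low; the wavelet version of the measure applies the same idea continuously across scales.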
Procedia PDF Downloads 213
625 Analytical Solutions of Josephson Junctions Dynamics in a Resonant Cavity for Extended Dicke Model
Authors: S. I. Mukhin, S. Seidov, A. Mukherjee
Abstract:
The Dicke model is a key tool for the description of correlated states of quantum atomic systems excited by resonant photon absorption and subsequently emitting spontaneous coherent radiation in the superradiant state. The Dicke Hamiltonian (DH) is successfully used for the description of the dynamics of a Josephson junction (JJ) array in a resonant cavity under applied current. In this work, we have investigated a generalized model described by a DH with a frustrating interaction term: an infinitely coordinated interaction between all the spin-1/2 entities in the system. We consider an array of N superconducting islands, each divided into two sub-islands by a Josephson junction, taken in the charge qubit / Cooper pair box (CPB) regime. The array is placed inside a resonant cavity. One important aspect of the problem lies in the dynamical nature of the physical observables involved, such as the condensed electric field and the dipole moment; it is important to understand how these quantities behave with time in order to define the quantum phase of the system. The Dicke model without the frustrating term is solved to find the dynamical solutions of the physical observables in analytic form. Using Heisenberg’s dynamical equations for the operators and applying a newly developed rotating Holstein-Primakoff (HP) transformation to the DH, we have arrived at four coupled nonlinear dynamical differential equations for the momentum and spin component operators. It is possible to solve this system analytically using two time scales. The analytical solutions are expressed in terms of Jacobi’s elliptic functions for the metastable ‘bound luminosity’ dynamic state, with periodic coherent beating of the dipoles that connects the two doubly degenerate dipolar-ordered phases discovered previously. In this work, we have proceeded to analyse the extended DH with the frustrating interaction term.
Inclusion of the frustrating term complicates the system of differential equations, which becomes difficult to solve analytically. We have therefore solved the semi-classical dynamic equations using a perturbation technique for small values of the Josephson energy EJ. Because the Hamiltonian possesses parity symmetry, a phase transition can be found if this symmetry is broken. Introducing a spontaneous symmetry breaking term into the DH, we have derived solutions which show the occurrence of a finite condensate, indicating a quantum phase transition. Our results agree with the existing results in this field.
Keywords: Dicke model, nonlinear dynamics, perturbation theory, superconductivity
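For orientation, the standard Dicke Hamiltonian referred to above has the textbook form below; the frustrating term described in the abstract (an infinitely coordinated interaction between all the spins) enters as a collective quadratic spin term. The abstract does not give the authors' exact coefficients, so the second line is a generic schematic under that assumption, not their precise Hamiltonian.

```latex
% Standard Dicke Hamiltonian: N two-level systems (collective spin operators
% S_z, S_x) coupled to a single cavity mode a; coefficients are generic.
H_{D} = \hbar\omega\, a^{\dagger}a + \hbar\omega_{0}\, S_{z}
        + \frac{\lambda}{\sqrt{N}}\,\bigl(a^{\dagger}+a\bigr) S_{x}

% Extended model (schematic): an all-to-all frustrating spin interaction,
% written here as a collective quadratic term with assumed strength J.
H = H_{D} + \frac{J}{N}\, S_{x}^{2}
```

The 1/N scalings keep both the cavity coupling and the all-to-all interaction extensive, which is the usual convention for infinitely coordinated models.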
Procedia PDF Downloads 135
624 Fluctuations in Radical Approaches to State Ownership of the Means of Production Over the Twentieth Century
Authors: Tom Turner
Abstract:
The financial crisis of 2008 and the growing inequality in developed industrial societies would appear to present significant challenges to capitalism and the free market. Yet there have been few substantial mainstream political or economic challenges to the dominant capitalist and market paradigm to date. There is no dearth of critical and theoretical (academic) analyses of the prevailing system’s failures. Yet, to our knowledge, few commentators have advocated the comprehensive socialization or state ownership of the means of production, a core principle of radical Marxism in the 19th and early 20th century. Undoubtedly, the experience of the Soviet Union and its satellite countries in the 20th century has cast a dark shadow over the notion of centrally controlled economies and state ownership of the means of production. In this paper, we explore the history of the doctrine advocating the socialization or state ownership of the means of production that was central to Marxism and socialism generally. Indeed, this doctrine provoked an intense and often acrimonious debate, especially for left-wing parties, throughout the 20th century. The debate within the political economy tradition has historically tended to divide into a radical and a revisionist approach to changing or reforming capitalism. The radical perspective views the conflict of interest between capital and labor as a persistent and insoluble feature of capitalist society and advocates the public or state ownership of the means of production. Alternatively, the revisionist perspective focuses on issues of distribution rather than production and emphasizes the possibility of compromise between capital and labor in capitalist societies. Over the 20th century, the radical perspective has faded, and even the social democratic revisionist tradition has declined in recent years.
We conclude with the major challenges that confront both the radical and revisionist perspectives in the development of viable policy agendas in mature developed democratic societies. Additionally, we consider whether state ownership of the means of production still has relevance in the 21st century and to what extent it is off the agenda as a political issue in the mainstream of developed industrial societies. A central argument of the paper is that state ownership of the means of production is unlikely to feature as either a practical or theoretical solution to the problems of capitalism after the financial crisis among mainstream political parties of the left. Although the focus here is solely on the shifting views of the radical and revisionist socialist perspectives in the western European tradition, the analysis has relevance for the wider socialist movement.
Keywords: state ownership, ownership of the means of production, radicals, revisionists
Procedia PDF Downloads 122
623 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme
Authors: Eleanor Nel
Abstract:
Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. As a result, academic institutions face significant pressure from governing bodies to provide evidence of the return on research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs often outpaces the development of robust, widely accepted tools for additionally measuring research impact at institutions. A publication writing programme for enhancing research output was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publication does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period.
To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from one faculty as a convenience sample. The purpose of the research was to collect information to develop a comprehensive framework for impact evaluation that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve its impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results of publication writing programmes more effectively to institutional strategic objectives to improve research performance and quality, as well as on what should be included in a comprehensive evaluation framework.
Keywords: evaluation, framework, impact, research output
Procedia PDF Downloads 76
622 Case-Based Options Counseling Panel To Supplement An Indiana Medical School’s Pre-Clinical Family Planning and Abortion Education Curriculum
Authors: Alexandra McKinzie, Lucy Brown, Sarah Komanapalli, Sarah Swiezy, Caitlin Bernard
Abstract:
Background: While 25% of US women will seek an abortion before age 45, targeted laws have led to a decline in abortion clinics, leaving 96% of Indiana counties, and the 70% of Hoosier women residing in these counties, without access to services they desperately need [1,2]. Despite the need for a physician workforce that is educated and able to provide full-spectrum reproductive health care, few medical institutions have a standardized family planning and abortion pre-clinical curriculum. Methods: A Qualtrics survey was disseminated to students from Indiana University School of Medicine (IUSM) to evaluate (1) student interest in curriculum reform, (2) self-assessed preparedness to counsel on contraceptive and pregnancy options, and (3) preferred modality of instruction for family planning and abortion topics. Based on the pre-panel survey feedback, a case-based pregnancy options counseling panel will be implemented in the students’ pre-clinical, didactic course Endocrine, Reproductive, Musculoskeletal, Dermatologic Systems (ERMD) in February 2022. A Qualtrics post-panel survey will be disseminated to evaluate students’ perceived efficacy and quality of the panel, as well as their self-assessed preparedness to counsel on pregnancy options. Results: Participants in the pre-panel survey (n=303) were primarily female (61.72%) and White (74.43%). Across all class levels, many (60.80%) students expected to learn about family planning and abortion in their pre-clinical education. While most (84-88%) participants felt prepared to counsel about common, non-controversial pharmacotherapies (e.g., beta-blockers and diuretics), only 20% of students felt prepared to counsel on abortion options. Overall, 85.67% of students believed that IUSM should enhance its reproductive health coverage in pre-clinical, didactic courses. Traditional lectures, panels, and direct clinical exposure were the most popular instructional modalities.
Expected Results: The authors predict that, following the panel, students will indicate improved confidence in providing pregnancy options counseling. Additionally, students will provide constructive feedback on the structure and content of the panel for incorporation into future years’ curricula. Conclusions: IUSM students overwhelmingly expressed interest in expanding their pre-clinical curriculum’s coverage of family planning and abortion topics. To improve students’ self-assessed preparedness to provide pregnancy options counseling and address students’ self-cited learning gaps, a case-based provider panel session will be implemented in response to students’ preferred modality feedback.
Keywords: options counseling, family planning, abortion, curriculum reform, case-based panel
Procedia PDF Downloads 146
621 Diminishing Constitutional Hyper-Rigidity by Means of Digital Technologies: A Case Study on E-Consultations in Canada
Authors: Amy Buckley
Abstract:
The purpose of this article is to assess the problem of constitutional hyper-rigidity to consider how it and the associated tensions with democratic constitutionalism can be diminished by means of using digital democratic technologies. In other words, this article examines how digital technologies can assist us in ensuring fidelity to the will of the constituent power without paying the price of hyper-rigidity. In doing so, it is impossible to ignore that digital strategies can also harm democracy through, for example, manipulation, hacking, ‘fake news,’ and the like. This article considers the tension between constitutional hyper-rigidity and democratic constitutionalism and the relevant strengths and weaknesses of digital democratic strategies before undertaking a case study on Canadian e-consultations and drawing its conclusions. This article observes democratic constitutionalism through the lens of the theory of deliberative democracy to suggest that the application of digital strategies can, notwithstanding their pitfalls, improve a constituency’s amendment culture and, thus, diminish constitutional hyper-rigidity. Constitutional hyper-rigidity is not a new or underexplored concept. At a high level, a constitution can be said to be ‘hyper-rigid’ when its formal amendment procedure is so difficult to enact that it does not take place or is limited in its application. This article claims that hyper-rigidity is one problem with ordinary constitutionalism that fails to satisfy the principled requirements of democratic constitutionalism. Given the rise and development of technology that has taken place since the Digital Revolution, there has been a significant expansion in the possibility for digital democratic strategies to overcome the democratic constitutionalism failures resulting from constitutional hyper-rigidity. 
Typically, these strategies have included, inter alia, e-consultations, e-voting systems, and online polling forums, all of which significantly improve the ability of politicians and judges to directly obtain the opinion of constituents on any number of matters. This article expands on the application of these strategies through its Canadian e-consultation case study and presents them as a solution to poor amendment culture and, consequently, constitutional hyper-rigidity. Hyper-rigidity is a common descriptor of many written and unwritten constitutions, including the United States, Australian, and Canadian constitutions, as just some examples. This article undertakes a case study on Canada in particular, as it is a jurisdiction less commonly cited in the academic literature concerned with hyper-rigidity, and because Canada has, to some extent, championed the use of e-consultations. In Part I of this article, I identify the problem: the consequence of constitutional hyper-rigidity is in tension with the principles of democratic constitutionalism. In Part II, I identify and explore a potential solution, the implementation of digital democratic strategies as a means of reducing constitutional hyper-rigidity. In Part III, I explore Canada’s e-consultations as a case study for assessing whether digital democratic strategies do, in fact, improve a constituency’s amendment culture, thus reducing constitutional hyper-rigidity and the associated tension with the principles of democratic constitutionalism. The idea is to run a case study and then assess whether its conclusions can be generalised.
Keywords: constitutional hyper-rigidity, digital democracy, deliberative democracy, democratic constitutionalism
Procedia PDF Downloads 79
620 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data
Authors: Sara Bonetti
Abstract:
The use of quantitative data to support policies and justify investments has become imperative in many fields including the field of education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy that use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed at analyzing these experiences in accessing, using, and producing quantitative data. This study utilized semi-structured interviews to capture the differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department to allow for a broader perspective at systems level. The study followed Núñez’s multilevel model of intersectionality. The key in Núñez’s model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of different influences at individual, organizational, and system levels that might intersect and affect ECE professionals’ experiences with quantitative data. At the individual level, an important element identified was the participants’ educational background, as it was possible to observe a relationship between that and their positionality, both with respect to working with data and also with respect to their power within an organization and at the policy table. 
For example, those with a background in child development were aware of how their formal education had failed to train them in the skills necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants’ position within the organization, their organization’s position with respect to others, and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as in shaping what data is collected and made available. These differences were reflected in the interviewees’ perceptions of, and expectations for, the ECE workforce. For example, one interviewee pointed out that many ECE professionals happen to use data out of the necessity of the moment. This lack of intentionality both causes and translates into missed training opportunities. Another interviewee pointed to issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students’ training in working with data. In conclusion, Núñez’s model helped in understanding the different elements that affect ECE professionals’ experiences with quantitative data. In particular, it became clear that these professionals are not being provided with the necessary support and that data literacy skills are not being intentionally created for them, despite what is asked of them in their work.
Keywords: data literacy, early childhood professionals, intersectionality, quantitative data
Procedia PDF Downloads 254
619 Moral Rights: Judicial Evidence Insufficiency in the Determination of the Truth and Reasoning in Brazilian Morally Charged Cases
Authors: Rainner Roweder
Abstract:
Theme: The present paper aims to analyze the specificity of judicial evidence linked to the subjects of dignity and personality rights, otherwise known as moral rights, in the determination of the truth and the formation of judicial reasoning in cases concerning these areas. This research is about the way courts in Brazilian domestic law search for truth and handle evidence in cases involving moral rights, which are abundant and important in Brazil. The main object of the paper is to analyze the effectiveness of evidence in the formation of judicial conviction in matters related to morally controverted rights, based on the Brazilian and, as a comparison, the Latin American legal systems. In short, the rights of dignity and personality are moral. However, the evidential legal system expects a rational demonstration of moral rights that generates judicial conviction or persuasion. Morality, in turn, tends to be difficult or impossible to demonstrate in court, generating the problem considered in this paper, that is, the study of the problem of demonstrating morality as proof in court. In this sense, the more closely a right is linked to morality, the more difficult it is to demonstrate in court, expanding the field of judicial discretion and generating legal uncertainty. More specifically, the new personality rights, such as gender and its possibility of alteration, further amplify the problem, being an essentially intimate matter that does not exist in the objective, rational evidential system, as normally occurs in other categories, such as contracts. Therefore, evidencing this legal category in court, with the level of security required by the law, is a herculean task. It becomes virtually impossible to use the same evidentiary system when judging the rights researched here; this generates the need for a new design of the evidential task regarding the rights of the personality, a central effort of the present paper.
Methodology: The inductive method was used in the investigation phase, together with the comparative law method; the inductive method was also used in the data-treatment phase. Doctrinal, legislative, and jurisprudential comparison was the research technique employed. Results: In addition to the peculiar characteristics of personality rights that are not found in other rights, some of them are essentially linked to morality and are not objectively verifiable by design, and it is necessary to use specific argumentative theories, with interdisciplinary support, for their secure confirmation. The traditional pragmatic theory of proof, having an obviously objective character, aggravates decisionism and generates legal insecurity when applied to rights linked to morality; it must therefore be reconstructed for morally charged cases, possibly using the "predictive theory" (and predictive facts) through algorithms in data collection and treatment.
Keywords: moral rights, proof, pragmatic proof theory, insufficiency, Brazil
Procedia PDF Downloads 110

618 Recognition by the Voice and Speech Features of the Emotional State of Children by Adults and Automatically
Authors: Elena E. Lyakso, Olga V. Frolova, Yuri N. Matveev, Aleksey S. Grigorev, Alexander S. Nikolaev, Viktor A. Gorodnyi
Abstract:
The study of children's emotional sphere depending on age and psychoneurological state is of great importance for the design of educational programs for children and for their social adaptation. Atypical development may be accompanied by violations or specificities of the emotional sphere. To study how the emotional state is reflected in children's voice and speech features, a perceptual study with adult participants and automatic recognition from speech were conducted. Speech of children with typical development (TD), with Down syndrome (DS), and with autism spectrum disorders (ASD), aged 6-12 years, was recorded. To obtain emotional speech, model situations were created, including a dialogue between the child and the experimenter containing questions that can cause various emotional states in the child, and playing with a standard set of toys. The questions and toys were selected taking into account the child's age, developmental characteristics, and speech skills. For the perceptual experiment, test sequences containing speech material of 30 children (TD, DS, and ASD) were created. The listeners were 100 adults (age 19.3 ± 2.3 years), tasked with determining the children's emotional state as "comfort – neutral – discomfort" while listening to the test material. Spectrographic analysis of the speech signals was conducted. For automatic recognition of the emotional state, 6594 speech files containing children's speech material were prepared. Automatic recognition of the three states, "comfort – neutral – discomfort," was performed using acoustic features extracted automatically with the Geneva Minimalistic Acoustic Parameter Set (GeMAPS) and the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS). The results showed that the emotional state is determined worst from the speech of TD children (comfort – 58% of correct answers, discomfort – 56%).
Listeners recognized discomfort better in children with ASD and DS (78% of answers) than comfort (70% and 67%, respectively, for children with DS and ASD). The neutral state is recognized better from the speech of children with ASD (67%) than from the speech of children with DS (52%) and TD children (54%). According to the automatic recognition data using the acoustic feature set GeMAPSv01b, the accuracy of automatic recognition of emotional states is 0.687 for children with ASD, 0.725 for children with DS, and 0.641 for TD children. With the acoustic feature set eGeMAPSv01b, the corresponding accuracies are 0.671 (ASD), 0.717 (DS), and 0.631 (TD). Different models showed similar results, with better recognition of emotional states from the speech of children with DS than from the speech of children with ASD. The state of comfort is determined automatically best from the speech of TD children (precision – 0.546) and children with ASD (0.523), and discomfort from the speech of children with DS (0.504). The data on the specificities of adults' recognition of children's emotional states from their speech may be used in recruiting staff to work with children with atypical development. The automatic recognition data can be used to create alternative communication systems and automatic human-computer interfaces for social-emotional learning.
Acknowledgment: This work was financially supported by the Russian Science Foundation (project 18-18-00063).
Keywords: autism spectrum disorders, automatic recognition of speech, child's emotional speech, Down syndrome, perceptual experiment
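The accuracy and precision figures reported above can be illustrated with a toy calculation over a three-state confusion matrix; the counts below are hypothetical, not the study's data, and serve only to show how the two metrics are computed:

```python
# Toy illustration of the accuracy/precision metrics reported above.
# The confusion-matrix counts are hypothetical, not the study's data.

def accuracy(confusion):
    """Overall accuracy: correct predictions / all predictions."""
    total = sum(sum(row.values()) for row in confusion.values())
    correct = sum(confusion[state][state] for state in confusion)
    return correct / total

def precision(confusion, state):
    """Precision for one state: true positives / all predicted as that state."""
    predicted = sum(confusion[actual][state] for actual in confusion)
    return confusion[state][state] / predicted if predicted else 0.0

# rows = actual state, columns = predicted state (hypothetical counts)
confusion = {
    "comfort":    {"comfort": 55, "neutral": 30, "discomfort": 15},
    "neutral":    {"comfort": 20, "neutral": 60, "discomfort": 20},
    "discomfort": {"comfort": 10, "neutral": 25, "discomfort": 65},
}

print(round(accuracy(confusion), 3))              # overall accuracy
print(round(precision(confusion, "comfort"), 3))  # precision for "comfort"
```

Accuracy is a single number over all three states, while precision is computed per state, which is why the abstract reports them separately.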
Procedia PDF Downloads 190

617 The Ephemeral Re-Use of Cultural Heritage: The Incorporation of the Festival Phenomenon Within Monuments and Archaeological Sites in Lebanon
Authors: Joe Kallas
Abstract:
It is now widely accepted that the preservation of cultural heritage must go beyond simple restoration and renovation. While some historic monuments have been preserved for millennia, many others, less important or simply neglected for lack of money, have disappeared. As a result, the adaptation of monuments and archaeological sites to new functions allows them to 'survive'. Temporary activities, or 'ephemeral' re-use, are increasingly recognized as a means of vitalizing deprived areas and enhancing historic sites that have become obsolete. They have the potential to increase economic and cultural value while making the best use of existing resources. However, there are often conservation and preservation issues related to this type of re-use, which can threaten the integrity and authenticity of archaeological sites and monuments if not properly managed. This paper aims at a better understanding of the ephemeral re-use of heritage, and more specifically of the incorporation of the festival phenomenon within the monuments and archaeological sites of Lebanon, a topic that has not yet been sufficiently studied. It seeks to determine the elements that compose this phenomenon, to analyze it, and to trace its good practices by comparing international case studies with important national cases: the International Festival of Baalbek, the International Festival of Byblos, and the International Festival of Beiteddine. Various factors have been studied and analyzed in order to address the central questions of this paper: 'How can we preserve the integrity of sites and monuments after the integration of an ephemeral function? And what preventive conservation measures should be taken when holding festivals in archaeological sites with fragile structures?'
The impacts of the technical problems were first analyzed using various data, in particular the effects of mass tourism, the integration of temporary installations, sound vibrations, the effects of unstudied lighting, and the mystification of heritage. Unfortunately, the DGA (General Directorate of Antiquities in Lebanon) does not specify any frequency limit for the sound vibrations emitted by loudspeakers during music festivals. In addition, it imposes no requirements on the installation of lighting systems in historic monuments, and no monitoring is done in situ, due to a lack of awareness of the impact such interventions can generate and a lack of the materials and tools needed for monitoring. The study and analysis of the data mentioned above led to the main objective of this paper: the establishment of a list of recommendations defining preventive conservation measures to be taken when holding festivals within the cultural heritage sites of Lebanon. We strongly hope that this paper will serve as an awareness document, prompting consideration of several previously neglected factors in order to improve conservation practices in archaeological sites and monuments hosting festivals.
Keywords: archaeology, authenticity, conservation, cultural heritage, festival, historic sites, integrity, monuments, tourism
Procedia PDF Downloads 119

616 Investigating Sediment-Bound Chemical Transport in an Eastern Mediterranean Perennial Stream to Identify Priority Pollution Sources on a Catchment Scale
Authors: Felicia Orah Rein Moshe
Abstract:
Soil erosion has become a priority global concern, impairing water quality and degrading ecosystem services. In Mediterranean climates, following a long dry period, the onset of rain occurs when agricultural soils are often bare and most vulnerable to erosion. Early storms transport sediments and sediment-bound pollutants into streams, along with dissolved chemicals. This results in loss of valuable topsoil, water quality degradation, and potentially expensive dredged-material disposal costs. Information on the provenance of fine sediment and priority sources of adsorbed pollutants represents a critical need for developing effective control strategies aimed at source reduction. Modifying sediment traps designed for marine systems, this study tested a cost-effective method to collect suspended sediments on a catchment scale to characterize stream water quality during first-flush storm events in a flashy Eastern Mediterranean coastal perennial stream. This study investigated the Kishon Basin, deploying sediment traps in 23 locations, including 4 in the mainstream and one downstream in each of 19 tributaries, enabling the characterization of sediment as a vehicle for transporting chemicals. Further, it enabled direct comparison of sediment-bound pollutants transported during the first-flush winter storms of 2020 from each of 19 tributaries, allowing subsequent ecotoxicity ranking. Sediment samples were successfully captured in 22 locations. Pesticides, pharmaceuticals, nutrients, and metal concentrations were quantified, identifying a total of 50 pesticides, 15 pharmaceuticals, and 22 metals, with 16 pesticides and 3 pharmaceuticals found in all 23 locations, demonstrating the importance of this transport pathway. Heavy metals were detected in only one tributary, identifying an important watershed pollution source with immediate potential influence on long-term dredging costs. 
Simultaneous sediment sampling during first-flush storms enabled clear identification of priority tributaries and their chemical contributions, supporting a new national watershed monitoring approach, facilitating strategic plans based on source reduction, and advancing the goals of improving the farm-stream interface, conserving soil resources, and protecting water quality.
Keywords: adsorbed pollution, dredged material, heavy metals, suspended sediment, water quality monitoring
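The ecotoxicity ranking of tributaries described above amounts to ordering sites by their detected pollutant load; a minimal sketch of that idea, with hypothetical tributary names and detection counts (not the study's data):

```python
# Sketch of the tributary ranking described above: order sites by the number
# of pollutants detected in their trapped sediment.
# Tributary names and counts are hypothetical, for illustration only.

detections = {
    "Tributary A": {"pesticides": 16, "pharmaceuticals": 3, "metals": 0},
    "Tributary B": {"pesticides": 22, "pharmaceuticals": 5, "metals": 7},
    "Tributary C": {"pesticides": 12, "pharmaceuticals": 2, "metals": 0},
}

def total_detected(site):
    """Total number of distinct pollutants detected at one site."""
    return sum(detections[site].values())

ranked = sorted(detections, key=total_detected, reverse=True)
print(ranked)  # most-contaminated tributary first
```

In practice a ranking would weight compounds by toxicity rather than count them equally; the sketch only shows the prioritization step.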
Procedia PDF Downloads 109

615 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are a critical part of loss-estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed in several previous earthquakes, and comprehensive reports exist on these events. One of the main causes of damage to buried pipelines during earthquakes is liquefaction, whose necessary conditions are loose sandy soil, a saturated soil layer, and sufficient earthquake intensity. Because pipelines differ structurally from other structures (being long and of light mass), comparison with other structures in past earthquakes shows that the liquefaction risk for buried pipelines is not high unless governing parameters such as earthquake intensity and soil looseness are severe. Recent liquefaction research on buried pipelines includes experimental and theoretical studies as well as damage investigations from actual earthquakes. These investigations have revealed that the pipeline damage ratio (number of failures per km) is much larger in liquefied ground than in shaken ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage to joints and to pipelines connected to manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under liquefaction, using the 2013 Dashti (Iran) earthquake as a case study. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and water transmission pipelines were severely damaged due to liquefaction. The model consists of a polyethylene pipeline, 100 meters long and 0.8 meters in diameter, covered by light sandy soil at a burial depth of 2.5 meters below the surface.
Since the finite element method has been used relatively successfully to solve geotechnical problems, it was adopted for the numerical analysis. Evaluating this case requires geotechnical data, a classification of earthquake levels, determination of the parameters governing liquefaction probability, and three-dimensional finite element modeling of the interaction between soil and pipelines. The results indicate that the effect of liquefaction on buried pipelines is a function of pipe diameter, soil type, and peak ground acceleration, with a clear increase in the percentage of damage as liquefaction severity increases. Although in this form of analysis the damage is always associated with a certain pipe material, the nominally defined "failures" are dominated by failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, retrofit suggestions are given to decrease the liquefaction risk for buried pipelines.
Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
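The damage ratio cited above (number of failures per km) is a simple rate; a minimal sketch with hypothetical repair counts and pipeline lengths, illustrating how liquefied and non-liquefied segments might be compared:

```python
# The damage ratio used in pipeline loss estimation is failures per km of
# pipeline. The inventory numbers below are hypothetical, for illustration.

def damage_ratio(n_failures, length_km):
    """Pipeline damage ratio in failures per kilometre."""
    return n_failures / length_km

# Hypothetical post-earthquake inventory: liquefied vs non-liquefied ground
liquefied = damage_ratio(n_failures=24, length_km=8.0)      # 3.0 per km
non_liquefied = damage_ratio(n_failures=6, length_km=12.0)  # 0.5 per km

print(liquefied / non_liquefied)  # → 6.0, the kind of disparity reported
```

Real correlations condition this rate on pipe material, diameter, and peak ground acceleration, which is exactly the dependence the study's results describe.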
Procedia PDF Downloads 513

614 Use of Locomotor Activity of Rainbow Trout Juveniles in Identifying Sublethal Concentrations of Landfill Leachate
Authors: Tomas Makaras, Gintaras Svecevičius
Abstract:
Landfill waste is a common problem, with economic and environmental impacts even after a landfill is closed. It contains a high density of persistent compounds such as heavy metals and organic and inorganic materials. As persistent compounds are slowly degradable or even non-degradable in the environment, they often produce sublethal or even lethal effects on aquatic organisms. The aims of the present study were to estimate the sublethal effects of the Kairiai landfill (WGS: 55°55‘46.74“, 23°23‘28.4“) leachate on the locomotor activity of rainbow trout Oncorhynchus mykiss juveniles, using the original system package developed in our laboratory for automated monitoring, recording, and analysis of aquatic organisms' activity, and to determine patterns of the fish's behavioral response to sublethal levels of leachate. Four leachate concentrations were chosen: 0.125, 0.25, 0.5, and 1.0 mL/L (0.0025, 0.005, 0.01, and 0.02 of the 96-hour LC50, respectively). Locomotor activity was measured after 5, 10, and 30 minutes of exposure during 1-minute test periods for each fish (7 fish per treatment). The threshold effect concentration amounted to 0.18 mL/L (0.0036 of the 96-hour LC50), which is 2.8-fold lower than the concentration generally assumed to be "safe" for fish. At higher concentrations, the landfill leachate solution elicited a behavioral response of the test fish to sublethal levels of pollutants: the rainbow trout's ability to detect and avoid contaminants appeared after 5 minutes of exposure, the intensity of locomotor activity peaked within 10 minutes, and it evidently decreased after 30 minutes. This could be explained by the physiological and biochemical adaptation of fish to altered environmental conditions. It was established that the locomotor activity of juvenile trout depends on leachate concentration and exposure duration.
Modeling of these parameters showed that the activity of juveniles increased at higher leachate concentrations but slightly decreased with increasing exposure duration. The experimental results confirm that the behavior of rainbow trout juveniles is a sensitive and rapid biomarker that can be used, in combination with the system for monitoring, registering, and analyzing fish behavior, to determine sublethal concentrations of pollutants in ambient water. Further research should focus on software improvements to include more parameters of aquatic organisms' behavior and to investigate the most rapid and appropriate behavioral responses in different species. In practice, this study could form the basis for the development of biological early-warning systems (BEWS).
Keywords: fish behavior biomarker, landfill leachate, locomotor activity, rainbow trout juveniles, sublethal effects
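The exposure levels above are expressed as fractions of the 96-hour LC50; taking the stated correspondence (0.125 mL/L = 0.0025 of LC50) at face value implies an LC50 of 50 mL/L, and the remaining fractions and the threshold comparison follow directly:

```python
# Check of the LC50 fractions and the threshold-effect margin quoted above.
# The LC50 of 50 mL/L is implied by 0.125 mL/L = 0.0025 * LC50.

LC50 = 50.0  # mL/L (implied, not stated directly in the abstract)

concentrations = [0.125, 0.25, 0.5, 1.0]  # mL/L
fractions = [c / LC50 for c in concentrations]
print(fractions)  # [0.0025, 0.005, 0.01, 0.02]

# Threshold-effect concentration and its margin below the commonly assumed
# "safe" level of 0.01 LC50:
threshold = 0.18 / LC50            # 0.0036 of LC50
print(round(0.01 / threshold, 1))  # ~2.8-fold lower than the "safe" level
```

Note the highest test concentration works out to 0.02 of LC50, consistent with the doubling pattern of the series.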
Procedia PDF Downloads 273

613 Comparative Performance of Retting Methods on Quality Jute Fibre Production and Water Pollution for Environmental Safety
Authors: A. K. M. Zakir Hossain, Faruk-Ul Islam, Muhammad Alamgir Chowdhury, Kazi Morshed Alam, Md. Rashidul Islam, Muhammad Humayun Kabir, Noshin Ara Tunazzina, Taufiqur Rahman, Md. Ashik Mia, Ashaduzzaman Sagar
Abstract:
The jute retting process is one of the key factors in producing excellent jute fibre as well as in maintaining water quality. The traditional method of jute retting is time-consuming and hampers fish cultivation by polluting the water body. Therefore, a low-cost, time-saving, environment-friendly, and improved retting technique is essential to overcome this problem. The study thus compared the extent of water pollution and the fibre quality of two retting systems, i.e., the traditional retting practice versus the improved retting method (macha retting), by assessing physico-chemical and microbiological properties of the water and fibre quality parameters. Water samples were collected from the top and bottom of the retting place at the early, mid, and final stages of retting in four districts of Bangladesh, viz., Gaibandha, Kurigram, Lalmonirhat, and Rangpur. The physico-chemical parameters measured were pH, dissolved oxygen (DO), conductivity (CD), total dissolved solids (TDS), hardness, calcium, magnesium, carbonate, bicarbonate, chloride, phosphorus, and sulphur content. Irrespective of location, the DO of the final-stage retting water samples was very low compared to the mid and early stages, and the DO under the traditional jute retting method was significantly lower than under the improved macha method. The pH of the water samples was slightly more acidic in the traditional retting method than in the improved macha method. The other physico-chemical parameters were higher in the traditional method than in the improved macha retting at all stages of retting. Bacterial species were isolated from the collected water samples following the dilution plate technique. The microbiological results revealed that water samples from the improved macha method contained more of the bacterial species thought to be involved in jute retting than water samples from the traditional retting method.
The bacterial species were then identified by sequencing of the 16S rDNA; most belong to the genera Pseudomonas, Bacillus, Pectobacterium, and Stenotrophomonas. In addition, the tensile strength of the jute fibre was tested, and the results revealed that the improved macha method gave higher mechanical strength than the traditional method in most locations. The overall results indicate that both water and fibre quality were better under the improved macha retting method than under the traditional method. Therefore, the time-saving and cost-effective improved macha retting method can be widely adopted for the jute retting process to obtain quality jute fibre and to keep the environment clean and safe.
Keywords: jute retting methods, physico-chemical parameters, retting microbes, tensile strength, water quality
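The "significantly lower" DO comparison above implies a two-sample test. As one possible sketch (the abstract does not state which test was used), Welch's t-statistic on hypothetical DO readings:

```python
# Sketch of a two-sample comparison like the DO contrast described above.
# The readings are hypothetical final-stage DO values (mg/L), not study data,
# and the test choice (Welch's t) is an assumption for illustration.
import math

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

traditional = [1.2, 0.9, 1.1, 0.8, 1.0]  # hypothetical DO under traditional retting
macha = [2.4, 2.1, 2.6, 2.2, 2.5]        # hypothetical DO under macha retting

t = welch_t(traditional, macha)
print(t < -2.0)  # a large negative t: traditional DO well below macha DO
```

A full analysis would convert t to a p-value using the Welch-Satterthwaite degrees of freedom; the sketch stops at the statistic itself.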
Procedia PDF Downloads 158

612 A Study on Unplanned Settlement in Kabul City
Authors: Samir Ranjbar, Nasrullah Istanekzai
Abstract:
According to a report published in The Guardian, Kabul, the capital city of Afghanistan, is the fifth fastest-growing city in the world; its population has increased fourfold since 2001, from 1.2 million to 4.8 million people. The main reason for this increase is the return of Afghans who migrated during the civil war. In addition, steep economic growth driven by foreign assistance over the last decade created many job opportunities in Kabul and attracted individuals from the neighboring provinces as well. However, the development of urban facilities such as water supply, housing, transportation, and waste management systems has yet to catch up with this rapid increase in population. Kabul has developed traditionally, and municipal governance has had very limited capacity to implement municipal bylaws. As an unwanted consequence of this growth, about 70% of Kabul's citizens live in informal settlements, meaning around three million people in informally settled areas lacking the vital social and physical infrastructure of livelihood. This research focuses on a region of 30 ha with 2,100 residents in the center of Kabul city, for which a comprehensive land readjustment concept plan has been formulated. Through this concept plan, the physical and social infrastructure has been demonstrated and analyzed. The findings of this paper propose a solution for the problems of this unplanned area in Kabul: readjusting the unplanned area through a self-supporting process. This process does not need a governmental budget and can be applied by the government, the private sector, and landowner associations.
Furthermore, by implementing the land readjustment process, conceptual plans can be built for unplanned areas; maximum facilities can be brought to residents' urban life; the environment can be improved for users' benefit; a culture and sense of cooperation, participation, and coexistence can be promoted; the transport system can be improved; and economic status rises (the value of land increases due to infrastructure availability and land legalization). In addition to all these public benefits, government revenue can be raised by collecting taxes from landowners. The process has been implemented in many countries of the world, first in Germany and later in most cities of Japan, and is known as one of the effective processes for infrastructural development. To sum up, the notable characteristic of the land readjustment process is that it works on the concept of mutual interest, from which both landowners and the government benefit. In this process, however, community engagement is very important; without public cooperation, the process can fail.
Keywords: land readjustment, informal settlement, Kabul, Afghanistan
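The mutual-interest arithmetic of land readjustment described above (owners cede part of their land for infrastructure, and the serviced remainder gains value) can be illustrated with hypothetical figures; the contribution ratio and land prices below are assumptions for illustration, not data from the Kabul study:

```python
# Minimal sketch of land-readjustment arithmetic: an owner contributes a
# share of land for roads and public facilities, and the smaller serviced
# plot is worth more than the original unserviced one.
# Contribution ratio and prices are hypothetical.

def readjusted_plot(original_m2, contribution_ratio):
    """Plot area retained after contributing land for infrastructure."""
    return original_m2 * (1.0 - contribution_ratio)

def plot_value(area_m2, price_per_m2):
    """Market value of a plot at a given unit price."""
    return area_m2 * price_per_m2

original = 500.0  # m2 owned before readjustment
retained = readjusted_plot(original, contribution_ratio=0.30)  # keeps 70%

before = plot_value(original, price_per_m2=100.0)  # unserviced land
after = plot_value(retained, price_per_m2=180.0)   # serviced, legalized land
print(after > before)  # smaller plot, higher value: the mutual interest
```

This is why the process needs no government budget: the infrastructure land comes from the owners, who are compensated by the uplift in the value of what they keep.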
Procedia PDF Downloads 254

611 Clastic Sequence Stratigraphy of Late Jurassic to Early Cretaceous Formations of Jaisalmer Basin, Rajasthan
Authors: Himanshu Kumar Gupta
Abstract:
The Jaisalmer Basin is part of the Rajasthan basin in northwestern India. The presence of five major unconformities/hiatuses of varying span, i.e., at the tops of the Archean basement, the Cambrian, the Jurassic, the Cretaceous, and the Eocene, has created the foundation for constructing a sequence stratigraphic framework. Based on basin-formative tectonic events and their impact on sedimentation processes, three first-order sequences have been identified in the Rajasthan Basin: a Proterozoic to Early Cambrian rift sequence, a Permian to Middle-Late Eocene shelf sequence, and a Pleistocene to Recent sequence related to the Himalayan orogeny. The Permian to Middle Eocene first-order sequence is further subdivided into three second-order sequences: Permian to Late Jurassic, Early to Late Cretaceous, and Paleocene to Middle-Late Eocene. In this study, the Late Jurassic to Early Cretaceous sequence was identified, and log-based interpretation of smaller-order T-R cycles was carried out along a log profile from the eastern margin to the western margin (up to the Shahgarh depression). The depositional environments penetrated by the wells, interpreted from log signatures, gave three major facies associations: the blocky and coarsening-upward (funnel-shaped), the blocky and fining-upward (bell-shaped), and the erratic (zig-zag) facies, representing distributary mouth bar, distributary channel, and marine mud facies, respectively. The Late Jurassic (Baisakhi-Bhadasar) and Early Cretaceous (Pariwar) formations show fewer T-R cycles at shallower bathymetry and more T-R cycles at deeper bathymetry: the shallowest well has 3 T-R cycles in the Baisakhi-Bhadasar and 2 in the Pariwar, whereas a deeper well has 4 T-R cycles in the Baisakhi-Bhadasar and 8 in the Pariwar Formation. The maximum flooding surfaces observed from the stratigraphic analysis indicate major shale breaks (high shale content).
The study area is dominated by alternating shale and sand lithologies in an approximate ratio of 70:30. A seismo-geological cross section was prepared to understand the stratigraphic thickness variation and the structural disposition of the strata. The formations are quite thick in the west, thinning toward the east, and the folded and faulted strata indicate compressional tectonics followed by extensional tectonics. Our interpretation, supported by seismic data down to the second-order sequence level, indicates that the Late Jurassic sequence (Baisakhi-Bhadasar formations) is a highstand systems tract and the Early Cretaceous sequence (Pariwar Formation) is a regressive to lowstand systems tract.
Keywords: Jaisalmer Basin, sequence stratigraphy, system tract, T-R cycle
Procedia PDF Downloads 137

610 Road Systems as Environmental Barriers: An Overview of Roadways in Their Function as Fences for Wildlife Movement
Authors: Rachael Bentley, Callahan Gergen, Brodie Thiede
Abstract:
Roadways have a significant impact on the environment insofar as they function as barriers to wildlife movement, both through road mortality and through resultant road avoidance. Roads have an immense presence worldwide, and it is predicted to increase substantially in the next thirty years. As roadways become even more common, it is important to consider their environmental impact and to mitigate their negative effects on wildlife and wildlife mobility. A thorough analysis of several related studies yields a common conclusion: roads cause habitat fragmentation, which can lead split populations to evolve differently, for better or for worse. Though some populations adapted positively to roadways, becoming more resistant to road mortality and more tolerant of noise and chemical contamination, many others experienced maladaptation, either due to chemical contamination in and around their environment or because of genetic problems from inbreeding when their population was fragmented too severely to support a group large enough for healthy genetic exchange. Large mammals were especially susceptible to maladaptation from inbreeding, as they require larger areas to roam and therefore even more space to sustain a healthy population. Regardless of whether a species evolved positively or negatively as a result of its proximity to a road, animals tended to avoid roads, making the loss of genetic diversity caused by habitat fragmentation an exceedingly prevalent issue in the larger discussion of road ecology. Additionally, the consideration of solutions, such as overpasses and underpasses, is crucial to ensuring the long-term survival of many wildlife populations.
In studies addressing the effectiveness of overpasses and underpasses, animals seemed to adjust well to these solutions, but strategic placement, proper sizing, proper height, shelter from road noise, and other considerations were important in their construction. When an underpass or overpass was well built and well shielded from human activity, animals' usage of the structure increased significantly throughout its first five years, reconnecting previously divided populations. Still, these structures are costly, and they are often unable to fully address issues such as light, noise, and contaminants from vehicles; the need for further discussion of new, creative solutions therefore remains paramount. Roads are one of the most consistent and prominent features of today's landscape, but their environmental impacts are largely overlooked. While roads are useful for connecting people, they divide landscapes and animal habitats. Further research and investment in possible solutions is therefore necessary to mitigate the negative effects of roads on wildlife mobility and to prevent issues arising from habitat fragmentation.
Keywords: fences, habitat fragmentation, roadways, wildlife mobility
Procedia PDF Downloads 181