Search results for: power structure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13152

1032 An Analysis of LoRa Networks for Rainforest Monitoring

Authors: Rafael Castilho Carvalho, Edjair de Souza Mota

Abstract:

As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of the world's flora. Its recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional, low-cost monitoring alternatives to reduce these impacts are a priority, for example those based on Low Power Wide Area Networks (LPWAN). Promising, reliable, secure, and energy-efficient, LPWANs can connect thousands of IoT devices, and LoRa in particular is considered one of the most successful solutions for forest monitoring applications. Despite this, the forest environment, and the Amazon Rainforest in particular, is challenging for these technologies, requiring work to identify and validate their use in a real environment. To investigate the feasibility of deploying LPWAN for remote water quality monitoring of rivers in the Amazon Region, a LoRa-based test bed consisting of a LoRa transmitter and a LoRa receiver was set up; both were implemented with Arduino boards and the SX1276 LoRa chip. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to a gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern, since in the real deployment the device must run without maintenance for long periods of time.
With these constraints in mind, parameters such as the Spreading Factor (SF) and Coding Rate (CR), as well as different antenna heights and distances, were tuned to improve link quality, measured in terms of RSSI and packet loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. At distances exceeding 200 m, communication quickly proved difficult to establish due to the dense foliage and high humidity. The best SF-CR combinations were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17% respectively, at a signal strength of approximately -120 dBm; these are the best settings found in this study so far. Rain and climate variations imposed limitations on the equipment, and further tests are already being conducted. Subsequently, the range of the LoRa configuration must be extended using a mesh topology, especially because at least three collection points in the same water body are required.
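
The reported tuning can be reproduced in spirit with a small script that aggregates per-setting reception logs into a loss rate and mean RSSI. The record format and the numbers below are illustrative assumptions, not the authors' actual data:

```python
# Sketch: summarising LoRa link-quality logs per (SF, CR) setting.
# The log records are hypothetical; field names are assumptions,
# not the authors' data format.
from collections import defaultdict

def summarise(records, sent_per_setting):
    """Return {(sf, cr): (loss_rate_pct, mean_rssi_dbm)} from received packets."""
    grouped = defaultdict(list)
    for rec in records:
        grouped[(rec["sf"], rec["cr"])].append(rec["rssi"])
    out = {}
    for setting, rssis in grouped.items():
        loss = 100.0 * (1 - len(rssis) / sent_per_setting)
        out[setting] = (loss, sum(rssis) / len(rssis))
    return out

# Toy example: 20 packets sent per setting; SF8/CR5 receives 19, SF9/CR5 receives 17.
logs = [{"sf": 8, "cr": 5, "rssi": -119} for _ in range(19)] + \
       [{"sf": 9, "cr": 5, "rssi": -121} for _ in range(17)]
stats = summarise(logs, sent_per_setting=20)
loss_85 = stats[(8, 5)][0]   # packet loss for SF8/CR5, about 5 %
```

A real deployment would feed this from the gateway's reception log rather than hard-coded records.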

Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest

Procedia PDF Downloads 66
1031 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots

Authors: Mrinalini Ranjan, Sudheesh Chethil

Abstract:

Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA considers fluctuations from local linear trends, and the scale invariance of these signals is well captured in a multifractal characterisation. Analysis of long-range correlations is vital for understanding the dynamics of EEG signals, and the correlation properties of the signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in epileptic EEG signals, quantifying short- and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure-free) datasets of different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure as compared to the pre-seizure period, with a subsequent increase during the post-seizure period, while the long-term scaling exponent shows an increase during seizure activity. Our calculation of the long-term scaling exponent yields a value between 0.5 and 1, pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour. We find an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution across channels can help in better identification of the areas of the brain most affected during seizure activity. The nature of epileptic seizures varies from patient to patient.
To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices at which a dynamical state recurs. We perform Recurrence Quantification Analysis (RQA) and calculate RQA parameters such as diagonal line length, entropy, recurrence rate, and determinism for the ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that RQA parameters are higher during the seizure period than post-seizure values, whereas for some patients post-seizure values exceeded those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in a better understanding and characterisation of epileptic EEG signals from nonlinear analysis.
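
The DFA procedure described above (integrate the signal, detrend each window with a local linear fit, and take the scaling exponent as the slope of log F(n) versus log n) can be sketched as follows. The input here is synthetic white noise standing in for an EEG channel; for white noise the exponent should come out near 0.5:

```python
# Order-1 detrended fluctuation analysis (DFA) sketch on synthetic data.
import numpy as np

def dfa_fluctuations(signal, scales):
    """Return the DFA fluctuation F(n) for each window size n."""
    y = np.cumsum(signal - np.mean(signal))   # integrated profile
    F = []
    for n in scales:
        n_win = len(y) // n
        t = np.arange(n)
        sq = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)           # white noise stand-in for an EEG channel
scales = np.array([16, 32, 64, 128, 256])
F = dfa_fluctuations(x, scales)
# Scaling exponent: slope of log F(n) vs log n; ~0.5 for white noise,
# 0.5-1 indicates persistent long-range temporal correlations (LRTC).
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Fitting the slope separately over small and large scales gives the short-term and long-term exponents discussed in the abstract.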

Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots

Procedia PDF Downloads 163
1030 Theory of Apokatástasis: "In This Way, While Paying Attention to Their Knowledge and Wisdom, Nonetheless, They Did Not Ask God about These Matters, as to Whether or Not They Are True..."

Authors: Pikria Vardosanidze

Abstract:

The term apokatástasis is Greek and means "re-establishment", the universal restoration. The term dates back to ancient times: in Stoic thought it denoted the end of a constantly evolving cycle of the universe and the start of a new one, and in Christendom it was established by the Eastern Fathers and Origen as the return of the entire created world to a state of goodness. "Universal resurrection" means the resurrection of mankind after the second coming of Jesus Christ. The first thing the Savior will do immediately upon His glorious coming is that "the dead will be raised up first by Christ." God's life-giving action will apply to all the dead, but not with the same result. The action of God also applies to the living, which is accomplished by changing their bodies. The degree of glorification of the resurrected body will be commensurate with the spiritual life. An unclean body will not be glorified, and the soul will not be happy. The resurrected body will be incorruptible, strong, and spiritual, but because of the action of the passions, all this will only bring suffering to such a body. The judgment concerns both the soul and the flesh. At the same time, Holy Scripture nowhere says that at the last judgment anyone will be able to change their own position. In connection with this dogmatic teaching, one of the greatest fathers of the Church, St. Gregory of Nyssa, had a different view. He points out that the miracle of the resurrection is so glorious and sublime that it exceeds our faith. There are two important circumstances: one is the reality of the resurrection itself, and the other is the manner of its fulfillment. The first Gregory of Nyssa founds on the authority of Holy Scripture: Jesus Christ preached the resurrection and also foretold many other events, all of which were later fulfilled.
Gregory of Nyssa clarifies the question of the substantiality of good and evil and the relationship between them, noting that only good is self-subsistent, depending on nothing else, and exists eternally in God. As for evil, it has no self-sustaining substance and, therefore, no existence of its own; it appears only through the free will of man from time to time. As the holy Father says, God is the supreme goodness who gives beings the power to exist; all who are without Him are non-existent. The above-mentioned opinion of the Father about a universal apokatastasis derives from the thought of Origen. This teaching was condemned by the resolution of the Fifth Ecumenical Council, where it was unanimously stated by ecclesiastical figures that the doctrine of universal salvation is not valid. For if the resurrection took place in this way, that is, if all beings, including the evil spirit, were restored, then the struggle between good and evil, the future judgment, and the eternal torment, all of which Christian dogma acknowledges, would lose their meaning.

Keywords: apokatastasis, orthodox doctrine, Gregory of Nyssa, eschatology

Procedia PDF Downloads 93
1029 Effectiveness Factor for Non-Catalytic Gas-Solid Pyrolysis Reaction for Biomass Pellet Under Power Law Kinetics

Authors: Haseen Siddiqui, Sanjay M. Mahajani

Abstract:

Various important reactions in the chemical and metallurgical industries fall in the category of gas-solid reactions, which can be classified as catalytic or non-catalytic. In gas-solid reaction systems, heat and mass transfer limitations exert an appreciable influence on the rate of reaction, and overlooking such effects while collecting reaction rate data for reactor design can have serious consequences. Pyrolysis, which involves the production of gases through the interaction of heat and a solid substance, falls in this category. Pyrolysis is also an important step in the gasification process, and gasification reactivity is therefore strongly influenced by the pyrolysis step, which produces the char fed to gasification. In the present study, a non-isothermal transient 1-D model is developed for a single biomass pellet to investigate the effect of heat and mass transfer limitations on the rate of the pyrolysis reaction. The resulting set of partial differential equations is first discretized using the method of lines to obtain a set of ordinary differential equations in time, which are then solved using the MATLAB ODE solver ode15s. The model is capable of incorporating structural changes, porosity variation, variation in various thermal properties, and various pellet shapes. The model is used to analyze the effectiveness factor for different values of the Lewis number and the heat of reaction (G factor). The Lewis number includes the effect of the thermal conductivity of the solid pellet: the higher the Lewis number, the higher the thermal conductivity of the solid. The effectiveness factor was found to decrease with decreasing Lewis number, because smaller Lewis numbers retard heat transfer inside the pellet, lowering the rate of the pyrolysis reaction. The G factor includes the effect of the heat of reaction.
Since the pyrolysis reaction is endothermic in nature, the G factor takes negative values; the more negative the value, the more endothermic the reaction. The effectiveness factor was found to decrease for more negative values of the G factor. This behavior can be attributed to the fact that a more negative G factor results in more energy consumption by the reaction, and hence a larger temperature gradient inside the pellet. Further, analytical expressions are derived for the gas and solid concentrations and the effectiveness factor for two limiting cases of the general model: a homogeneous model and an unreacted shrinking core model.
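
The modelling approach (method of lines in space, then a stiff ODE solver in time) can be sketched with a dimensionless toy version of such a pellet model. SciPy's BDF integrator plays the role of MATLAB's ode15s here; the geometry, kinetics, and all parameter values are illustrative assumptions, not the authors' model:

```python
# Method-of-lines sketch: 1-D transient slab pellet with a first-order
# endothermic pyrolysis reaction (dimensionless toy model).
import numpy as np
from scipy.integrate import solve_ivp

N = 50
dx = 1.0 / (N - 1)            # grid over the pellet half-thickness
Le, G, Da = 1.0, -0.2, 5.0    # Lewis number, heat-of-reaction factor (G < 0:
                              # endothermic), Damkohler number (all assumed)

def rhs(t, y):
    theta, c = y[:N], y[N:]                               # temperature, solid conc.
    rate = Da * c * np.exp(theta / (1 + 0.1 * theta))     # Frank-Kamenetskii kinetics
    d2 = np.empty(N)
    d2[1:-1] = (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dx**2
    d2[0] = 2 * (theta[1] - theta[0]) / dx**2             # symmetry at the centre
    theta_s = 1.0                                         # heated surface (Dirichlet)
    d2[-1] = (theta_s - 2 * theta[-1] + theta[-2]) / dx**2
    return np.concatenate([Le * d2 + G * rate, -rate])    # energy and solid balances

y0 = np.concatenate([np.zeros(N), np.ones(N)])            # cold pellet, fresh solid
sol = solve_ivp(rhs, (0.0, 0.5), y0, method="BDF")        # stiff solver, like ode15s
conversion = 1 - sol.y[N:, -1].mean()                     # average solid conversion
```

Sweeping Le and G in such a sketch and comparing the volume-averaged rate with the rate at surface conditions gives the effectiveness factor trends the abstract describes.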

Keywords: effectiveness factor, G-factor, homogeneous model, lewis number, non-catalytic, shrinking core model

Procedia PDF Downloads 117
1028 Carbon Capture and Storage Using Porous-Based Aerogel Materials

Authors: Rima Alfaraj, Abeer Alarawi, Murtadha AlTammar

Abstract:

The global energy landscape heavily relies on the oil and gas industry, which faces the critical challenge of reducing its carbon footprint. To address this issue, the integration of advanced materials like aerogels has emerged as a promising solution to enhance sustainability and environmental performance within the industry. This study thoroughly examines the application of aerogel-based technologies in the oil and gas sector, focusing particularly on their role in carbon capture and storage (CCS) initiatives. Aerogels, known for their exceptional properties, such as high surface area, low density, and customizable pore structure, have garnered attention for their potential in various CCS strategies. The review delves into various fabrication techniques utilized in producing aerogel materials, including sol-gel, supercritical drying, and freeze-drying methods, to assess their suitability for specific industry applications. Beyond fabrication, the practicality of aerogel materials in critical areas such as flow assurance, enhanced oil recovery, and thermal insulation is explored. The analysis spans a wide range of applications, from potential use in pipelines and equipment to subsea installations, offering valuable insights into the real-world implementation of aerogels in the oil and gas sector. The paper also investigates the adsorption and storage capabilities of aerogel-based sorbents, showcasing their effectiveness in capturing and storing carbon dioxide (CO₂) molecules. Optimization of pore size distribution and surface chemistry is examined to enhance the affinity and selectivity of aerogels towards CO₂, thereby improving the efficiency and capacity of CCS systems. Additionally, the study explores the potential of aerogel-based membranes for separating and purifying CO₂ from oil and gas streams, emphasizing their role in the carbon capture and utilization (CCU) value chain in the industry. 
Emerging trends and future perspectives in integrating aerogel-based technologies within the oil and gas sector are also discussed, including the development of hybrid aerogel composites and advanced functional components to further enhance material performance and versatility. By synthesizing the latest advancements and future directions in aerogels used for CCS applications in the oil and gas industry, this review offers a comprehensive understanding of how these innovative materials can aid in transitioning towards a more sustainable and environmentally conscious energy landscape. The insights provided can assist in strategic decision-making, drive technology development, and foster collaborations among academia, industry, and policymakers to promote the widespread adoption of aerogel-based solutions in the oil and gas sector.
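
The adsorption-capacity screening mentioned above is commonly done by fitting uptake data to a Langmuir isotherm; a minimal sketch, with illustrative parameters rather than measured aerogel values:

```python
# Langmuir isotherm sketch for screening CO2 uptake of a porous sorbent.
# q_max (saturation capacity) and b (affinity) are illustrative
# assumptions, not measured aerogel data.
def langmuir(p, q_max, b):
    """Equilibrium uptake q (mmol/g) at pressure p (bar)."""
    return q_max * b * p / (1 + b * p)

q = langmuir(1.0, q_max=4.0, b=2.5)   # uptake at 1 bar, about 2.86 mmol/g
```

Tuning pore size distribution and surface chemistry, as the review discusses, effectively shifts q_max and b toward higher CO₂ capacity and selectivity.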

Keywords: CCS, porous, carbon capture, oil and gas, sustainability

Procedia PDF Downloads 13
1027 Systematic Identification of Noncoding Cancer Driver Somatic Mutations

Authors: Zohar Manber, Ran Elkon

Abstract:

Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in the tumor's genome are functionally neutral; however, some cause damage to critical processes and provide the tumor with a selective growth advantage (termed cancer driver mutations). Current research on functional significance of SMs is mainly focused on finding alterations in protein coding sequences. However, the exome comprises only 3% of the human genome, and thus, SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore, it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed "set of regulatory elements" (SRE)). Considering multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach is greatly dependent on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and statistical analyses that often considered each regulatory element separately. 
In this study, we analyzed more than 2,500 whole-genome sequencing (WGS) samples provided by The Cancer Genome Atlas (TCGA) and The International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took into account the combinatorial aspect of gene regulation by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomics resources. The identification of candidate driver noncoding SMs is based on their recurrence. We searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected, in seven different cancer types, numerous "hotspots" for SMs. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene (using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS).
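
The recurrence test on an SRE can be sketched as an upper-tail Poisson test of the observed mutation count against a background mutation rate. The counts, lengths, and rates below are illustrative assumptions, not values from the study:

```python
# Hotspot test sketch: is the somatic mutation (SM) count in a gene's set
# of regulatory elements (SRE) higher than expected under the background?
from scipy.stats import poisson

def sre_hotspot_pvalue(n_obs, sre_length_bp, n_samples, bg_rate_per_bp):
    """P(X >= n_obs) under a Poisson background mutation model."""
    expected = sre_length_bp * n_samples * bg_rate_per_bp
    return poisson.sf(n_obs - 1, expected)   # upper-tail probability

# Toy numbers: 40 SMs across a 10 kb SRE in 2,500 samples,
# background rate 1e-7 SMs per bp per sample (expected count: 2.5).
p = sre_hotspot_pvalue(40, 10_000, 2_500, 1e-7)
```

In practice the background rate would be estimated both globally and locally (e.g. from flanking regions), as the abstract describes, and p-values corrected for multiple testing across genes.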

Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements

Procedia PDF Downloads 93
1026 A Hydrometallurgical Route for the Recovery of Molybdenum from Spent Mo-Co Catalyst

Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra

Abstract:

Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes, and steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during desulphurisation as they become contaminated with toxic material, and they are dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency, and recycling capacity. The present study reports the recovery of molybdenum from a Mo-Co spent-catalyst leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3.0 mol/L HCl, and the leach liquor, containing Mo 870 ppm, Co 341 ppm, Al 508 ppm, and Fe 42 ppm, was subjected to the extraction step. The effect of extractant concentration was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Stripping studies revealed that 2.0 mol/L HNO₃ can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by countercurrent simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1.
Around 95.4% extraction of molybdenum was achieved in two countercurrent stages at A/O = 1:1, with negligible extraction of Co and Al. Iron, however, was co-extracted; it was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO₃ in two stages at O/A = 1:1. Overall, ~95.0% of the molybdenum was recovered from the Mo-Co spent catalyst with 99% purity. From the strip solution, MoO₃ was obtained by crystallization followed by thermal decomposition. The product was characterized by XRD, FE-SEM, and EDX techniques. The XRD peaks of MoO₃ correspond to the molybdite (syn-MoO₃) structure, FE-SEM shows the rod-like morphology of the synthesized MoO₃, and EDX analysis shows a 1:3 atomic ratio of molybdenum to oxygen. The synthesized MoO₃ can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants, and as a catalyst.
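
Under the simplifying assumption of a constant distribution ratio D (the study itself uses measured McCabe-Thiele isotherms, not a constant D), the multistage countercurrent recovery can be sanity-checked with the Kremser relation; D below is an illustrative value:

```python
# Kremser-style sketch of countercurrent solvent extraction: fraction of
# metal extracted in N ideal stages for a constant distribution ratio D
# and organic-to-aqueous phase ratio O/A. D = 5 is an assumption.
def fraction_extracted(D, OA, N):
    E = D * OA                                # extraction factor
    return (E**(N + 1) - E) / (E**(N + 1) - 1)

f = fraction_extracted(D=5.0, OA=1.0, N=2)    # two stages at A/O = 1:1
# f is about 0.97, i.e. the same order as the ~95% reported recovery
```

Stepping off stages graphically between the equilibrium isotherm and the operating line, as in the McCabe-Thiele construction, gives the same stage count without the constant-D assumption.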

Keywords: Cyphos IL 102, extraction, spent Mo-Co catalyst, recovery

Procedia PDF Downloads 158
1025 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Proper seismic evaluation of Non-Structural Components (NSCs) mandates an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of the component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting higher-mode effects, tuning effects, and NSC damping effects, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of the code provisions by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e., the design spectra for structural components). A database of 27 Reinforced Concrete (RC) buildings in which Ambient Vibration Measurements (AVM) had been conducted was used. It comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building, in both horizontal directions, considering four different NSC damping ratios (2, 5, 10, and 20% viscous damping). Several parameters affecting the NSC response are evaluated statistically: the NSC damping ratio, the tuning of the NSC natural period with one of the natural periods of the supporting structure, the higher modes of the supporting structure, and the location of the NSC.
The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5%-damped UHS, and a procedure is proposed to generate roof FDS for NSCs with 5% damping directly from the 5%-damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-disaster buildings, which have to remain functional even after an earthquake and cannot tolerate any damage to NSCs.
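
A floor response spectrum of the kind used here is computed by passing the floor motion through a family of single-degree-of-freedom (SDOF) oscillators of varying period and recording each peak response. A minimal sketch using Newmark average-acceleration time stepping follows; the synthetic decaying pulse stands in for a recorded floor acceleration, and is purely illustrative:

```python
# Pseudo-acceleration response spectrum sketch (SDOF, Newmark beta = 1/4).
import numpy as np

def sdof_peak_disp(ag, dt, T, zeta):
    """Peak relative displacement of an SDOF with period T and damping zeta."""
    w = 2 * np.pi / T
    m, c, k = 1.0, 2 * zeta * w, w**2          # unit mass
    beta, gamma = 0.25, 0.5                    # average-acceleration scheme
    kh = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    u = v = a = 0.0
    peak = 0.0
    for ag_i in ag:
        p = (-m * ag_i
             + m * (u / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
             + c * (gamma/(beta*dt) * u + (gamma/beta - 1) * v
                    + dt * (gamma/(2*beta) - 1) * a))
        u_new = p / kh
        v_new = (gamma/(beta*dt) * (u_new - u) + (1 - gamma/beta) * v
                 + dt * (1 - gamma/(2*beta)) * a)
        a = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1/(2*beta) - 1) * a
        u, v = u_new, v_new
        peak = max(peak, abs(u))
    return peak

def pseudo_accel_spectrum(ag, dt, periods, zeta=0.05):
    """Sa(T) = omega^2 * peak displacement, for each period T."""
    return np.array([(2*np.pi/T)**2 * sdof_peak_disp(ag, dt, T, zeta)
                     for T in periods])

dt = 0.01
t = np.arange(0, 5, dt)
ag = np.sin(2 * np.pi * 2.0 * t) * np.exp(-t)     # synthetic 2 Hz decaying pulse
Sa = pseudo_accel_spectrum(ag, dt, periods=[0.2, 0.5, 1.0])
```

With a 2 Hz input, the oscillator with T = 0.5 s is tuned to the motion and shows the largest spectral ordinate, which is the tuning effect the abstract highlights.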

Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design

Procedia PDF Downloads 227
1024 A Comparative Case Study of Institutional Work in Public Sector Organizations: Creating Knowledge Management Practice

Authors: Dyah Adi Sriwahyuni

Abstract:

Institutional work has become a prominent contemporary perspective in institutional theory within organization studies. A wealth of studies has explored actors' activities in creating, maintaining, and disrupting institutions at the field level. However, exploration of the work of actors in creating new management practices at the organizational level has been limited: the current institutional work literature mostly describes the work of actors at the field level and overlooks organizational actors who work to realize management practices. Organizational actors are defined here as actors who work to institutionalize a particular management practice within their organizations. The extant literature has also generalized across types of management practices, overlooking the unique characteristics of each management fashion and practice. To fill these gaps, this study aims to provide empirical evidence, and thereby contribute theoretically to institutional work, through a comparative case study of organizational actors' creation of knowledge management (KM) practice in two public sector organizations in Indonesia. KM is a contemporary management practice employed to manage individual and organizational knowledge in order to improve organizational performance. It presents a suitable practical setting with which to provide a rich understanding of organizational actors' institutional work and its connection with technology. Drawing on and extending the work of Perkmann and Spicer (2008), this study explores the forms of institutional work performed by organizational actors, including their motivation, skills, challenges, and opportunities. The primary data collection is semi-structured interviews with knowledgeable actors, with document analysis for validity and triangulation.
Following Eisenhardt's cross-case pattern approach, the researcher analyzed the collected data focusing on within-group similarities and intergroup differences, coded the interview data using NVivo, and used documents to corroborate the findings. The study's findings add to the institutional theory literature in organization studies, particularly institutional work related to management practices, by building a theory of the work of organizational actors in creating knowledge management practices. Using the perspective of institutional work, the research can show the roles of the various actors involved, their practices, and their relationship to technology (materiality), rather than focusing only on powerful actors, as in theorizing on institutional entrepreneurship. The account of the development of knowledge management practices in the Indonesian public sector is also a significant contribution, given that the current KM literature is dominated by conceptualizations of the KM framework and of the impact of KM on organizations. The public sector setting also provides important lessons on how actors in a highly institutionalized context create an institution, in this case, a knowledge management practice.

Keywords: institutional work, knowledge management, case study, public sector organizations

Procedia PDF Downloads 93
1023 Working Towards More Sustainable Food Waste: A Circularity Perspective

Authors: Rocío González-Sánchez, Sara Alonso-Muñoz

Abstract:

Food waste implies inefficient management of the final stages of the food supply chain. Among the United Nations Sustainable Development Goals (SDGs), SDG 12.3 proposes to halve per capita food waste at the retail and consumer levels and to reduce food losses. In the linear system, food waste is disposed of and only to a lesser extent recovered or reused after consumption. The current food consumption system, based on 'produce, take, and dispose', puts huge pressure on raw material and energy stocks. A greater focus on the circular management of food waste would therefore mitigate environmental, economic, and social impacts, following a Triple Bottom Line (TBL) approach, and consequently support the fulfilment of the SDGs. A mixed methodology is used. A total sample of 311 publications was retrieved from the Web of Science database. First, a bibliometric analysis is performed with the SciMat and VOSviewer software to visualise scientific maps based on co-occurrence analysis of keywords and co-citation analysis of journals. This allows the knowledge structure of the field to be understood and research issues to be detected. Second, a systematic literature review is conducted of the most influential articles from 2020 and 2021, the most representative period under study. Third, to support the development of this field, an agenda is proposed based on the research gaps identified in circular economy and food waste management. Results reveal that the main topics are related to waste valorisation, the application of the waste-to-energy circular model, and the anaerobic digestion process for fossil fuel replacement. The use of food waste as a source of clean energy is receiving growing attention in the literature, while there is a lack of studies on stakeholders' awareness and training.
In addition, available data would facilitate the implementation of circular principles for food waste recovery, management, and valorisation. The research agenda suggests that circularity networks with suppliers and customers need to be deepened. Technological tools for the implementation of sustainable business models and a greater emphasis on social aspects through educational campaigns are also required. This paper contributes to the application of circularity to food waste management by abandoning inefficient linear models, and by shedding light on trending topics in the field it guides scholars toward future research opportunities.
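
The keyword co-occurrence maps produced by tools such as SciMat and VOSviewer rest on a simple pairwise count over the publication records, which can be sketched as follows (the records below are toy data, not the actual Web of Science sample):

```python
# Keyword co-occurrence counting: the computation underlying
# bibliometric co-occurrence maps.
from itertools import combinations
from collections import Counter

def cooccurrence(records):
    """Count how often each keyword pair appears in the same publication."""
    pairs = Counter()
    for kws in records:
        for a, b in combinations(sorted(set(kws)), 2):   # canonical pair order
            pairs[(a, b)] += 1
    return pairs

docs = [["circular economy", "food waste", "valorisation"],
        ["food waste", "anaerobic digestion"],
        ["circular economy", "food waste"]]
counts = cooccurrence(docs)
# ("circular economy", "food waste") co-occurs in 2 of the 3 toy records
```

The resulting pair counts form the weighted edges of the scientific map; clustering that graph yields the thematic groupings discussed in the results.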

Keywords: bibliometric analysis, circular economy, food waste management, future research lines

Procedia PDF Downloads 90
1022 Technology Changing Senior Care

Authors: John Kosmeh

Abstract:

Introduction – For years, senior health care and skilled nursing facilities (SNFs) have been plagued by the dilemma of not having the necessary tools and equipment to adequately care for senior residents in their communities. This has led to high transport rates to emergency departments and high 30-day readmission rates, costing billions of unnecessary dollars each year and raising quality assurance issues. Our senior care telemedicine program is designed to solve this issue. Methods – We conducted a 1-year pilot program using our technology coupled with our 24/7 telemedicine program with skilled nursing facilities in different parts of the United States. We then compared transport rates and 30-day readmission rates to those of previous years before the use of our program, as well as to the transport rates of communities of similar size not using our program. These data gave us a clear and concise picture of the program's success in reducing unnecessary transports and readmissions, as well as the cost savings. Results – We observed a 94% reduction nationally in unnecessary out-of-facility transports and, to date, a complete elimination of 30-day readmissions. Our virtual platform allowed us to instruct facility staff in the use of our tools and system and to deliver treatment by our ER-trained providers. Delays waiting for primary care physician callbacks were eliminated. We were able to obtain lung, heart, and abdominal ultrasound imaging, 12-lead EKGs, blood labs, auscultated lung and heart sounds, and other diagnostic tests at the bedside within minutes, providing immediate care and allowing us to treat residents within the SNF. Our virtual capabilities allowed loved ones, family members, and others holding medical power of attorney to connect with us at the time of the visit and speak directly with the medical provider, increasing confidence in the decision to treat the resident in-house.
The decline in transports and readmissions will greatly reduce governmental cost burdens, as well as fines imposed on SNF for high 30-day readmissions, reduce the cost of Medicare A readmissions, and significantly impact the number of patients visiting overcrowded ERs. Discussion – By utilizing our program, SNF can effectively reduce the number of unnecessary transports of residents, as well as create significant savings from loss of day rates, transportation costs, and high CMS fines. The cost saving is in the thousands monthly, but more importantly, these facilities can create a higher quality of life and medical care for residents by providing definitive care instantly with ER-trained personnel.

Keywords: senior care, long term care, telemedicine, technology, senior care communities

Procedia PDF Downloads 82
1021 Combining Nitrocarburisation and Dry Lubrication for Improving Component Lifetime

Authors: Kaushik Vaideeswaran, Jean Gobet, Patrick Margraf, Olha Sereda

Abstract:

Nitrocarburisation is a surface hardening technique often applied to improve the wear resistance of steel surfaces. It is considered a promising solution in comparison with other processes such as flame spraying, owing to the formation of a diffusion layer that provides mechanical integrity, as well as its cost-effectiveness. To improve other tribological properties of the surface, such as the coefficient of friction (COF), dry lubricants are utilized. Currently, the lifetime of steel components in many applications using either of these techniques individually is limited by the shortcomings of each: the high COF of nitrocarburized surfaces and the low wear resistance of dry lubricant coatings. To this end, the current study involves the creation of a hybrid surface by impregnating a nitrocarburized surface with a dry lubricant. The mechanical strength and hardness of Gerster SA's nitrocarburized surfaces, combined with the impregnation of the porous outermost layer with a solid lubricant, will create a hybrid surface possessing both outstanding wear resistance and a low friction coefficient, with high adherence to the substrate. Gerster SA has state-of-the-art technology for the surface hardening of various steels. Through their expertise in the field, the nitrocarburizing process parameters (atmosphere, temperature, dwelling time) were optimized to obtain samples with a distinct porous structure (in terms of size, shape, and density), as observed by metallographic and microscopic analyses. The porosity thus obtained is suitable for the impregnation of a dry lubricant. A commercially available dry lubricant with a thermoplastic matrix was employed for the impregnation process, which was optimized to obtain a void-free interface with the surface of the nitrocarburized layer (henceforth called the hybrid surface).
In parallel, metallic samples without nitrocarburisation were also impregnated with the same dry lubricant as a reference (henceforth called the reference surface). The reference and nitrocarburized surfaces, with and without the dry lubricant, were tested for their tribological behavior by sliding against a quenched steel ball using a nanotribometer. Without any lubricant, the nitrocarburized surface showed a wear rate five times lower than the reference metal. In the presence of a thin film of dry lubricant (< 2 micrometers) and under high loads (500 mN, or ~800 MPa), the COF of the reference surface increased from ~0.1 to > 0.3 within 120 m of sliding, whereas the hybrid surface retained a COF < 0.2 for over 400 m. In addition, while the steel ball sliding against the reference surface showed heavy wear, the corresponding ball sliding against the hybrid surface showed very limited wear. Observations of the sliding tracks on the hybrid surface using electron microscopy show the presence of the nitrocarburized nodules as well as the lubricant, whereas no traces of the lubricant were found in the sliding track on the reference surface. In this manner, the clear advantage of combining nitrocarburisation with the impregnation of a dry lubricant to form a hybrid surface has been demonstrated.

Keywords: dry lubrication, hybrid surfaces, improved wear resistance, nitrocarburisation, steels

Procedia PDF Downloads 109
1020 Pickering Dry Emulsion System for Dissolution Enhancement of Poorly Water Soluble Drug (Fenofibrate)

Authors: Nitin Jadhav, Pradeep R. Vavia

Abstract:

Poorly water-soluble drugs are difficult to develop for oral delivery, as they demonstrate poor and variable bioavailability owing to their poor solubility and dissolution in GIT fluid. Lipid-based formulations, especially self-microemulsifying drug delivery systems (SMEDDS), are currently regarded as the most effective technique. Despite its impressive advantages, the need for a high amount of surfactant (50%–80%) is the major drawback of SMEDDS. High concentrations of synthetic surfactant are known to cause GIT irritation and to interfere with the function of intestinal transporters, changing drug absorption. Surfactant may also reduce drug activity, and consequently bioavailability, due to enhanced entrapment of the drug in micelles. In chronic treatment, these issues are especially conspicuous because of the long exposure. In addition, liquid self-microemulsifying systems also suffer from stability issues. Recently, a novel approach of solid-stabilized micro- and nanoemulsions (Pickering emulsions) has shown very desirable properties, such as high stability, little or no surfactant, and easy conversion into a dry form. Here we explore a Pickering dry emulsion system for dissolution enhancement of an anti-lipemic, extremely poorly water-soluble drug (fenofibrate). The oil phase for emulsion preparation was selected mainly on the basis of drug solubility. Captex 300 showed the highest solubility for fenofibrate and was hence selected as the oil. With silica as the solid stabilizer, Span 20 was selected to improve its wetting properties. The emulsion formed with silica and Span 20 as stabilizers at a 2.5:1 ratio (silica:Span 20) was found to be very stable, with a particle size of 410 nm. The prepared emulsion was then spray dried, and the resulting microcapsules were evaluated in an in-vitro dissolution study and an in-vivo pharmacodynamic study and characterized by DSC, XRD, FTIR, SEM, optical microscopy, etc.
The in-vitro study showed significant dissolution enhancement for the formulation (85% in 45 minutes) compared to the plain drug (14% in 45 minutes). The in-vivo study (Triton-based hyperlipidaemia model) showed significant reductions in triglycerides and cholesterol with the formulation compared to the plain drug, indicating an increase in fenofibrate bioavailability. The DSC and XRD studies indicate loss of drug crystallinity in the microcapsule form, the FTIR study indicates chemical stability of fenofibrate, and the SEM and optical microscopy studies show a spherical globule structure coated with solid particles.

Keywords: captex 300, fenofibrate, pickering dry emulsion, silica, span20, stability, surfactant

Procedia PDF Downloads 487
1019 The Convention of Culture: A Comprehensive Study on Dispute Resolution Pertaining to Heritage and Related Issues

Authors: Bhargavi G. Iyer, Ojaswi Bhagat

Abstract:

In recent years, there has been much discussion of ethnic imbalance and diversity in the international context. Arbitration is now subject to the hegemony of a small number of people who are constantly reappointed. When an adjudicatory system becomes exclusionary, the quality of adjudication suffers significantly. In such a framework, there is a misalignment between adjudicators' preconceived views and the interests of the parties, resulting in a biased view of the proceedings. The world is currently witnessing a slew of intellectual property battles around cultural appropriation. The term "cultural appropriation" refers to the industrial West's taking of indigenous culture, usually for fashion, aesthetic, or dramatic purposes. Selena Gomez exemplified cultural appropriation by commercially using the "bindi," which is sacred in Hinduism, as a fashion symbol. In another case, Victoria's Secret trivialized the genocide of indigenous peoples by appropriating Native American headdresses. A similar process can be witnessed in the case of yoga, with Vedic philosophy reduced to a type of physical practice. Such a viewpoint is problematic, since indigenous groups have worked hard for generations to ensure the survival of their culture, and its appropriation by the Western world for purely aesthetic and theatrical purposes is upsetting to those who practise such cultures. Because such conflicts involve numerous jurisdictions, they must be resolved through international arbitration. However, these conflicts are currently being litigated, and the aggrieved parties, namely developing nations, do not consider it prudent to use the World Intellectual Property Organization's (WIPO) established arbitration procedure. This study suggests that this practice is the outcome of Europe's exclusionary arbitral system, which fails to recognise the non-legal and non-commercial nature of indigenous culture issues.
This research paper proposes a more comprehensive, inclusive approach that recognises the non-legal and non-commercial aspects of IP disputes involving cultural appropriation, which can only be achieved through an ethnically balanced arbitration structure. This paper also expounds upon the benefits of arbitration and other means of alternative dispute resolution (ADR) in the context of disputes over cultural issues, positing that inclusivity is a solution to the existing discord between international practices and localised cultural points of dispute. Finally, this paper explicates measures to help ensure inclusion and best practices in the domain of arbitration law, particularly as they pertain to cultural heritage and indigenous expression.

Keywords: arbitration law, cultural appropriation, dispute resolution, heritage, intellectual property

Procedia PDF Downloads 129
1018 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times is studied, known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced through the cosmic microwave background and the universe structure at a large scale. However, extrapolating back further from this early state reaches singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one can expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations of this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of universe creation. Therefore, a practical model capable of describing how the universe was initiated is needed. 
This research series aims to address the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level referred to as the "base energy." The governing principles of base energy are discussed in detail in our second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism that led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The concept proposed in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 78
1017 Transformations of River Zones in Hanoi, Vietnam: Problems of Urban Drainage and Environmental Pollution

Authors: Phong Le Ha

Abstract:

In many cities around the world, the relationship between cities and rivers is considered a foundation of urban history research because of their profound interactions. This kind of relationship makes river zones extremely sensitive in many respects. One of the most important is their role in urban drainage. In this paper we examine the extraordinary case of Hanoi, the capital of Vietnam, and the Red River zones. This river has contradictory impacts on the city: it is considered a source of life for the inhabitants who live along its two banks; however, the risk of inundation caused by the river's complicated hydrology is a constant threat to the cities it flows through. Morphologically, the Red River was connected to the inner river system, which made Hanoi a complete river city. This structure, combined with the topography of Hanoi, helped the city maintain a stable drainage system in which the river zones in the north of Hanoi play extremely important roles. Nevertheless, over the last 20 years, Hanoi's rapid urbanization and the instability of the Red River's complicated hydrology have brought remarkable transformations in the river-city relationship and in the river zones: the connection between the river and the city has declined; the system of inner lakes is progressively being replaced by housing land; and in the river zones, the infrastructure cannot adapt to the transformation of the new quarters, which originated as agricultural villages. These changes bring many opportunities for urban development, but also many risks and problems, particularly on the environmental and technical sides. Among these, the evacuation of stormwater and wastewater is one of the most severe. The disappearance of inner-city lakes, the high dike, and the topographical changes of Hanoi heighten the city's risk of inundation.
In consequence, the riverine zones, particularly in the north of Hanoi where the two most important drainage rivers of the city meet, are burdened with drainage pressure. The only water treatment plant in this zone appears to be overloaded, receiving about 40,000 m³ of wastewater per day (not including stormwater). This problem also brings risks of environmental pollution (water pollution and air pollution). To better understand the situation and propose solutions, an interdisciplinary research project covering fields such as urban planning, architecture, geography, and especially drainage and the environment has been carried out. This paper analyzes an important part of that research: the process of urban transformation of Hanoi (changes in urban morphology, the infrastructure system, the evolution of the dike system, etc.) and the hydrological changes of the Red River that cause the drainage and environmental problems. The conclusions of these analyses will be the solid base of subsequent research focusing on solutions for sustainable development.

Keywords: drainage, environment, Hanoi, infrastructure, red rivers, urbanization

Procedia PDF Downloads 375
1016 Socio-Political Crisis in the North West and South West Regions of Cameroon and the Emergence of New Cultures

Authors: Doreen Mekunda

Abstract:

This paper is built on the premise that the current socio-political crisis in the two restive regions of Cameroon, though destructive and devastating in its effects on both property and human lives, is not without its strengths and merits. It is incontestable that many cultures will, to a greater extent, be destroyed as people are forced to move from war-stricken habitats to non-violent places. Many cultural potentials, traditional shrines, artifacts, arts, and crafts are knowingly or unknowingly disfigured, and many other ills will, by the end of the crisis, have affected the cultures of the two regions under siege and of the receiving population. A plethora of other problems abound, such as the persecution of Internally Displaced Persons (IDPs), who are blamed for increased crime rates, and cultural and ethnic differences that produce inter-tribal and interpersonal conflicts as well as conflicts between communities. However, there is also a rapid emergence of literature and other forms of cultural production, whether written or oral, precipitating a rich cultural diversity as the cultures of the IDPs and of the receiving populations come together, along with rapid urbanization, improvements in health-related matters, the rebirth of indigenous cultural practices, the development of social and lingua-cultural competences, and reliance on alternative religions, faith, and spirituality. Even the financial and economic dependence of IDPs, though a burden to others, has its merits, as it improves the living standards of the IDPs. To obtain plausible results, cultural materialism, a literary theory that hinges on the empirical study of socio-cultural systems within a materialist infrastructure-superstructure framework, is employed together with postcolonial theory.
Postcolonial theory is used because the study deals with postcolonial experiences and tenets of migration, hybridity, ethnicity, indigeneity, language, double consciousness, center/margin binaries, and identity, amongst others. The study reveals that the involuntary movement of persons from their habitual homes brings about movement in cultures and, thus, the emergence of new cultures. The movement of people who hold fast to their cultural heritage can only give rise to new forms of literature, the development of new communication competences, the rise of alternative religion, faith, and spirituality, the re-emergence of customary and traditional legal systems that might have been abandoned for the new judicial systems, and, above all, the revitalization of traditional health care systems.

Keywords: alternative religion, emergence, socio-political crisis, spirituality, lingua-cultural competences

Procedia PDF Downloads 158
1015 Physical Model Testing of Storm-Driven Wave Impact Loads and Scour at a Beach Seawall

Authors: Sylvain Perrin, Thomas Saillour

Abstract:

The Grande-Motte port and seafront development project on the French Mediterranean coastline entailed evaluating wave impact loads (pressures and forces) on the new beach seawall and comparing the resulting scour potential at the base of the existing and new seawalls. A physical model was built at ARTELIA's hydraulics laboratory in Grenoble (France) to provide insight into the evolution of scouring over time at the front of the wall, the intensity and distribution of quasi-static and impulsive wave forces on the wall, and water and sand overtopping discharges over the wall. The beach consisted of fine sand and was approximately 50 m wide above mean sea level (MSL). Seabed slopes ranged from 0.5% offshore to 1.5% closer to the beach. A smooth concrete structure with an elevated curved crown wall will replace the existing concrete seawall. Prior to the start of breaking (at the -7 m MSL contour), storm-driven maximum spectral significant wave heights of 2.8 m and 3.2 m were estimated for the benchmark historical storm event of 1997 and the 50-year return period storm, respectively, resulting in 1 m high waves at the beach. For the wave load assessment, a tensor scale measured wave forces and moments, and five piezo/piezo-resistive pressure sensors were placed on the wall. The light-weight sediment physical model and the pressure and force measurements were performed at a scale of 1:18. The polyvinyl chloride light-weight particles used to model the prototype silty sand had a density of approximately 1,400 kg/m³ and a median diameter (d50) of 0.3 mm. Quantitative assessments of the seabed evolution were made using a measuring rod as well as a laser scan survey. Testing demonstrated the occurrence of numerous impulsive wave impacts on the reflector (22%), induced not by direct wave breaking but mostly by wave run-up slamming on the top curved part of the wall. Wave forces of up to 264 kilonewtons and impulsive force spikes of up to 127 kilonewtons were measured.
A maximum scour of -0.9 m was measured for the new seawall versus -0.6 m for the existing seawall, which is attributable to increased wave reflection (reflection coefficients of 25.7-30.4% vs 23.4-28.6%). This paper presents a methodology for the setup and operation of a physical model to assess the hydrodynamic and morphodynamic processes at a beach seawall during storm events. It discusses the pros and cons of this methodology versus others, notably regarding structural peculiarities and model effects.
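For context on how scale-model measurements of this kind translate to prototype values, Froude similitude is the usual basis for a 1:18 free-surface wave model. The sketch below shows the standard multipliers; this scaling basis and the example measurement values are assumptions for illustration, as the abstract does not state the scaling laws it applied (and light-weight sediment models follow additional similitude criteria not covered here).

```python
import math

SCALE = 18.0  # geometric scale 1:18, Froude similitude assumed

# Model-to-prototype multipliers under Froude scaling (same fluid density).
length_f   = SCALE
velocity_f = math.sqrt(SCALE)
time_f     = math.sqrt(SCALE)
force_f    = SCALE ** 3       # force scales with length cubed
pressure_f = SCALE            # pressure = force / area = s^3 / s^2

# Hypothetical example: a 1 N force and 0.5 s wave period measured on the model.
model_force_N, model_period_s = 1.0, 0.5
prototype_force_kN = model_force_N * force_f / 1000
prototype_period_s = model_period_s * time_f
print(round(prototype_force_kN, 3), "kN")  # → 5.832 kN
print(round(prototype_period_s, 2), "s")   # → 2.12 s
```

This is why small model forces correspond to the hundreds of kilonewtons reported at prototype scale.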

Keywords: beach, impacts, scour, seawall, waves

Procedia PDF Downloads 141
1014 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement

Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes

Abstract:

Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative radar-based cyclist safety system designed to give cyclists real-time collision risk warnings. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on TI's AWR1843BOOST radar, using a coarse classification approach that distinguishes between cars, trucks, two-wheeled vehicles, and other objects. To improve the performance of the clustering stage, we propose a 2-level clustering approach that builds on the state-of-the-art Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The objective is to first cluster objects by velocity and then refine the analysis by clustering by position. The first level identifies groups of objects with similar velocities and movement patterns; the second level refines the analysis by considering the spatial distribution of these objects, taking the clusters from the first level as its input. Our proposed technique surpasses the classical DBSCAN algorithm on clustering quality metrics, including homogeneity, completeness, and V-measure. Relevant cluster features are extracted and used to classify objects with an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board.
The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
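The 2-level idea (cluster by velocity first, then refine each velocity group by position) can be sketched in a few lines. The sketch below is illustrative only: it replaces full DBSCAN with a naive distance-linkage grouping (no minPts, no noise handling), and the detection values and eps thresholds are invented, not taken from the paper.

```python
import math
from collections import defaultdict

def cluster(points, key, eps):
    """Naive density grouping: link points whose `key` distance is below eps
    (a stand-in for DBSCAN; a real system would use DBSCAN with minPts)."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(key(points[i]), key(points[j])) < eps:
                parent[find(i)] = find(j)  # union the two groups
    groups = defaultdict(list)
    for i, p in enumerate(points):
        groups[find(i)].append(p)
    return list(groups.values())

# Synthetic radar detections: (x, y, radial_velocity) in metres and m/s.
detections = [
    (10.0, 2.0, -5.1), (10.5, 2.2, -5.0), (11.0, 2.1, -4.9),  # oncoming car
    (30.0, 1.0, -5.0), (30.4, 1.2, -5.1),                     # second vehicle, same speed
    (5.0, -1.0, 0.1), (5.3, -1.1, 0.0),                       # static obstacle
]

# Level 1: group by velocity; Level 2: refine each group by position.
clusters = []
for vgroup in cluster(detections, key=lambda p: (p[2],), eps=0.5):
    clusters.extend(cluster(vgroup, key=lambda p: (p[0], p[1]), eps=2.0))

print(len(clusters))  # → 3: the two same-speed vehicles are split by position
```

The second level is what separates the two same-speed vehicles that a velocity-only pass would merge, which is the motivation the abstract gives for refining by spatial distribution.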

Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology

Procedia PDF Downloads 65
1013 Unlocking Intergenerational Abortion Stories in Gardiennes By Fanny Cabon

Authors: Lou Gargouri

Abstract:

This paper examines how Fanny Cabon's solo performance, Gardiennes (2018) strategically crafts empathetic witnessing through the artist's vocal and physical embodiment of her female ancestors' testimonies, dramatizing the cyclical inheritance of reproductive trauma across generations. Drawing on affect theory and the concept of ethical co-presence, we argue that Cabon's raw voicing of illegal abortions, miscarriages, and abuse through her shape-shifting presence generates an intimate energy loop with the audience. This affective resonance catalyzes recognition of historical injustices, consecrating each singular experience while building collective solidarity. Central to Cabon's political efficacy is her transparent self-revelation through intimate impersonation, which fosters identification with diverse characters as interconnected subjects rather than objectified others. Her solo form transforms the isolation often associated with women's marginalization into radical inclusion, repositioning them from victims to empowered survivors. Comparative analysis with other contemporary works addressing abortion rights illuminates how Gardiennes subverts the traditional medical and clerical gazes that have long governed women's bodies. Ultimately, we contend Gardiennes models the potential of solo performance to harness empathy as a subversive political force. Cabon's theatrical alchemy circulates the effects of injustice through the ethical co-presence of performer and spectator, forging intersubjective connections that reframe marginalized groups traditionally objectified within dominant structures of patriarchal power. In dramatizing how the act of witnessing another's trauma can generate solidarity and galvanize resistance, Cabon's work demonstrates the role of embodied performance in catalyzing social change through the recuperation of women's voices and lived experiences. 
This paper thus aims to contribute to the emerging field of feminist solo performance criticism by illuminating how Cabon's innovative dramaturgy bridges the personal and the political. Her strategic mobilization of intimacy, identification, and co-presence offers a model for how the affective dynamics of autobiographical performance can be harnessed to confront gendered oppression and imagine more equitable futures. Gardiennes invites us to consider how the circulation of empathy through ethical spectatorship can foster the collective alliances necessary for advancing the unfinished project of women's liberation.

Keywords: gender and sexuality studies, solo performance, trauma studies, affect theory

Procedia PDF Downloads 36
1012 Environmental Performance of Different Lab Scale Chromium Removal Processes

Authors: Chiao-Cheng Huang, Pei-Te Chiueh, Ya-Hsuan Liou

Abstract:

Chromium-contaminated wastewater from electroplating activity has been a long-standing environmental issue, as it can degrade surface water quality and is harmful to soil ecosystems. The traditional method of treating chromium-contaminated wastewater has been chemical coagulation, which consumes large amounts of chemicals such as sulfuric acid, sodium hydroxide, and sodium bicarbonate in order to remove chromium. In response, a series of new methods for treating chromium-containing wastewater have been developed. This study compares the environmental impact of four lab scale chromium removal processes: 1) chemical coagulation (the most common and traditional method), in which sodium metabisulfite was used as the reductant; 2) an electrochemical process using two steel sheets as electrodes; 3) reduction by iron-copper bimetallic powder; and 4) photocatalysis by TiO2. Each process was run in the lab and achieved 100% removal of chromium from solution. A Life Cycle Assessment (LCA) study was then conducted on the experimental data from the four case studies to identify the environmentally preferable alternative for treating chromium wastewater. The impact assessment model was TRACI, and the system scope includes the production and use phases of the chemicals and electricity consumed by the chromium removal processes, as well as the final disposal of the chromium-containing sludge. The functional unit chosen in this study was the removal of 1 mg of chromium. The solution volume in each case study was adjusted to 1 L, and the chemicals and energy consumed were scaled proportionally. The emissions and resources consumed were identified and characterized into 15 midpoint impact categories.
The impact assessment results show that the human toxicity category accounts for 55% of the environmental impact in Case 1, attributable to the sulfuric acid used for pH adjustment. In Case 2, the production of steel sheet electrodes is an energy-intensive process and contributes 20% of the environmental impact. In Case 3, sodium bicarbonate is used as an anti-corrosion additive, resulting mainly in 1.02E-05 Comparative Toxicity Units (CTU) in the human toxicity category and 0.54E-05 CTU in air acidification. In Case 4, the electricity consumed by the UV lamp power supply gives 5.25E-05 CTU in the human toxicity category and 1.15E-05 kg N-eq in eutrophication. In conclusion, Cases 3 and 4 have higher environmental impacts than Cases 1 and 2, attributable mostly to higher energy and chemical consumption, which leads to high impacts in the global warming and ecotoxicity categories.
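The characterization step described above (scaling the per-litre inventory to the 1 mg functional unit and multiplying each flow by a category factor) can be sketched generically. The inventory amounts, characterization factors, and removal figure below are invented placeholders for illustration, not the study's TRACI data.

```python
# Hypothetical per-litre inventory for one treatment process.
inventory = {
    "sulfuric_acid_kg": 0.002,
    "electricity_kWh": 0.05,
}
# Hypothetical human-toxicity characterization factors (CTU per unit flow);
# a real study would take these from the TRACI model.
cf_human_tox = {
    "sulfuric_acid_kg": 1.5e-3,
    "electricity_kWh": 4.0e-4,
}
cr_removed_mg = 50.0  # chromium removed per litre of treated solution

# Characterized impact per litre, then normalized to the functional unit
# (removal of 1 mg of chromium).
impact_per_litre = sum(inventory[f] * cf_human_tox[f] for f in inventory)
impact_per_fu = impact_per_litre / cr_removed_mg
print(impact_per_fu)  # CTU per mg Cr removed
```

The same loop, repeated over all 15 midpoint categories and all four cases, yields the comparison table behind the conclusions above.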

Keywords: chromium, lab scale, life cycle assessment, wastewater

Procedia PDF Downloads 247
1011 Flow-Induced Vibration Marine Current Energy Harvesting Using a Symmetrical Balanced Pair of Pivoted Cylinders

Authors: Brad Stappenbelt

Abstract:

The phenomenon of vortex-induced vibration (VIV) for elastically restrained cylindrical structures in cross-flows is relatively well investigated. The utility of this mechanism in harvesting energy from marine current and tidal flows is however arguably still in its infancy. With relatively few moving components, a flow-induced vibration-based energy conversion device augers low complexity compared to the commonly employed turbine design. Despite the interest in this concept, a practical device has yet to emerge. It is desirable for optimal system performance to design for a very low mass or mass moment of inertia ratio. The device operating range, in particular, is maximized below the vortex-induced vibration critical point where an infinite resonant response region is realized. An unfortunate consequence of this requirement is large buoyancy forces that need to be mitigated by gravity-based, suction-caisson or anchor mooring systems. The focus of this paper is the testing of a novel VIV marine current energy harvesting configuration that utilizes a symmetrical and balanced pair of horizontal pivoted cylinders. The results of several years of experimental investigation, utilizing the University of Wollongong fluid mechanics laboratory towing tank, are analyzed and presented. A reduced velocity test range of 0 to 60 was covered across a large array of device configurations. In particular, power take-off damping ratios spanning from 0.044 to critical damping were examined in order to determine the optimal conditions and hence the maximum device energy conversion efficiency. The experiments conducted revealed acceptable energy conversion efficiencies of around 16% and desirable low flow-speed operating ranges when compared to traditional turbine technology. 
The potentially out-of-phase spanwise VIV cells on each arm of the device synchronized naturally, as no decrease in amplitude response was observed and energy conversion efficiencies were comparable to the single-cylinder arrangement. In addition to the spatial design benefits related to the horizontal device orientation, the main advantage demonstrated by the symmetrical horizontal configuration is that it allows large-velocity-range resonant response conditions without excessive buoyancy. The novel configuration proposed shows clear promise in overcoming many of the practical implementation issues related to flow-induced vibration marine current energy harvesting.
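The two quantities that frame the test campaign above, reduced velocity and energy conversion efficiency, can be sketched numerically. The function names, cylinder dimensions and harvested-power figure below are illustrative assumptions, not values from the study; only the definitions U* = U/(f·D) and efficiency as harvested power over the incident kinetic power flux are standard.

```python
# Hypothetical sketch of the two nondimensional quantities used in VIV
# energy harvesting test campaigns; all numeric values are invented.

RHO_SEAWATER = 1025.0  # kg/m^3, nominal seawater density

def reduced_velocity(flow_speed, natural_freq, diameter):
    """U* = U / (f_n * D), the reduced velocity swept in towing-tank tests."""
    return flow_speed / (natural_freq * diameter)

def conversion_efficiency(mean_power, flow_speed, diameter, span,
                          rho=RHO_SEAWATER):
    """Harvested power divided by the kinetic power flux through the
    cylinder's frontal area, 0.5 * rho * U**3 * D * L."""
    flux = 0.5 * rho * flow_speed**3 * diameter * span
    return mean_power / flux

# e.g. a 0.05 m cylinder of 1 m span with a 1 Hz natural frequency
# in a 0.5 m/s current, harvesting 0.5 W on average
print(reduced_velocity(0.5, 1.0, 0.05))                   # 10.0
print(round(conversion_efficiency(0.5, 0.5, 0.05, 1.0), 3))  # 0.156, i.e. ~16%
```

Sweeping the towing speed then traces out efficiency as a function of U*, which is how an operating range like the 0 to 60 band above would be mapped.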

Keywords: flow-induced vibration, vortex-induced vibration, energy harvesting, tidal energy

Procedia PDF Downloads 135
1010 Supply Chain Analysis with Product Returns: Pricing and Quality Decisions

Authors: Mingming Leng

Abstract:

Wal-Mart has allocated considerable human resources for its quality assurance program, in which the largest retailer serves its supply chains as a quality gatekeeper. Asda Stores Ltd., the second largest supermarket chain in Britain, is now investing £27m in significantly increasing the frequency of quality control checks in its supply chains and thus enhancing quality across its fresh food business. Moreover, Tesco, the largest British supermarket chain, already constructed a quality assessment center to carry out its gatekeeping responsibility. Motivated by the above practices, we consider a supply chain in which a retailer plays the gatekeeping role in quality assurance by identifying defects among a manufacturer's products prior to selling them to consumers. The impact of a retailer's gatekeeping activity on pricing and quality assurance in a supply chain has not been investigated in the operations management area. We draw a number of managerial insights that are expected to help practitioners judiciously consider the quality gatekeeping effort at the retail level. As in practice, when the retailer identifies a defective product, she immediately returns it to the manufacturer, who then replaces the defect with a good quality product and pays a penalty to the retailer. If the retailer does not recognize a defect but sells it to a consumer, then the consumer will identify the defect and return it to the retailer, who then passes the returned 'unidentified' defect to the manufacturer. The manufacturer also incurs a penalty cost. Accordingly, we analyze a two-stage pricing and quality decision problem, in which the manufacturer and the retailer bargain over the manufacturer's average defective rate and wholesale price at the first stage, and the retailer decides on her optimal retail price and gatekeeping intensity at the second stage. We also compare the results when the retailer performs quality gatekeeping with those when the retailer does not. 
Our supply chain analysis exposes some important managerial insights. For example, the retailer's quality gatekeeping can effectively reduce the channel-wide defective rate, if her penalty charge for each identified defect is larger than or equal to the market penalty for each unidentified defect. When the retailer implements quality gatekeeping, the change in the negotiated wholesale price only depends on the manufacturer's 'individual' benefit, and the change in the retailer's optimal retail price is only related to the channel-wide benefit. The retailer is willing to take on the quality gatekeeping responsibility, when the impact of quality relative to retail price on demand is high and/or the retailer has a strong bargaining power. We conclude that the retailer's quality gatekeeping can help reduce the defective rate for consumers, which becomes more significant when the retailer's bargaining position in her supply chain is stronger. Retailers with stronger bargaining powers can benefit more from their quality gatekeeping in supply chains.
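The second-stage decision described above, where the retailer chooses her retail price and gatekeeping intensity after the wholesale price and defective rate have been negotiated, can be illustrated with a toy grid search. Every functional form and constant below (the linear demand, the penalty terms, the quadratic gatekeeping cost) is an assumption made purely for illustration and is not the model analyzed in the paper.

```python
# Toy second-stage problem: retail price p and gatekeeping intensity g,
# given stage-1 wholesale price w and defective rate d. All functional
# forms and constants are illustrative assumptions, not the paper's model.

w, d = 5.0, 0.10     # negotiated wholesale price and average defective rate
t_r = 2.0            # manufacturer's penalty per identified defect
t_m = 3.0            # manufacturer's penalty per unidentified defect
h = 3.5              # retailer's handling cost for a consumer return
c_g = 4.0            # cost coefficient of gatekeeping effort

def retailer_profit(p, g):
    # undetected defects (share d*(1-g)) reach consumers and depress demand
    demand = max(0.0, 20.0 - p - 5.0 * d * (1.0 - g))
    # per-unit penalty income net of return handling, by detection route
    penalty_gain = d * (g * t_r + (1.0 - g) * (t_m - h))
    return demand * (p - w + penalty_gain) - c_g * g**2

# grid search over p in (w, w+10] and g in [0, 1]
best = max(((w + 0.5 * k, 0.05 * j) for k in range(1, 21) for j in range(21)),
           key=lambda pg: retailer_profit(*pg))
print(best, round(retailer_profit(*best), 2))
```

In such a sketch, raising the identified-defect penalty t_r relative to the consumer-side loss shifts the optimal gatekeeping intensity upward, mirroring the insight above that gatekeeping pays when identified-defect penalties are high enough.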

Keywords: bargaining, game theory, pricing, quality, supply chain

Procedia PDF Downloads 262
1009 Collateral Impact of Water Resources Development in an Arsenic Affected Village of Patna District

Authors: Asrarul H. Jeelani

Abstract:

Arsenic contamination of groundwater and its health implications in the lower Gangetic plain of Indian states started being reported in the 1980s. The same period was declared the first water decade (1981-1990), aiming to achieve ‘water for all.’ To fulfill this aim, the Indian government, with the support of international agencies, installed millions of hand-pumps through water resources development programs. The hand-pumps improved the accessibility of groundwater, but over-extraction increases the chances of mixing of trivalent arsenic, which is more toxic than the pentavalent arsenic of dug well water in the Gangetic plain and has different physical manifestations. Now, after three decades, Bihar (middle Gangetic plain) is also facing arsenic contamination of groundwater and its health implications. Objective: This interdisciplinary research attempts to understand the health and social implications of arsenicosis among different castes in Haldi Chhapra village and to find the association of these ramifications with water resources development. Methodology: The study used a concurrent quantitative-dominant mixed method (QUAN+qual). The researcher employed a household survey, social mapping, interviews, and participatory interactions, and used secondary data for retrospective analysis of hand-pumps and the implications of arsenicosis. Findings: The study found that 88.5% (115) of households have hand-pumps as a source of water; however, 13.8% use purified supplied bottled water, and 3.6% use combinations of hand-pump, bottled water, and dug well water for drinking purposes. Among the population, 3.65% of individuals have arsenicosis, and 2.72% of children between the ages of 5 and 15 years are affected. The caste variable also emerged through quantitative as well as geophysical location analysis, as 5.44% of arsenicosis-manifested individuals belong to scheduled caste (SC), 3.89% to extremely backward caste (EBC), 2.57% to backward caste (BC), and 3% to others.
Among three clusters of arsenic-poisoned locations, two belong to SC and EBC. The village, being arsenic-affected, faces discrimination, while affected individuals also face discrimination, isolation, stigma, and difficulty getting married. The forceful intervention to install hand-pumps in the first water decade, and the later restructuring of the dug wells, destroyed a conventional method of dug well cleaning. Conclusion: The common manifestation of arsenicosis has increased by 1.3% within a six-year span in the village. This raises the need for setting up a proper surveillance system in the village. It is imperative to consider the social structure in arsenic mitigation programs, as this research reveals caste as a significant factor. The health and social implications found in the study were retrospectively analyzed as the collateral impact of water resources development programs in the village.

Keywords: arsenicosis, caste, collateral impact, water resources

Procedia PDF Downloads 97
1008 A Study of Bilingual Development of a Mandarin and English Bilingual Preschool Child from China to Australia

Authors: Qiang Guo, Ruying Qi

Abstract:

This project aims to trace the developmental patterns of a child's Mandarin and English from China to Australia from age 3;03 till 5;06. In childhood bilingual studies, there is an assumption that age 3 is the dividing line between simultaneous bilinguals and sequential bilinguals. Determining similarities and differences between Bilingual First Language Acquisition, Early Second Language Acquisition, and Second Language Acquisition is of great theoretical significance. Studies on Bilingual First Language Acquisition (hereafter BFLA) in the past three decades have shown that the grammatical development of bilingual children progresses through the same developmental trajectories as their monolingual counterparts. Cross-linguistic interaction does not change the basic grammatical knowledge, even in the weaker language. While BFLA studies show consistent results under the conditions of adequate input and a meaningful interactional context, the research findings of Early Second Language Acquisition (ESLA) have demonstrated that this cohort develops its early English differently from both BFLA and SLA. The different development could be attributed to the age of migration, input pattern, and the Environmental Language (Lε). In the meantime, the dynamic relationship between the two languages is an issue that invites further attention. The present study attempts to fill this gap. The child in this case study started acquiring L1 Mandarin from birth in China, where the environmental language (Lε) coincided with L1 Mandarin. When she migrated to Australia at 3;06, where the environmental language (Lε) was L2 English, her Mandarin exposure was reduced. On the other hand, she received limited English input starting from 1;02 in China, where the environmental language (Lε) was L1 Mandarin, a non-English environment. When she relocated to Australia at 3;06, where the environmental language (Lε) coincided with L2 English, her English exposure significantly increased.
The child’s linguistic profile provides an opportunity to explore: (1) What does the child’s English developmental route look like? (2) What does the L1 Mandarin developmental pattern look like under different environmental languages? (3) How do input and environmental language interact in shaping the bilingual child’s linguistic repertoire? In order to answer these questions, two linguistic areas are selected as the focus of the investigation, namely, subject realization and wh-questions. The chosen areas are contrastive in structure but perform the same semantic functions in the two linguistically distant languages, and can serve as an ideal testing ground for exploring the developmental path in the two languages. The longitudinal case study adopts a combined approach of qualitative and quantitative analysis. Two years’ Mandarin and English data are examined, and comparisons are made with age-matched monolinguals in each language in CHILDES. To the authors’ best knowledge, this study is the first of its kind examining a Mandarin-English bilingual child's development at a critical age, under different input patterns, and in different environmental languages (Lε). It also expands the scope of the theory of Lε, adding empirical evidence on the relationship between input and Lε in bilingual acquisition.

Keywords: bilingual development, age, input, environmental language (Le)

Procedia PDF Downloads 119
1007 Stability and Rheology of Sodium Diclofenac-Loaded and Unloaded Palm Kernel Oil Esters Nanoemulsion Systems

Authors: Malahat Rezaee, Mahiran Basri, Raja Noor Zaliha Raja Abdul Rahman, Abu Bakar Salleh

Abstract:

Sodium diclofenac is one of the most commonly used nonsteroidal anti-inflammatory drugs (NSAIDs). It is especially effective in controlling severe conditions of inflammation and pain, musculoskeletal disorders, arthritis, and dysmenorrhea. Formulation as nanoemulsions is one of the nanoscience approaches that have been progressively considered in pharmaceutical science for transdermal delivery of drugs. Nanoemulsions are a type of emulsion with particle sizes ranging from 20 nm to 200 nm. An emulsion is formed by the dispersion of one liquid, usually the oil phase, in another immiscible liquid, the water phase, and is stabilized using a surfactant. Palm kernel oil esters (PKOEs), in comparison to other oils, contain higher amounts of shorter-chain esters, which are suitable for application in micro- and nanoemulsion systems as carriers for actives, with excellent wetting behavior and without an oily feeling. This research aimed to study the effect of the O/S ratio on the stability and rheological behavior of sodium diclofenac-loaded and unloaded palm kernel oil ester nanoemulsion systems. The effect of different O/S ratios of 0.25, 0.50, 0.75, 1.00 and 1.25 on the stability of the drug-loaded and unloaded nanoemulsion formulations was evaluated by centrifugation, freeze-thaw cycle and storage stability tests. Lecithin and Cremophor EL were used as surfactants. The stability of the prepared nanoemulsion formulations was assessed based on the change in zeta potential and droplet size as a function of time. Instability mechanisms, including coalescence and Ostwald ripening, for the nanoemulsion system are discussed. In comparison between drug-loaded and unloaded nanoemulsion formulations, the drug-loaded formulations showed smaller particle sizes and higher stability. In addition, the O/S ratio of 0.5 was found to be the best ratio of oil and surfactant for production of a nanoemulsion with the highest stability.
The effect of the O/S ratio on the rheological properties of drug-loaded and unloaded nanoemulsion systems was studied by plotting the flow curves of shear stress (τ) and viscosity (η) as a function of shear rate (γ). The data were fitted to the Power Law model. The results showed that all nanoemulsion formulations exhibited non-Newtonian flow behaviour, displaying shear-thinning behaviour. Viscosity and yield stress were also evaluated. The nanoemulsion formulation with the O/S ratio of 0.5 showed higher viscosity and K values. In addition, the sodium diclofenac-loaded formulations had higher viscosity and higher yield stress than the drug-unloaded formulations.
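Fitting flow-curve data to the Power Law (Ostwald-de Waele) model, τ = K·γⁿ, is typically done by linear regression in log-log space, since log τ = log K + n·log γ; n < 1 indicates shear thinning, and K is the consistency index reported above. The data points below are synthetic, chosen only to illustrate the fit, and are not measurements from this study.

```python
# Sketch of a Power Law fit, tau = K * gamma_dot**n, on synthetic data.
import numpy as np

gamma_dot = np.array([1.0, 5.0, 10.0, 50.0, 100.0])  # shear rate, 1/s
tau = 2.0 * gamma_dot**0.6                           # shear stress, Pa (synthetic)

# log tau = log K + n * log gamma_dot, so a degree-1 polyfit recovers (n, log K)
n, log_K = np.polyfit(np.log(gamma_dot), np.log(tau), 1)
K = np.exp(log_K)
print(f"K = {K:.2f} Pa.s^n, n = {n:.2f}")  # n < 1: shear-thinning behaviour
```

On real measurements, the fitted n would fall below 1 for every shear-thinning formulation, and comparing K across O/S ratios reproduces the viscosity ranking discussed above.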

Keywords: nanoemulsions, palm kernel oil esters, sodium diclofenac, rheology, stability

Procedia PDF Downloads 407
1006 The Validation and Reliability of the Arabic Effort-Reward Imbalance Model Questionnaire: A Cross-Sectional Study among University Students in Jordan

Authors: Mahmoud M. AbuAlSamen, Tamam El-Elimat

Abstract:

Amid the economic crisis in Jordan, the Jordanian government has opted for a knowledge economy where education is promoted as a means of economic development. University education usually comes at the expense of study-related stress that may adversely impact the health of students. Since stress is a latent variable that is difficult to measure, a valid tool should be used in doing so. The effort-reward imbalance (ERI) model is a measurement tool for occupational stress. The model is built on the notion of reciprocity, which relates ‘effort’ to ‘reward’ through the mediating ‘over-commitment’. Reciprocity assumes equilibrium between effort and reward, where ‘high’ effort is adequately compensated with ‘high’ reward. When this equilibrium is violated (i.e., high effort with low reward), negative emotions and stress may be elicited, which have been correlated with adverse health conditions. The ERI theory has been established in many different parts of the world, and associations with chronic diseases and the health of workers have been explored at length. While much of the effort-reward imbalance research was conducted in work settings, there has been a growing interest in understanding the validity of the ERI model when applied to other social settings such as schools and universities. The ERI questionnaire was recently developed in Arabic to measure ERI among high school teachers. However, little information is available on the validity of the ERI questionnaire in university students. A cross-sectional study was conducted on 833 students in Jordan to measure the validity and reliability of the Arabic ERI questionnaire among university students. Reliability, as measured by Cronbach’s alpha of the effort, reward, and overcommitment scales, was 0.73, 0.76, and 0.69, respectively, suggesting satisfactory reliability. The factorial structure was explored using principal axis factoring.
The results fitted a five-factor model in which both effort and overcommitment were uni-dimensional, while the reward scale was three-dimensional, with its factors being ‘support’, ‘esteem’, and ‘security’. The solution explained 56% of the variance in the data. The established ERI theory was replicated with excellent validity in this study. The effort-reward ratio in university students was 1.19, which suggests a slight degree of failed reciprocity. The study also investigated the association of effort, reward, overcommitment, and ERI with participants’ demographic factors and self-reported health. ERI was found to be significantly associated with absenteeism (p < 0.0001), a past history of failed courses (p = 0.03), and poor academic performance (p < 0.001). Moreover, ERI was found to be associated with poor self-reported health among university students (p = 0.01). In conclusion, the Arabic ERI questionnaire is reliable and valid for use in measuring effort-reward imbalance in university students in Jordan. The results of this research are important in informing higher education policy in Jordan.
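Cronbach's alpha, the reliability coefficient reported above, is computed from the item variances and the variance of the summed scale: α = k/(k-1) · (1 - Σσ²ᵢ/σ²ₜ). A minimal sketch, using invented toy responses rather than the study's data:

```python
# Cronbach's alpha for a (respondents x items) score matrix.
# The toy scores are invented for illustration, not data from the study.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                           # number of items in the scale
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

scores = np.array([[3, 4, 3], [2, 2, 3], [4, 5, 5], [1, 2, 1], [3, 3, 4]])
print(round(cronbach_alpha(scores), 2))  # 0.94
```

Running this separately on the effort, reward, and overcommitment item blocks is how the 0.73, 0.76, and 0.69 figures above would be obtained from the raw questionnaire data.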

Keywords: effort-reward imbalance, factor analysis, validity, self-reported health

Procedia PDF Downloads 104
1005 Causes of Non-Compliance with the Public Procurement Act, 2007 among Selected State-Owned Public Tertiary Education Institutions in Southwest Nigeria

Authors: Ibitoye Olabode Clement

Abstract:

The huge amount of grants for infrastructure development in tertiary institutions in Nigeria calls for transparency and accountability in the procurement process. However, questions have been raised concerning the judicious and appropriate use of the funds, and it is doubtful whether the institutions complied with due process. This paper examined the causes of non-compliance with the Public Procurement Act (2007) in the procurement of goods, works, and services, through either direct or indirect processes of procurement, in state government-subvented tertiary institutions in Nigeria. Nigeria has over 120 public universities, polytechnics, and colleges of education. This study sampled institutions in Southwest Nigeria comprising 5 universities, 5 polytechnics, and 5 colleges of education/health and technology. The opinions of the institutions’ Procurement Officers on the causes of non-compliance with the Act in their procurement process were sought using a structured questionnaire.
The results revealed several causes: the lack of independence of Procurement Officers; non-compliance with the Act by some at the managerial level, who claim inadequate knowledge of the Act; non-employment of qualified and experienced Procurement Officers; insufficient publicity of the Act; and the absence of corporate governance, which leads to poor management of procurement records and the non-provision of incentives. Other causes include the failure to separate the duties of Internal Auditors and Procurement Officers, and the failure to establish a procurement entity across the institution, which leads nearly all staff at the departmental level to believe that they are procurement officers. Interviews with the Procurement Officers further indicated that having the right educational and professional qualifications, understanding the Act, sufficient cognate working experience, recruiting the professionals needed, and occupying a management position would enhance compliance. In addition, an external department empowered by the Bureau should be set up to monitor compliance, especially in state government tertiary education institutions, alongside an organizational culture with a corporate governance structure that supports the engagement of the right and qualified personnel to handle procurement, encourages them to perform at their best, rewards excellent service with incentives, and operates within an administrative environment devoid of corruption.

Keywords: non-compliance with the procurement act, tertiary education institution, university, polytechnic and college of education/health science and technology, Nigeria

Procedia PDF Downloads 90
1004 A Study of a Diachronic Relationship between Two Weak Inflection Classes in Norwegian, with Emphasis on Unexpected Productivity

Authors: Emilija Tribocka

Abstract:

This contribution presents parts of an ongoing study of the diachronic relationship between two weak verb classes in Norwegian, the a-class (cf. the paradigm of ‘throw’: kasta – kastar – kasta – kasta) and the e-class (cf. the paradigm of ‘buy’: kjøpa – kjøper – kjøpte – kjøpt). The study investigates inflection class shifts between the two classes with Old Norse, the ancestor of Modern Norwegian, as a starting point. Examination of the inflection of 38 verbs in four chosen dialect areas (106 places of attestation) demonstrates that shifts from the a-class to the e-class are widespread to varying degrees in three of the four investigated areas and are more common than shifts in the opposite direction. The diachronic productivity of the e-class is unexpected for several reasons. There is general agreement that type frequency is an important factor influencing productivity. The a-class (53% of all weak verbs) was more type-frequent in Old Norse than the e-class (42% of all weak verbs). Thus, given the type frequency, the expansion of the e-class is unexpected. Furthermore, in the ‘core’ areas of expanded e-class inflection, the shifts disregard phonological principles, creating forms with uncomfortable consonant clusters, e.g., fiskte instead of fiska, the preterit of fiska ‘fish’. Later on, these forms may be contracted, i.e., fiskte > fiste. In this contribution, two factors influencing the shifts are presented: phonological form and token frequency. Verbs with a stem ending in a consonant cluster, particularly when the cluster ends in -t, hardly ever shift to the e-class. As a matter of fact, verbs with this structure belonging to the e-class in Old Norse shift to the a-class in Modern Norwegian, e.g., the ON e-class verb skipta ‘change’ shifts to the a-class. This shift occurs as a result of the lack of morpho-phonological transparency between the stem and the preterit suffix of the e-class, -te.
As there is a phonological fusion between the stem ending in -t and the suffix beginning in -t, the transparent a-class inflection is chosen. Token frequency plays an important role in the shifts, too, in some dialects. In one of the investigated areas, the most token frequent verbs of the ON e-class remain in the e-class (e.g., høyra ‘hear’, leva ‘live’, kjøpa ‘buy’), while less frequent verbs may shift to the a-class. Furthermore, the results indicate that the shift from the a-class to the e-class occurs in some of the most token frequent verbs of the ON a-class in this area, e.g., lika ‘like’, lova ‘promise’, svara ‘answer’. The latter is unexpected as frequent items tend to remain stable. This study presents a case of unexpected productivity, demonstrating that minor patterns can grow and outdo major patterns. Thus, type frequency is not the only factor that determines productivity. The study addresses the role of phonological form and token frequency in the spread of inflection patterns.

Keywords: inflection class, productivity, token frequency, phonological form

Procedia PDF Downloads 46
1003 Hydrodynamic Analysis of Payload Bay Berthing of an Underwater Vehicle With Vertically Actuated Thrusters

Authors: Zachary Cooper-Baldock, Paulo E. Santos, Russell S. A. Brinkworth, Karl Sammut

Abstract:

In recent years, large unmanned underwater vehicles such as the Boeing Voyager and Anduril Ghost Shark have been developed. These vessels can be structured to contain onboard internal payload bays. These payload bays can serve a variety of purposes, including the launch and recovery (LAR) of smaller underwater vehicles. The LAR of smaller vessels is extremely important, as it enables transportation over greater distances, increased time on station, data transmission and operational safety. The larger vessel and its payload bay structure complicate the LAR of UUVs in contrast to static docks affixed to the seafloor, as they actively impact the local flow field. These flow field impacts require analysis to determine whether UUVs can be safely launched and recovered inside the motherships. This research seeks to determine the hydrodynamic forces exerted on a vertically over-actuated, small, unmanned underwater vehicle (OUUV) during an internal LAR manoeuvre and compare this to an under-actuated vessel (UUUV). In this manoeuvre, the OUUV is navigated through the stern wake region of the larger vessel to a set point within the internal payload bay. The manoeuvre is simulated using ANSYS Fluent computational fluid dynamics models, covering the entire recovery of the OUUV and UUUV. The analysis of the OUUV is compared against the UUUV to determine the differences in the exerted forces. Of particular interest are the drag, pressure, turbulence and flow field effects exerted as the OUUV is driven inside the payload bay of the larger vessel. The hydrodynamic forces and flow field disturbances are used to determine the feasibility of making such an approach. From the simulations, it was determined that there were no significant detrimental physical forces, particularly with regard to turbulence. The flow field effects exerted by the OUUV are, however, significant.
The vertical thrusters generate significant wake structures, but their orientation ensures the wake effects are exerted below the UUV, minimising their impact. It was also seen that the OUUV experiences higher drag forces compared to the UUUV, which will correlate with an increased energy expenditure. This investigation found no key indicators that recovery via a mothership payload bay is not feasible. The turbulence, drag and pressure phenomena were of a similar magnitude to those of existing static and towed dock structures.
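The link drawn above between higher drag and increased energy expenditure can be sketched with the standard drag relation F_D = ½ρC_dAU². The drag coefficients, frontal area, speed and path length below are hypothetical placeholders, not values extracted from the CFD simulations.

```python
# Hypothetical comparison of energy spent against drag during recovery.
# All numbers are illustrative; only F_D = 0.5*rho*Cd*A*U^2 is standard.
RHO_SEAWATER = 1025.0  # kg/m^3

def drag_force(c_d, frontal_area, speed, rho=RHO_SEAWATER):
    """Quadratic drag law, returns force in newtons."""
    return 0.5 * rho * c_d * frontal_area * speed**2

def recovery_energy(c_d, frontal_area, speed, distance, rho=RHO_SEAWATER):
    """Work done against drag over a steady-speed approach path, in joules."""
    return drag_force(c_d, frontal_area, speed, rho) * distance

# hypothetical drag coefficients for the over- and under-actuated vehicles,
# each with 0.05 m^2 frontal area, approaching at 1 m/s over 20 m
for label, c_d in [("OUUV", 0.9), ("UUUV", 0.7)]:
    print(label, recovery_energy(c_d, 0.05, 1.0, 20.0), "J")
```

A higher effective C_d for the over-actuated vehicle translates directly into more work done against drag over the same approach path, which is the energy penalty noted above.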

Keywords: underwater vehicles, submarine, autonomous underwater vehicles (AUV), computational fluid dynamics, flow fields, pressure, turbulence, drag

Procedia PDF Downloads 58