Search results for: highly efficient
443 Effects of Macro and Micro Nutrients on Growth and Yield Performances of Tomato (Lycopersicon esculentum MILL.)
Authors: K. M. S. Weerasinghe, A. H. K. Balasooriya, S. L. Ransingha, G. D. Krishantha, R. S. Brhakamanagae, L. C. Wijethilke
Abstract:
Tomato (Lycopersicon esculentum Mill.) is a major horticultural crop with an estimated global production of over 120 million metric tons, and it ranks first as a processing crop. The average tomato productivity in Sri Lanka (11 metric tons/ha) is much lower than the world average (24 metric tons/ha). To meet the tomato demand of the increasing population, productivity has to be intensified through agronomic techniques. Nutrition is one of the main factors governing the growth and yield of tomato, and the soil, as the main nutrient source, affects plant growth and the quality of the produce. Continuous cropping, improper fertilizer usage, etc., cause widespread nutrient deficiencies. Therefore, synthetic fertilizers and organic manures were introduced to enhance plant growth and maximize crop yields. In this study, the effects of macro- and micronutrient supplementation on the growth and yield of tomato were investigated. The selected tomato variety was Maheshi, and plants were grown at the Regional Agricultural and Research Centre, Makadura, under the Department of Agriculture (DOA) recommended macronutrients and various combinations of Ontario-recommended dosages of secondary and micronutrient fertilizer supplementations. There were six treatments in this experiment; each treatment was replicated three times, and each replicate consisted of six plants. Other than the DOA recommendation, five combinations of the Ontario-recommended dosage of secondary and micronutrients for tomato were used as treatments. The treatments were arranged in a Randomized Complete Block Design. All cultural practices were carried out according to the DOA recommendations. The mean data were subjected to statistical analysis using the SAS package and mean separation (Duncan's Multiple Range Test at the 5% probability level). Treatments containing secondary and micronutrients significantly increased most of the growth parameters, including plant height, plant girth, number of leaves, and leaf area index.
Fruits harvested from pots amended with macro-, secondary, and micronutrients performed best in terms of total yield and yield quality compared to pots amended with the DOA-recommended dosage of fertilizer for tomato. This could be due to the application of all essential macro- and micronutrients raising photosynthetic activity and promoting efficient translocation and utilization of photosynthates, causing rapid cell elongation and cell division in the actively growing regions of the plant and thereby stimulating growth and yield. The experiment revealed and highlighted the requirement for essential macro-, secondary, and micronutrient fertilizer supplementation in tomato farming. The study indicated that macro- and micronutrient supplementation practices can influence the growth and yield performance of tomato and are a promising approach to attaining potential tomato yields.
Keywords: macro and micronutrients, tomato, SAS package, photosynthates
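The statistical step described in this abstract (an analysis of variance followed by a mean separation test) rests on the one-way ANOVA F-ratio. The sketch below computes that F-statistic over hypothetical plant-height replicates; the treatment values are illustrative placeholders, not the study's data, and Duncan's Multiple Range Test would only be applied after a significant F.

```python
def oneway_f(groups):
    """One-way ANOVA F-statistic: between-treatment mean square
    divided by within-treatment (error) mean square."""
    k = len(groups)                              # number of treatments
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n      # grand mean
    means = [sum(g) / len(g) for g in groups]    # treatment means
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical plant heights (cm) for three treatments, three replicates each:
f_stat = oneway_f([[50, 52, 51], [60, 62, 61], [55, 57, 56]])  # -> 75.0
```

A large F (here 75.0 against an F(2, 6) reference distribution) indicates that between-treatment differences dominate replicate noise, licensing the subsequent mean-separation step.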
Procedia PDF Downloads 474
442 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays
Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal
Abstract:
Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in configuration memory, SRAM-based FPGAs are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on electronics used in space are much greater than on Earth. Thus, developing fault-tolerance techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault-tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance, and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is essential for trading off the overhead introduced by fault-tolerance techniques against system robustness. We study applications in which the exact final output value is not always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, our proposed method considers a threshold margin that depends on the use-case application. Given this threshold margin, a failure occurs only when the difference between the erroneous output value and the expected output value exceeds the margin.
The ACMVF is then calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating ACMVF is implemented on a Zynq-7000 FPGA platform. This system uses the Single Event Mitigation (SEM) IP core to inject SEUs into the configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is tolerated, the number of counted failures is reduced by 41% to 59% compared with the number counted by the conventional vulnerability factor calculation. This means that the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% in the case where 10% deviation is acceptable in the output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
Keywords: fault tolerance, FPGA, single event upset, approximate computing
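The thresholded failure count behind the ACMVF can be sketched compactly. The snippet below is a minimal stand-in, not the paper's SEM-based test bench: it fakes an SEU by flipping one random bit of a 32-bit adder result and then computes the factor under the conventional zero-margin rule versus a 10% margin.

```python
import random

def acmvf(golden, observed, margin_frac):
    """ACMVF = failures / injections, where a failure is a deviation
    from the golden output larger than margin_frac * |golden|."""
    margin = abs(golden) * margin_frac
    failures = sum(1 for y in observed if abs(y - golden) > margin)
    return failures / len(observed)

# Toy SEU stand-in: flip one random bit of a 32-bit adder's result.
random.seed(0)
golden = 123456 + 654321                  # expected correct sum
observed = [golden ^ (1 << random.randrange(32)) for _ in range(1000)]

strict = acmvf(golden, observed, 0.00)    # conventional: any deviation fails
approx = acmvf(golden, observed, 0.10)    # 10% margin tolerated
```

With a zero margin every injected flip counts as a failure, while the 10% margin forgives flips of the low-order bits, reproducing the qualitative effect reported in the abstract.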
Procedia PDF Downloads 198
441 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction
Authors: Bruce Wrightsman
Abstract:
Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in two separate agencies: the building envelope (skin) and the structure. From a material-performance standpoint, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known, but its enormous effectiveness within wood-framed construction has seldom led to serious questioning and challenges in defining what it means to build. There are several downsides to this method that are less widely discussed. The first, and perhaps biggest, is waste. Second, its reliance on wood assemblies forming walls, floors, and roofs conventionally nailed together through simple plate surfaces is structurally inefficient; it requires additional material in the form of plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, looking back at the history of wood construction in the airplane- and boat-building industries, we see a significant transformation in the relationship of structure to skin. Boat construction evolved from indigenous wood practices such as birch-bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the merged assemblies that drive the industry today. In 1911, the Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of thin, lightweight, often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material.
The monocoque, which translates to ‘mono or single shell,’ is a structural system that supports loads and transfers them through an external enclosure system. Monocoques have largely existed outside the domain of architecture; however, this uniting of divergent systems has been demonstrated to be lighter, using less material than traditional wood building practices. This paper examines the role monocoque systems have played in the history of wood construction through the lineage of the boat- and airplane-building industries, and their design potential for wood building systems in architecture through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system comprised of interlocking small wood members to create thin-shell assemblies for the walls, roof, and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster deeper, more honest discourse regarding the limitations and impact of traditional wood framing.
Keywords: wood building systems, material histories, monocoque systems, construction waste
Procedia PDF Downloads 77
440 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research provides new insights into the risks of digitally embedded infrastructure. We analyze each risk and its potential mitigation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research toward more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate, high-quality data are growing; SCADA systems for infrastructure are good examples. (2) Hardware delivering erroneous data: sensors break, and when they do, they don't always go silent; they can keep going, but the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity: the weight of the data collected affects data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, where an attacker overloads the system with bogus requests so that valid requests disappear in the noise.
Malware: malware can be anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data-echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be mitigated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via blockchain keys.
Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
Procedia PDF Downloads 106
439 An Introduction to the Radiation-Thrust Based on Alpha Decay and Spontaneous Fission
Authors: Shiyi He, Yan Xia, Xiaoping Ouyang, Liang Chen, Zhongbing Zhang, Jinlu Ruan
Abstract:
As key systems of spacecraft, various propulsion systems have been developing rapidly, including ion thrusters, laser propulsion, solar sails, and other micro-thrusters. However, these systems still have shortcomings. The ion thruster requires a high voltage or magnetic field to accelerate ions, resulting in extra subsystems, added mass, and large volume. Laser propulsion is currently mostly ground-based and provides pulsed thrust, constrained by station distribution and laser capacity. The thrust direction of a solar sail is limited by its position relative to the Sun, so it is hard to propel toward the Sun or to maneuver in shadow. In this paper, a novel nuclear thruster based on alpha decay and spontaneous fission is proposed, and the principle of this radiation thrust from alpha particles is expounded. Radioactive materials with different released energies, such as 210Po (5.4 MeV) and 238Pu (5.29 MeV), attached to a metal film provide thrusts in the range of 0.02-5 uN/cm2. With this repulsive force, radiation can serve as a power source. With the advantages of low system mass, high accuracy, and long active time, the radiation thruster is promising in the fields of space debris removal, orbit control of nano-satellite arrays, and deep space exploration. For further study, a formula relating the amplitude and direction of thrust to the released energy and decay coefficient is set up. Using this initial formula, the alpha-emitting elements with half-lives longer than one hundred days are calculated and listed. As alpha particles are emitted continuously, the residual charge in the metal film grows and affects the energy distribution of the emitted alpha particles. With this residual charge, or an external electromagnetic field, the emission of alpha particles behaves differently, as analyzed in this paper. Furthermore, three more complex situations are discussed.
Radioactive elements generating alpha particles with several energies at different intensities, mixtures of various radioactive elements, and cascaded alpha decay are studied respectively. In a combined way, it is more efficient and flexible to adjust the thrust amplitude. The propulsion model for spontaneous fission is similar to that of alpha decay, but with a more complex angular distribution. A new quasi-sphere space propulsion system based on the radiation thrust is introduced, as well as the collecting and processing system for excess charge and reaction heat. The energy and spatial angular distributions of the alpha particles emitted per unit area, and by a given propulsion system, have been studied. As alpha particles easily lose energy and self-absorb, the distribution is not a simple stacking of each nuclide. With changes in the amplitude and angle of the radiation thrust, an orbital-variation strategy for space debris removal is shown and optimized.
Keywords: alpha decay, angular distribution, emitting energy, orbital variation, radiation-thruster
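The order of magnitude of the quoted 0.02-5 uN/cm2 thrust can be checked from first principles. The sketch below is an independent back-of-the-envelope estimate, not the paper's formula: it takes the non-relativistic momentum of a 5.4 MeV alpha particle and assumes isotropic emission from one face of an opaque film (half the alphas escape, with a mean normal momentum component of p/2); the areal emission rate is a hypothetical figure, and self-absorption is neglected.

```python
import math

MEV = 1.602176634e-13      # joules per MeV
M_ALPHA = 6.6446573e-27    # alpha particle mass, kg

def alpha_momentum(e_mev):
    """Non-relativistic momentum p = sqrt(2 m E) of an alpha (kg*m/s)."""
    return math.sqrt(2.0 * M_ALPHA * e_mev * MEV)

def thrust_per_cm2(rate_per_cm2, e_mev):
    """Recoil thrust per cm^2 for isotropic emission from one face of an
    opaque film: F/A ~= rate * p / 4 (half escape, mean cos(theta) = 1/2)."""
    return rate_per_cm2 * alpha_momentum(e_mev) / 4.0

# Hypothetical areal emission rate (decays/s per cm^2) for a 210Po film:
thrust = thrust_per_cm2(1e12, 5.4)   # newtons per cm^2, ~2.7e-8 N/cm^2
```

A rate of 1e12 decays/s/cm2 yields roughly 0.027 uN/cm2, consistent with the lower end of the range stated in the abstract.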
Procedia PDF Downloads 205
438 A Rapid Assessment of the Impacts of COVID-19 on Overseas Labor Migration: Findings from Bangladesh
Authors: Vaiddehi Bansal, Ridhi Sahai, Kareem Kysia
Abstract:
Overseas labor migration is currently one of the most important contributors to the economy of Bangladesh and is a highly profitable form of labor for Gulf Cooperation Council (GCC) countries. In 2019, 700,159 migrant workers from Bangladesh traveled abroad for employment. GCC countries are a major destination for Bangladeshi migrant workers, with Saudi Arabia the most common destination since 2016. Despite the high rate of migration between these countries every year, the OLR industry remains complex and often leaves migrants susceptible to human trafficking, forced labor, and modern slavery. While the prevalence of forced labor among Bangladeshi migrants in GCC countries is still unknown, the IOM estimates that international migrant workers comprise one-fourth of the victims of forced labor. Moreover, the onset of the global COVID-19 pandemic has exposed migrant workers to additional adverse situations, making them even more vulnerable to forced labor and health risks. This paper presents findings from a rapid assessment of the impacts of COVID-19 on OLR in Bangladesh, with an emphasis on the increased risk of forced labor among vulnerable migrant worker populations, particularly women. Rapid reviews are a useful approach to swiftly provide actionable evidence for informed decision-making during emergencies such as the COVID-19 pandemic. The research team conducted semi-structured key informant interviews (KIIs) with a range of stakeholders, including government officials, local NGOs, international organizations, migration researchers, and formal and informal recruiting agencies, to obtain insights into the multi-faceted impacts of COVID-19 on the OLR sector. The team also conducted a comprehensive review of available resources, including media articles, blogs, policy briefs, reports, white papers, and other online content, to triangulate findings from the KIIs.
After screening for inclusion criteria, a total of 110 grey literature documents were included in the review. A total of 31 KIIs were conducted; the data were transcribed, translated from Bangla to English, and analyzed using a detailed codebook. Findings indicate that there was limited reintegration support for returnee migrants. Facing increasing debt, financial insecurity, and social discrimination, returnee migrants were extremely vulnerable to forced labor and exploitation. Growing financial debt and limited job opportunities in their home country will likely push migrants to resort to unsafe migration channels. Evidence suggests that women, who are primarily domestic workers in GCC countries, were exposed to an increased risk of forced labor and workplace violence. Due to stay-at-home measures, women migrant workers were tasked with additional housekeeping work and subjected to longer work hours, wage withholding, and physical abuse. In Bangladesh, returnee women migrant workers also faced an increased risk of domestic violence.
Keywords: forced labor, migration, gender, human trafficking
Procedia PDF Downloads 115
437 Inverse Problem Method for Microwave Intrabody Medical Imaging
Authors: J. Chamorro-Servent, S. Tassani, M. A. Gonzalez-Ballester, L. J. Roca, J. Romeu, O. Camara
Abstract:
Electromagnetic and microwave imaging (MWI) have been used in medical imaging in recent years, the most common applications being breast cancer and stroke detection or monitoring. In those applications, the subject or zone to observe is surrounded by a number of antennas, and the Nyquist criterion can be satisfied. Additionally, the space between the antennas (transmitting and receiving the electromagnetic fields) and the zone to study can be prepared as a homogeneous scenario. However, this may differ in other cases, such as intracardiac catheters, stomach monitoring devices, pelvic organ systems, liver ablation monitoring devices, or uterine fibroid ablation systems. In this work, we analyzed different MWI algorithms to find the most suitable method for dealing with an intrabody scenario. Due to the space limitations usually confronted in those applications, the device would have a cylindrical configuration of at most eight transmitting and eight receiving antennas. This, together with the positioning of the supposed device inside a body tract, imposes additional constraints on the choice of a reconstruction method; for instance, it inhibits the use of well-known algorithms such as filtered backpropagation for diffraction tomography (due to the unusual configuration, with probes enclosed by the imaging region). Finally, the difficulty of simulating a realistic non-homogeneous background inside the body (due to incomplete knowledge of the dielectric properties of the tissues between the antennas' position and the zone to observe) also prevents the use of the Born and Rytov algorithms, given their limitations with a heterogeneous background. Instead, we decided to use a time-reversal algorithm (mostly used in geophysics) because it ignores heterogeneities in the background medium and focuses its generated field onto the scatterers.
Therefore, a 2D time-reversed finite-difference time-domain (FDTD) solver was developed, based on the time-reversal approach used for microwave breast cancer detection. Simultaneously, an in-silico testbed was developed to compare ground-truth dielectric properties with the corresponding microwave imaging reconstruction. Forward and inverse problems were computed varying: the frequency, chosen for a small zone to observe (7, 7.5, and 8 GHz); a small polyp diameter (5, 7, and 10 mm); two polyp positions with respect to the closest antenna (aligned or misaligned); and the (transmitters-to-receivers) antenna combination used for the reconstruction (1-1, 8-1, 8-8, or 8-3). Results indicate that when using the existing time-reversal method for breast cancer with the different combinations of transmitters and receivers, we found false positives due to the high number of degrees of freedom and the unusual configuration (and the possible violation of the Nyquist criterion). The false positives found in the 8-1 and 8-8 combinations were greatly reduced with the 1-1 and 8-3 combinations, the 8-3 configuration being the most suitable (three neighboring receivers at each time). The 8-3 configuration creates a reduced region-of-interest problem, decreasing the ill-posedness of the inverse problem. To conclude, the proposed algorithm solves the main limitations of the described intrabody application, successfully detecting the angular position of targets inside the body tract.
Keywords: FDTD, time-reversed, medical imaging, microwave imaging
Procedia PDF Downloads 125
436 Analyzing the Heat Transfer Mechanism in a Tube Bundle Air-PCM Heat Exchanger: An Empirical Study
Authors: Maria De Los Angeles Ortega, Denis Bruneau, Patrick Sebastian, Jean-Pierre Nadeau, Alain Sommier, Saed Raji
Abstract:
Phase change materials (PCMs) present attractive features that make them a passive solution for thermal comfort in buildings during summer. They show a large storage capacity per unit volume in comparison with other structural materials like bricks or concrete. If their use is matched with peak load periods, they can contribute to reducing the primary energy consumption related to cooling applications. Despite these promising characteristics, they present some drawbacks. Commercial PCMs, such as paraffins, have a low thermal conductivity, which affects the overall performance of the system. In some cases, the material can be enhanced by adding other elements that improve the conductivity, but in general, a unit design that optimizes the thermal performance is sought. Material selection is the starting point of the design stage, and it does not leave much room for optimization: the PCM melting point depends strongly on the atmospheric characteristics of the building location and must lie between the maximum and minimum temperatures reached during the day. The geometry of the PCM container and the geometrical distribution of the containers are design parameters as well. They significantly affect the heat transfer, and therefore these phenomena must be studied exhaustively. During its lifetime, an air-PCM unit in a building must cool the space during the daytime, while the PCM melts. At night, the PCM must be regenerated to be ready for the next use. When the system is not in service, a minimal amount of thermal exchange is desired. These functions involve both sensible and latent heat storage and release; hence, different mechanisms drive the heat transfer phenomena. An experimental test was designed to study the heat transfer phenomena occurring in a circular tube bundle air-PCM exchanger.
An in-line arrangement was selected as the geometrical distribution of the containers. For visual identification, the container material and a section of the test bench were transparent. Instruments were placed on the bench to measure temperature and velocity, and the PCM properties were obtained through differential scanning calorimetry (DSC) tests. The temperature evolution during both cycles, melting and solidification, was obtained. The results showed phenomena at a local level (tubes) and at an overall level (exchanger), with conduction and convection appearing as the main heat transfer mechanisms. From these results, two approaches to analyzing the heat transfer were followed. The first described the phenomena in a single tube as a series of thermal resistances, assuming purely conduction-controlled heat transfer in the PCM. In the second, the temperature measurements were used to evaluate significant dimensionless numbers and parameters such as the Stefan, Fourier, and Rayleigh numbers and the melting fraction. These approaches allowed us to identify the heat transfer phenomena during both cycles. The presence of natural convection during melting could be inferred from the influence of the Rayleigh number on the correlations obtained.
Keywords: phase change materials, air-PCM exchangers, convection, conduction
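The dimensionless groups used in the second approach are straightforward to evaluate once the material properties are known. The sketch below computes them with illustrative, paraffin-like property values; the numbers are hypothetical placeholders, not the study's DSC measurements.

```python
def stefan(cp, dT, latent):
    """Stefan number: ratio of sensible heat to latent heat, cp*dT/L."""
    return cp * dT / latent

def fourier(alpha, t, L):
    """Fourier number: dimensionless diffusion time, alpha*t/L^2."""
    return alpha * t / L**2

def rayleigh(g, beta, dT, L, nu, alpha):
    """Rayleigh number: buoyancy versus viscous/thermal diffusion."""
    return g * beta * dT * L**3 / (nu * alpha)

# Hypothetical paraffin-like properties:
cp, latent = 2100.0, 180e3      # J/(kg K), J/kg
alpha, nu = 9e-8, 4e-6          # thermal diffusivity, kinematic viscosity (m^2/s)
beta, g = 9e-4, 9.81            # expansion coeff (1/K), gravity (m/s^2)

ste = stefan(cp, 10.0, latent)                    # ~0.12: latent heat dominates
fo = fourier(alpha, 3600.0, 0.02)                 # one hour over a 2 cm tube
ra = rayleigh(g, beta, 10.0, 0.02, nu, alpha)     # ~2e6: convection plausible
```

A Rayleigh number well above ~1e3 for the melt layer is the usual indicator that natural convection, not conduction alone, governs the melting cycle, which is consistent with the correlation behavior reported here.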
Procedia PDF Downloads 176
435 Design, Fabrication and Analysis of Molded and Direct 3D-Printed Soft Pneumatic Actuators
Authors: N. Naz, A. D. Domenico, M. N. Huda
Abstract:
Soft robotics is a rapidly growing multidisciplinary field in which robots are fabricated from highly deformable materials, motivated by bioinspired designs. Their high dexterity and adaptability to the external environment during contact make soft robots ideal for applications such as gripping delicate objects, locomotion, and biomedical devices. The actuation systems of soft robots mainly include fluidic, tendon-driven, and smart-material actuation. Among them, the Soft Pneumatic Actuator (SPA) remains the most popular choice due to its flexibility, safety, easy implementation, and cost-effectiveness. At present, however, most SPA fabrication is still based on traditional molding and casting techniques, in which a mold is 3D-printed and silicone rubber is cast into it and consolidated. This conventional method is time-consuming, involves intensive manual labour, and limits the repeatability and accuracy of the design. Recent advancements in the direct 3D printing of soft materials can significantly reduce this repetitive manual work, with the ability to fabricate complex geometries and multicomponent designs in a single manufacturing step. The aim of this research is to design and analyse Soft Pneumatic Actuators (SPAs) using both conventional casting and modern direct 3D-printing technologies. The mold for traditional casting is 3D-printed using fused deposition modeling (FDM) with polylactic acid (PLA) thermoplastic filament. Hyperelastic soft materials such as Ecoflex-0030/0050 are cast into the mold and consolidated in a lab oven. The bending behaviour is observed experimentally at different compressor pressures to ensure uniform bending without failure. For direct 3D printing of the SPA, fused deposition modeling (FDM) with thermoplastic polyurethane (TPU) and stereolithography (SLA) with an elastic resin are used.
The actuator is modeled using the finite element method (FEM) to analyse the nonlinear bending behaviour, stress concentration, and strain distribution of different hyperelastic materials after pressurization. The FEM analysis is carried out in Ansys Workbench with a Yeoh 2nd-order hyperelastic material model. The FEM includes large deformation, contact between surfaces, and the influence of gravity. For mesh generation, quadratic tetrahedron, hybrid, and constant-pressure meshes are used. The SPA is connected to a baseplate that is in connection with the air compressor. A fixed boundary condition is applied on the baseplate, and static pressure is applied orthogonally to all surfaces of the internal chambers and channels with a closed continuum model. The simulated results from the FEM are compared with experimental results. The experiments are performed in a laboratory set-up where the developed SPA is connected to a compressed-air source with a pressure gauge. A performance-based comparison is made between the FDM- and SLA-printed SPAs and their molded counterparts. Furthermore, the molded and 3D-printed SPAs have been used to develop a three-finger soft pneumatic gripper, which has been tested for handling delicate objects.
Keywords: finite element method, fused deposition modeling, hyperelastic, soft pneumatic actuator
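For a sense of what the Yeoh 2nd-order model prescribes, its closed-form uniaxial response is easy to evaluate. The sketch below uses the standard incompressible uniaxial result for the strain energy W = C10(I1 - 3) + C20(I1 - 3)^2; the coefficient values are hypothetical, Ecoflex-like placeholders, not fitted parameters from this study.

```python
def yeoh2_uniaxial_stress(lam, c10, c20):
    """Cauchy stress for incompressible uniaxial tension at stretch `lam`
    under a 2nd-order Yeoh model W = C10*(I1-3) + C20*(I1-3)^2:
    sigma = 2*(lam^2 - 1/lam) * dW/dI1."""
    I1 = lam**2 + 2.0 / lam                  # first invariant, uniaxial case
    dW_dI1 = c10 + 2.0 * c20 * (I1 - 3.0)    # derivative of the energy density
    return 2.0 * (lam**2 - 1.0 / lam) * dW_dI1

# Hypothetical Ecoflex-like coefficients (Pa):
sigma = yeoh2_uniaxial_stress(1.5, 12e3, -0.5e3)   # stress at 50% stretch
```

At small strains this model reduces to a neo-Hookean response with shear modulus 2*C10, while the C20 term captures the stiffening or softening seen in silicone elastomers at larger stretches.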
Procedia PDF Downloads 89
434 Monitoring of Rice Phenology and Agricultural Practices from Sentinel 2 Images
Authors: D. Courault, L. Hossard, V. Demarez, E. Ndikumana, D. Ho Tong Minh, N. Baghdadi, F. Ruget
Abstract:
In the context of global change, efficient management of the available resources has become one of the most important topics, particularly for sustainable crop development. Timely, high-precision assessment is crucial for water resource and pest management. Rice cultivated in the Camargue region of Southern France faces a dual challenge: reducing soil salinity by flooding while also reducing the use of herbicides that negatively impact the environment. This context has led farmers to diversify their crop rotations and agricultural practices. The objective of this study was to evaluate this crop diversity, both in cropping systems and in the agricultural practices applied to rice paddies, in order to quantify the impact on the environment and on crop production. The proposed method is based on the combined use of crop models and multispectral data acquired by the recent Sentinel-2 satellite sensors launched by the European Space Agency (ESA) within the framework of the Copernicus program. More than 40 images at fine spatial resolution (10 m in the optical range) were processed for 2016 and 2017 (with a revisit time of 5 days) to map crop types using the random forest method and to estimate biophysical variables (LAI) retrieved by inversion of the PROSAIL canopy radiative transfer model. Thanks to the high revisit frequency of Sentinel-2, it was possible to monitor soil tillage before flooding and the second sowing made by some farmers to better control weeds. The temporal trajectories of the remote sensing data were analyzed for various rice cultivars to define the main parameters describing the phenological stages used to calibrate two crop models (STICS and SAFY). Results were compared to surveys conducted with 10 farms. A large variability of LAI was observed at the farm scale (up to 2-3 m²/m²), which induced a significant variability in the simulated yields (up to 2 tons/ha). Land-use observations were also collected on more than 300 fields.
Various maps were produced: land use, LAI, flooding and sowing dates, and harvest dates. Together, these maps allow a new typology to be proposed for classifying these paddy crop systems. Key phenological dates were estimated from inverse procedures and validated against ground surveys. The proposed approach allowed the years to be compared and anomalies to be detected. The methods proposed here can be applied to different crops in various contexts and confirm the potential of remote sensing acquired at fine resolution, such as the Sentinel-2 system, for agricultural applications and environmental monitoring. This study was supported by the French national centre for space studies (CNES) through the TOSCA program. Keywords: agricultural practices, remote sensing, rice, yield
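As a sketch of the kind of analysis described above, the Python fragment below extracts simple phenological markers (emergence date and peak LAI) from a LAI time series sampled at the Sentinel-2 revisit rate; the dates, LAI values and emergence threshold are illustrative assumptions, not data from the study.

```python
from datetime import date, timedelta

def phenological_stages(dates, lai, emergence_threshold=0.5):
    """Extract simple phenological markers from a LAI time series:
    emergence (first crossing of a LAI threshold) and peak LAI."""
    emergence = next((d for d, v in zip(dates, lai) if v >= emergence_threshold), None)
    peak_lai = max(lai)
    peak_date = dates[lai.index(peak_lai)]
    return emergence, peak_date, peak_lai

# Hypothetical 5-day-revisit LAI trajectory for one rice field
dates = [date(2017, 5, 1) + timedelta(days=5 * i) for i in range(10)]
lai = [0.1, 0.2, 0.6, 1.4, 2.5, 3.4, 3.9, 3.6, 2.8, 1.5]

emergence, peak_date, peak_lai = phenological_stages(dates, lai)
```

In practice such markers would be derived from the LAI maps retrieved by PROSAIL inversion and then used to calibrate the crop models.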
Procedia PDF Downloads 274
433 Using Inverted 4-D Seismic and Well Data to Characterise Reservoirs from Central Swamp Oil Field, Niger Delta
Authors: Emmanuel O. Ezim, Idowu A. Olayinka, Michael Oladunjoye, Izuchukwu I. Obiadi
Abstract:
Monitoring of reservoir properties prior to well placement and production is a requirement for optimisation and efficient oil and gas production. This is usually done using well log analyses and 3-D seismic, which are often prone to errors. However, 4-D (time-lapse) seismic, which incorporates numerous 3-D seismic surveys of the same field acquired with the same parameters and portrays the transient changes in the reservoir due to production effects over time, could be utilised because it offers better resolution. There is, however, a dearth of information on the applicability of this approach in the Niger Delta. This study was therefore designed to apply 4-D seismic, well-log and geologic data to the monitoring of reservoirs in the EK field of the Niger Delta. It aimed at locating bypassed accumulations and ensuring effective reservoir management. The field (EK) covers an area of about 1200 km² and belongs to the early Miocene (about 18 Ma). Data covering two 4-D vintages acquired over a fifteen-year interval were obtained from oil companies operating in the field. The data were analysed to determine the seismic structures, horizons, well-to-seismic ties (WST), and wavelets. Well logs and production history data from fifteen selected wells were also collected from the oil companies. Formation evaluation, petrophysical analysis and inversion, alongside geological data, were undertaken using Petrel, Shell-nDi, Techlog and Jason software. Well-to-seismic ties, formation evaluation and saturation monitoring using petrophysical and geological data were used to find bypassed hydrocarbon prospects. The seismic vintages were interpreted, and the amounts of change in the reservoir were defined by the differences between the acoustic impedance (AI) inversions of the base and the monitor seismic. AI rock properties were estimated from all the seismic amplitudes using controlled sparse-spike inversion. The estimated rock properties were used to produce AI maps.
The structural analysis showed the dominance of NW-SE trending rollover collapsed-crest anticlines in EK, with hydrocarbons trapped northwards. There were good ties in wells EK 27 and EK 39. Analysed wavelets revealed consistent amplitude and phase for the WST; hence, a good match between the inverted impedance and the well data. Evidence of large pay thickness, ranging from 2875 ms (11420 ft TVDSS) to about 2965 ms, was found around the EK 39 well, with good yield properties. The comparison between the base AI and the current monitor AI, together with the generated AI maps, revealed zones of untapped hydrocarbons and assisted in determining fluid movements. The inverted sections through EK 27 and EK 39 (within 3101-3695 m) indicated depletion in the reservoirs. The extent of the present non-uniform gas-oil contact and oil-water contact movements was from 3554 to 3575 m. The 4-D seismic approach led to better reservoir characterisation, well development and the location of deeper and bypassed hydrocarbon reservoirs. Keywords: reservoir monitoring, 4-D seismic, well placements, petrophysical analysis, Niger delta basin
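The core of the time-lapse comparison described above, differencing the monitor AI against the base AI and flagging cells of significant change, can be sketched minimally as follows; the grid values and depletion threshold are hypothetical, not figures from the study.

```python
def ai_difference(base, monitor):
    """Cell-by-cell difference between monitor and base acoustic
    impedance (AI) grids; negative values may indicate depletion."""
    return [[m - b for b, m in zip(brow, mrow)]
            for brow, mrow in zip(base, monitor)]

def flag_depletion(diff, threshold):
    """Flag cells whose AI change is at or below a survey-specific threshold."""
    return [[d <= threshold for d in row] for row in diff]

# Illustrative 2x3 AI grids (arbitrary impedance units)
base    = [[9.1, 9.3, 9.0], [8.8, 9.2, 9.4]]
monitor = [[9.0, 8.6, 9.0], [8.8, 8.4, 9.3]]
diff = ai_difference(base, monitor)
flags = flag_depletion(diff, threshold=-0.5)
```

A real workflow would operate on full inverted impedance volumes, but the map-level logic is the same.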
Procedia PDF Downloads 115
432 Gamification of eHealth Business Cases to Enhance Rich Learning Experience
Authors: Kari Björn
Abstract:
The introduction of games has expanded the application area of computer-aided learning tools to a wide variety of learner age groups. Serious games engage learners in a real-world type of simulation and potentially enrich the learning experience. The institutional background of a Bachelor's level engineering program in Information and Communication Technology is introduced, with detailed focus on one of its majors, Health Technology. As part of a Customer Oriented Software Application thematic semester, one particular course, “eHealth Business and Solutions”, is described and reflected in a gamified framework. Gaining a consistent view of the vast literature on business management, strategy, marketing and finance in a very limited time enforces a selection of topics relevant to the industry. Health Technology is a novel and growing industry with an expanding sector in consumer wearable devices and homecare applications. The business sector is attracting new entrepreneurs and impatient investor funds. From an engineering education point of view, the sector is driven by miniaturized electronics, sensors and wireless applications. However, the market is highly consumer-driven, and usability, safety and data integrity requirements are extremely high. When the same technology is used in the analysis or treatment of patients, very strict regulatory measures are enforced. The paper introduces a course structure using gamification as a tool to learn the essentials of a new market: customer value proposition design, followed by a market entry game. Students analyze the existing market size and pricing structure of the eHealth web-service market and enter the market as the steering group of their company, competing against the legacy players and with each other. The market is growing but has its rules of demand and supply balance. New products can be developed with an R&D investment and targeted to market with unique quality and price combinations.
Product cost structure can be improved by investing in enhanced production capacity. Investments can optionally be funded by foreign capital. Students make management decisions and face the dynamics of market competition in the form of an income statement and balance sheet after each decision cycle. The focus of the learning outcome is to understand that customer value creation is the source of cash flow. The benefit of gamification is to enrich the learning experience of the structure and meaning of financial statements. The paper describes the gamification approach and discusses outcomes after two course implementations. Alongside the description of learning challenges, some unexpected misconceptions are noted. Improvements to the game and the semi-gamified teaching pedagogy are discussed. The case description serves as additional support for a new game coordinator, as well as helping to improve the method. Overall, the gamified approach has helped to engage engineering students in business studies in an energizing way. Keywords: engineering education, integrated curriculum, learning experience, learning outcomes
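A minimal sketch of the per-cycle income statement students face after each decision might look as follows; the revenue, cost and investment figures are invented for illustration and are not values from the course game.

```python
def income_statement(units_sold, price, unit_cost, rnd_investment, fixed_costs):
    """Simplified income statement for one decision cycle of a market game."""
    revenue = units_sold * price
    cogs = units_sold * unit_cost            # cost of goods sold
    gross_profit = revenue - cogs
    operating_profit = gross_profit - fixed_costs - rnd_investment
    return {"revenue": revenue,
            "gross_profit": gross_profit,
            "operating_profit": operating_profit}

# Hypothetical decision: 1200 subscriptions at 50 EUR, 18 EUR unit cost,
# 10k EUR R&D and 20k EUR fixed costs for the cycle
result = income_statement(1200, 50, 18, 10_000, 20_000)
```

Seeing how each decision flows through revenue and costs to operating profit is the cash-flow insight the gamified course targets.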
Procedia PDF Downloads 239
431 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-Stokes Raman Spectroscopy
Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai
Abstract:
Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach for optimizing enzymes and exploiting their full potential when structure-function knowledge is insufficient. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedure such as transfection is needed. Its open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to excellent throughput of enzyme generation. However, current screening methods have limited capabilities. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the label's binding affinity and photostability. To acquire the natural activity of an enzyme, another method is to combine a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment: hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. The turnaround time of 30 minutes per sample by HPLC limits the enzyme improvement acquirable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is highly demanded. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is a form of coherent Raman spectroscopy that can identify label-free chemical components specifically from their inherent molecular vibrations. These characteristic vibrational signals are generated from different vibrational modes of chemical bonds.
With broadband CARS, the chemicals in one sample can be identified from their signals in a single broadband CARS spectrum. Moreover, it can magnify signal levels by several orders of magnitude compared to spontaneous Raman systems, and therefore has the potential to evaluate a chemical's concentration rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and the oxidized form of nicotinamide adenine dinucleotide (NAD+) to acetaldehyde and the reduced form (NADH), was used. The signal of NADH at 1660 cm⁻¹, which originates from the nicotinamide in NADH, was utilized to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing signal intensity of NADH. This CARS measurement result was consistent with that of a conventional method, UV-Vis spectroscopy. CARS is expected to find application in high-throughput enzyme screening and to realize more reliable enzyme improvement within a reasonable time. Keywords: Coherent Anti-Stokes Raman Spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy
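Converting a spectroscopic signal into a concentration is typically done through a linear calibration curve; the sketch below shows the idea for the 1660 cm⁻¹ NADH band, with hypothetical calibration points rather than measured values from the study.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept (calibration curve)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical calibration: CARS intensity at 1660 cm^-1 vs. known NADH (mM)
conc      = [0.0, 5.0, 10.0, 20.0]
intensity = [0.02, 1.05, 2.01, 4.03]
slope, intercept = linear_fit(conc, intensity)

def nadh_concentration(signal):
    """Convert a measured CARS signal into NADH concentration (mM)."""
    return (signal - intercept) / slope
```

With the calibration in hand, a reaction time course is obtained by applying `nadh_concentration` to each successive spectrum.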
Procedia PDF Downloads 139
430 A Way to Reduce Industrial Energy Intensity in a Chinese Province for Energy Conservation
Authors: John Doe
Abstract:
This paper presents the research project “Escape Through Culture”, which is co-funded by the European Union and national resources through the Operational Programme “Competitiveness, Entrepreneurship and Innovation” 2014-2020 and the Single RTDI State Aid Action "RESEARCH - CREATE - INNOVATE". The project implementation is undertaken by three partners: (1) the Computer Technology Institute and Press "Diophantus" (CTI), experienced in the design and implementation of serious games, natural language processing and ICT in education, (2) the Laboratory of Environmental Communication and Audiovisual Documentation (LECAD), part of the University of Thessaly, Department of Architecture, which is experienced in the study of creative transformation and reframing of urban and environmental multimodal experiences through the use of AR and VR technologies, and (3) “Apoplou”, an IT company with experience in the implementation of interactive digital applications. The research project proposes the design of an innovative infrastructure of digital educational escape games for mobile devices and computers, using virtual reality and augmented reality, for the promotion of Greek cultural heritage in Greece and abroad. In particular, the project advocates the combination of Greek cultural heritage and literature, advancements in digital technologies, and the implementation of innovative gamifying practices. The cultural experience of the players will take place in three layers: (1) In space: the digital games produced will utilize the dual character of space as a cultural landscape (the real space-landscape, but also the space-landscape as presented with augmented reality and virtual reality technologies). (2) In literary texts: the selected texts of Greek writers will support the sense of place and the multi-sensory involvement of the user through the context of space-time, language and cultural characteristics.
(3) In the philosophy of the "escape game" tool: whether played in a computer environment, indoors or outdoors, the spatial experience is one of the key components of escape games. The innovation of the project lies both in the junction of augmented/virtual reality with the promotion of cultural points of interest, and in the interactive, gamified practices applied to literary texts. The digital escape game infrastructure will be highly interactive, integrating the projection of Greek landscape cultural elements and digital literary text analysis, supporting the creation of escape games, and establishing and highlighting new playful ways of experiencing iconic cultural places, such as Elefsina and Skiathos. The content of the literary texts will relate to specific elements of the Greek cultural heritage depicted by prominent Greek writers and poets. The majority of the texts will originate from Greek educational content available in digital libraries and repositories developed and maintained by CTI. The escape games produced will be available for use during educational field trips, thematic tourism holidays, etc. In this paper, the methodology adopted for the infrastructure development is presented. The research is based on theories of place, gamification and game development, making use of corpus linguistics concepts and digital humanities practices for the compilation and analysis of the literary texts. Keywords: escape games, cultural landscapes, gamification, digital humanities, literature
Procedia PDF Downloads 244
429 An Evaluation of a Prototype System for Harvesting Energy from Pressurized Pipeline Networks
Authors: Nicholas Aerne, John P. Parmigiani
Abstract:
There is an increasing desire for renewable and sustainable energy sources to replace fossil fuels. This desire is the result of several factors. First is the role of fossil fuels in climate change. Scientific data clearly show that global warming is occurring. It has also been concluded that human activity, specifically the combustion of fossil fuels, is highly likely a major cause of this warming. Second, despite the current surplus of petroleum, fossil fuels are a finite resource and will eventually become scarce, and alternatives such as clean or renewable energy will be needed. Third, operations to obtain fossil fuels, such as fracking, off-shore oil drilling, and strip mining, are expensive and harmful to the environment. Given these environmental impacts, there is a need to replace fossil fuels with renewable energy sources as a primary energy source. Various sources of renewable energy exist. Many familiar sources obtain renewable energy from the sun and the natural environments of the earth. Common examples include solar, hydropower, geothermal heat, ocean waves and tides, and wind energy. Obtaining significant energy from these sources often requires physically large, sophisticated, and expensive equipment (e.g., wind turbines, dams, solar panels, etc.). Other sources of renewable energy are found in the man-made environment. An example is municipal water distribution systems. The movement of water through the pipelines of these systems typically requires the reduction of hydraulic pressure through the use of pressure reducing valves. These valves are needed to reduce upstream supply-line pressures to levels suitable for downstream users. The energy associated with this reduction of pressure is significant but is currently not harvested and is simply lost. While the integrity of municipal water supplies is of paramount importance, one can certainly envision means by which this lost energy source could be safely accessed.
This paper provides a technical description and analysis of one such means, developed by the technology company InPipe Energy, to generate hydroelectricity by harvesting energy from pressure reducing valve stations in municipal water distribution systems. Specifically, InPipe Energy proposes to install hydropower turbines in parallel with existing pressure reducing valves. InPipe Energy, in partnership with Oregon State University, has evaluated this approach and built a prototype system at the O. H. Hinsdale Wave Research Lab. The Oregon State University evaluation showed that the prototype system rapidly and safely initiates, maintains, and ceases power production as directed. The outgoing water pressure remained constant at the specified set point throughout all testing. The system replicates the functionality of the pressure reducing valve and ensures accurate control of downstream pressure. At a typical water-distribution-system pressure drop of 60 psi, the prototype, operating at an efficiency of 64%, produced approximately 5 kW of electricity. Based on the results of this study, the proposed method appears to offer a viable means of producing significant amounts of clean renewable energy from existing pressure reducing valves. Keywords: pressure reducing valve, renewable energy, sustainable energy, water supply
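The reported figures can be sanity-checked with the basic hydropower relation P = Δp·Q·η; the sketch below infers the flow rate implied by the stated 60 psi drop, 64% efficiency and ~5 kW output. The flow rate is an inferred quantity, not a value reported in the study.

```python
PSI_TO_PA = 6894.757  # pascals per psi

def electrical_power(delta_p_pa, flow_m3s, efficiency):
    """Electrical power recovered from a pressure drop: P = dp * Q * eta."""
    return delta_p_pa * flow_m3s * efficiency

def implied_flow(delta_p_pa, power_w, efficiency):
    """Flow rate (m^3/s) implied by a measured electrical output."""
    return power_w / (delta_p_pa * efficiency)

# Figures stated in the abstract: 60 psi drop, 64% efficiency, ~5 kW output
dp = 60 * PSI_TO_PA
q = implied_flow(dp, 5000.0, 0.64)   # roughly 0.019 m^3/s, i.e. ~19 L/s
```

A flow on the order of 19 L/s is plausible for a municipal distribution main, which is consistent with the stated operating point.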
Procedia PDF Downloads 204
428 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction
Authors: Sundaram Chandrasekaran, Seung Hyun Hur
Abstract:
Exploring strategies for oxygen vacancy engineering under mild conditions and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance are challenging issues for designing high-performance PEC devices. It is therefore very important to understand how oxygen vacancies (VO) or other defect states affect the performance of the photocatalyst in photoelectric transfer. So far, it has been found that defects in nano- or microcrystals can have two possible effects on PEC performance. Firstly, an electron-hole pair produced at the interface of the photoelectrode and electrolyte can recombine at the defect centers under illumination, thereby reducing PEC performance. On the other hand, the defects could lead to higher light absorption in the longer wavelength region and may act as energy centers for the water splitting reaction, which can improve PEC performance. Even though the dislocation growth of ZnO has been verified by full density functional theory (DFT) and local density approximation (LDA) calculations, further studies are required to correlate the structures of ZnO with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a vision of how complex structures form from simple starting materials but also the tools to improve PEC performance by understanding the underlying mechanisms of the mutual interactions. As there are few studies of ZnO growth with other materials, and the growth mechanism in those cases has not yet been clearly explored, it is very important to understand the fundamental growth process of nanomaterials with the specific materials, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures exhibiting significant PEC performance.
Herein, we fabricated various ZnO nanostructures such as hollow spheres, bucky bowls, nanorods and triangles, investigated their pH-dependent growth mechanism, and correlated their PEC performances. In particular, the origin of the well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods to triangles on the GO surface are discussed in detail. Surprisingly, the addition of GO during the synthesis process not only tunes the morphology of the ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the ZnO lattice, which suggests that the oxygen vacancies are created by a redox reaction between GO and ZnO in which surface oxygen is extracted from the ZnO surface by the functional groups of GO. On the basis of our experimental and theoretical analysis, the detailed mechanism for the formation of the specific structural shapes and oxygen vacancies via dislocations, and its impact on PEC performance, is explored. In water splitting performance, the maximum photocurrent density of the GO-ZnO triangles was 1.517 mA/cm² (under UV light, ~360 nm) vs. RHE, with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, which is the highest among all samples fabricated in this study and also one of the highest IPCE values reported so far for a GO-ZnO triangular-shaped photocatalyst. Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting
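The IPCE figure quoted above follows the standard relation IPCE(%) = 1240·J/(λ·P)·100, with J in mA/cm², λ in nm and P in mW/cm². The sketch below applies it to the stated numbers; the incident power density is an inferred quantity, not one reported in the abstract.

```python
def ipce_percent(current_ma_cm2, wavelength_nm, power_mw_cm2):
    """IPCE(%) = 1240 * J / (lambda * P) * 100, where 1240 ~ hc/e in eV*nm."""
    return 1240.0 * current_ma_cm2 / (wavelength_nm * power_mw_cm2) * 100.0

def implied_power(current_ma_cm2, wavelength_nm, ipce_pct):
    """Incident power density (mW/cm^2) implied by J, lambda and IPCE."""
    return 1240.0 * current_ma_cm2 / (wavelength_nm * ipce_pct) * 100.0

# Stated figures: J = 1.517 mA/cm^2 at ~360 nm, IPCE = 10.41%
p = implied_power(1.517, 360.0, 10.41)   # ~50 mW/cm^2 incident power density
```

An incident power density on the order of 50 mW/cm² is consistent with a typical UV illumination setup, so the reported J and IPCE values are mutually coherent.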
Procedia PDF Downloads 294
427 Generating Biogas from Municipal Kitchen Waste: An Experience from Gaibandha, Bangladesh
Authors: Taif Rocky, Uttam Saha, Mahobul Islam
Abstract:
With rapid urbanisation in Bangladesh, waste management remains one of the core challenges. Turning municipal waste into biogas for mass usage is a solution that Bangladesh needs to adopt urgently. Practical Action, with its commitment to challenging poverty with technological justice, has piloted such an idea in Gaibandha. The initiative received immense success and drew the attention of policy makers and practitioners. We believe biogas from waste can contribute substantially to meeting the country's growing demand for energy, at present and in the future. Practical Action has field-based experience in promoting small-scale and innovative technologies, and a proven track record in integrated solid waste management. We further utilized this experience to promote waste-to-biogas at the end users' level. In 2011, we piloted a project on waste to biogas in Gaibandha, a northern secondary town of Bangladesh. With resources and support from UNICEF and our own innovation funds, we established a complete chain for converting waste into a renewable energy source and organic fertilizer. Biogas is produced from municipal solid waste, which is properly collected, transported and segregated by private entrepreneurs. The project has two major focuses: diversification of biogas end use and establishing a public-private partnership business model. The project benefits include recycling of wastes, improved institutional (municipal) capacity, livelihoods from improved services and direct income from the project. Project risks include change of municipal leadership, traditional mindsets, access to decision making, and land availability. We have observed several outcomes from the initiative. Upscaling such an initiative will certainly contribute to a cleaner and healthier urban environment and to urban poverty reduction:
- It reduces the unsafe disposal of wastes, which improves the cleanliness and environment of the town.
- It makes the drainage system more effective, reducing the adverse impact of waterlogging or flooding.
- It improves public health through better management of wastes.
- It promotes the usage of biogas to replace firewood/coal, which creates smoke and indoor air pollution in kitchens with long-term impacts on the health of women and children.
- It reduces greenhouse gas emissions through the anaerobic recycling of wastes and contributes to a sustainable urban environment.
- It promotes the concept of agroecology through the use of bio-slurry/compost, which contributes to food security.
- It creates green jobs in the waste value chain, contributing to poverty alleviation among the urban extreme poor.
- It improves municipal governance through inclusive waste services and functional partnerships with private sectors.
- It contributes to the implementation of the 3R (Reduce, Reuse, Recycle) Strategy and to employment creation for the extreme poor, helping achieve the targets set in Vision 2021 by the Government of Bangladesh.
Keywords: kitchen waste, secondary town, biogas, segregation
Procedia PDF Downloads 221
426 Partnering With Key Stakeholders for Successful Implementation of Inhaled Analgesia for Specific Emergency Department Presentations
Authors: Sarah Hazelwood, Janice Hay
Abstract:
Methoxyflurane is an inhaled analgesic administered via a disposable inhaler, which has been used in Australia for 40 years for the management of pain in children & adults. However, there is a lack of data for methoxyflurane as a frontline analgesic medication within the emergency department (ED). This study investigated the usefulness of methoxyflurane in a private inner-city ED. The study concluded that the inclusion of all key stakeholders in the prescribing, administering & use of this new process led to comprehensive uptake & vastly positive outcomes for consumers & health professionals. Method: A 12-week prospective pilot study was completed utilizing patients presenting to the ED in pain (numeric pain rating score > 4) who fit the requirements for methoxyflurane use (as outlined in the Australian Prescriber information package). Nurses completed a formatted spreadsheet for each interaction where methoxyflurane was used. Patient demographics, day, time, initial numeric pain score, analgesic response time, the reason for use, staff concerns (free text), patient feedback (free text), & discharge time were documented. When clinical concern was raised, the researcher retrieved & reviewed patient notes. Results: 140 methoxyflurane inhalers were used. 60% of patients were 31 years of age & over (n=82), with 16% aged 70+. The gender split was 51% male to 49% female. Trauma-related pain (57%) saw the highest use, with the evening hours (1500-2259) seeing the greatest numbers used (39%). Tuesday, Thursday & Sunday shared the highest daily use throughout the study. The minimum numeric pain score recorded was 4/10 (n=13, 9%), with scores of 5-7/10 (moderate pain) given by almost 50% of patients. In only 3 instances did pain scores increase post use of methoxyflurane (all other entries showed pain scores below the initial rating). Patients & staff noted an obvious analgesic response within 3 minutes (n=96, 81% of administrations).
Nurses documented a change in patient vital signs for 4 of the 15 patient-related concerns; the remaining concerns were due to "gagging" on the taste or "having a coughing episode"; one patient tried to leave the department before the procedure was attended (very euphoric state). Upon review of the staff concerns, no adverse events occurred, & return to therapeutic vitals occurred within 10 minutes. Length of stay was compared with similar presentations (such as dislocated shoulder or ankle fracture) & showed an average 40-minute decrease in time to discharge. Methoxyflurane treatment was rated positively by > 80% of patients, with the remaining feedback relating to mild & transient concerns. Staff similarly noted a positive response to methoxyflurane as an analgesic & as an added tool for frontline analgesic purposes. Conclusion: Methoxyflurane should be used for suitable patient presentations requiring immediate, short-term pain relief. As a highly portable, non-narcotic avenue to treat pain, this study showed obvious therapeutic benefit, positive feedback, & a shorter length of stay in the ED. By partnering with key stakeholders, this study determined that methoxyflurane use decreased workload, decreased wait time to analgesia, and increased patient satisfaction. Keywords: analgesia, benefits, emergency, methoxyflurane
Procedia PDF Downloads 122
425 User Experience Evaluation on the Usage of Commuter Line Train Ticket Vending Machine
Authors: Faishal Muhammad, Erlinda Muslim, Nadia Faradilla, Sayidul Fikri
Abstract:
To deal with the problem of increasing mass transportation needs, PT. Kereta Commuter Jabodetabek (KCJ) implements the Commuter Vending Machine (C-VIM) as a solution. Against that background, the C-VIM is implemented as a substitute for the conventional ticket windows, with the purposes of making the transaction process more efficient and introducing self-service technology to commuter line users. However, this implementation causes problems and long queues when users are not accustomed to using the machine. The objective of this research is to evaluate the user experience after using the commuter vending machine. The goal is to analyze the existing user experience problems and to achieve a better user experience design. The evaluation is done by giving task scenarios according to the features offered by the machine: daily insured ticket sales, ticket refunds, and multi-trip card top-up. Twenty people, separated into two groups of respondents (experienced and inexperienced users) consisting of 5 males and 5 females each, were involved in this research to test whether there is a significant difference between the two groups. The user experience is measured by both quantitative and qualitative measurements. The quantitative measurement includes user performance metrics such as task success, time on task, errors, efficiency, and learnability. The qualitative measurement includes the system usability scale questionnaire (SUS), the questionnaire for user interface satisfaction (QUIS), and retrospective think aloud (RTA). The usability performance metrics show that 4 out of 5 indicators are significantly different between the groups, which shows that the inexperienced group has problems when using the C-VIM. The conventional ticket windows also show better usability performance metrics than the C-VIM.
From the data processing, the experienced group gave a SUS score of 62, with an acceptability scale of 'marginal low', a grade scale of 'D', and an adjective rating of 'good', while the inexperienced group gave a SUS score of 51, with an acceptability scale of 'marginal low', a grade scale of 'F', and an adjective rating of 'ok'. This shows that both groups give a low score on the system usability scale. The QUIS score of the experienced group is 69.18 and that of the inexperienced group is 64.20; average QUIS scores below 70 indicate a problem with the user interface. RTA was done to obtain user experience issues when using the C-VIM through interview protocols. The issues obtained were then sorted using the Pareto concept and diagram. The solution proposed by this research is an interface redesign using an activity relationship chart. This method resulted in a better interface with an average SUS score of 72.25, an acceptability scale of 'acceptable', a grade scale of 'B', and an adjective rating of 'excellent'. The time-on-task indicator of the performance metrics also shows a significantly better time with the new interface design. The results of this study show that the C-VIM does not yet have good performance and user experience. Keywords: activity relationship chart, commuter line vending machine, system usability scale, usability performance metrics, user experience evaluation
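The SUS scores quoted above come from the standard scoring rule: odd items contribute (response - 1), even items (5 - response), and the summed contributions are scaled by 2.5 to give a 0-100 score. A minimal sketch with a hypothetical respondent (not data from the study):

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.
    Odd items contribute (response - 1), even items (5 - response);
    the summed contributions are scaled by 2.5 to give 0-100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical single respondent's ten Likert answers
score = sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 1])
```

Group SUS scores such as the 62 and 51 reported here are then the means of such per-respondent scores.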
Procedia PDF Downloads 259
424 Zinc Oxide Varistor Performance: A 3D Network Model
Authors: Benjamin Kaufmann, Michael Hofstätter, Nadine Raidl, Peter Supancic
Abstract:
ZnO varistors are the leading overvoltage protection elements in today's electronics industry. Their highly non-linear current-voltage characteristics, very fast response times, good reliability and attractive cost of production are unique in this field. Nevertheless, challenges and unsolved questions remain. In particular, the urge to create ever smaller, versatile and reliable parts that fit industry's demands brings manufacturers to the limits of their abilities. Although the varistor effect of sintered ZnO has been known since the 1960s, and a lot of work has been done in this field to explain the sudden exponential increase of conductivity, the strict dependency on sintering parameters, as well as the influence of the complex microstructure, is not sufficiently understood. For further enhancement and down-scaling of varistors, a better understanding of the microscopic processes is needed. This work attempts a microscopic approach to investigate ZnO varistor performance. In order to cope with the polycrystalline varistor ceramic and to account for all possible current paths through the material, a preferably realistic model of the microstructure was set up in the form of three-dimensional networks, where every grain has a constant electric potential and voltage drops occur only at the grain boundaries. The electro-thermal workload, depending on different grain size distributions, was investigated, as well as the influence of the metal-semiconductor contact between the electrodes and the ZnO grains. A number of experimental methods are used, firstly, to feed the simulations with realistic parameters and, secondly, to verify the obtained results.
These methods are: a micro four-point probe system (M4PPS) to investigate the current-voltage characteristics between single ZnO grains and between ZnO grains and the metal electrode inside the varistor, micro lock-in infrared thermography (MLIRT) to detect current paths, electron backscatter diffraction and piezoresponse force microscopy to determine grain orientations, atom probe to determine atomic substituents, and Kelvin probe force microscopy to investigate grain surface potentials. The simulations showed that, within a critical voltage range, the current flow is localized along paths which represent only a tiny part of the available volume. This effect could be observed via MLIRT. Furthermore, the simulations show that the electric power density, which is inversely proportional to the number of active current paths since this number determines the electrically active volume, depends on the grain size distribution. M4PPS measurements showed that the electrode-grain contacts behave like Schottky diodes and are crucial for asymmetric current path development. Furthermore, evaluation of the data suggests that current flow is influenced by grain orientations. The present results deepen the knowledge of the microscopic factors influencing ZnO varistor performance and allow some recommendations on fabrication for obtaining more reliable ZnO varistors.
Keywords: metal-semiconductor contact, Schottky diode, varistor, zinc oxide
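The current-localization effect described above can be illustrated with a toy path model (this is not the authors' network solver; the non-linearity coefficient, breakdown voltage and grain statistics are all assumed values for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: each current path through the ceramic crosses a chain of
# grain boundaries; every boundary behaves like a back-to-back Schottky
# barrier with breakdown voltage ~3.2 V and a highly non-linear I-V law
# I ∝ (V / V_b)^alpha.
ALPHA = 30.0                       # assumed non-linearity coefficient
n_paths, applied_V = 1000, 320.0   # paths in parallel, applied voltage

# The number of grain boundaries per path varies with local grain size.
boundaries = rng.normal(100.0, 4.0, size=n_paths)
V_per_boundary = applied_V / boundaries
current = (V_per_boundary / 3.2) ** ALPHA

# Because of the huge exponent, paths with slightly fewer boundaries
# (larger grains) see a higher voltage per boundary and dominate:
share = np.sort(current)[::-1] / current.sum()
top1pct = share[: n_paths // 100].sum()
print(f"fraction of current carried by the top 1% of paths: {top1pct:.2f}")
```

The strongly super-proportional share of the top paths mirrors the simulated and MLIRT-observed localization of current in a tiny part of the volume.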
Procedia PDF Downloads 281
423 Study Protocol: Impact of a Sustained Health Promoting Workplace on Stock Price Performance and Beta - A Singapore Case
Authors: Wee Tong Liaw, Elaine Wong Yee Sing
Abstract:
Since 2001, many companies in Singapore have voluntarily participated in the bi-annual Singapore HEALTH Award initiated by the Health Promotion Board of Singapore (HPB). The Singapore HEALTH Award (SHA) is an industry-wide award and assessment process. SHA assesses and recognizes employers in Singapore for implementing a comprehensive and sustainable health promotion programme at their workplaces. The rationale for implementing a sustained health promoting workplace and participating in SHA is obvious when company management is convinced that healthier employees, business productivity, and profitability are positively correlated. However, research or empirical studies on the impact of a sustained health promoting workplace on stock returns are unlikely to attract interest in the absence of a systematic and independent assessment of the comprehensiveness and sustainability of a health promoting workplace in most developed economies. The principles of diversification and the mean-variance efficient portfolio in Modern Portfolio Theory developed by Markowitz (1952) laid the foundation for the work of many financial economists and researchers, among others the development of the Capital Asset Pricing Model from the work of Sharpe (1964), Lintner (1965) and Mossin (1966), and the Fama-French Three-Factor Model of Fama and French (1992). This research seeks to support the rationale by studying whether there is a significant relationship or impact of a sustained health promoting workplace on the performance of companies listed on the SGX. The research shall form and test hypotheses pertaining to the impact of a sustained health promoting workplace on the performance, including stock returns, of companies that participated in the SHA and companies that did not.
In doing so, the research would be able to determine whether corporate and fund managers should consider the significance of a sustained health promoting workplace as a risk factor to explain the stock returns of companies listed on the SGX. With respect to Singapore’s stock market, this research will test the significance and relevance of a health promoting workplace using the Singapore HEALTH Award as a proxy for a non-diversifiable risk factor to explain stock returns. This study will examine the significance of a health promoting workplace for a company’s performance, study its impact on stock price performance and beta, and examine whether it has higher explanatory power than the traditional single-factor asset pricing model, the CAPM (Capital Asset Pricing Model). Three key questions are pertinent to the research study. I) Given a choice, would an investor be better off investing in a listed company with a sustained health promoting workplace, i.e. a Singapore HEALTH Award recipient? II) The Singapore HEALTH Award has four levels, from Bronze and Silver through Gold to Platinum. Would an investor be indifferent to the level of the award when investing in a listed company that is a Singapore HEALTH Award recipient? III) Would an asset pricing model combining the Fama-French Three-Factor Model and a ‘Singapore Health Award’ factor be more accurate than the single-factor Capital Asset Pricing Model and the Three-Factor Model itself?
Keywords: asset pricing model, company's performance, stock prices, sustained health promoting workplace
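Question III amounts to comparing nested linear factor models. A minimal sketch on synthetic data (every series, loading and parameter below is invented for illustration; the 'health award' factor stands in for the proposed award-based risk factor):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 240  # months of synthetic return data (illustrative only)

# Synthetic factor series: market excess return, size (SMB), value (HML),
# and a hypothetical "health award" factor (e.g. an award-portfolio spread).
mkt = rng.normal(0.005, 0.04, T)
smb = rng.normal(0.002, 0.02, T)
hml = rng.normal(0.001, 0.02, T)
sha = rng.normal(0.001, 0.015, T)

# A stock whose excess returns load on all four factors, plus noise.
r = 0.6 * mkt + 0.3 * smb + 0.2 * hml + 0.4 * sha + rng.normal(0, 0.01, T)

def r_squared(factors, y):
    """In-sample R^2 of an OLS regression of y on the given factors."""
    X = np.column_stack([np.ones(len(y)), *factors])   # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_capm = r_squared([mkt], r)
r2_aug = r_squared([mkt, smb, hml, sha], r)
print(f"CAPM (market only)           R^2 = {r2_capm:.3f}")
print(f"FF3 + health-award factor    R^2 = {r2_aug:.3f}")
```

The study's actual test would of course use realized SGX returns and a properly constructed award factor rather than synthetic draws.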
Procedia PDF Downloads 369
422 Management Potentialities of Rice Blast Disease Caused by Magnaporthe grisea Using New Nanofungicides Derived from Chitosan
Authors: Abdulaziz Bashir Kutawa, Khairulmazmi Ahmad, Mohd Zobir Hussein, Asgar Ali, Mohd Aswad Abdul Wahab, Amara Rafi, Mahesh Tiran Gunasena, Muhammad Ziaur Rahman, Md Imam Hossain, Syazwan Afif Mohd Zobir
Abstract:
Various abiotic and biotic stresses affect rice production all around the world. Rice blast, the most serious and prevalent disease of rice plants, is one of the major obstacles to rice production. It is one of the diseases with the greatest negative effects on rice farming globally and is caused by the fungus Magnaporthe grisea. Since nanoparticles have been shown to have an inhibitory impact on certain types of fungi, nanotechnology is a novel approach to enhance agriculture by battling plant diseases. Utilizing nanocarrier systems enables the active chemicals to be absorbed, attached, and encapsulated to produce efficient nanodelivery formulations. The objectives of this research work were to determine the efficacy and mode of action of the nanofungicides in vitro and under field conditions (in vivo). The ionic gelation method was used in the development of the nanofungicides. Using the poisoned media method, the synthesized agronanofungicides' in vitro antifungal activity was assessed against M. grisea. The potato dextrose agar (PDA) was amended at several concentrations: 0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.15, 0.20, 0.25, 0.30, and 0.35 ppm for the nanofungicides. Medium with only the solvent served as a control. Every day, mycelial growth was measured, and the PIRG (percentage inhibition of radial growth) was computed. Based on the zone of inhibition results, the chitosan-hexaconazole agronanofungicide (2 g/mL) was the most effective fungicide, inhibiting the growth of the fungus completely (100%) at 0.2, 0.25, 0.30, and 0.35 ppm. It was followed by the carbendazim analytical fungicide, which inhibited the growth of the fungus (100%) at 5, 10, 25, 50, and 100 ppm. The least effective were the propiconazole and basamid fungicides, with 100% inhibition only at 100 ppm.
Scanning electron microscopy (SEM), confocal laser scanning microscopy (CLSM), and transmission electron microscopy (TEM) were used to study the mechanisms of action on the M. grisea fungal cells. The results showed that carbendazim, chitosan-hexaconazole, and HXE were the most effective fungicides in disrupting the mycelia and the internal structures of the fungal cells. The field assessment showed that the CHDEN treatment (5 g/L, double dosage) was the most effective in reducing the intensity of rice blast disease, with a DSI of 17.56%, a lesion length of 0.43 cm, a DR of 82.44%, an AUDPC of 260.54 unit², and a PI of 65.33%. The least effective treatment was chitosan-hexaconazole-dazomet (2.5 g/L, MIC). The usage of CHDEN and CHEN nanofungicides will significantly assist in lessening the severity of rice blast in the fields, increasing output and profit for rice farmers.
Keywords: chitosan, hexaconazole, disease incidence, Magnaporthe grisea
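The PIRG metric used in the screening above has a simple form: the relative reduction in radial colony growth versus the untreated control. A sketch, with hypothetical growth readings:

```python
def pirg(control_growth_mm, treated_growth_mm):
    """Percentage Inhibition of Radial Growth: how much a treatment
    reduced fungal colony growth relative to the solvent-only control."""
    return (control_growth_mm - treated_growth_mm) / control_growth_mm * 100

# A colony reaching 80 mm on control plates but only 20 mm on amended
# medium is 75% inhibited; no growth at all is 100% inhibition, as for
# the chitosan-hexaconazole treatment at 0.2-0.35 ppm.
print(pirg(80.0, 20.0))  # 75.0
print(pirg(80.0, 0.0))   # 100.0
```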
Procedia PDF Downloads 68
421 A Comparative Analysis on the Impact of the Prevention and Combating of Hate Crimes and Hate Speech Bill of 2016 on the Rights to Human Dignity, Equality, and Freedom in South Africa
Authors: Tholaine Matadi
Abstract:
South Africa is a democratic country with a historical record of racially motivated marginalisation and exclusion of the majority. During the apartheid era the country was run under legislation and policies based on racial segregation. The system held a tight clamp on interracial mixing and forced people to remain in segregated areas. For example, a citizen from the Indian community could not own property in an area allocated to white people. In this way, a great majority of people were denied basic human rights. Now, there is a supreme constitution with an entrenched justiciable Bill of Rights founded on the democratic values of social justice, human dignity, equality and the advancement of human rights and freedoms. The Constitution also enshrines the values of non-racialism and non-sexism. The Constitutional Court has the power to declare unconstitutional any law or conduct considered to be inconsistent with it. Now, more than two decades down the line, despite the abolition of apartheid, there is evidence that South Africa still experiences hate crimes which violate the entrenched right of vulnerable groups not to be discriminated against on the basis of race, sexual orientation, gender, national origin, occupation, or disability. To remedy this mischief, parliament has responded by drafting the Prevention and Combating of Hate Crimes and Hate Speech Bill. The Bill has been disseminated for public comment and suggestions. It is intended to combat hate crimes and hate speech based on sheer prejudice. The other purpose of the Bill is to bring South Africa in line with international human rights instruments against racism, racial discrimination, xenophobia and related expressions of intolerance identified in several international instruments. It is against this backdrop that this paper intends to analyse the impact of the Bill on the rights to human dignity, equality, and freedom.
This study is significant because the Bill was highly contested and created a huge debate. This study relies on a qualitative evaluative approach based on desktop and library research. The article draws on primary and secondary sources. For comparative purposes, the paper compares South Africa with countries such as Australia, Canada, Kenya, Cuba, and the United Kingdom, which have criminalised hate crimes and hate speech. The finding from this study is that, despite the Bill’s expressed positive intentions, this draft legislation is problematic for several reasons. The main reason is that it generates considerable controversy, mostly because it is considered to infringe the right to freedom of expression. Though the author suggests that the Bill should not be rejected in its entirety, she notes the brutal psychological effect of hate crimes on their direct victims and emphasises that a legislature can succeed in combating hate crimes only if it provides for them as a separate stand-alone category of offences. In view of these findings, the study recommends that, since hate speech clauses have a negative impact on freedom of expression, the Bill can be promulgated subject to the legislature enacting the Prevention and Combating of Hate Crimes Bill as a stand-alone law which criminalises hate crimes.
Keywords: freedom of expression, hate crimes, hate speech, human dignity
Procedia PDF Downloads 170
420 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System
Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski
Abstract:
Electrokinetic disintegration is one of the high-voltage electric methods. The design of such systems is exceptionally simple. Biomass flows through a system of pipes with electrodes mounted alongside that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which is the second pole and closes the circuit. The voltage ranges from 10 to 100 kV. The electrodes are supplied with ordinary single-phase mains current (230 V, 50 Hz), which is converted to 24 V direct current in modules serving the particular electrodes, and this current directly feeds the electrodes. The installation is completely safe because the generated current does not exceed 250 mA and because the conductors are grounded. Therefore, there is no risk of electric shock to the personnel, even in the case of failure or incorrect connection. The low current also means extremely low energy consumption, only 35 W per electrode, compared to other methods of disintegration. The pipes with electrodes, with a diameter of DN150, are made of acid-proof steel and connected on both sides with 90° elbows ended with flanges. The available S and U types of pipes enable very convenient fitting of the system into existing installations and rooms and facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically or askew, on special stands or directly on the wall of a room. The number of pipes and electrodes is determined by the operating conditions as well as the quantity of substrate, type of biomass, content of dry matter, method of disintegration (single-pass or circulatory), mounting site, etc.
The most effective method involves pre-treatment of the substrate, which may be pumped through the disintegration system on the way to the fermentation tank or recirculated through a buffered intermediate tank (substrate mixing tank). Destruction of the biomass structure in the process of electrokinetic disintegration shortens the substrate retention time in the tank and accelerates biogas production. A significant intensification of the fermentation process was observed in systems operating at technical scale, with the greatest increase in biogas production reaching 18%. A secondary effect, highly significant for the energy balance, is a tangible decrease in the energy input of the agitators in the tanks. It is due to the reduced viscosity of the biomass after disintegration and may yield energy savings reaching 20-30% of the earlier consumption. Other observed phenomena include a reduction of the surface scum layer, reduced foaming of the sewage and a successive decrease in the quantity of bottom sludge banks. Considering the above, the system for electrokinetic disintegration seems a very interesting and valuable solution among the specialist equipment offered for the processing of plant biomass, including Virginia fanpetals, before the process of methane fermentation.
Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals
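A rough energy-balance sketch using the figures quoted above; only the 35 W per electrode, the up-to-18% biogas gain and the 20-30% agitator saving come from the text, while the electrode count and the baseline loads are assumptions for illustration:

```python
# Hypothetical installation (electrode count and baseline power draws
# are assumed; percentage figures are taken from the description above).
n_electrodes = 10
disintegration_power_kw = n_electrodes * 35 / 1000          # 0.35 kW total

baseline_agitator_kw = 15.0      # assumed agitator draw before treatment
agitator_savings_kw = baseline_agitator_kw * 0.25           # 20-30% range

baseline_biogas_kw = 100.0       # assumed electrical output from biogas
extra_biogas_kw = baseline_biogas_kw * 0.18                 # up to 18% more

net_gain_kw = extra_biogas_kw + agitator_savings_kw - disintegration_power_kw
print(f"net power gain: {net_gain_kw:.2f} kW")  # 21.40 kW
```

Even under conservative assumptions, the electrode consumption is small against the gains, which is the point the abstract makes about the method's energy economy.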
Procedia PDF Downloads 374
419 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction
Authors: M. D. Haneef, R. B. Randall, Z. Peng
Abstract:
Journal bearings used in IC engines are prone to premature failures and are likely to fail earlier than their rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in industry for condition monitoring of rotary machinery. However, both techniques involve a great deal of technical expertise, time and cost. Limited literature is available on the application of these techniques for fault detection in reciprocating machinery, due to the complex nature of the impact forces that confounds the extraction of fault signals for vibration-based analysis and wear prediction. This work is an extension of a previous study in which an engine simulation model was developed using a MATLAB/SIMULINK program, whereby the engine parameters used in the simulation were obtained experimentally from a Toyota 3SFE 2.0 litre petrol engine. Simulated hydrodynamic bearing forces were used to estimate vibration signals, and envelope analysis was carried out to analyze the effect of speed, load and clearance on the vibration response. Three different loads of 50/80/110 N-m, three different speeds of 1500/2000/3000 rpm, and three different clearances, i.e., normal, 2 times and 4 times the normal clearance, were simulated to examine the effect of wear on bearing forces. The magnitude of the squared envelope of the generated vibration signals was not affected by load, but was observed to rise significantly with increasing speed and clearance, indicating the likelihood of augmented wear. In the present study, the simulation model was extended further to investigate the bearing wear behavior resulting from different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine was established first, based on which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation.
The essential outputs of interest in this study, critical to determining wear rates, are the tangential velocity and the oil film thickness between the journal and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard’s wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing, which would cause accelerated wear. A limiting value of 1 µm was used as the minimum oil film thickness needed to prevent contact. The increased wear rate with growing severity of operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the referenced vibration signals. Thus, on the one hand, the developed model demonstrates its capability to explain wear behavior, and on the other hand it helps to establish a correlation between wear-based and vibration-based analysis. The model therefore provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
Keywords: condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction
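The Archard wear law and the 1 µm film-thickness criterion described above can be sketched as follows (the wear coefficient, load, velocity, hardness and film-thickness values are assumed placeholders, not the study's simulated quantities):

```python
def archard_wear_rate(load_n, sliding_velocity_mps, hardness_pa, k=1e-7):
    """Archard's law: volumetric wear rate dV/dt = K * F * v / H,
    where K is a dimensionless wear coefficient (1e-7 is assumed)."""
    return k * load_n * sliding_velocity_mps / hardness_pa

def contact_expected(oil_film_thickness_um, limit_um=1.0):
    """Journal-to-sleeve contact is assumed once the minimum oil film
    drops below the 1 um threshold used in the study."""
    return oil_film_thickness_um < limit_um

# Wear accrues only at the crank positions where the film has broken down:
film_um = [2.4, 1.6, 0.8, 0.6, 1.2]       # assumed film thickness per step
wear_steps = sum(contact_expected(h) for h in film_um)
print(f"{wear_steps} of {len(film_um)} crank steps in contact")  # 2 of 5

rate = archard_wear_rate(load_n=1000.0, sliding_velocity_mps=5.0,
                         hardness_pa=2e9)
print(f"wear rate while in contact: {rate:.2e} m^3/s")
```

In the actual model the load, velocity and film thickness are all functions of crank angle, so the wear rate is accumulated and localized around the circumference.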
Procedia PDF Downloads 309
418 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities
Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun
Abstract:
With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues. This approach aims to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities. This model integrates RFM analysis, K-means clustering, and LSTM networks to achieve this objective. The research utilizes RFM analysis, traditionally used in customer value assessment, to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the failure data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. This research contributes to the proactive maintenance and optimization of electrical infrastructures in smart cities. It also enhances overall energy management and sustainability. The integration of advanced machine learning techniques in the predictive model demonstrates the potential for transforming the landscape of electrical system management within smart cities. The research utilizes diverse electrical utility datasets to develop and validate the predictive model.
RFM analysis, K-means clustering, and LSTM networks are applied to these datasets to analyze and predict electrical failures and voltage reliability. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities. It also investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities. It significantly improves prediction accuracy and reliability compared to traditional methods. This advancement contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities.
Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning
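The RFM-plus-K-means stage can be sketched as follows (toy data and a plain Lloyd's-algorithm K-means written with NumPy; the paper's LSTM stage and its real utility datasets are omitted, and all values below are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy RFM table for 200 electrical components (assumed data): days since
# last failure (recency), failure count (frequency), repair cost (monetary).
rfm = np.column_stack([
    rng.integers(1, 365, 200),        # recency
    rng.integers(0, 20, 200),         # frequency
    rng.uniform(100, 5000, 200),      # monetary impact
]).astype(float)

# Standardise so that no column dominates the Euclidean distance.
z = (rfm - rfm.mean(axis=0)) / rfm.std(axis=0)

def kmeans(x, k=3, iters=50):
    """Plain Lloyd's algorithm, standing in for the K-means step."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        # assign every component to its nearest center
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned members
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(z)
print("components per risk cluster:", np.bincount(labels, minlength=3))
```

Each resulting cluster groups components with similar failure recency, frequency and cost profiles, which is the segmentation the LSTM stage then models over time.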
Procedia PDF Downloads 55
417 Health Reforms in Central and Eastern European Countries: Results, Dynamics, and Outcomes Measure
Authors: Piotr Romaniuk, Krzysztof Kaczmarek, Adam Szromek
Abstract:
Background: A number of approaches to assess the performance of health systems have been proposed so far. Nonetheless, they lack a consensus regarding the key components of the assessment procedure and the criteria of evaluation. The WHO and OECD have developed methods of assessing health systems to counteract the underlying issues, but these are not free of controversies and have not produced a commonly accepted consensus. The aim of the study: On the basis of the WHO and OECD approaches, we decided to develop our own methodology to assess the performance of health systems in Central and Eastern European countries. We have applied the method to compare the effects of health system reforms in 20 countries of the region, in order to evaluate the dynamics of changes in terms of health system outcomes. Methods: Data was collected from a 25-year time period after the fall of communism, subsetted into different post-reform stages. Datasets collected from individual countries underwent one-, two- or multi-dimensional statistical analyses, and the Synthetic Measure of health system Outcomes (SMO) was calculated on the basis of the method of zeroed unitarization. A map of the dynamics of changes over time across the region was constructed. Results: When making a comparative analysis of the tested group in terms of the average SMO value throughout the analyzed period, we noticed some differences, although the gaps between individual countries were small. The countries with the highest SMO were the Czech Republic, Estonia, Poland, Hungary and Slovenia, while the lowest values were found in Ukraine, Russia, Moldova, Georgia, Albania, and Armenia. Countries differ in terms of the range of SMO value changes throughout the analyzed period. The dynamics of change is high in the case of Estonia and Latvia, moderate in the case of Poland, Hungary, the Czech Republic, Croatia, Russia and Moldova, and small when it comes to Belarus, Ukraine, Macedonia, Lithuania, and Georgia.
This information reveals the fluctuation dynamics of the measured value in time, yet it does not necessarily mean that such a dynamic range reflects an improvement in a given country. In reality, some of the countries moved along the scale with different effects. Albania decreased its level of health system outcomes, while Armenia and Georgia made progress but lost distance to the leaders in the region. On the other hand, Latvia and Estonia showed the most dynamic progress in improving outcomes. Conclusions: Countries that decided to implement comprehensive health reform achieved a positive result in terms of further improvements in health system efficiency levels. Besides, a higher level of efficiency during the initial transition period generally positively determined the subsequent value of the efficiency index, but not the dynamics of change. The paths of health system outcome improvement are highly diverse between countries. The instrument we propose constitutes a useful tool to evaluate the effectiveness of reform processes in post-communist countries, but more studies are needed to identify factors that may determine the results obtained by individual countries, as well as to eliminate the limitations of the methodology we applied.
Keywords: health system outcomes, health reforms, health system assessment, health system evaluation
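The zeroed-unitarization step behind the SMO can be sketched as follows (the indicator values and the two-indicator setup below are invented toy data; the study's actual indicator set is not given in the abstract):

```python
import numpy as np

def smo(indicators, stimulant):
    """Synthetic Measure of Outcomes via zeroed unitarization: rescale
    each indicator to [0, 1] over the compared countries, flip the
    destimulants so 'higher is better' everywhere, then average."""
    x = np.asarray(indicators, dtype=float)   # rows: countries, cols: indicators
    lo, hi = x.min(axis=0), x.max(axis=0)
    z = (x - lo) / (hi - lo)
    flip = ~np.asarray(stimulant)
    z[:, flip] = 1 - z[:, flip]               # invert destimulant columns
    return z.mean(axis=1)

# Assumed toy data: life expectancy (stimulant) and infant mortality
# (destimulant) for three hypothetical countries.
scores = smo([[78, 4.0], [72, 9.0], [75, 6.5]], stimulant=[True, False])
print(scores)  # best-on-both country scores 1.0, worst-on-both 0.0
```

Repeating this per post-reform stage and plotting the per-country scores over time yields the map of dynamics described above.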
Procedia PDF Downloads 289
416 Identification of Genomic Mutations in Prostate Cancer and Cancer Stem Cells by Single Cell RNAseq Analysis
Authors: Wen-Yang Hu, Ranli Lu, Mark Maienschein-Cline, Danping Hu, Larisa Nonn, Toshi Shioda, Gail S. Prins
Abstract:
Background: Genetic mutations are highly associated with increased prostate cancer risk. In addition to whole genome sequencing, somatic mutations can be identified by aligning transcriptome sequences to the human genome. Here we analyzed bulk RNAseq and single cell RNAseq data of human prostate cancer cells and their matched non-cancer cells from benign regions of 4 individual patients. Methods: Sequencing raw reads were aligned to the reference genome hg38 using STAR. Variants were annotated using Annovar with respect to overlapping gene annotation information, effect on gene and protein sequence, and SIFT annotation of nonsynonymous variant effects. We determined cancer-specific novel alleles by comparing variant calls in cancer cells to matched benign cells from the same individual, selecting unique alleles that were only detected in the cancer samples. Results: In bulk RNAseq data from 3 patients, the most common variants were noncoding mutations in the UTR3/UTR5 regions, and the major variant types were single-nucleotide polymorphisms (SNPs), including frameshift mutations. The C>T transition was the most frequently observed SNP substitution. A total of 222 genes carrying unique exonic or UTR variants were revealed in cancer cells across the 3 patients but not in benign cells. Among them, the transcript levels of 7 genes (CITED2, YOD1, MCM4, HNRNPA2B1, KIF20B, DPYSL2, NR4A1) were significantly up- or down-regulated in cancer stem cells. Of the 222 commonly mutated genes in cancer, 19 have nonsynonymous variants and 11 are damaged genes with variants including SIFT-predicted damaging substitutions, frameshifts, stop gains/losses, and insertions/deletions (indels). Two damaged genes, activating transcription factor 6 (ATF6) and the histone demethylase KDM3A, are of particular interest; the former is a survival factor for certain cancer cells, while the latter positively activates androgen receptor target genes in prostate cancer.
Further, single cell RNAseq data of cancer cells and their matched non-cancer benign cells from both primary 2D and 3D tumoroid cultures were analyzed. As in the bulk RNAseq data, single cell RNAseq in cancer demonstrated that exonic mutations are less common than noncoding variants, with SNPs, including frameshift mutations, the most frequent types in cancer. Compared to cancer stem cell-enriched 3D tumoroids, 2D cancer cells carried 3 times more variants, 8 times more coding mutations and 10 times more nonsynonymous SNPs. Finally, in both 2D primary and 3D tumoroid cultures, cancer stem cells exhibited fewer coding mutations and noncoding SNPs or insertions/deletions than non-stem cancer cells. Summary: Our study demonstrates the usefulness of bulk and single cell RNAseq data in identifying somatic mutations in prostate cancer, providing an alternative method for screening candidate genes for prostate cancer diagnosis and potential therapeutic targets. Cancer stem cells carry fewer somatic mutations than non-stem cancer cells due to the immortal strand DNA inherited from parental stem cells, which explains their long-lived characteristics.
Keywords: prostate cancer, stem cell, genomic mutation, RNAseq
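The cancer-specific allele selection described in Methods reduces to a set difference over variant keys. A minimal sketch with invented calls (real calls would come from the upstream STAR + Annovar pipeline, keyed per patient):

```python
# A variant call is keyed by (chromosome, position, reference, alternate).
# It is "cancer-specific" when it appears in the cancer sample but never
# in the matched benign sample from the same patient.
benign_calls = {
    ("chr1", 115256529, "T", "C"),
    ("chr8", 128750540, "G", "A"),
}
cancer_calls = {
    ("chr1", 115256529, "T", "C"),   # shared (germline) variant, excluded
    ("chr8", 128750540, "G", "A"),
    ("chrX", 66765158, "C", "T"),    # C>T transition seen in cancer only
}

cancer_specific = cancer_calls - benign_calls
print(sorted(cancer_specific))  # [('chrX', 66765158, 'C', 'T')]
```

Intersecting the per-patient cancer-specific sets across patients then yields recurrently mutated genes, like the 222 reported above.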
Procedia PDF Downloads 16
415 Solid Polymer Electrolyte Membranes Based on Siloxane Matrix
Authors: Natia Jalagonia, Tinatin Kuchukhidze
Abstract:
Polymer electrolytes (PE) play an important part in electrochemical devices such as batteries and fuel cells. To achieve optimal performance, the PE must maintain a high ionic conductivity and mechanical stability at both high and low relative humidity. The polymer electrolyte also needs to have excellent chemical stability for longevity and robustness. According to the prevailing theory, ionic conduction in polymer electrolytes is facilitated by the large-scale segmental motion of the polymer backbone and primarily occurs in the amorphous regions of the polymer electrolyte. Crystallinity restricts polymer backbone segmental motion and significantly reduces conductivity. Consequently, polymer electrolytes with high conductivity at room temperature have been sought through polymers which have highly flexible backbones and a largely amorphous morphology. Interest in polymer electrolytes was also increased by potential applications of solid polymer electrolytes in high energy density solid state batteries, gas sensors and electrochromic windows. A conductivity of 10⁻³ S/cm is commonly regarded as a necessary minimum value for practical applications in batteries. At present, polyethylene oxide (PEO)-based systems are most thoroughly investigated, reaching room temperature conductivities of 10⁻⁷ S/cm in some cross-linked salt-in-polymer systems based on amorphous PEO-polypropylene oxide copolymers. It is widely accepted that amorphous polymers with low glass transition temperatures Tg and a high segmental mobility are important prerequisites for high ionic conductivities. Another necessary condition for high ionic conductivity is a high salt solubility in the polymer, which is most often achieved by donors such as ether oxygens or imide groups on the main chain or on the side groups of the PE.
It is also well established that lithium ion coordination takes place predominantly in the amorphous domain, and that the segmental mobility of the polymer is an important factor in determining ionic mobility. Great attention has been paid to PEO-based amorphous electrolytes obtained by synthesis of comb-like polymers, attaching short ethylene oxide unit sequences to an existing amorphous polymer backbone. The aim of the presented work is to obtain solid polymer electrolyte membranes using PMHS as a matrix. For this purpose the hydrosilylation reactions of α,ω-bis(trimethylsiloxy)methylhydrosiloxane with allyl triethylene glycol monomethyl ether and vinyltriethoxysilane at a 1:28:7 ratio of the initial compounds have been studied in the presence of Karstedt’s catalyst, platinum hydrochloric acid (0.1 M solution in THF) and platinum-on-carbon catalyst in 50% solution of anhydrous toluene. The synthesized oligomers are vitreous liquid products which are well soluble in organic solvents, with specific viscosity ηsp ≈ 0.05 - 0.06. The synthesized oligomers were analysed with FTIR and 1H, 13C, 29Si NMR spectroscopy. The synthesized polysiloxanes were investigated with wide-angle X-ray, gel-permeation chromatography, and DSC analyses. Via sol-gel processes of the polymer systems doped with lithium trifluoromethylsulfonate (triflate) or lithium bis(trifluoromethylsulfonyl)imide, solid polymer electrolyte membranes have been obtained. The dependence of ionic conductivity as a function of temperature and salt concentration was investigated, and the activation energies of conductivity for all obtained compounds were calculated.
Keywords: synthesis, PMHS, membrane, electrolyte
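The activation-energy calculation mentioned at the end is typically an Arrhenius fit of conductivity against temperature; a two-point sketch (the conductivity values and temperatures below are assumed, not the study's measurements):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def activation_energy_ev(t1_k, sigma1, t2_k, sigma2):
    """Arrhenius estimate: sigma = sigma0 * exp(-Ea / (kB * T)), so from
    two (T, sigma) points, Ea = kB * ln(sigma2/sigma1) / (1/T1 - 1/T2)."""
    return K_B * math.log(sigma2 / sigma1) / (1 / t1_k - 1 / t2_k)

# Assumed conductivities of a membrane at 25 C and 60 C:
ea = activation_energy_ev(298.15, 1e-6, 333.15, 1e-5)
print(f"Ea = {ea:.2f} eV")  # Ea = 0.56 eV
```

In practice a full ln σ versus 1/T series is fitted by regression rather than two points, and strongly amorphous electrolytes often follow VTF rather than pure Arrhenius behavior.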
Procedia PDF Downloads 256
414 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells
Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez
Abstract:
Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Through this interaction, the cells work in a coordinated and collaborative way that facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical junctions they can transmit signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a spectrum from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical to computational ones. The study of cellular and molecular processes in cancer has likewise found valuable support in different simulation tools that, covering a similar spectrum, have allowed in silico experimentation with this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using the Cellulat bioinformatics tool, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way. 
The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB, and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. In the course of this work, we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way propose key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication, and therefore in the dissemination of cancer cells. Using in silico experiments, we verified how the inhibition of this signaling pathway prevents the transformation of the cells that surround a cancerous cell.
Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation
Procedia PDF Downloads 249
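Gillespie's direct method, which the abstract above names as one of the two elements behind Cellulat, can be illustrated on a toy birth-death system. This is a generic sketch of the algorithm, not the Cellulat implementation; the reaction system (production 0 → A at rate k1, decay A → 0 at rate k2·A) is chosen only for illustration:

```python
import math
import random


def gillespie(x0, k1, k2, t_max, seed=0):
    """Gillespie direct method for a single species A.

    Reactions: production 0 -> A (propensity k1) and
    decay A -> 0 (propensity k2 * A).
    Returns the sampled trajectory as a list of (time, count) pairs.
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_max:
        a1 = k1          # propensity of production
        a2 = k2 * x      # propensity of decay
        a0 = a1 + a2
        if a0 == 0.0:
            break
        # Exponentially distributed waiting time to the next reaction
        t += -math.log(1.0 - rng.random()) / a0
        # Choose which reaction fires, proportionally to its propensity
        if rng.random() * a0 < a1:
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory


traj = gillespie(x0=0, k1=10.0, k2=0.1, t_max=200.0)
```

With these rates the molecule count fluctuates around the steady-state mean k1/k2 = 100; in a signaling-network setting each reaction channel would instead correspond to a binding, phosphorylation, or transport step with its own propensity.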