Search results for: point cloud imaging
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6723

4563 Quantitative Comparisons of Different Approaches for Rotor Identification

Authors: Elizabeth M. Annoni, Elena G. Tolkacheva

Abstract:

Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia that is a known prognostic marker for stroke, heart failure and death. Reentrant mechanisms of rotor formation, which are stable electrical sources of cardiac excitation, are believed to cause AF. No existing commercial mapping systems have been demonstrated to consistently and accurately predict rotor locations outside of the pulmonary veins in patients with persistent AF. There is a clear need for robust spatio-temporal techniques that can consistently identify rotors using unique characteristics of the electrical recordings at the pivot point that can be applied to clinical intracardiac mapping. Recently, we have developed four new signal analysis approaches – Shannon entropy (SE), Kurtosis (Kt), multi-scale frequency (MSF), and multi-scale entropy (MSE) – to identify the pivot points of rotors. These proposed techniques utilize different cardiac signal characteristics (other than local activation) to uncover the intrinsic complexity of the electrical activity in the rotors, which are not taken into account in current mapping methods. We validated these techniques using high-resolution optical mapping experiments in which direct visualization and identification of rotors in ex-vivo Langendorff-perfused hearts were possible. Episodes of ventricular tachycardia (VT) were induced using burst pacing, and two examples of rotors were used showing 3-sec episodes of a single stationary rotor and figure-8 reentry with one rotor being stationary and one meandering. Movies were captured at a rate of 600 frames per second for 3 sec. with 64x64 pixel resolution. These optical mapping movies were used to evaluate the performance and robustness of SE, Kt, MSF and MSE techniques with respect to the following clinical limitations: different time of recordings, different spatial resolution, and the presence of meandering rotors. 
To quantitatively compare the results, the SE, Kt, MSF and MSE techniques were compared to the “true” rotor(s) identified using the phase map. Accuracy was calculated for each approach as the duration of the time series and the spatial resolution were reduced. The time series duration was decreased from its original length of 3 sec down to 2, 1, and 0.5 sec. The spatial resolution of the original VT episodes was decreased from 64x64 pixels to 32x32, 16x16, and 8x8 pixels by uniformly removing pixels from the optical mapping video. Our results demonstrate that Kt, MSF and MSE were able to accurately identify the pivot point of the rotor under all three clinical limitations. The MSE approach demonstrated the best overall performance, but Kt was the best at identifying the pivot point of the meandering rotor. Artifacts mildly affected the performance of the Kt, MSF and MSE techniques, but had a strong negative impact on the performance of SE. The results of our study motivate further validation of the SE, Kt, MSF and MSE techniques using intra-atrial electrograms from paroxysmal and persistent AF patients to see if these approaches can identify pivot points in a clinical setting. More accurate rotor localization could significantly increase the efficacy of catheter ablation to treat AF, resulting in a higher success rate for single procedures.
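As an illustration of the signal-characteristic idea described above, the following is a minimal sketch (not the authors' implementation) of two of the four metrics, Shannon entropy and kurtosis, computed pixel-by-pixel over an optical-mapping movie; the histogram bin count and the (frames, rows, cols) array layout are assumptions made for the sketch:

```python
import numpy as np

def shannon_entropy(signal, bins=16):
    """Shannon entropy of one pixel's amplitude histogram (bins assumed)."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def kurtosis(signal):
    """Excess kurtosis of one pixel's time series."""
    x = signal - signal.mean()
    return np.mean(x ** 4) / (np.mean(x ** 2) ** 2) - 3.0

def pixelwise_maps(movie):
    """movie: (frames, rows, cols) optical-mapping array.
    Returns per-pixel SE and Kt maps; the pivot point is expected to
    stand out as an extremum in these maps."""
    f, r, c = movie.shape
    flat = movie.reshape(f, r * c)
    se = np.array([shannon_entropy(flat[:, i]) for i in range(r * c)])
    kt = np.array([kurtosis(flat[:, i]) for i in range(r * c)])
    return se.reshape(r, c), kt.reshape(r, c)
```

Applied to the 600-frame, 64x64 movies described above, the extremum of each map would be taken as the pivot-point candidate and compared against the phase-map location.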

Keywords: Atrial Fibrillation, Optical Mapping, Signal Processing, Rotors

Procedia PDF Downloads 324
4562 Perception of Customers towards Service Quality: A Comparative Analysis of Organized and Unorganised Retail Stores (with Special Reference to Bhopal City)

Authors: Abdul Rashid, Varsha Rokade

Abstract:

Service quality within retail units is pivotal for satisfying customers and retaining them. This study of customer perception of service quality variables in retail aims to identify the dimensions and their impact on customers. An analytical study of the different retail service quality variables was done to understand the relationships between them. The study explores the factors that attract customers towards organised and unorganised retail stores in the capital city of Madhya Pradesh, India. As organised retailers are seen as offering similar products in their outlets, improving service quality is seen as critical to ensuring competitive advantage over unorganised retailers. Data were collected through a structured questionnaire on a five-point Likert scale from existing walk-in customers of selected organised and unorganised retail stores in Bhopal City, Madhya Pradesh, India. The data were then analysed by factor analysis using the Statistical Package for the Social Sciences (SPSS), in particular percentage analysis, ANOVA and chi-square. This study tries to find the interrelationships between the various retail service quality dimensions, which will help retailers identify the steps needed to improve the overall quality of service. Thus, the findings of the study prove helpful in understanding the service quality variables that should be considered by organised and unorganised retail stores in the capital city of Madhya Pradesh, India. The findings of this empirical research also reiterate the point of view that the dimensions of service quality in retail play an important role in enhancing customer satisfaction in a sector with high growth potential and tremendous opportunities in rapidly growing economies like India’s. With the introduction of FDI in multi-brand retailing, a large number of international retail players are expected to enter the Indian market; this in turn will bring more competition to the retail sector.
To benchmark themselves against global standards, Indian retailers will have to improve their service quality.
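For readers unfamiliar with the chi-square step mentioned above, the following is a minimal sketch with purely hypothetical counts (the actual survey data is not reproduced in the abstract), comparing store type against satisfaction level:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# hypothetical counts: organised vs unorganised x (low/medium/high satisfaction)
table = [[30, 50, 40],
         [45, 40, 25]]
stat = chi_square_stat(table)
# compare against the chi-square critical value for (2-1)*(3-1) = 2 df
# at alpha = 0.05, which is 5.991
significant = stat > 5.991
```

With these invented counts the statistic exceeds the critical value, i.e. store type and satisfaction would not be independent.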

Keywords: organized retail, unorganised retail, retail service quality, service quality dimension

Procedia PDF Downloads 230
4561 Monitor Vehicle Speed Using Internet of Things Based Wireless Sensor Network System

Authors: Akber Oumer Abdurezak

Abstract:

Road traffic accidents are a major problem in Ethiopia, resulting in many deaths, injuries, and property losses every year. According to the Federal Transport Authority, one of the main causes of traffic accidents and crashes in Ethiopia is over speeding. Different technologies are implemented to monitor the speed of vehicles in order to minimize accidents and crashes. This research aimed at designing a speed monitoring system to monitor the speed and movement of travelling vehicles, reporting illegally speeding vehicles to the concerned bodies. The system is implemented through a wireless sensor network. The proposed system can sense and detect the movement of vehicles and process and analyse the data obtained from the sensors and the cloud system. The data is sent to the central controlling server. The system contains accelerometer and gyroscope sensors to sense and collect the data of the vehicle, an Arduino to process the data, and a Global System for Mobile Communication (GSM) module to send the data to the concerned body. When the speed of the vehicle exceeds the allowable speed limit, the system sends an “over speeding” message to the database. Both accelerometer and gyroscope sensors are used to collect acceleration data. The acceleration data is then converted to speed, the corresponding speed is checked against the speed limit, and vehicles above the speed limit are reported to the concerned authorities to avoid frequent accidents. The proposed system decreases the occurrence of accidents and crashes due to overspeeding and can serve as an eye opener for the implementation of other intelligent transport system technologies. The system can also be integrated with other technologies like GPS and Google Maps to obtain better output.
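The acceleration-to-speed step described above can be sketched as follows; this is an illustration only, and the speed limit, sampling period, and message format are assumptions, not values from the paper:

```python
# Assumed parameters for the sketch
SPEED_LIMIT_KMH = 80.0      # assumed allowable speed limit
SAMPLE_INTERVAL_S = 0.1     # assumed accelerometer sampling period

def estimate_speed(accel_samples, v0=0.0):
    """Integrate longitudinal acceleration (m/s^2) over time
    to estimate speed (m/s), starting from an initial speed v0."""
    v = v0
    for a in accel_samples:
        v += a * SAMPLE_INTERVAL_S
    return v

def check_overspeed(accel_samples, v0=0.0):
    """Return an 'over speeding' message (to be sent via GSM to the
    database, as the abstract describes) or None if within the limit."""
    v_kmh = estimate_speed(accel_samples, v0) * 3.6
    if v_kmh > SPEED_LIMIT_KMH:
        return f"over speeding: {v_kmh:.1f} km/h"
    return None
```

In the real device this logic would run on the Arduino, with the gyroscope used to correct for vehicle orientation before integrating.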

Keywords: accelerometer, IOT, GSM, gyroscope

Procedia PDF Downloads 75
4560 Healthcare Big Data Analytics Using Hadoop

Authors: Chellammal Surianarayanan

Abstract:

The healthcare industry is generating large amounts of data driven by various needs such as record keeping, physicians’ prescriptions, medical imaging, sensor data, Electronic Patient Records (EPR), laboratory work, pharmacy, etc. Healthcare data is so big and complex that it cannot be managed by conventional hardware and software. The complexity of healthcare big data arises from the large volume of data, the velocity with which the data is accumulated, and its different varieties, i.e., the structured, semi-structured and unstructured nature of the data. Despite this complexity, if the trends and patterns that exist within the big data are uncovered and analyzed, higher quality healthcare can be provided at lower cost. Hadoop is an open source software framework for the distributed processing of large data sets across clusters of commodity hardware using a simple programming model. The core components of Hadoop include the Hadoop Distributed File System, which offers a way to store large amounts of data across multiple machines, and MapReduce, which offers a way to process large data sets with a parallel, distributed algorithm on a cluster. The Hadoop ecosystem also includes various other tools such as Hive (a SQL-like query language), Pig (a higher-level query language for MapReduce), HBase (a columnar data store), etc. This paper analyzes how healthcare big data can be processed and analyzed using the Hadoop ecosystem.
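As a sketch of the MapReduce model described above, the following shows a Hadoop Streaming-style mapper and reducer that count admissions per diagnosis code; the comma-separated record layout is invented for illustration:

```python
# Hypothetical record layout: patient_id,diagnosis_code,cost

def mapper(lines):
    """Map phase: emit a (diagnosis_code, 1) pair for each record."""
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) >= 2:
            yield parts[1], 1

def reducer(pairs):
    """Reduce phase: sum the counts per key. In a real job, Hadoop
    delivers the pairs to each reducer grouped and sorted by key."""
    counts = {}
    for key, value in pairs:
        counts[key] = counts.get(key, 0) + value
    return counts
```

With real Hadoop Streaming, the mapper and reducer would be two separate scripts reading stdin and writing stdout, launched via `hadoop jar hadoop-streaming.jar -mapper ... -reducer ...`, with HDFS holding the input and output.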

Keywords: big data analytics, Hadoop, healthcare data, towards quality healthcare

Procedia PDF Downloads 413
4559 Light Weight Fly Ash Based Composite Material for Thermal Insulation Applications

Authors: Bharath Kenchappa, Kunigal Shivakumar

Abstract:

Lightweight, low-thermal-conductivity and high-temperature-resistant materials, or systems with moderate mechanical properties capable of withstanding high heating rates, are needed in both commercial and military applications. A single material with all these attributes is very difficult to find, and one needs innovative ideas to make such a material system from what is available. To bring down the cost of the system, one has to be conscious of the cost of the basic materials. Such a material system can be called a thermal barrier system. This paper focuses on developing, testing and characterizing a material system for thermal barrier applications. The material developed is porous, with low density, a low thermal conductivity of 0.1062 W/m·°C, and a glass transition temperature of about 310 °C. The thermal properties of the developed material were measured in both the longitudinal and thickness directions to highlight the fact that the material shows isotropic behavior. The material is called modified Eco-Core, and it uses less than 9% by weight of high-char resin in the composite. The filler (reinforcing material) is a component of fly ash called Cenospheres: hollow micro-bubbles made of ceramic materials. A special mixing technique is used to surface-coat the fillers with a thin layer of resin to develop point-to-point contact between particles. One could use commercial ceramic micro-bubbles instead of Cenospheres, but they are expensive. The bulk density of Cenospheres is about 0.35 g/cc, and we could accomplish a composite density of about 0.4 g/cc. Standard drywall-grade fibers of 3 mm length, at one percent of filler weight, were used to add toughness. Both thermal and mechanical characterization were performed, and the properties are documented. For higher temperature applications (up to 1,000 °C), a hybrid system was developed using an aerogel mat. The properties of the combined material were characterized and documented.
Thermal tests were conducted on both the bare modified Eco-Core and the hybrid material to assess the suitability of the material for a thermal barrier application. The hybrid material system was found to meet the requirements of the application.

Keywords: aerogel, fly ash, porous material, thermal barrier

Procedia PDF Downloads 111
4558 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network

Authors: Ziying Wu, Danfeng Yan

Abstract:

Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than the remote cloud server to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of Internet of Vehicles (IoV) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet business, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and pose a joint optimization problem of minimizing the total system cost. In view of this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influences of multiple factors such as concurrent multiple computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is described as a Markov Decision Process. Experiments show that our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize the computation offloading and resource allocation schemes, and improve system resource utilization, compared with other computation offloading policies.
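To make the reinforcement-learning formulation concrete, here is a toy sketch, not the paper's JCOTM algorithm: tabular Q-learning over a single offloading decision (run a task locally, or offload it to an edge server) with invented delay-plus-energy cost figures. The paper replaces the table with a deep Q-network over a far richer state; the state, costs, and hyperparameters below are assumptions made for the sketch:

```python
import random

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.3
STATES = range(5)        # assumed state: edge-server load level 0..4
ACTIONS = (0, 1)         # 0 = compute locally, 1 = offload to edge

def cost(state, action):
    """Assumed system cost: offloading wins when the edge is lightly loaded."""
    return 5.0 if action == 0 else 1.0 + 1.5 * state

def train(episodes=5000, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    s = random.choice(STATES)
    for _ in range(episodes):
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = -cost(s, a)                    # reward = negative system cost
        s2 = random.choice(STATES)         # next edge load, assumed i.i.d.
        target = r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (target - q[(s, a)])
        s = s2
    return q

def policy(q, s):
    """Greedy offloading decision for edge-load state s."""
    return max(ACTIONS, key=lambda a: q[(s, a)])
```

Under these assumed costs, the learned policy offloads when the edge is lightly loaded and computes locally when it is congested, which is the qualitative behavior the JCOTM algorithm optimizes at scale.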

Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep q-network

Procedia PDF Downloads 118
4557 Defects Analysis, Components Distribution, and Properties Simulation in the Fuel Cells and Batteries by 2D and 3D Characterization Techniques

Authors: Amir Peyman Soleymani, Jasna Jankovic

Abstract:

The growing demand for clean and renewable energy has required the fuel cell and battery industries to produce more efficient devices at lower prices, which can be achieved through the improvement of the electrode. Microstructural characterization, as one of the main materials development tools, plays a pivotal role in the production of better clean energy devices. In this study, methods for the characterization and study of defects and component distribution were applied to polymer electrolyte membrane fuel cell (PEMFC) and Li-ion battery (LIB) electrodes in 2D and 3D. The particle distribution, porosity, mechanical defects, and component distribution were studied by Scanning Electron Microscopy (SEM), SEM with Focused Ion Beam (SEM-FIB), and Scanning Transmission Electron Microscopy equipped with Energy Dispersive Spectroscopy (STEM-EDS). The 3D results obtained from X-ray Computed Tomography (XCT) revealed the pathways for electron and ion conductivity and defect progression maps. Computer-aided methods (Avizo) were employed to simulate the properties and performance of the microstructure in the electrodes. Suggestions were provided to improve the performance of PEMFCs and LIBs by adjusting the microstructure and the distribution of the components in the electrodes.

Keywords: PEM fuel cells, Li-ion batteries, 2D and 3D imaging, materials characterizations

Procedia PDF Downloads 154
4556 Development of a Microfluidic Device for Low-Volume Sample Lysis

Authors: Abbas Ali Husseini, Ali Mohammad Yazdani, Fatemeh Ghadiri, Alper Şişman

Abstract:

We developed a microchip device that uses surface acoustic waves for the rapid lysis of low-volume cell samples. The device incorporates sharp-edge glass microparticles for improved performance. We optimized the lysis conditions for high efficiency and evaluated the device's feasibility for point-of-care applications. The microchip contains a 13-finger-pair interdigital transducer with a 30-degree focused angle. It generates high-intensity acoustic beams that converge 6 mm away. The microchip operates at a frequency of 16 MHz, exciting Rayleigh waves with a 250 µm wavelength on the LiNbO3 substrate. Cell lysis occurs when Candida albicans cells and glass particles are placed within the focal area. The high-intensity surface acoustic waves induce centrifugal forces on the cells and glass particles, resulting in cell lysis through lateral forces from the sharp-edge glass particles. We conducted 42 pilot cell lysis experiments to optimize the surface acoustic wave-induced streaming. We varied electrical power, droplet volume, glass particle size, concentration, and lysis time. A regression machine-learning model determined the impact of each parameter on lysis efficiency. Based on these findings, we predicted optimal conditions: an electrical signal of 2.5 W, a sample volume of 20 µl, a glass particle size below 10 µm, a concentration of 0.2 µg, and a 5-minute lysis period. Downstream analysis successfully amplified a DNA target fragment directly from the lysate. The study presents an efficient microchip-based cell lysis method employing acoustic streaming and microparticle collisions within microdroplets. Integration of a surface acoustic wave-based lysis chip with an isothermal amplification method enables swift point-of-care applications.
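The operating figures quoted above are mutually consistent: the phase velocity of a surface acoustic wave equals frequency times wavelength, which for these numbers gives the Rayleigh-wave velocity on the LiNbO3 substrate. A quick check:

```python
# v = f * lambda for the surface acoustic wave figures quoted in the abstract
frequency_hz = 16e6        # 16 MHz operating frequency
wavelength_m = 250e-6      # 250 um Rayleigh wavelength on LiNbO3
velocity_m_s = frequency_hz * wavelength_m   # -> 4000 m/s
```

The result, 4,000 m/s, is close to the Rayleigh-wave velocity commonly reported for LiNbO3 substrates.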

Keywords: cell lysis, surface acoustic wave, micro-glass particle, droplet

Procedia PDF Downloads 79
4555 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving

Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries

Abstract:

Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for the manufacturers of weaving machines. Current technological developments concern low energy costs, low environmental impact, high productivity, and constant product quality. The high energy consumption of the method can be ascribed to its high demand for compressed air. An energy efficiency method is applied to the air jet weaving technology. The method identifies and classifies the main relevant energy consumers and processes from the exergy point of view, and it leads to the identification of energy efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is considered the central requirement to be satisfied. The initial phase of the method consists of an analysis of the state of the art of the main weft insertion components in order to prioritize the most energy-demanding components and processes. The identified major components are investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles with the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools such as FEM analysis, CFD simulation models and experimental analysis are used to design a more energy-efficient version of the components involved in the filling insertion. A different concept for the metal strip of the profiled reed is developed. The developed metal strip allows a reduction of the machine's energy consumption. Based on a parametric and aerodynamic study, the designed reed transmits higher values of the flow power to the filling yarn.
The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.

Keywords: air jet weaving, aerodynamic simulation, energy efficiency, experimental validation, weft insertion

Procedia PDF Downloads 197
4554 Amrita Bose-Einstein Condensate Solution Formed by Gold Nanoparticles Laser Fusion and Atmospheric Water Generation

Authors: Montree Bunruanses, Preecha Yupapin

Abstract:

In this work, the quantum material called Amrita (elixir) is made by processing gold top-down into nanometer particles, fusing 99% gold with a laser and mixing it with drinking water produced by an atmospheric water generation (AWG) system, which makes water from air. The high laser power destroys the four natural force bindings (the gravitational, weak, electromagnetic, and strong coupling forces), finally yielding the purified Bose-Einstein condensate (BEC) state. With this method, gold atoms in the form of spherical single crystals with a diameter of 30-50 nanometers are obtained and used. They were modulated (activated) with a frequency generator into various matrix structures and mixed with AWG water to be used in the upstream conversion (quantum reversible) process, which can be applied to humans both internally and externally, by drinking or by applying it to the treated surfaces. Working on both space (body) and time (mind) will go back to the origin and start again from the coupling of space-time on both sides of time, with fusion (strong coupling force) and pushing out (Big Bang) at the equilibrium point (singularity), occurring as strings and DNA with neutrinos as coupling energy. There is no distortion (purification), which is the point where time and space have not yet been determined and there is infinite energy. Therefore, the upstream conversion is performed, reforming DNA to purify it. The use of Amrita is a method for people who cannot meditate (quantum meditation). It was applied in various cases, where the results show that Amrita can make the body and the mind return to their pure origins and begin the downstream process with the Big Bang movement, quantum communication in all dimensions, DNA reformation, frequency filtering, crystal body forming, broadband quantum communication networks, black hole forming, quantum consciousness, body and mind healing, etc.

Keywords: quantum materials, quantum meditation, quantum reversible, Bose-Einstein condensate

Procedia PDF Downloads 77
4553 Embodied Spirituality in Gestalt Therapy

Authors: Silvia Alaimo

Abstract:

This lecture brings to our attention the theme of spirituality within Gestalt therapy's theoretical and clinical perspectives, which is closely connected to the experiences of fertile emptiness and creative indifference. First of all, the premise that must be made is the overcoming of traditional Western culture's philosophical and religious misunderstandings, such as the dichotomy between spirituality and practical/material daily life, as well as the widespread secular perspective of classic psychology. Even fullness and emptiness have traditionally been associated with the concepts of being and not being. "There is only one way through which we can contact the deepest layers of our existence, rejuvenate our thinking and reach intuition (the harmony of thought and being): inner silence" (Perls). Therefore, the "fertile void" does not mean empty in itself, but rather a useful condition of every creative and responsible act, making room for a deeper dimension close to spirituality. Spirituality concerns questions about the meaning of existence, which lies beyond the concrete and literal dimension, looking for the essence of things and the value of personal experience. Looking at the fundamentals of Gestalt epistemology (phenomenology, aesthetics, and the relationship), we can reach the heart of a therapeutic work that takes on spiritual contours and is based on an embodied (incarnate) dimension, through relational aesthetic knowledge (Spagnuolo Lobb), deep contact with each other, and the role of compassion and responsibility as the patient's recognition criteria (Orange, 2013), rooted in the body. The aesthetic dimension, like the spiritual dimension with which it is often associated, is a subtle dimension: it is the dimension of the essence of things, of their "soul."
In clinical practice, it implies that the relationship between therapist and patient is "in the absence of judgment," also called the "zero point of creative indifference," expressed by a 'therapeutic mentality'. It consists of following with interest and authentic curiosity where the patient wants to go and supporting him in his intentionality of contact. It is a condition of pure and simple awareness, of the full acceptance of "what is," a moment of detachment from one's own life in which one does not take oneself too seriously, a starting point for finding a center of balance and integration that leads to the creative act, to growth, and, as Perls would say, to the excitement and adventure of living.

Keywords: spirituality, bodily, embodied aesthetics, phenomenology, relationship

Procedia PDF Downloads 137
4552 Optimization of the Self-Recognition Direct Digital Radiology Technology by Applying the Density Detector Sensors

Authors: M. Dabirinezhad, M. Bayat Pour, A. Dabirinejad

Abstract:

In 2020, this technology was introduced to solve some of the deficiencies of direct digital radiology. SDDR is an invention capable of capturing dental images without human intervention, and it was invented by the authors of this paper. Adjusting the radiology wave dose is part of the tasks of dentists, radiologists, and dental nurses during the radiology photography process. In this paper, an improvement is added that enables SDDR to set a suitable radiology wave dose automatically, according to the density and age of the patient. Separate sensors will be included in the sensor package that use ultrasonic waves to detect the density of the teeth and adjust the wave dose. This facilitates the process of dental photography in terms of time and enhances the accuracy of choosing the correct wave dose for each patient separately. Since radiology waves are well known to trigger diseases such as cancer, choosing the most suitable wave dose can help decrease their side effects on human health. In other words, it decreases the exposure time for patients. On the other hand, due to the time saved, less energy will be consumed, and saving energy is beneficial in decreasing environmental impact as well.
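The proposed dose-selection step could look like the following purely hypothetical sketch; the density thresholds, age cutoff, and dose factors are all invented for illustration and are not values from the paper:

```python
def select_dose(density_g_cm3, age_years):
    """Map an ultrasonically measured tooth density and patient age to a
    relative exposure factor. All thresholds here are hypothetical."""
    if density_g_cm3 < 1.8:
        base = 0.7      # low-density tissue needs less exposure
    elif density_g_cm3 < 2.4:
        base = 1.0
    else:
        base = 1.3      # denser tissue needs more exposure
    if age_years < 12:
        base *= 0.6     # reduced exposure for children
    return round(base, 2)
```

In the actual device, such a mapping would be calibrated against clinical dose guidelines rather than fixed constants.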

Keywords: dental direct digital imaging, environmental impacts, SDDR technology, wave dose

Procedia PDF Downloads 194
4551 The Impact of Urbanisation on Sediment Concentration of Ginzo River in Katsina City, Katsina State, Nigeria

Authors: Ahmed A. Lugard, Mohammed A. Aliyu

Abstract:

This paper studied the influence of urban development, and its accompanying land surface transformation, on the sediment concentration of the Ginzo River, which flows naturally across the city of Katsina. A comparable twin river known as the Tille River, which is less urbanized, was used for comparison with the sediment concentration results of the Ginzo River in order to ascertain the impact of the urban area on sediment concentration. A USP 61 point-integrating cableway sampler, described by Gregory and Walling (1973), was used to collect the suspended sediment samples in the wet season months of June, July, August and September. The results obtained in the study show that only the sample collected at the peripheral site of the city, which is mostly farmland, resembles the results at the four sites of the Tille River, the reference stream in the study; they were found to differ by only about 10%. At the other three sites of the Ginzo, which are highly urbanized, the disparity ranges from 35-45% less than what is obtained at the four sites of the Tille River. In the generalized assessment, the t-distribution result applied to the two sets of data shows that there is a significant difference between the sediment concentration of the urbanized River Ginzo and that of the less urbanized River Tille. The study further discovered that the lower sediment concentration found in the urbanized River Ginzo is attributable to the concretization of surfaces, tarred roads, the concretized channeling of segments of the river (including the river bed) and reserved open grassland areas, all within the catchment. The study therefore concludes that urbanization affects not only the hydrology of an urbanized river basin, but also the sediment concentration, which is a significant aspect of its geomorphology. This would certainly affect the flood plain of the basin at a certain point, which might be suitable land for cultivation.
It is recommended here that further studies on the impact of urbanization on river basins should focus on all elements of geomorphology as they have on hydrology. This would make the work more complete, as the two disciplines are inseparable from each other. The authorities concerned should also put in place proper environmental and land use management policies to arrest the menace of land degradation and related episodic events.
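The t-distribution comparison described above can be sketched as follows; the sediment figures below are hypothetical stand-ins (the abstract does not reproduce the raw data), and Welch's form of the test is used so unequal variances are tolerated:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate degrees of freedom for two
    independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)   # sample variances
    se2 = va / na + vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# hypothetical wet-season suspended sediment concentrations (mg/l)
ginzo = [210, 190, 240, 205]   # urbanized river
tille = [330, 360, 310, 345]   # less urbanized reference river
t, df = welch_t(ginzo, tille)
# |t| far exceeds the ~2.45 two-tailed critical value at alpha = 0.05
# for roughly 6 degrees of freedom, so the difference is significant
```

With real monthly samples from the eight sites, the same computation would support the significance claim made in the study.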

Keywords: environment, infiltration, river, urbanization

Procedia PDF Downloads 318
4550 Superficial Temporal Artery Pseudoaneurysm Post Blepharoplasty: Case Report

Authors: Asaad Alhabsi, Alyaqdan Algafri

Abstract:

Aim: To report an 83-year-old man with a history of left upper eyelid swelling after 4-lid blepharoplasty, diagnosed on the basis of clinical presentation and radiological imaging with a pseudoaneurysm of the frontal branch of the superficial temporal artery. Methods: An 83-year-old man presented to a tertiary ophthalmic center with painless left upper eyelid swelling of 2 months' duration after 4-lid blepharoplasty. A large left subcutaneous, sub-brow mass in the superotemporal pre-septal area was found and excised surgically. He then twice developed a recurrent, larger mass: the first recurrence was treated with aspiration of blood; the second was diagnosed as a pseudoaneurysm of the frontal branch of the superficial temporal artery (STA) and treated with endovascular embolization. Results: Pseudoaneurysm of the superficial temporal artery (STA) is rare, usually presenting after head or face trauma. The literature reports few such cases postoperatively, and no reported cases after blepharoplasty. Conclusions: Surgical intervention is the gold standard of treatment, either directly, by dissecting the aneurysmal sac and ligating both ends, or by the endovascular method of injecting thrombin or embolization, which was done in this patient by an interventional radiologist.

Keywords: superficial temporal artery, pseudoaneurysm, blepharoplasty, Oculoplasty

Procedia PDF Downloads 77
4549 Application of Seismic Refraction Method in Geotechnical Study

Authors: Abdalla Mohamed M. Musbahi

Abstract:

The study area lies in the Al-Falah area on the Airport Road in Tripoli, Zone (16), where the construction of a multi-storey residential and commercial complex is planned; the area was divided into seven subzones. In each subzone, orthogonal profiles were collected using the seismic refraction method. The overall aim of this project is to investigate the applicability of seismic refraction, a commonly used geophysical technique for determining depth to bedrock, competence of bedrock, depth to the water table, and depths to other seismic velocity boundaries. The purpose of the work is to make engineers and decision makers recognize the importance of planning and executing a pre-investigation program that includes geophysics, and in particular the seismic refraction method. This aim is achieved by evaluating the seismic refraction method at different scales, determining the depth and velocity of the base layer (bedrock), and calculating the elastic properties of each layer in the region. Orthogonal profiles were carried out in every subzone of Zone (16). In the seismic refraction layout, the geophones are placed along a straight line with 5 m spacing, and three shot points (at the beginning, middle, and end of the layout) are used to generate P and S waves. The first and last shot points are placed about 5 m from the nearest geophones, and the middle shot point is placed between the 12th and 13th geophones. From the time-distance curves, the P- and S-wave velocities were calculated and the layer thicknesses were estimated for up to three layers. Any change in the physical properties of the medium (density ρ, bulk modulus κ, shear modulus μ) changes the velocity of the waves passing through it. Therefore, the velocities of waves traveling through rocks are closely related to these parameters, and the parameters can be estimated from the measured primary and secondary velocities (P-wave, S-wave).
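The velocity-moduli relations alluded to above are standard for an isotropic medium; a minimal sketch (the input values are illustrative, not figures from the survey) of recovering the shear and bulk moduli from measured P- and S-wave velocities and an assumed density:

```python
def elastic_moduli(vp, vs, rho):
    """Elastic moduli of an isotropic layer from P- and S-wave
    velocities (m/s) and density (kg/m^3):
        Vs = sqrt(mu / rho)               ->  mu    = rho * Vs**2
        Vp = sqrt((kappa + 4*mu/3) / rho) ->  kappa = rho * Vp**2 - 4*mu/3
    """
    mu = rho * vs ** 2                      # shear modulus, Pa
    kappa = rho * vp ** 2 - 4.0 * mu / 3.0  # bulk modulus, Pa
    # Poisson's ratio follows from the velocity ratio alone
    nu = (vp ** 2 - 2 * vs ** 2) / (2 * (vp ** 2 - vs ** 2))
    return mu, kappa, nu

# Illustrative near-surface values only
mu, kappa, nu = elastic_moduli(vp=1800.0, vs=900.0, rho=2000.0)
```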

Keywords: application of seismic, geotechnical study, physical properties, seismic refraction

Procedia PDF Downloads 492
4548 Low Resistivity Pay Identification in Carbonate Reservoirs of Yadavaran Oilfield

Authors: Mohammad Mardi

Abstract:

Generally, resistivity is high in oil layers and low in water layers. Yet there are oil-bearing intervals that show low resistivity despite high porosity. In a typical example, well A (depth: 4341.5-4372.0 m), both Spectral Gamma Ray (SGR) and Corrected Gamma Ray (CGR) are relatively low, and porosity varies from 12-22%. Above 4360 m, the reservoir shows the conventional positive separation between deep and shallow resistivity with high resistance; below 4360 m, it shows a negative separation with low resistance. At depths of 4362.4 m and 4371 m in particular, deep resistivity is only 2 Ω·m, and the CAST-V imaging log shows low-resistance substances contained in the pores or matrix of the reservoir in this interval. Rock slice analysis shows a pyrite volume of 2-3% in the interval 4369.08-4371.55 m. A comprehensive analysis of the volume of shale (Vsh), porosity, resistivity invasion features, mud logging, and mineral volumes indicates that the possible causes of the negative separation between deep and shallow resistivities with relatively low resistance are erosional pores, caves, micritic texture, and the presence of pyrite. A full-bore Drill Stem Test (DST) verified 4991.09 bbl/d in this interval. To identify and thoroughly characterize low-resistivity intervals, coring, Nuclear Magnetic Resonance (NMR) logging, and further geological evaluation are needed.
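The abstract does not state its saturation model, but the standard tool connecting deep resistivity and porosity to water saturation is Archie's equation; a hedged sketch with hypothetical formation-water resistivity and rock constants. Conductive minerals such as pyrite depress the measured resistivity and make Archie overestimate water saturation, which is exactly why such intervals need extra evaluation:

```python
def archie_sw(rt, phi, rw, a=1.0, m=2.0, n=2.0):
    """Archie water saturation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).
    rt: true (deep) resistivity, ohm-m; phi: porosity (fraction);
    rw: formation-water resistivity, ohm-m; a, m, n: rock constants."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# With the 2 ohm-m deep resistivity reported below 4360 m, 20% porosity,
# and a hypothetical Rw of 0.02 ohm-m:
sw = archie_sw(rt=2.0, phi=0.20, rw=0.02)
```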

Keywords: low resistivity pay, carbonates petrophysics, microporosity, porosity

Procedia PDF Downloads 167
4547 Understanding ASPECTS of Stroke: Interrater Reliability between Emergency Medicine Physician and Radiologist in a Rural Setup

Authors: Vineel Inampudi, Arjun Prakash, Joseph Vinod

Abstract:

Aims and Objectives: To evaluate the interrater reliability in grading the ASPECTS score between the emergency medicine physician at first contact and the radiologist among patients with acute ischemic stroke. Materials and Methods: We conducted a retrospective analysis of 86 acute ischemic stroke cases referred to the Department of Radiodiagnosis between November 2014 and January 2016. The imaging (plain CT scan) was performed using a GE Bright Speed Elite 16-slice CT scanner. The ASPECTS score was calculated separately by an emergency medicine physician and a radiologist. Interrater reliability for total and dichotomized ASPECTS (≥ 6 and < 6) scores was assessed using ICC and Cohen κ coefficients on SPSS software (v17.0). Results: Interrater agreement for total and dichotomized ASPECTS was substantial (ICC 0.79 and Cohen κ 0.68) between the emergency physician and the radiologist. The mean difference in ASPECTS between the two readers was only 0.15, with a standard deviation of 1.58. No proportionality bias was detected. A Bland-Altman plot was constructed to show the distribution of ASPECTS differences between the two readers. Conclusion: Substantial interrater agreement was noted in grading ASPECTS between the emergency medicine physician at first contact and the radiologist, confirming the score's robustness even in a rural setting.
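As a sketch of the dichotomized agreement statistic, Cohen's κ can be computed directly from the two raters' calls; the ratings below are hypothetical, not the study's data:

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters over the same cases.
    r1, r2: equal-length lists of category labels."""
    assert len(r1) == len(r2)
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)
             for c in cats)                       # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical dichotomized ASPECTS calls (1 = score >= 6, 0 = score < 6)
physician   = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
radiologist = [1, 1, 0, 0, 1, 0, 1, 1, 1, 1]
kappa = cohen_kappa(physician, radiologist)
```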

Keywords: ASPECTS, computed tomography, MCA territory, stroke

Procedia PDF Downloads 236
4546 Design of DNA Origami Structures Using LAMP Products as a Combined System for the Detection of Extended Spectrum β-Lactamases

Authors: Kalaumari Mayoral-Peña, Ana I. Montejano-Montelongo, Josué Reyes-Muñoz, Gonzalo A. Ortiz-Mancilla, Mayrin Rodríguez-Cruz, Víctor Hernández-Villalobos, Jesús A. Guzmán-López, Santiago García-Jacobo, Iván Licona-Vázquez, Grisel Fierros-Romero, Rosario Flores-Vallejo

Abstract:

β-lactam antibiotics include some of the most frequently used small drug molecules against bacterial infections. Nevertheless, an alarming decrease in their efficacy has been reported due to the emergence of antibiotic-resistant bacteria. Infections caused by bacteria expressing extended-spectrum β-lactamases (ESBLs) are difficult to treat and account for higher morbidity and mortality rates, delayed recovery, and a high economic burden. According to the Global Report on Antimicrobial Resistance Surveillance, it is estimated that mortality due to resistant bacteria will rise to 10 million cases per year worldwide. These facts highlight the importance of developing low-cost, readily accessible detection methods for drug-resistant ESBL bacteria to prevent their spread and promote fast, accurate diagnosis. Bacterial detection is commonly done using molecular diagnostic techniques, among which PCR stands out for its high performance. However, this technique requires specialized equipment that is not available everywhere, is time-consuming, and has a high cost. Loop-Mediated Isothermal Amplification (LAMP) is an alternative technique that works at a constant temperature, significantly decreasing the equipment cost. It yields double-stranded DNA of several lengths, with repetitions of the target DNA sequence, as a product. Although positive and negative LAMP results can be discriminated by colorimetry, fluorescence, and turbidity, there is still large room for improvement in point-of-care implementation. DNA origami is a technique that allows the formation of 3D nanometric structures by folding a long single-stranded DNA (scaffold) into a determined shape with the help of short DNA sequences (staples), which hybridize with the scaffold. This research aimed to generate DNA origami structures using LAMP products as scaffolds to improve the sensitivity of ESBL detection in point-of-care diagnosis. For this study, the coding sequence of the CTX-M-15 ESBL of E. coli was used to generate the LAMP products. The set of LAMP primers was designed using PrimerExplorerV5. As a result, a target sequence of 200 nucleotides from the CTX-M-15 ESBL was obtained. Afterward, eight different DNA origami structures were designed from the target sequence in SDCadnano and analyzed with CanDo to evaluate the stability of the 3D structures. The designs minimized the total number of staples to reduce cost and complexity for point-of-care applications. After analyzing the DNA origami designs, two structures were selected: the first a zig-zag flat structure, the second a wall-like shape. Given the sequence repetitions in the scaffold, each could be assembled with only 6 different staples, ranging from 18 to 80 nucleotides. Simulations of both structures were performed using scaffolds of different sizes, yielding stable structures in all cases. The generation of the LAMP products was verified by colorimetry and electrophoresis, and the formation of the DNA structures was analyzed by electrophoresis and colorimetry. Modeling novel detection methods with bioinformatics tools allows reliable control and prediction of results. To our knowledge, this is the first study to combine LAMP products and DNA origami to detect ESBL-producing bacterial strains, which represents a promising methodology for point-of-care diagnosis.
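Staple design rests on Watson-Crick complementarity between each staple and a region of the scaffold; a minimal sketch of that check, with hypothetical sequences rather than the actual ESBL target:

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Watson-Crick reverse complement of a 5'->3' DNA sequence."""
    return seq.translate(COMP)[::-1]

def staple_binds(scaffold, staple):
    """True if the staple's binding partner (its reverse complement)
    appears in the scaffold, so the staple can hybridize there."""
    return revcomp(staple) in scaffold

# Hypothetical scaffold fragment and an 18-nt staple designed against it
scaffold = "ATGCGTACGTTAGCCGATCGATTACGGCTAAGTCCGTA"
staple = revcomp("GCCGATCGATTACGGCTA")
```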

Keywords: beta-lactamases, antibiotic resistance, DNA origami, isothermal amplification, LAMP technique, molecular diagnosis

Procedia PDF Downloads 223
4545 The Follower Robots Tested in Different Lighting Condition and Improved Capabilities

Authors: Sultan Muhammed Fatih Apaydin

Abstract:

In this study, two types of robot, a pioneer robot and a follower robot, were examined in order to improve the capabilities of tracking robots. The robots track each other continuously, and measuring the follow-up distance between them is very important for the improvements to be applied. The follower robot was made to follow the pioneer robot in line with the intended goals. The robots were tested on various grounds and in various environments in terms of performance, and the necessary improvements were implemented based on the measured results of these tests.
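A distance-keeping follower can be sketched as a simple proportional controller; the sensor reading, target gap, gain, and PWM range below are all assumptions for illustration, not the robots' actual parameters:

```python
def follow_speed(distance_cm, target_cm=20.0, kp=5.0,
                 max_speed=255, min_speed=-255):
    """Proportional speed command for the follower robot.
    Positive output drives forward (gap too large), negative output
    backs off (too close); the result is clamped to the PWM range."""
    error = distance_cm - target_cm  # measured follow-up distance error
    speed = kp * error
    return max(min_speed, min(max_speed, speed))

# Too far -> drive forward; at target -> stop; too close -> reverse
```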

Keywords: mobile robot, remote and autonomous control, infra-red sensors, arduino

Procedia PDF Downloads 566
4544 The Heritagisation of the Titanic Culture for Urban Regeneration Use: A Case Study of the Titanic Belfast

Authors: Yu Liang

Abstract:

The study of heritage in different contexts has been discussed over the past decades, and its relationship with other fields such as tourism, museums, and urban regeneration has also interested scholars. Governments and policymakers have likewise been drawn to the use of heritage, through a process of 'heritagisation', to achieve certain goals, since with suitable planning the advantages appear in both economic development and social inclusion. Belfast has been through tough times due to its complicated ideological conflicts in the past; however, the city's transformation is evident in the way its heritage is represented in tourism. Planners are willing to use this method to attract cultural tourists, investors, and residents, to regenerate the city and restore its confidence. One flagship project is the Titanic Belfast, which explores the culture of the Titanic and the history of the shipbuilding industry in Belfast. Even though the cultural flagship brought economic and social benefits, not all of the people agreed with the vision of relaunching a sunken ship or felt proud of it. The aim of this research is to clarify the concept of 'heritagisation' as a way to achieve certain goals in consolidating areas, increasing local pride and self-identity, and promoting tourism activities when well planned, and to discuss the preferences and the pros and cons of its practice with the Titanic culture in Belfast's regeneration process, especially the Titanic Belfast flagship project. Methodologically, this case study adopts a mixed approach incorporating qualitative interviews, observation, and secondary sources covering different perspectives. The expected result would show that a great majority of outsiders and the planners were pleased with the concept of Titanic Belfast's establishment and agreed that it attracts travel to Belfast.
Nevertheless, a number of locals still disagreed that the Titanic culture and the flagship represent the city or bring them other advantages. In other words, some residents doubt or are less likely to support the project since they have been left out of the planning process. Hence, opinions are divided among the 38 residents, various outsiders, and stakeholders, and their perspectives suggest an interesting task for sustainable research in the future.

Keywords: Belfast, heritagisation, Titanic, Titanic Belfast, urban regeneration

Procedia PDF Downloads 315
4543 An Analytical Study of the Quality of Educational Administration and Management At Secondary School Level in Punjab, Pakistan

Authors: Shamim Akhtar

Abstract:

The purpose of the present research was to analyse the performance of district educational administrators and school head teachers at the secondary school level. The sample of the study consisted of head teachers and teachers of secondary schools. Three scales were used in the survey: a five-point scale for head teachers analysing the working efficiency of educational administrators, a seven-point scale for head teachers analysing their own performance, and a similar seven-point rating scale for teachers analysing the working performance of their head teachers. The head teachers' responses revealed that the performance of their district educational administrators was average. For the performance efficiency of the head teachers, the researcher constructed the rating scales on seven parameters of management, namely academic management, personnel management, financial management, infrastructure management, linkage and interface, student services, and managerial excellence. Results of percentages, means, and graphical presentation on the different parameters of management showed an obvious difference between head teachers' and teachers' responses: head teachers were probably overestimating their efficiency, whereas teachers evaluated them as performing averagely on the majority of statements. Results of t-tests showed no significant difference between the responses of rural and urban teachers, but a significant difference between male and female teachers' responses indicated that female head teachers were performing their responsibilities better than male head teachers in public sector schools.
When the efficiency of the head teachers on the different parameters of management was analysed, it was concluded that their efficiency in academic and personnel management was average; in financial management and managerial excellence it was well above average; and on the other parameters, such as infrastructure management, linkage and interface, and student services, it was above average on most statements and well above average on some. Hence there is a need to improve working efficiency in academic management and personnel management.
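The rural/urban comparison above is a two-sample t-test; a minimal sketch with hypothetical seven-point ratings, using Welch's form so no equal-variance assumption is needed:

```python
import statistics as st

def welch_t(x, y):
    """Welch's two-sample t statistic and degrees of freedom
    (no equal-variance assumption)."""
    mx, my = st.mean(x), st.mean(y)
    vx, vy = st.variance(x), st.variance(y)  # sample variances
    nx, ny = len(x), len(y)
    se2 = vx / nx + vy / ny
    t = (mx - my) / se2 ** 0.5
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# Hypothetical seven-point-scale ratings from two groups of teachers
rural = [4, 5, 4, 3, 5, 4, 4, 3]
urban = [4, 4, 5, 3, 4, 5, 4, 4]
t, df = welch_t(rural, urban)
# |t| well below ~2 -> no significant difference, as the abstract reports
```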

Keywords: educational administration, educational management, parameters of management, education

Procedia PDF Downloads 337
4542 Levels of Students’ Understandings of Electric Field Due to a Continuous Charged Distribution: A Case Study of a Uniformly Charged Insulating Rod

Authors: Thanida Sujarittham, Narumon Emarat, Jintawat Tanamatayarat, Kwan Arayathanitkul, Suchai Nopparatjamjomras

Abstract:

The electric field is an important fundamental concept in electrostatics. In high school, Thai students have generally already learned the definition of the electric field, the electric field due to a point charge, and the superposition of electric fields due to multiple point charges. This is the prerequisite basic knowledge students hold before entering university. At the first-year university level, students quickly revise this basic knowledge and are then introduced to a more complicated topic: the electric field due to continuous charge distributions. We initially found that our freshman students, who were from the Faculty of Science and enrolled in the introductory physics course (SCPY 158), often struggled seriously with the basic physics concepts relevant to this topic, superposition of electric fields and the inverse square law, as well as the mathematics involved. This in turn affected students' understanding of advanced topics within the course such as Gauss's law, electric potential difference, and capacitance. Therefore, it is very important to determine students' understanding of the electric field due to continuous charge distributions. An open-ended question asking students to sketch the net electric field vectors produced by a uniformly charged insulating rod was administered to 260 freshman science students as pre- and post-tests. All of their responses were analyzed and classified into five levels of understanding. To gain deep insight into each level, 30 students were interviewed about their individual responses. The pre-test found that about 90% of students had incorrect understanding. Even after completing the lectures, only 26.5% could provide correct responses, and up to 50% held confusions and irrelevant ideas. The result implies that teaching methods in Thai high schools may be problematic.
In addition, the students' alternative conceptions identified here could be used as a guideline for developing the instructional method currently used in the course, especially for teaching electrostatics.
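The field of a uniformly charged rod is exactly the superposition of inverse-square contributions from its charge elements, which can be sketched numerically (the charge, length, and field point are illustrative):

```python
import math

K = 8.9875517873681764e9  # Coulomb constant, N*m^2/C^2

def rod_field(x, y, q_total=1e-9, length=1.0, n=20000):
    """Net electric field at (x, y) from a uniformly charged rod on the
    x-axis from -length/2 to +length/2: superpose the inverse-square
    contributions of n small charge elements (midpoint rule)."""
    dq = q_total / n
    ex = ey = 0.0
    for i in range(n):
        xi = -length / 2 + (i + 0.5) * length / n  # element position
        dx = x - xi
        r2 = dx * dx + y * y
        r = math.sqrt(r2)
        ex += K * dq * dx / (r2 * r)
        ey += K * dq * y / (r2 * r)
    return ex, ey

# On the perpendicular bisector the x-components cancel by symmetry
ex, ey = rod_field(0.0, 0.5)
```

The numerical sum matches the closed-form bisector result E = kq / (d·sqrt(d² + L²/4)), which is one way students can check a sketched field direction and magnitude.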

Keywords: alternative conceptions, electric field of continuous charged distributions, inverse square law, levels of student understandings, superposition principle

Procedia PDF Downloads 296
4541 Map UI Design of IoT Application Based on Passenger Evacuation Behaviors in Underground Station

Authors: Meng-Cong Zheng

Abstract:

When a public space faces an emergency, quickly establishing spatial cognition and finding emergency shelter in a closed underground space is an urgent task. This study takes Taipei Station as the research base and aims to apply an Internet of Things (IoT) application to underground evacuation mobility design. The first experiment identified passengers' evacuation behaviors and spatial cognition in underground spaces through wayfinding tasks and thinking aloud, then defined the design conditions of the User Interface (UI) and proposed the UI design. The second experiment evaluated the UI design against passengers' evacuation behaviors using the same wayfinding and think-aloud protocol as the first experiment. The first experiment found that the design condition the subjects were most concerned about was the "map": they hoped to learn their position relative to other landmarks from the map and to see the overall route. The "position" needs to be accurately labeled to determine one's location in the underground space. Each step of the escape instructions should be presented clearly in the "navigation bar", and the "message bar" should announce the next or final target exit. In the second experiment with the UI design, we found that a "spatial map" distinguishing walking from non-walking areas with shades of color is useful, and the addition of 2.5D maps increased the user's perception of the space. Amending the color of the corner diagram in the "escape route" also reduced confusion between that symbol and other diagrams. The larger volumes of toilets and elevators can help users judge their relative location among the "hardware facilities", and the fire extinguisher icon should be highlighted. "Fire point tips" indicating fire with a graphical fireball convey precise information to the person escaping.
However, "compass and return to present location" functions were less used in the underground space.

Keywords: evacuation behaviors, IoT application, map UI design, underground station

Procedia PDF Downloads 207
4540 Computation of Radiotherapy Treatment Plans Based on CT to ED Conversion Curves

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Radiotherapy treatment planning computers use CT data of the patient. To compute a treatment plan, the treatment planning system must have information on the electron densities of the tissues scanned by CT. This information is given by the conversion curve from CT number to electron density (ED), or simply the calibration curve. Every treatment planning system (TPS) has built-in default CT to ED conversion curves for the CTs of different manufacturers. However, it is always recommended to verify the CT to ED conversion curve before actual clinical use. The objective of this study was to check how well the default curve matches the curve actually measured on a specific CT, and how much this influences the calculation of the treatment planning computer. The examined CT scanners were from the same manufacturer, but comprised four different scanners from three generations. All calibration curves were measured with the dedicated CIRS 062M Electron Density Phantom. The phantom was scanned, and from the actual HU values read at the CT console, CT to ED conversion curves were generated for different materials at the same tube voltage of 140 kV. Another phantom, the CIRS Thorax 002 LFC, which represents an average human torso in proportion, density, and two-dimensional structure, was used for verification. Treatment planning was done on CT slices of the scanned thorax phantom for selected cases. Interest points were set in the lungs and in the spinal cord, and the doses were recorded in the TPS. The overall calculated treatment times for the four scanners and the default scanner did not differ by more than 0.8%. The overall interest point dose in bone differed by at most 0.6%, while for single fields the maximum was 2.7% (lateral field). The overall interest point dose in the lungs differed by at most 1.1%, while for single fields the maximum was 2.6% (lateral field).
It is known that the user should verify the CT to ED conversion curve, but developing countries often face a lack of QA equipment and often use the default data provided. We have concluded that the CT to ED curves obtained differ at certain points of the curve, generally in the region of higher densities. The influence on the treatment planning result is not significant, but it definitely makes a difference in the calculated dose.
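A TPS typically evaluates the calibration curve by piecewise-linear interpolation between the measured phantom points; a sketch with hypothetical calibration pairs (not the measured 140 kV data):

```python
from bisect import bisect_right

def hu_to_ed(hu, curve):
    """Piecewise-linear CT-number -> relative electron density lookup.
    curve: (HU, ED) pairs sorted by HU, measured on a density phantom.
    Values outside the table are clamped to the end points."""
    hus = [p[0] for p in curve]
    eds = [p[1] for p in curve]
    if hu <= hus[0]:
        return eds[0]
    if hu >= hus[-1]:
        return eds[-1]
    i = bisect_right(hus, hu)
    f = (hu - hus[i - 1]) / (hus[i] - hus[i - 1])
    return eds[i - 1] + f * (eds[i] - eds[i - 1])

# Hypothetical calibration points (air, lung, water, bone inserts)
curve = [(-1000, 0.0), (-700, 0.3), (0, 1.0), (1200, 1.7)]
```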

Keywords: computation of treatment plan, conversion curve, radiotherapy, electron density

Procedia PDF Downloads 486
4539 An Approach to Building a Recommendation Engine for Travel Applications Using Genetic Algorithms and Neural Networks

Authors: Adrian Ionita, Ana-Maria Ghimes

Abstract:

A lack of features and design attention, and the lack of promotion of an integrated booking application, are some of the reasons why most online travel platforms only automate old booking processes, limiting themselves to integrating a small number of services without addressing the user experience. This paper presents a practical study of how to improve travel applications by creating user profiles through data mining based on neural networks and genetic algorithms. Choices made by users and their 'friends' in the 'social' network context can be considered input data for a recommendation engine. The purpose of using these algorithms and this design is to improve the user experience and to deliver more features to users. The paper aims to highlight a broad range of improvements that could be applied to travel applications in terms of design and service integration, while the main scientific contribution remains the technical implementation of the neural network solution. The choice of technologies is also motivated by the initiative of some online booking providers, which have publicly stated that they use neural-network-related designs. These companies use similar Big Data technologies to provide recommendations for hotels, restaurants, and cinemas with a neural-network-based recommendation engine that builds a user 'DNA profile'. This 'profile', a collection of neural networks trained on previous user choices, can improve the usability and design of any type of application.
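As an illustration of the genetic-algorithm side, a toy sketch that evolves a scoring-weight vector from a user's booking history; the features, fitness function, and parameters are invented for illustration and are not the paper's implementation:

```python
import random

random.seed(42)

def fitness(weights, history):
    """Reward ranking booked items (label 1) above skipped ones (label 0)."""
    return sum((2 * label - 1) * sum(w * f for w, f in zip(weights, feats))
               for feats, label in history)

def evolve(history, n_features, pop=30, gens=50):
    """Tiny genetic algorithm: truncation selection (top half kept, so the
    best is never lost), one-point crossover, Gaussian mutation."""
    population = [[random.uniform(-1, 1) for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: fitness(w, history), reverse=True)
        parents = population[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)
            child = a[:cut] + b[cut:]                 # crossover
            child[random.randrange(n_features)] += random.gauss(0, 0.1)
            children.append(child)
        population = parents + children
    return max(population, key=lambda w: fitness(w, history))

# Hypothetical history: features = (price_ok, beach, city_break), label = booked?
history = [((1, 1, 0), 1), ((1, 0, 1), 0), ((0, 1, 0), 1), ((1, 0, 0), 0)]
best = evolve(history, n_features=3)
```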

Keywords: artificial intelligence, big data, cloud computing, DNA profile, genetic algorithms, machine learning, neural networks, optimization, recommendation system, user profiling

Procedia PDF Downloads 163
4538 Study of Information Technology Support to Knowledge Sharing in Social Enterprises

Authors: Maria Granados

Abstract:

Information technology (IT) facilitates the management of knowledge in organisations through the effective leverage of the collective experience and knowledge of employees. It supports information processing needs, and enables and facilitates the sense-making activities of knowledge workers. The study of IT support for knowledge management (KM) has been carried out mainly in larger organisations, where resources and competitive conditions can trigger the use of KM. However, there is still a lack of understanding of how IT can support the management of knowledge under different organisational settings influenced by constant tensions between social and economic objectives, a stronger focus on sustainability than competitiveness, limited resources, and high levels of democratic participation and intrinsic motivation among employees. All these conditions are present in Social Enterprises (SEs), which are normally micro and small businesses that trade to tackle social problems and improve communities, people's life chances, and the environment. Their importance to society and economies is thus increasing. However, more understanding is still needed of how these organisations operate, perform, innovate, and scale up. This knowledge is crucial to designing and providing accurate strategies to enhance the sector and increase its impact and coverage. To obtain a conceptual and empirical understanding of how IT can facilitate KM in the particular organisational conditions of SEs, a quantitative study was conducted with 432 owners and senior members of SEs in the UK, underpinned by 21 interviews. The findings demonstrated that IT was supporting the recovery and storage of necessary information in SEs more than the collaborative work and communication among enterprise members. It was nevertheless established that SEs were using cloud solutions, Web 2.0 tools, Skype, and centralised shared servers to manage their knowledge informally.
The possible impediments to SEs relying more on IT solutions are linked mainly to economic and human constraints. These findings elucidate new perspectives that can contribute not only to SEs and SE supporters, but also to other businesses.

Keywords: social enterprises, knowledge management, information technology, collaboration, small firms

Procedia PDF Downloads 269
4537 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)

Authors: Tarek Duzan

Abstract:

Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change our thinking, efforts, and planning for producing a specific field properly. From the structural point of view, all reservoirs are fractured to some extent, and the North Gialo field is thought to be a naturally fractured reservoir. Historically, naturally fractured reservoirs are more complicated in terms of exploration and production efforts, and many geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that evaluation and planning can be done properly and efficiently from day one. The challenge in this field is that there are not enough data or straightforward well tests to make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, the available well tests, and limited core studies are our tools at this stage for evaluating, modeling, and predicting possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, the production of this field follows a two-phase plan: self-depletion of oil, followed by gas injection for pressure maintenance and an increased ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with this plan.
New analytical methods will lead to a more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations, and that can readily be used in reservoir simulators.

Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data

Procedia PDF Downloads 93
4536 Optimizing Pediatric Pneumonia Diagnosis with Lightweight MobileNetV2 and VAE-GAN Techniques in Chest X-Ray Analysis

Authors: Shriya Shukla, Lachin Fernando

Abstract:

Pneumonia, a leading cause of mortality in young children globally, presents significant diagnostic challenges, particularly in resource-limited settings. This study presents an approach to diagnosing pediatric pneumonia using Chest X-Ray (CXR) images, employing a lightweight MobileNetV2 model enhanced with synthetic data augmentation. Addressing the challenge of dataset scarcity and imbalance, the study used a Variational Autoencoder-Generative Adversarial Network (VAE-GAN) to generate synthetic CXR images, improving the representation of normal cases in the pediatric dataset. This approach not only addresses the issues of data imbalance and scarcity prevalent in medical imaging but also provides a more accessible and reliable diagnostic tool for early pneumonia detection. The augmented data improved the model’s accuracy and generalization, achieving an overall accuracy of 95% in pneumonia detection. These findings highlight the efficacy of the MobileNetV2 model, offering a computationally efficient yet robust solution well-suited for resource-constrained environments such as mobile health applications. This study demonstrates the potential of synthetic data augmentation in enhancing medical image analysis for critical conditions like pediatric pneumonia.

Keywords: pneumonia, MobileNetV2, image classification, GAN, VAE, deep learning

Procedia PDF Downloads 126
4535 Physicochemical and Microbiological Assessment of Source and Stored Domestic Water from Three Local Governments in Ile-Ife, Nigeria

Authors: Mary A. Bisi-Johnson, Kehinde A. Adediran, Saheed A. Akinola, Hamzat A. Oyelade

Abstract:

Two of the main problems with water in Nigeria are its quantity (source and amount) and its quality. Scarcity means that water is obtained from various sources, and microbiological contamination of the water may occur between the collection point and the point of usage. This study therefore assesses the general and microbiological quality of domestic water sources and household stored water used within selected areas of Ile-Ife, in south-western Nigeria. Physicochemical and microbiological examinations were carried out on 45 source and stored water samples collected from wells and springs in three local government areas: Ife East, Ife South, and Ife North. The physicochemical analysis covered pH, temperature, total dissolved solids, dissolved oxygen, and biochemical oxygen demand. The microbiological analysis involved most probable number (MPN) analysis and total coliform, heterotrophic plate, faecal coliform, and streptococcus counts. The physicochemical results showed anomalies compared with acceptable standards: pH values of 7.20-8.60 for stored and 6.50-7.80 for source samples; total dissolved solids (TDS) of 20-70 mg/L for stored and 352-691 mg/L for source samples; dissolved oxygen (DO) of 1.60-9.60 mg/L for stored and 1.60-4.80 mg/L for source samples; and biochemical oxygen demand (BOD) of 0.80-3.60 mg/L for stored and 0.60-5.40 mg/L for source samples. The general microbiological quality indicated that, with the exception of one sample, both stored and source samples were not within the acceptable range, with MPN/100 ml values ranging from 290 to 1100 for stored and from 9 to 1100 for source samples. Beyond the high counts, most samples did not meet the World Health Organization standard for drinking water owing to the presence of pathogenic bacteria and fungi such as Salmonella and Aspergillus spp. To remove these hazards, standard treatment methods should be adopted to make the water free from contaminants.
This will also help identify the likely origins of common water-related infections within the communities and thus help guide the interventions required to protect the general populace from such infections.
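The comparison against acceptable standards can be sketched as a simple range check; the guideline ranges below are approximate illustrations, and the current WHO Guidelines for Drinking-water Quality should be consulted for authoritative limits:

```python
# Illustrative guideline ranges (approximate; not authoritative limits)
GUIDELINES = {
    "pH": (6.5, 8.5),          # acceptable range
    "TDS_mg_per_L": (0, 600),  # palatability threshold
    "MPN_per_100mL": (0, 0),   # no detectable coliforms
}

def flag_sample(sample):
    """Return the parameters of a water sample that fall outside
    the guideline ranges."""
    out = {}
    for param, value in sample.items():
        lo, hi = GUIDELINES[param]
        if not (lo <= value <= hi):
            out[param] = value
    return out

# Worst-case stored-water values from the ranges reported in the abstract
stored = {"pH": 8.6, "TDS_mg_per_L": 70, "MPN_per_100mL": 1100}
flags = flag_sample(stored)
```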

Keywords: domestic, microbiology, physicochemical, quality, water

Procedia PDF Downloads 361
4534 [Keynote Talk]: Discovering Liouville-Type Problems for p-Energy Minimizing Maps in Closed Half-Ellipsoids by Calculus Variation Method

Authors: Lina Wu, Jia Liu, Ye Li

Abstract:

The goal of this project is to investigate constant properties (the Liouville-type problem) for a p-stable map, i.e. a local or global minimizer of a p-energy functional, where the domain is a Euclidean space and the target space is a closed half-ellipsoid. The first and second variation formulas for the p-energy functional have been applied as computation techniques in the calculus of variations. Stokes' theorem, the Cauchy-Schwarz inequality, Hardy-Sobolev type inequalities, and the Bochner formula have been used as estimation techniques to bound the derived p-harmonic stability inequality from below and above. One challenging point in this project is to construct a family of variation maps whose images are guaranteed to remain in the closed half-ellipsoid. The other is to derive a contradiction between the lower bound and the upper bound in the analysis of the p-harmonic stability inequality when a p-energy minimizing map is not constant. The possibility of a non-constant p-energy minimizing map is thereby ruled out, and the constant property for a p-energy minimizing map is obtained. Our research finding establishes the constant property for a p-stable map from a Euclidean space into a closed half-ellipsoid for a certain range of p. This range is determined by the dimensions of the Euclidean space (the domain) and the ellipsoid (the target space), and is also bounded by the curvature of the ellipsoid (that is, the ratio of its longest axis to its shortest axis). Regarding Liouville-type results for a p-stable map, our result on an ellipsoid generalizes mathematicians' results on a sphere, and extends their Liouville-type results from a special ellipsoid with only one parameter to an arbitrary ellipsoid with (n+1) parameters in the general setting.
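For orientation, the objects named in the abstract can be written down explicitly. The following is a sketch using the standard definitions of the p-energy functional, its first variation, and p-stability from the harmonic-map literature; the precise normalizations used by the authors are not reproduced here and may differ.

```latex
% p-energy of a map u : \mathbb{R}^n \to N (N the target half-ellipsoid)
E_p(u) = \frac{1}{p} \int_{\mathbb{R}^n} |du|^{p} \, dv

% First variation along a family u_t with u_0 = u and
% \partial_t u_t |_{t=0} = V (a vector field along u):
\left.\frac{d}{dt}\right|_{t=0} E_p(u_t)
  = \int_{\mathbb{R}^n} |du|^{p-2} \langle du, \nabla V \rangle \, dv,

% so critical points satisfy the p-harmonic map equation
\operatorname{div}\!\left( |du|^{p-2}\, du \right)
  + |du|^{p-2}\, A(u)(du, du) = 0,

% where A is the second fundamental form of N. p-stability means
% the second variation is non-negative for all admissible V:
\left.\frac{d^2}{dt^2}\right|_{t=0} E_p(u_t) \ge 0.
```

The "challenging point" mentioned above corresponds to choosing the variation fields V so that each u_t still maps into the closed half-ellipsoid; the stability inequality obtained from the second variation is then estimated from both sides to force u to be constant.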

Keywords: Bochner formula, calculus of variations, Stokes' theorem, Cauchy-Schwarz inequality, first and second variation formulas, Liouville-type problem, p-harmonic map

Procedia PDF Downloads 274