Search results for: private cost
676 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics
Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty
Abstract:
Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. The literature survey reveals that reported methods rely on solid phase extraction and liquid-liquid extraction, which are highly variable, time consuming, costly and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples which can be utilized for preclinical studies. Method: Reverse phase high-performance liquid chromatography method. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from the endogenous material at the retention time of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was found to be greater than 0.9967, indicating the linearity of this method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality control sample concentrations of aminophylline in the plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all 3 QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80 °C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma. The method described in our article includes a simple protein precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast high-throughput sample analysis with lower analysis cost for analyzing aminophylline in biological samples. In this proposed method, no interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also had sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC
Procedia PDF Downloads 223
675 Quantification and Detection of Non-Sewer Water Infiltration and Inflow in Urban Sewer Systems
Authors: M. Beheshti, S. Saegrov, T. M. Muthanna
Abstract:
Separated sewer systems are designed to transfer the wastewater from houses and industrial sections to wastewater treatment plants. Unwanted water in sewer systems is a well-known problem: storm-water inflow can be around 50% of the foul sewer flow, and groundwater infiltration to the sewer system can exceed 50% of the total wastewater volume in deteriorated networks. Infiltration and inflow of non-sewer water (I/I) into sewer systems is unfavorable in separated sewer systems and can overload the system and reduce the efficiency of wastewater treatment plants. Moreover, I/I has negative economic, environmental, and social impacts on urban areas. Therefore, for sustainable management of urban sewer systems, I/I of unwanted water into the urban sewer systems should be considered carefully, and a maintenance and rehabilitation plan should be implemented for these water infrastructure assets. This study presents a methodology to identify and quantify the level of I/I into the sewer system. The amount of I/I is evaluated by accurate flow measurement in separated sewer systems for specified isolated catchments in Trondheim city (Norway). Advanced information about the characteristics of I/I is gained by CCTV inspection of sewer pipelines with high I/I contribution. Enhanced knowledge about the detection and localization of non-sewer water in the foul sewer system during wet and dry weather conditions will make it possible to find and prioritize problems in the sewer system and to take decisions for rehabilitation and renewal planning in the long term. Furthermore, preventive measures and optimization of sewer system functionality and efficiency can be carried out through maintenance. In this way, the operation of the sewer system can be improved by maintaining and rehabilitating existing pipelines in a more practical, cost-effective and environmentally friendly way. This study is conducted on specified catchments with different properties in Trondheim city. Risvollan catchment is one of these catchments; it has a measuring station that monitors hydrological parameters throughout the year and a good database. For assessing infiltration in a separated sewer system, the flow rate measurement method can be utilized to obtain a general view of the network condition from the infiltration point of view. This study discusses commonly used and advanced methods of localizing and quantifying I/I in sewer systems. A combination of these methods gives sewer operators the possibility to compare different techniques and obtain reliable and accurate I/I data, which is vital for long-term rehabilitation plans.
Keywords: flow rate measurement, infiltration and inflow (I/I), non-sewer water, separated sewer systems, sustainable management
Procedia PDF Downloads 335
674 High Acid-Stable α-Amylase Production by Milk in Liquid Culture
Authors: Shohei Matsuo, Saki Mikai, Hiroshi Morita
Abstract:
Objectives: Shochu is a popular Japanese distilled spirit. In the production of shochu, the filamentous fungus Aspergillus kawachii has traditionally been used. A. kawachii produces two types of starch hydrolytic enzymes, α-amylase (enzymatic liquefaction) and glucoamylase (enzymatic saccharification). A liquid culture system is relatively easy to operate and has a relatively low production cost compared with solid culture. In the liquid culture system, acid-unstable α-amylase (α-A) was produced abundantly, but acid-stable α-amylase (Aα-A) was not produced. Because of its high enzyme productivity, the solid culture method has been adopted in most shochu brewing. In this study, therefore, we investigated production of Aα-A in a liquid culture system. Materials and methods: Microorganism: Aspergillus kawachii NBRC 4308 was used. The mold was cultured at 30 °C for 7~14 d to allow formation of conidiospores on slant agar medium. Liquid culture system: A. kawachii was cultured in 100 ml of the following altered SLS medium: 1.0 g of rice flour, 0.1 g of K2HPO4, 0.1 g of KCl, 0.6 g of tryptone, 0.05 g of MgSO4・7H2O, 0.001 g of FeSO4・7H2O, 0.0003 g of ZnSO4・7H2O, 0.021 g of CaCl2, 0.33 g of citric acid (pH 3.0). The pH of the medium was adjusted to the designated value with 10 % HCl solution. Cultivation was carried out with shaking at 30 °C and 200 rpm for 72 h. The culture was filtered to obtain a crude enzyme solution. Aα-A assay: The crude enzyme solution was analyzed. Acid-stable α-amylase activity was measured using an α-amylase assay kit (Kikkoman Corporation, Noda, Japan) after adding 9 ml of 100 mM acetate buffer (pH 3.0) to 1 ml of the culture product supernatant and acid treatment at 37 °C for 1 h. One unit of α-amylase activity was defined as the amount of enzyme that yielded 1 mmol of 2-chloro-4-nitrophenyl 6-azido-6-deoxy-β-maltopentaoside (CNP) per minute. Results and Conclusion: We experimented with co-culture of A. kawachii and lactobacillus in order to control the pH of the altered SLS medium. However, high production of acid-stable α-amylase was not obtained. We then experimented with adding yoghurt or milk to the liquid culture. The result indicated that high production of acid-stable α-amylase (964 U/g-substrate) was obtained when milk was added to the liquid culture. The phosphate concentration in the liquid medium was a major cause of the increased acid-stable α-amylase activity. In liquid culture, acid-stable α-amylase activity was enhanced by milk, but the fats and oils in the milk were oxidized. In addition, tryptone is not approved as a food additive in Japan. Thus, skim milk, which excludes the fats and oils of whole milk, was added to the altered SLS medium in place of tryptone. The result indicated that high production of acid-stable α-amylase was obtained, with the same effect as milk.
Keywords: acid-stable α-amylase, liquid culture, milk, shochu
Procedia PDF Downloads 284
673 Design and Manufacture of Removable Nosecone Tips with Integrated Pitot Tubes for High Power Sounding Rocketry
Authors: Bjorn Kierulf, Arun Chundru
Abstract:
Over the past decade, collegiate rocketry teams have emerged across the country with various goals: space, liquid-fueled flight, etc. A critical piece of the development of knowledge within a club is the use of so-called "sounding rockets," whose goal is to take in-flight measurements that inform future rocket design. Common measurements include acceleration from inertial measurement units (IMUs) and altitude from barometers. With a properly tuned filter, these measurements can be used to find velocity, but they are susceptible to noise, offset, and filter settings. Instead, velocity can be measured more directly and more instantaneously using a pitot tube, which operates by measuring the stagnation pressure. At supersonic speeds, an additional thermodynamic property is necessary to constrain the upstream state. One possibility is the stagnation temperature, measured by a thermocouple in the pitot tube. The routing of the pitot tube from the nosecone tip down to a pressure transducer is complicated by the nosecone's structure. Commercial-off-the-shelf (COTS) nosecones come with a removable metal tip (without a pitot tube). This provides the opportunity to make custom tips with integrated measurement systems without making the nosecone from scratch. The main design constraint is how the nosecone tip is held down onto the nosecone, using the tension in a threaded rod anchored to a bulkhead below. Because the threaded rod connects into a threaded hole in the center of the nosecone tip, the pitot tube follows a winding path, and the pressure fitting is off-center. Two designs will be presented in the paper: one with a curved pitot tube, and a coaxial design that eliminates the need for the winding path by routing pressure through a structural tube. Additionally, three manufacturing methods will be presented for these designs: bound powder filament metal 3D printing, stereolithography (SLA) 3D printing, and traditional machining. These employ three different materials: copper, steel, and a proprietary resin. These manufacturing methods and materials are relatively low cost, thus accessible to student researchers. These designs and materials cover multiple use cases, based on how fast the sounding rocket is expected to travel and how important heating effects are - to measure and to avoid melting. This paper will include drawings showing key features and an overview of the design changes necessitated by the manufacture. It will also include a look at the successful use of these nosecone tips and the data they have gathered to date.
Keywords: additive manufacturing, machining, pitot tube, sounding rocketry
Procedia PDF Downloads 165
672 Combining the Production of Radiopharmaceuticals with the Department of Radionuclide Diagnostics
Authors: Umedov Mekhroz, Griaznova Svetlana
Abstract:
In connection with the growth of oncological diseases, the design of centers for diagnostics and the production of radiopharmaceuticals is a highly relevant area of healthcare facilities. The design of new nuclear medicine centers should be carried out with a view to solving the following tasks: the availability of medical care, functionality, environmental friendliness, sustainable development, improving the safety of drugs whose use requires special care, reducing the rate of environmental pollution, ensuring comfortable conditions for the internal microclimate, and adaptability. The purpose of this article is to substantiate architectural and planning solutions, formulate recommendations and principles for the design of nuclear medicine centers, and determine the connections between the production and medical functions of a building. The advantages of combining the production of radiopharmaceuticals with the department of medical care are that less radiation activity is accumulated, the cost of the final product is lower, and there is no need to hire a transport company with a special license for transportation. A medical imaging department is a structural unit of a medical institution in which diagnostic procedures are carried out in order to gain an idea of the internal structure of various organs of the body for clinical analysis. Depending on the needs of a particular institution, the department may include various rooms that provide medical imaging using radiography, ultrasound diagnostics, and the phenomenon of nuclear magnetic resonance. A radiopharmaceutical production facility is intended for the production of a pharmaceutical substance containing a radionuclide, to be introduced into the human body or a laboratory animal for the purpose of diagnosis, evaluation of the effectiveness of treatment, or biomedical research. The research methodology includes the following: study and generalization of international experience in scientific research, literature, standards, teaching aids, and design materials on the topic of research; an integrated approach to the study of existing international experience of PET/CT scan centers and the production of radiopharmaceuticals; elaboration of graphical analysis and diagrams based on the system analysis of the processed information; and identification of methods and principles of functional zoning of nuclear medicine centers. The result of the research is the identification of the design principles of nuclear medicine centers with the functions of the production of radiopharmaceuticals and the department of medical imaging. This research will be applied to the design and construction of healthcare facilities in the field of nuclear medicine.
Keywords: architectural planning solutions, functional zoning, nuclear medicine, PET/CT scan, production of radiopharmaceuticals, radiotherapy
Procedia PDF Downloads 89
671 Evaluation of the Pathogenicity Test of Some Entomopathogenic Fungus Isolates against Tomato Leaf Miner Tuta Absoluta (Meyrick) Larvae [Lepidoptera: Gelechiidae]
Authors: Tadesse Kebede, Orkun Baris Kovanci
Abstract:
The tomato leaf miner (Tuta absoluta) is one of the most economically important insect pests in tomato production. The use of biological control agents such as entomopathogenic fungal isolates would be a long-term and cost-effective solution to control insect pests. Therefore, identifying the most virulent and pathogenic entomopathogenic fungi is one of the basic requirements for effective management options to combat the tomato leaf miner (Tuta absoluta). Furthermore, the pathogenicity and virulence differences among entomopathogenic fungus strains have not been widely investigated. The current study was therefore initiated to test the pathogenicity of some entomopathogenic fungus isolates against Tuta absoluta. The experiment was conducted in the glasshouse of the Horticulture Department, Faculty of Agriculture, Bursa Uludag University, in 2020/2021. Tuta absoluta adults were collected, and larvae were mass-reared in a growth chamber. Then, ten third-instar larvae were inoculated with four entomopathogenic fungal isolates (Beauveria bassiana Ak-10, Beauveria bassiana Ak-14, Metarhizium anisopliae Ak-11, and Metarhizium anisopliae Ak-12) at different inoculum suspensions (0, 1×10⁶, 1×10⁷, 4×10⁸, 4×10⁹ and 1×10¹⁰ conidia/ml) in a factorial experiment arranged in a randomized complete block design with three replications. Mortality data assessment was done on the 3rd, 5th and 7th days after treatment and analyzed. The analysis of variance for mortality rate revealed significant variations (p<0.05) among entomopathogenic fungal isolates and conidia concentrations. The results revealed that Metarhizium anisopliae Ak-12 showed the lowest mortality percentage (80.77%), the highest LC50 (2.3×10⁸), and the longest incubation period (LT50 of 4.9 days and LT90 of 9.9 days) and was considered the least pathogenic fungus. On the other hand, the Beauveria bassiana Ak-10 isolate showed the highest mortality percentage (91%) and the lowest LT50 (4 days) and LT90 (7.6 days) values at 1×10¹⁰ conidia/ml, followed by Beauveria bassiana Ak-14, and was considered the most aggressive bio-agent. Metarhizium anisopliae Ak-11 was determined to be moderately virulent, with a mortality rate of 27-81%. The results also revealed that, among conidia concentrations, the 1×10⁹ and 1×10¹⁰ suspensions were the most effective, while the 1×10⁶ conidia/ml concentration was the least effective. Hence, the results indicated that the entomopathogenic fungi (EPF) tested were effective against T. absoluta larvae, and the current work revealed the variation in potential among entomopathogenic fungal isolates and concentrations against third-instar larvae.
Keywords: Tuta absoluta, tomato, Metarhizium anisopliae, Beauveria bassiana, biological control
Procedia PDF Downloads 130
670 Development of Structural Deterioration Models for Flexible Pavement Using Traffic Speed Deflectometer Data
Authors: Sittampalam Manoharan, Gary Chai, Sanaul Chowdhury, Andrew Golding
Abstract:
The primary objective of this paper is to present a simplified approach to developing a structural deterioration model using traffic speed deflectometer data for flexible pavements. Maintaining assets to meet functional performance is not economical or sustainable in the long term, and it ends up requiring much greater investment from road agencies and extra costs for road users. Performance models have to include structural and functional predicting capabilities in order to assess the needs and the time frame of those needs. As such, structural modelling plays a vital role in the prediction of pavement performance. Structural condition is important for the prediction of remaining life and overall health of a road network and also has a major influence on the valuation of road pavement. Therefore, the structural deterioration model is a critical input into a pavement management system for predicting pavement rehabilitation needs accurately. The Traffic Speed Deflectometer (TSD) is a vehicle-mounted Doppler laser system that is capable of continuously measuring the structural bearing capacity of a pavement whilst moving at traffic speeds. The device's high accuracy, high speed, and continuous deflection profiles are useful for network-level applications such as predicting road rehabilitation needs and remaining structural service life. The methodology adopted in this model utilizes time series TSD maximum deflection (D0) data in conjunction with rutting, rutting progression, pavement age, subgrade strength and equivalent standard axle (ESA) data. Then, regression analyses were undertaken to establish a correlation equation of structural deterioration as a function of rutting, pavement age, seal age and equivalent standard axle (ESA). This study developed a simple structural deterioration model which will enable available TSD structural data to be incorporated into a pavement management system for developing network-level pavement investment strategies. Therefore, the available funding can be used effectively to minimize the whole-of-life cost of the road asset and also improve pavement performance. This study will contribute to narrowing the knowledge gap in structural data usage in network-level investment analysis and provide a simple methodology to use structural data effectively in the investment decision-making process for road agencies to manage aging road assets.
Keywords: adjusted structural number (SNP), maximum deflection (D0), equivalent standard axle (ESA), traffic speed deflectometer (TSD)
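As an illustration of the kind of regression fit described in this abstract, the minimal sketch below regresses a synthetic stand-in for TSD maximum deflection (D0) on rutting, pavement age, seal age and cumulative ESA. The column names, units and data are hypothetical and are not the study's dataset or its final model form.

```python
# Minimal sketch (synthetic data): multiple linear regression of D0 on
# candidate deterioration predictors, in the spirit of the abstract above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "rutting_mm": rng.uniform(2, 20, n),        # hypothetical rut depth
    "pavement_age_yr": rng.uniform(1, 40, n),   # hypothetical pavement age
    "seal_age_yr": rng.uniform(0, 15, n),       # hypothetical seal age
    "esa_millions": rng.uniform(0.1, 30, n),    # hypothetical cumulative ESA
})
# Synthetic response standing in for TSD maximum deflection D0 (microns)
df["d0_micron"] = (200 + 8 * df["rutting_mm"] + 3 * df["pavement_age_yr"]
                   + 2 * df["seal_age_yr"] + 4 * df["esa_millions"]
                   + rng.normal(0, 25, n))

X = sm.add_constant(df[["rutting_mm", "pavement_age_yr", "seal_age_yr", "esa_millions"]])
model = sm.OLS(df["d0_micron"], X).fit()
print(model.summary())  # fitted coefficients give the empirical deterioration relationship
```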
Procedia PDF Downloads 151
669 An Efficient Automated Radiation Measuring System for Plasma Monopole Antenna
Authors: Gurkirandeep Kaur, Rana Pratap Yadav
Abstract:
This experimental study aims to examine the radiation characteristics of different plasma structures of a surface wave-driven plasma antenna with an automated measuring system. In this study, a 30 cm long plasma column of argon gas with a diameter of 3 cm is excited by a surface wave discharge mechanism operating at 13.56 MHz, with RF power levels up to 100 Watts and gas pressure between 0.01 and 0.05 mb. The study reveals that a single structured plasma monopole can be modified into an array of plasma antenna elements by forming multiple striations or plasma blobs inside the discharge tube by altering the values of plasma properties such as working pressure, operating frequency, input RF power, and discharge tube dimensions, i.e., length, radius, and thickness. It is also reported that plasma length, electron density, and conductivity are functions of the operating plasma parameters and are controlled by changing working pressure and input power. To investigate the antenna radiation efficiency in the far-field region, an automation-based radiation measuring system has been fabricated and is presented in detail. This developed automated system involves a combined setup of a controller, dc servo motors, a vector network analyzer, and a computing device to evaluate the radiation intensity, directivity, gain and efficiency of the plasma antenna. In this system, the controller is connected to multiple motors for moving aluminum shafts in both the elevation and azimuthal planes, whereas radiation from the plasma monopole antenna is measured by a Vector Network Analyser (VNA), which is further wired up with the computing device to display radiation patterns in polar plot form. Here, the radiation characteristics of both continuous and array plasma monopole antennas have been studied for various working plasma parameters. The experimental results clearly indicate that the plasma antenna is as efficient as a metallic antenna. The radiation from the plasma monopole antenna is significantly influenced by plasma properties, which provides a wider range of radiation patterns in which desired radiation parameters like beam-width, direction of radiation, radiation intensity, antenna efficiency, etc. can be achieved in a single monopole. Due to this wide range of selectivity in the radiation pattern, it can meet the demand for wider bandwidth and high data speeds in communication systems. Moreover, this developed system provides an efficient and cost-effective solution for measuring the radiation pattern in the far-field zone for any kind of antenna system.
Keywords: antenna radiation characteristics, dynamically reconfigurable, plasma antenna, plasma column, plasma striations, surface wave
Procedia PDF Downloads 119
668 Green Synthesis of Nanosilver-Loaded Hydrogel Nanocomposites for Antibacterial Application
Authors: D. Berdous, H. Ferfera-Harrar
Abstract:
Superabsorbent polymers (SAPs) or hydrogels with a three-dimensional hydrophilic network structure are high-performance water absorbent and retention materials. The in situ synthesis of metal nanoparticles within a polymeric network as antibacterial agents for bio-applications is an approach that takes advantage of the free space existing within the network, which not only acts as a template for nucleation of nanoparticles but also provides long-term stability and reduces their toxicity by delaying their oxidation and release. In this work, SAP/nanosilver nanocomposites were successfully developed by a unique green process at room temperature, which involves the in situ formation of silver nanoparticles (AgNPs) within hydrogels used as a template. The aim of this study is to investigate whether these AgNPs-loaded hydrogels are potential candidates for antimicrobial applications. Firstly, the superabsorbents were prepared through radical copolymerization via grafting and crosslinking of acrylamide (AAm) onto a chitosan backbone (Cs) using potassium persulfate as the initiator and N,N'-methylenebisacrylamide as the crosslinker. Then, they were hydrolyzed to achieve superabsorbents with ampholytic properties and the highest swelling capacity. Lastly, the AgNPs were biosynthesized and entrapped into the hydrogels through a simple, eco-friendly and cost-effective method using aqueous silver nitrate as the silver precursor and Curcuma longa tuber-powder extracts as both reducing and stabilizing agent. The formed superabsorbent nanocomposites (Cs-g-PAAm)/AgNPs were characterized by X-ray Diffraction (XRD), UV-visible Spectroscopy, Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy (ATR-FTIR), Inductively Coupled Plasma (ICP), and Thermogravimetric Analysis (TGA). The microscopic surface structure analyzed by Transmission Electron Microscopy (TEM) showed spherical shapes of AgNPs with sizes in the range of 3-15 nm. The extent of nanosilver loading was decreased by increasing the Cs content in the network. The silver-loaded hydrogel was thermally more stable than the unloaded dry hydrogel counterpart. The swelling equilibrium degree (Q) and centrifuge retention capacity (CRC) in deionized water were affected by both the content of Cs and the entrapped AgNPs. The nanosilver-embedded hydrogels exhibited antibacterial activity against Escherichia coli and Staphylococcus aureus bacteria. These comprehensive results suggest that the elaborated AgNPs-loaded nanomaterials could be used to produce valuable wound dressings.
Keywords: antibacterial activity, nanocomposites, silver nanoparticles, superabsorbent hydrogel
Procedia PDF Downloads 247
667 Development of a Bioprocess Technology for the Production of Vibrio midae, a Probiotic for Use in Abalone Aquaculture
Authors: Ghaneshree Moonsamy, Nodumo N. Zulu, Rajesh Lalloo, Suren Singh, Santosh O. Ramchuran
Abstract:
The abalone industry of South Africa is under severe pressure due to illegal harvesting and poaching of this seafood delicacy. These abalones are harvested excessively; as a result, the animals do not have a chance to replace themselves in their habitats, resulting in a drastic decrease in natural stocks of abalone. Abalone has an extremely slow growth rate and takes approximately four years to reach a size that is market acceptable; therefore, it was imperative to investigate methods to boost the overall growth rate and immunity of the animal. The University of Cape Town (UCT) began research that resulted in the isolation of two microorganisms, a yeast isolate Debaryomyces hansenii and a bacterial isolate Vibrio midae, from the gut of the abalone, and characterised them for their probiotic abilities. This work resulted in an internationally competitive concept technology that was patented. The next stage of research was to develop a suitable bioprocess to enable commercial production. Numerous steps were taken to develop an efficient production process for V. midae, one of the isolates found by UCT. The initial stages of research involved the development of a stable and robust inoculum and the optimization of physiological growth parameters such as temperature and pH. A range of temperature and pH conditions were evaluated, and the data obtained revealed an optimum growth temperature of 30 °C and a pH of 6.5. Once these critical growth parameters were established, further media optimization studies were performed. Corn steep liquor (CSL) and high test molasses (HTM) were selected as suitable alternatives to more expensive, conventionally used growth medium additives. The optimization of CSL (6.4 g.l⁻¹) and HTM (24 g.l⁻¹) concentrations in the growth medium resulted in a 180% increase in cell concentration, a 5716-fold increase in cell productivity and a 97.2% decrease in the material cost of production in comparison to the conventional growth conditions and parameters used at the onset of the study. In addition, a stable market-ready liquid probiotic product, encompassing the viable but not culturable (VBNC) state of Vibrio midae cells, was developed during the downstream processing aspect of the study. The demonstration of this technology at full manufacturing scale has further enhanced the attractiveness and commercial feasibility of this production process.
Keywords: probiotics, abalone aquaculture, bioprocess technology, manufacturing scale technology development
Procedia PDF Downloads 153
666 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries
Authors: Ahmed Elaksher
Abstract:
Over the last few years, significant progress has been made and new approaches have been proposed for efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs) with reduced costs compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM Digital Elevation Models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data was collected by a 3DR IRIS+ Quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data was processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; then the acquired imagery undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy. Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated from commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and insignificant differences, within less than three seconds, were found among orientation angles. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.
Keywords: UAV, photogrammetry, SfM, DEM
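As background for the orientation step described above, the photogrammetric workflow rests on the standard collinearity condition; the notation below is the textbook form and is not taken from the author's specific algorithm:

\[
x_a - x_0 = -f\,\frac{r_{11}(X_A - X_C) + r_{12}(Y_A - Y_C) + r_{13}(Z_A - Z_C)}{r_{31}(X_A - X_C) + r_{32}(Y_A - Y_C) + r_{33}(Z_A - Z_C)}, \qquad
y_a - y_0 = -f\,\frac{r_{21}(X_A - X_C) + r_{22}(Y_A - Y_C) + r_{23}(Z_A - Z_C)}{r_{31}(X_A - X_C) + r_{32}(Y_A - Y_C) + r_{33}(Z_A - Z_C)},
\]

where (x_a, y_a) are measured image coordinates, (x_0, y_0, f) is the interior orientation from camera calibration, (X_C, Y_C, Z_C) is the exposure station position, r_ij are elements of the image rotation matrix, and (X_A, Y_A, Z_A) is the object point. Solving these equations in a bundle adjustment yields the exposure station positions and orientation angles that the paper compares against the SfM results.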
Procedia PDF Downloads 295
665 Investigating the Impacts on Cyclist Casualty Severity at Roundabouts: A UK Case Study
Authors: Nurten Akgun, Dilum Dissanayake, Neil Thorpe, Margaret C. Bell
Abstract:
Cycling has gained great attention owing to its comparable speeds, low cost, health benefits and reduced impact on the environment. The main challenge associated with cycling is the provision of safety for the people choosing to cycle as their main means of transport. From the road safety point of view, cyclists are considered vulnerable road users because they are at higher risk of serious casualty in the urban network, but more specifically at roundabouts. This research addresses the development of an enhanced mathematical model by including a broad spectrum of casualty-related variables. These variables were geometric design measures (approach number of lanes and entry path radius), speed limit, meteorological condition variables (light, weather, road surface) and socio-demographic characteristics (age and gender), as well as contributory factors. Contributory factors included driver behaviour related variables such as failed to look properly, sudden braking, a vehicle passing too close to a cyclist, junction overshot, failed to judge other person's path, restart moving off at the junction, poor turn or manoeuvre and disobeyed give-way. Tyne and Wear in the UK were selected as the case study area. The cyclist casualty data was obtained from the UK STATS19 national dataset. The reference categories for the regression model were set to slight and serious cyclist casualties. Therefore, binary logistic regression was applied. The binary logistic regression analysis showed that the approach number of lanes was statistically significant at the 95% level of confidence. A higher number of approach lanes increased the probability of a severe cyclist casualty. In addition, sudden braking statistically significantly increased cyclist casualty severity at the 95% level of confidence. The results concluded that cyclist casualty severity was highly related to the approach number of lanes and sudden braking. Further research should carry out an in-depth analysis to explore the connection between sudden braking and the approach number of lanes in order to investigate driver behaviour at approach locations. The output of this research will inform investment in measures to improve the safety of cyclists at roundabouts.
Keywords: binary logistic regression, casualty severity, cyclist safety, roundabout
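A minimal sketch of the modelling approach named in this abstract is given below: a binary logistic regression of casualty severity on the approach number of lanes and a sudden-braking indicator. The data are synthetic and the variable names are assumptions for illustration; this is not the STATS19 dataset or the study's full model.

```python
# Minimal sketch (synthetic data, not STATS19): binary logistic regression of
# casualty severity (1 = serious, 0 = slight) on approach lanes and sudden braking.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
lanes = rng.integers(1, 4, n)            # hypothetical number of approach lanes
sudden_braking = rng.integers(0, 2, n)   # 1 if sudden braking was recorded
# Synthetic severity generated so both predictors raise the odds of a serious casualty
logit = -2.0 + 0.6 * lanes + 0.8 * sudden_braking
severity = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(pd.DataFrame({"lanes": lanes, "sudden_braking": sudden_braking}))
fit = sm.Logit(severity, X).fit(disp=0)
print(fit.summary())
print(np.exp(fit.params))  # odds ratios: values > 1 indicate a higher severity risk
```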
Procedia PDF Downloads 179
664 Dynamic Characterization of Shallow Aquifer Groundwater: A Lab-Scale Approach
Authors: Anthony Credoz, Nathalie Nief, Remy Hedacq, Salvador Jordana, Laurent Cazes
Abstract:
Groundwater monitoring is classically performed in a network of piezometers in industrial sites. Groundwater flow parameters, such as direction, sense and velocity, are deduced from indirect measurements between two or more piezometers. Groundwater sampling is generally done on the whole column of water inside each borehole to provide concentration values for each piezometer location. These flow and concentration values give a global 'static' image of the potential evolution of a contaminant plume in the shallow aquifer, with huge uncertainties in time and space scales and in mass discharge dynamics. The TOTAL R&D Subsurface Environmental team is challenging this classical approach with an innovative, dynamic way of characterizing shallow aquifer groundwater. The current study aims at optimizing the tools and methodologies for (i) a direct and multilevel measurement of groundwater velocities in each piezometer and (ii) a calculation of the potential flux of dissolved contaminants in the shallow aquifer. Lab-scale experiments have been designed to test commercial and R&D tools in a controlled sandbox. Multiphysics modeling was performed, taking into account the Darcy equation in the porous media and the Navier-Stokes equation in the borehole. The first step of the current study focused on groundwater flow at the porous media/piezometer interface. Huge uncertainties from direct flow rate measurements in the borehole versus the Darcy flow rate in the porous media were characterized during experiments and modeling. The structure and location of the tools in the borehole also impacted the results and uncertainties of velocity measurement. In parallel, a direct-push tool was tested and presented more accurate results. The second step of the study focused on the mass flux of dissolved contaminants in groundwater. Several active and passive commercial and R&D tools have been tested in the sandbox, and reactive transport modeling has been performed to validate the experiments at the lab scale. Some tools will be selected and deployed in field assays to better assess the mass discharge of dissolved contaminants at an industrial site. The long-term subsurface environmental strategy is targeting in-situ, real-time, remote and cost-effective monitoring of groundwater.
Keywords: dynamic characterization, groundwater flow, lab-scale, mass flux
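For reference, the porous-media flow in the sandbox is governed by Darcy's law, written here in its standard textbook form (the notation is generic, not the study's):

\[
\mathbf{q} = -K\,\nabla h, \qquad v_s = \frac{\mathbf{q}}{n_e},
\]

where q is the Darcy flux (specific discharge), K the hydraulic conductivity, h the hydraulic head, n_e the effective porosity, and v_s the average linear (seepage) velocity, which is the quantity the borehole and direct-push tools attempt to measure directly. The dissolved mass flux through a control plane then follows as J = C q for a local concentration C, which is the basis of the mass-discharge calculation mentioned above.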
Procedia PDF Downloads 167
663 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation
Authors: Li-hsing Shih, Wei-Jen Hsu
Abstract:
Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology development and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study tries to forecast the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, while five product attributes of both EVs and internal combustion engine vehicles (ICEVs) are selected. A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the respondent's total utilities for the two vehicles are calculated, which determines his/her choice. Once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choice. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents by using their part-worth utility functions. For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
Keywords: conjoint model, electrical vehicle, learning curve, Monte Carlo simulation
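The sketch below illustrates the simulation logic of phase (3): draw future EV prices from a distribution, score each respondent's EV and ICEV utilities from part-worth functions, apply the highest-utility choice rule, and collect the resulting market-share distribution. The part-worths, the single price attribute, and the lognormal price distribution are all assumptions for illustration; the study uses five attributes and survey-estimated utilities.

```python
# Minimal sketch (hypothetical part-worths and price distribution, not the survey data):
# Monte Carlo estimate of the EV market-share distribution for a future year.
import numpy as np

rng = np.random.default_rng(2)
n_respondents, n_draws = 300, 1000

# Hypothetical part-worth utilities per respondent
beta_price = rng.normal(-0.004, 0.001, n_respondents)  # utility per 1,000 NTD of price
beta_ev = rng.normal(0.5, 1.0, n_respondents)          # intrinsic preference for EV vs ICEV

icev_price = 800                                       # fixed hypothetical ICEV price (thousand NTD)
ev_price = rng.lognormal(mean=np.log(750), sigma=0.15, size=n_draws)  # learning-curve price draws

shares = np.empty(n_draws)
for k, p_ev in enumerate(ev_price):
    u_ev = beta_price * p_ev + beta_ev                 # total utility of the EV profile
    u_icev = beta_price * icev_price                   # total utility of the ICEV profile
    shares[k] = np.mean(u_ev > u_icev)                 # first-choice rule: pick the higher utility

print(f"mean share = {shares.mean():.2%}, 5th-95th pct = "
      f"{np.percentile(shares, 5):.2%} - {np.percentile(shares, 95):.2%}")
```

Each price draw produces one market-share estimate, so the output is a distribution rather than a single number, mirroring the probabilistic forecast described in the abstract.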
Procedia PDF Downloads 70
662 Creating Renewable Energy Investment Portfolio in Turkey between 2018-2023: An Approach on Multi-Objective Linear Programming Method
Authors: Berker Bayazit, Gulgun Kayakutlu
Abstract:
The World Energy Outlook shows that energy markets will change substantially within the next few decades. First, the action plans determined under COP21 and the aim of CO₂ emission reduction already have an impact on countries' policies. Secondly, rapidly changing technological developments in the field of renewable energy will influence the medium- and long-term energy generation and consumption behaviors of countries. Furthermore, the share of electricity in global energy consumption is expected to be as high as 40 percent in 2040. Electric vehicles, heat pumps, new electronic devices and digital improvements will be the outstanding technologies, and such innovations will be the testimony of the market modifications. In order to meet the sharply increasing electricity demand caused by these technologies, countries have to make new investments in the field of electricity production, transmission and distribution. Specifically, the electricity generation mix becomes vital both for the prevention of CO₂ emissions and for the reduction of power prices. The majority of research and development investments are made in the field of electricity generation. Hence, primary source diversity and source planning of electricity generation are crucial for improving the wealth of citizens' lives. Approaches considering the CO₂ emission and total cost of generation are necessary but not sufficient to evaluate and construct the product mix. On the other hand, employment and positive contribution to macroeconomic values are important factors that have to be taken into consideration. This study aims to constitute new investments in renewable energies (solar, wind, geothermal, biogas and hydropower) between 2018 and 2023 under 4 different goals. Therefore, a multi-objective programming model is proposed to optimize the goals of minimizing the CO₂ emission, investment amount and electricity sales price while maximizing the total employment and positive contribution to the current deficit. In order to avoid user preference among the goals, Dinkelbach's algorithm and Guzel's approach have been combined. The achievements are discussed in comparison with the current policies. Our study shows that new policies like huge capacity allotments might be debatable, although the obligation for local production is positive. Improvements in grid infrastructure and re-designed support for biogas and geothermal can be recommended.
Keywords: energy generation policies, multi-objective linear programming, portfolio planning, renewable energy
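To make the portfolio-planning idea concrete, the toy sketch below sets up a capacity-allocation linear program with several conflicting objectives and solves a simple weighted-sum scalarization. All coefficients, caps and weights are invented for illustration, and the scalarization shown is not the Dinkelbach/Guzel combination used in the study, which specifically avoids fixed user-chosen weights.

```python
# Illustrative weighted-sum scalarization of a renewable-investment LP
# (made-up per-MW coefficients; not the study's Dinkelbach/Guzel formulation).
import numpy as np
from scipy.optimize import linprog

techs = ["solar", "wind", "geothermal", "biogas", "hydro"]
capex = np.array([0.9, 1.1, 3.5, 2.8, 1.6])   # investment (M$/MW)      -> minimize
co2   = np.array([40., 12., 45., 90., 10.])   # lifecycle CO2 (g/kWh)   -> minimize
jobs  = np.array([6.0, 3.0, 4.0, 7.0, 2.5])   # jobs per MW             -> maximize
trade = np.array([0.2, 0.3, 0.6, 0.5, 0.7])   # local-content score     -> maximize

w = np.array([0.3, 0.3, 0.2, 0.2])            # arbitrary illustrative weights
c = w[0] * capex + w[1] * co2 / 100 - w[2] * jobs - w[3] * trade  # scalarized cost per MW

# Constraints: 10,000 MW of new capacity in total, each technology capped at 4,000 MW
A_eq, b_eq = np.ones((1, 5)), [10_000.0]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 4_000)] * 5, method="highs")
print(dict(zip(techs, np.round(res.x, 1))))   # capacity allocation (MW) per technology
```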
Procedia PDF Downloads 245
661 Effect of Institutional Structure on Project Managers Performance in Construction Projects: A Case Study in Nigeria
Authors: Ebuka Valentine Iroha, Tsunemi Watanabe, Satoshi Tsuchiya
Abstract:
Project management practices play an important role in construction project performance and are one of project managers' essential key performance indicators. Previous studies have explored the poor performance of the construction industry, with project delays and cost overruns identified as contributing largely to numerous abandoned projects. These challenges are attributed to insufficient project management practices and a lack of utilization of project managers. The actual causes of inadequate project management practices and underutilization of project managers have rarely been discussed. This study seeks to bridge the gap by identifying and assessing the actual causes of insufficient project management practices and underutilization of project managers. This study differs from past studies investigating the causes of poor performance by using institutional analysis methods to identify and analyze the factors influencing project management practices and the proper utilization of project managers. Based on a comprehensive literature review, this study identified some factors embedded in the construction industry that influence the institutional environment and weaken the laws and regulations. These factors were used as the basis for semi-structured interview questions to investigate their impacts on project management practices and project managers. The data collected were coded into a four-level framework for institutional analysis. This method was used to analyze the interrelationships between the identified embedded factors, institutional laws and regulations, and construction organizations in order to understand how these influences result in the underutilization of project managers. The study found that the underutilization of project managers consists of two subsystems: underutilization and lowered commitment. In the first subsystem, corruption, political influence, religious and tribal discrimination, and organizational culture were found to affect the institutional structure. These embedded factors weaken the industry's governance mechanism, forcing project managers to prioritize corrupt practices over project demands. The ineffectiveness of the existing laws and regulations worsens the situation, supporting unfair working conditions and contributing to the underperformance of project managers. This situation leads to the development of the second subsystem, which is characterized by a lack of opportunities for career development and minimal incentives within construction organizations. The findings provide significant potential for addressing systemic challenges in the construction industry, particularly the underutilization of project managers, and for enhancing organizational support measures to improve project management practices and mitigate the adverse effects of corruption.
Keywords: construction industry, project management, poor performance, embedded factors, project managers underutilization
Procedia PDF Downloads 41
660 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks
Authors: Shin-Pin Tseng
Abstract:
Nowadays, numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing because they are capable of providing moderate security and relieving the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. To address these problems, a new SAC FO-CDMA network using a partial M-sequence (PMS) code is presented in this study. Because the proposed PMS code originates from the M-sequence code, the system using the PMS code can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network to enhance the whole network performance. In consideration of system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler for broadcasting transmitted optical signals to the input port of each optical receiver. Note that the parameter N for the PMS code is the code length. In addition, the proposed SAC network uses superluminescent diodes (SLDs) as light sources, which saves considerable system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders according to the assigned PMS codewords. On the other hand, each optical receiver includes a 1×2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. In order to simplify the subsequent analysis, some assumptions were used. First, the unpolarized SLD has a flat power spectral density (PSD). Second, the received optical power at the input port of each optical receiver is the same. Third, all photodiodes in the proposed network have the same electrical properties. Fourth, transmitting '1' and '0' has an equal probability. Subsequently, by taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with the performance of previous SAC FO-CDMA networks. The numerical results show that the proposed network improved performance by about 25% compared with networks using other codes at BER = 10⁻⁹. This is because the effect of PIIN was effectively mitigated and the received power was enhanced by a factor of two. As a result, the SAC FO-CDMA network using PMS codes has an opportunity to be applied in next-generation optical networks.
Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG
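For readers unfamiliar with the parent code family, the sketch below generates a short M-sequence with a linear feedback shift register and shows that the cyclic shifts used as codewords share a fixed in-phase cross-correlation, which is exactly the IPCC property discussed above. It illustrates the M-sequence family only; the partial M-sequence (PMS) construction of this paper is not reproduced here.

```python
# Minimal sketch: generate a length-7 M-sequence with an LFSR (x^3 + x + 1) and
# inspect the in-phase cross-correlation between its cyclic shifts.
import numpy as np

def m_sequence(taps, n_stages):
    """Return one period of a maximal-length 0/1 sequence for the given feedback taps."""
    state = [1] * n_stages
    seq = []
    for _ in range(2 ** n_stages - 1):
        seq.append(state[-1])            # output the last stage
        feedback = 0
        for t in taps:                   # XOR of the tapped stages
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]  # shift right, feed back into the first stage
    return np.array(seq)

s = m_sequence(taps=(3, 1), n_stages=3)                    # primitive polynomial x^3 + x + 1
codes = np.array([np.roll(s, k) for k in range(len(s))])   # code set = cyclic shifts

ipcc = codes @ codes.T        # number of overlapping '1' chips between code pairs
print(s)                      # 7-chip sequence with four 1s and three 0s
print(ipcc)                   # diagonal = code weight (4), off-diagonal = fixed IPCC (2)
```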
Procedia PDF Downloads 385
659 Design Charts for Strip Footing on Untreated and Cement Treated Sand Mat over Underlying Natural Soft Clay
Authors: Sharifullah Ahmed, Sarwar Jahan Md. Yasin
Abstract:
Shallow foundations on unimproved soft natural soils can undergo high consolidation and secondary settlement. For low- and medium-rise building projects on such soil conditions, pile foundations may not be cost-effective. In such cases, an alternative to pile foundations may be shallow strip footings placed on a double-layered improved soil system. The upper layer of this system is untreated or cement-treated compacted sand, and the underlying layer is natural soft clay. This system will reduce the settlement to an allowable limit. The current research has been conducted on the settlement of a rigid plane-strain strip footing of 2.5 m width placed on the surface of a soil consisting of an untreated or cement-treated sand layer overlying a bed of homogeneous soft clay. The settlement of the mentioned shallow foundation has been studied considering cases in which the thickness of the sand layer is 0.3 to 0.9 times the width of the footing. The response of the clay layer is assumed to be undrained for plastic loading stages and drained during consolidation stages. The response of the sand layer is drained during all loading stages. FEM analysis was done using PLAXIS 2D Version 8.0. A natural clay deposit of 15 m thickness and 18 m width has been modeled using the Hardening Soil Model, Soft Soil Model and Soft Soil Creep Model, and the upper improvement layer has been modeled using only the Hardening Soil Model. The groundwater level is at the top of the clay deposit, which makes the system fully saturated. A parametric study has been conducted to determine the effect of the thickness, density and cementation of the sand mat and the density and shear strength of the soft clay layer on the settlement of the strip foundation under uniformly distributed vertical loads of varying value. A set of charts has been established for designing shallow strip footings on a sand mat over a thick, soft clay deposit by obtaining the particular thickness of the sand mat for particular subsoil parameters to ensure no punching shear failure and no settlement beyond the allowable level. Design guidelines in the form of non-dimensional charts have been developed for footing pressures equivalent to medium-rise residential or commercial building foundations with strip footings on soft inorganic normally consolidated (NC) soil of Bangladesh having void ratios from 1.0 to 1.45.
Keywords: design charts, ground improvement, PLAXIS 2D, primary and secondary settlement, sand mat, soft clay
Procedia PDF Downloads 124
658 A Method to Identify the Critical Delay Factors for Building Maintenance Projects of Institutional Buildings: Case Study of Eastern India
Authors: Shankha Pratim Bhattacharya
Abstract:
In general, building repair and renovation projects are minor in nature. They require less attention as the primary cost involvement is relatively small. Although building repair and maintenance projects look simple, they involve much complexity during execution. Much of the present research indicates that a few uncertain situations are usually linked with maintenance projects. These may not be read properly in the planning stage of the projects and finally lead to time overrun. Building repair and maintenance become essential and periodic after commissioning of the building. In institutional buildings, regular maintenance projects also include addition-alteration and modification activities. Increases in student admission, new departments and sections, new laboratories and workshops, and upgradation of existing laboratories are very common in institutional buildings in developing nations like India. Such projects become very critical because they involve space problems, architectural design issues, structural modification, etc. One of the prime factors in institutional building maintenance and modification projects is the time constraint. Mostly, they are required to be executed within a specific non-work time period. The present research considered only the institutional buildings of the eastern part of India to analyse repair and maintenance project delay. A general survey was conducted among the technical institutes to find the causes and corresponding nature of construction delay factors. Five technical institutes are considered in the present study, covering repair, renovation, modification and extension types of projects. The survey data were collected on the nature of delay responsible for a specific project and the absolute amount of delay through the proposed and actual duration of work. In the first stage of the paper, a relative importance index (RII) is proposed for the delay factors. The occurrence of the delay factors is also judged by their frequency-severity nature. Finally, the delay factors are rated and linked with the type of work. In the second stage, a regression analysis is executed to establish an empirical relationship between the actual time of a project and the percentage of delay. It also indicates the impact of the factors responsible for delay. Ultimately, the present paper makes an effort to identify the critical delay factors for repair and renovation type projects in eastern Indian institutional buildings.
Keywords: delay factor, institutional building, maintenance, relative importance index, regression analysis, repair
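A brief sketch of the relative importance index computation follows, using the commonly cited form RII = ΣW / (A × N), where W is the weight each respondent assigns to a factor, A is the highest weight on the response scale, and N is the number of respondents. The response scale, factor names and ratings below are hypothetical; the paper's own scale and factor list may differ.

```python
# Minimal sketch of the standard RII computation, RII = sum(W) / (A * N),
# on a hypothetical 1-5 rating survey (not the study's data).
A = 5  # highest weight on the response scale
responses = {                           # hypothetical ratings from surveyed institutes
    "material delivery delay":   [5, 4, 4, 5, 3],
    "manpower shortage":         [3, 4, 2, 3, 3],
    "contract/approval issues":  [4, 5, 5, 4, 4],
    "site access constraints":   [2, 3, 2, 2, 3],
}

rii = {factor: sum(w) / (A * len(w)) for factor, w in responses.items()}
for factor, value in sorted(rii.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{factor:28s} RII = {value:.2f}")  # higher RII = more critical delay factor
```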
Procedia PDF Downloads 251
657 Role of Alternative Dispute Resolution (ADR) in Advancing UN-SDG 16 and Pathways to Justice in Kenya: Opportunities and Challenges
Authors: Thomas Njuguna Kibutu
Abstract:
The ability to access justice is an important facet of securing peaceful, just, and inclusive societies, as recognized by Goal 16 of the 2030 Agenda for Sustainable Development. Goal 16 calls for peace, justice, and strong institutions to promote the rule of law and access to justice at a global level. More specifically, Target 16.3 of the Goal aims to promote the rule of law at the national and international levels and ensure equal access to justice for all. On the other hand, it is now widely recognized that Alternative Dispute Resolution (hereafter, ADR) represents an efficient mechanism for resolving disputes outside the adversarial conventional court system of litigation or prosecution. ADR processes include but are not limited to negotiation, reconciliation, mediation, arbitration, and traditional conflict resolution. ADR has a number of advantages, including being flexible, cost-efficient, time-effective, and confidential, and giving the parties more control over the process and the results, thus promoting restorative justice. The methodology of this paper is a desktop review of books, journal articles, reports and government documents, among others. The paper recognizes that ADR represents a cornerstone of Africa's, and more specifically Kenya's, efforts to promote inclusive, accountable, and effective institutions and achieve the objectives of Goal 16. In Kenya, as in many African countries, there has been an outcry over the backlog of cases that are yet to be resolved in the courts, and the statistics have shown that the numbers keep rising. While ADR mechanisms have played a major role in reducing these numbers, access to justice in the country remains a big challenge, especially for the subaltern. There is, therefore, a need to analyze the opportunities and challenges facing the application of ADR mechanisms as tools for accessing justice in Kenya and to discuss the various ways in which these challenges can be overcome to make ADR an effective alternative for dispute resolution. The paper argues that by embracing ADR across various sectors and addressing existing shortcomings, Kenya can, over time, realize its vision of a more just and equitable society. This paper discusses the opportunities and challenges of the application of ADR in Kenya with a view to sharing the lessons and challenges with the wider African continent. The paper concludes that ADR mechanisms can provide critical pathways to justice in Kenya and the African continent in general but come with distinct challenges. The paper thus calls for concerted efforts by the respective stakeholders to overcome these challenges.
Keywords: mediation, arbitration, negotiation, reconciliation, traditional conflict resolution, sustainable development
Procedia PDF Downloads 33656 Adaption to Climate Change as a Challenge for the Manufacturing Industry: Finding Business Strategies by Game-Based Learning
Authors: Jan Schmitt, Sophie Fischer
Abstract:
After the Corona pandemic, climate change is a further, long-lasting challenge that society must deal with. Ongoing climate change needs to be prevented. Nevertheless, adaptation to already changed climate conditions has to be addressed in many sectors. Recently, the decisive role of economic sectors with high value added could be seen in the Corona crisis. Hence, the manufacturing industry, as such a sector, needs to be prepared for climate change and adaptation. Several examples from the manufacturing industry show the importance of a strategic effort in this field: outsourcing major parts of the value chain to suppliers in other countries and optimizing procurement logistics in a time-, storage- and cost-efficient manner within a network of global value creation can lead to vulnerability to climate-related disruptions. E.g., the total damage costs after the 2011 flood disaster in Thailand, including costs for delivery failures, were estimated at 45 billion US dollars worldwide. German car manufacturers were also affected by supply bottlenecks and had to close a plant in Thailand for a short time. Another OEM had to reduce its production output. In this contribution, a game-based learning approach is presented, which should enable manufacturing companies to derive their own strategies for climate adaptation out of a mix of different actions. The game-based learning approach is designed based on data from a regional study of small, medium and large manufacturing companies in Mainfranken, a strongly industrialized region of northern Bavaria (Germany). From this, the actual state of efforts towards climate adaptation is evaluated. First, the results are used to collect single actions for manufacturing companies, and second, further actions can be identified. Then, a variety of climate adaptation activities can be clustered according to the scope of activity of the company. The combination of different actions, e.g. the renewal of the building envelope with regard to thermal insulation, together with their benefits and drawbacks, leads to a specific climate adaptation strategy for each company. Within the game-based approach, the players take on different roles in a fictional company and discuss the order and the characteristics of each action taken into their climate adaptation strategy. Different indicators, such as economic, ecological and stakeholder satisfaction, compare the success of the respective measures in a competitive format with other virtual companies deriving their own strategies. A 'play through' of climate change scenarios with targeted adaptation actions illustrates the impact of different actions and their combination on the fictional company.Keywords: business strategy, climate change, climate adaption, game-based learning
Procedia PDF Downloads 207655 Study of Interplanetary Transfer Trajectories via Vicinity of Libration Points
Authors: Zhe Xu, Jian Li, Lvping Li, Zezheng Dong
Abstract:
This work studies an optimized transfer strategy connecting Earth and Mars via the vicinity of libration points, which have been playing an increasingly important role in trajectory design for deep space missions and can be used as an effective alternative solution for an Earth-Mars direct transfer mission in some unusual cases. The vicinities of the libration points of sun-planet systems are becoming potential gateways for future interplanetary transfer missions. By adding fuel to cargo spaceships located at spaceports, the interplanetary round-trip exploration shuttle mission of such a system facility can also serve as a reusable transportation system. In addition, in some cases, when the S/C cruises along invariant manifolds, it can also save a large amount of fuel. Therefore, it is necessary to make an effort to look for efficient transfer strategies using invariant manifolds about libration points. It was found that Earth L1/L2 Halo/Lyapunov orbits and Mars L2/L1 Halo/Lyapunov orbits could be connected with reasonable fuel consumption and flight duration with appropriate design. In the paper, the halo hopping method and the coplanar circular method are briefly introduced. The former uses differential corrections to systematically generate low-ΔV transfer trajectories between interplanetary manifolds, while the latter discusses escape and capture trajectories to and from Halo orbits by using impulsive maneuvers at the periapsis of the manifolds about libration points. In the following, designs of transfer strategies for the two methods are shown. A comparative performance analysis of the interplanetary transfer strategies of the two methods is carried out accordingly. The comparison of strategies is based on two main criteria: the total fuel consumption required to perform the transfer and the time of flight, as mentioned above. The numerical results showed that the coplanar circular method has certain advantages in cost or duration. Finally, an optimized transfer strategy with engineering constraints is searched out and examined to be an effective alternative solution for a given direct transfer mission. This paper investigated the main methods and gave an optimized solution for interplanetary transfer via the vicinity of libration points. Although most Earth-Mars mission planners prefer to build up a direct transfer strategy due to its advantage of a relatively short time of flight, the strategies given in the paper can still be regarded as effective alternative solutions, owing to the advantages mentioned above and a longer departure window than the direct transfer.Keywords: circular restricted three-body problem, halo/Lyapunov orbit, invariant manifolds, libration points
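As background for the circular restricted three-body problem (CR3BP) named in the keywords, the collinear libration points L1 and L2 can be located by finding the roots of the axial equilibrium equation in the rotating frame. The sketch below is a minimal illustration in normalized units; the Sun-Earth mass parameter is an assumed approximate value, and the bisection brackets and tolerance are arbitrary choices, not figures from the paper.

```python
# Minimal sketch: locating the collinear libration points L1 and L2 of the
# circular restricted three-body problem (CR3BP) by bisection on the axial
# equilibrium equation. Units are normalized (primary-secondary distance = 1).
# The Sun-Earth mass parameter below is an assumed approximate value.

MU = 3.003e-6  # Sun-Earth mass parameter (approximate)

def collinear_accel(x, mu=MU):
    """Net acceleration along the primary-secondary axis in the rotating frame."""
    r1 = x + mu        # signed distance from the larger primary (Sun)
    r2 = x - 1.0 + mu  # signed distance from the smaller primary (Earth)
    return x - (1.0 - mu) * r1 / abs(r1) ** 3 - mu * r2 / abs(r2) ** 3

def bisect(f, a, b, tol=1e-12):
    """Simple bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

x_l1 = bisect(collinear_accel, 0.5, 1.0 - MU - 1e-9)   # between the primaries
x_l2 = bisect(collinear_accel, 1.0 - MU + 1e-9, 2.0)   # beyond the smaller primary

print(f"L1 at x = {x_l1:.6f}, L2 at x = {x_l2:.6f} (normalized units)")
```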
Procedia PDF Downloads 245654 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks
Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba
Abstract:
The ability for vehicles to communicate with other vehicles (V2V), the physical (V2I) and network (V2N) infrastructures, pedestrians (V2P), etc. – collectively known as V2X (Vehicle to Everything) – will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have finally approved, in Release 14, cellular communications connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks and the nature of most V2X applications, which involve human safety, make it significant to protect V2X messages from attacks that can result in catastrophically wrong decisions/actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM), and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. However, this protocol experiences technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control plane signaling overloads, as well as privacy preservation issues, which cannot satisfy the adequate security requirements for the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways by which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, to allow for security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services under the LTE network while meeting V2X security requirements.Keywords: authentication, long term evolution, security, vehicle-to-everything
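The AKA protocol and the Bitcoin/Namecoin-based mechanism discussed above are not reproduced here. As a generic illustration of message-level authentication in V2X, the sketch below signs and verifies a basic safety message with ECDSA; the use of the third-party Python `cryptography` package and the message fields are assumptions made for illustration only, not the paper's implementation.

```python
# Minimal sketch: signing and verifying a V2X-style safety message with ECDSA.
# The message content and key handling are hypothetical; real V2X deployments
# rely on standardized certificate and credential management, not shown here.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Sender side: generate a key pair and sign the outgoing message.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b'{"type": "CAM", "lat": 52.4862, "lon": -1.8904, "speed_mps": 13.4}'
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Receiver side: verify the signature before acting on the message.
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid - message accepted")
except InvalidSignature:
    print("signature invalid - message rejected")
```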
Procedia PDF Downloads 168653 Specification of Requirements to Ensure Proper Implementation of Security Policies in Cloud-Based Multi-Tenant Systems
Authors: Rebecca Zahra, Joseph G. Vella, Ernest Cachia
Abstract:
The notion of cloud computing is rapidly gaining ground in the IT industry and is appealing mostly due to making computing more adaptable and expedient whilst diminishing the total cost of ownership. This paper focuses on the software as a service (SaaS) architecture of cloud computing, which is used for the outsourcing of databases with their associated business processes. One approach for offering SaaS is basing the system’s architecture on multi-tenancy. Multi-tenancy allows multiple tenants (users) to make use of the same single application instance. Their requests and configurations might then differ according to specific requirements met through tenant customisation of the software. Despite the known advantages, companies still feel uneasy about opting for multi-tenancy, with data security being a principal concern. The fact that multiple tenants, possibly competitors, would have their data located on the same server process and share the same database tables heightens the fear of unauthorised access. Security is a vital aspect which needs to be considered by application developers, database administrators, data owners and end users. This is further complicated in cloud-based multi-tenant systems, where boundaries must be established between tenants and additional access control models must be in place to prevent unauthorised cross-tenant access to data. Moreover, when altering the database state, the transactions need to strictly adhere to the tenant’s known business processes. This paper focuses on the fact that security in cloud databases should not be considered an isolated issue. Rather, it should be included in the initial phases of the database design and monitored continuously throughout the whole development process. This paper aims to identify a number of the most common security risks and threats specifically in the area of multi-tenant cloud systems. Issues and bottlenecks relating to security risks in cloud databases are surveyed. Some techniques which might be utilised to overcome them are then listed and evaluated. After a description and evaluation of the main security threats, this paper produces a list of software requirements to ensure that proper security policies are implemented by a software development team when designing and implementing a multi-tenant based SaaS. This would then assist the cloud service providers to define, implement, and manage security policies as per tenant customisation requirements whilst assuring security for the customers’ data.Keywords: cloud computing, data management, multi-tenancy, requirements, security
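One common access-control pattern for preventing the unauthorised cross-tenant access discussed above is row-level tenant isolation, in which every query on a shared table is scoped to the caller's tenant identifier. The sketch below illustrates the idea with an in-memory SQLite database; the table, columns, and tenant names are hypothetical and are not prescribed by the paper.

```python
# Minimal sketch: row-level tenant isolation in a shared multi-tenant table.
# Schema and data are hypothetical, for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, tenant_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [(1, "tenant_a", 120.0), (2, "tenant_b", 75.5), (3, "tenant_a", 42.0)],
)

def fetch_invoices(tenant_id: str):
    """Every query is scoped to the caller's tenant, so cross-tenant reads cannot occur here."""
    cur = conn.execute(
        "SELECT id, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    )
    return cur.fetchall()

print(fetch_invoices("tenant_a"))  # only tenant_a rows are visible
```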
Procedia PDF Downloads 157652 Assessing the Contribution of Informal Buildings to Energy Inefficiency in Kenya: A Case of Mukuru Slums
Authors: Bessy Thuranira
Abstract:
Buildings, as they are designed and used, may contribute to serious environmental problems because of excessive consumption of energy and other natural resources. Buildings in informal settlements in particular, due to their unplanned physical structure and design, have contributed significantly to the global energy problem, which is typified by high-level inefficiencies. Energy used in buildings in Africa is estimated to account for the largest share of total national electricity consumption. Over the last decade, assessments of energy consumption and efficiency/inefficiency have focused on formal and modern buildings. This study seeks to go off the beaten path by focusing on energy use in informal settlements. Operationally, it sought to establish the contribution of informal buildings to the overall energy consumption of the city and the country at large. This study was carried out in the Mukuru kwa Reuben informal settlement, where there is a distinct manifestation of different settlement morphologies within a small locality. The research narrowed down to three villages (Mombasa, Kosovo and Railway villages) within the settlement that were representative of the different slum housing typologies. Due to the unpredictable nature and informality of slums, this study takes a multi-methodology approach. Detailed energy audits and measurements are carried out to predict total building consumption and to document building design and envelope, typology, materials and occupancy levels. Moreover, the study uses semi-structured interviews to assess energy supply, cost, access and consumption patterns. Observations and photographs are also used to shed more light on these parameters. The study reveals high energy inefficiencies in slum buildings, mainly related to sub-standard equipment and appliances, building design and settlement layout, and poor access to and utilization/consumption patterns of energy. The impacts of this inefficiency are a high economic burden on the poor, high levels of pollution, lack of thermal comfort, and emissions to the environment. The study highlights a set of urban planning and building design principles that can be used to retrofit slums into more energy-efficient settlements. The study explores principles of responsive settlement layouts/plans and appropriate building designs that use the beneficial elements of nature to achieve natural lighting, natural ventilation, and solar control in order to create thermally comfortable, energy-efficient, and environmentally responsive buildings/settlements. As energy efficiency in informal settlements is a relatively unexplored area, it requires further research and policy recommendations, for which this paper will set a background.Keywords: energy efficiency, informal settlements, renewable energy, settlement layout
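The detailed energy audits mentioned above ultimately reduce to summing appliance-level consumption, kWh = rated power × hours of use ÷ 1000. A minimal sketch of that bookkeeping is shown below; the appliance list, wattages, and usage hours are hypothetical and are not survey data from Mukuru.

```python
# Minimal sketch: appliance-level daily energy estimate of the kind used in a
# simple building energy audit. All entries below are hypothetical examples.

appliances = [
    # (name, rated power in watts, hours of use per day, quantity)
    ("incandescent bulb",   60, 6.0, 3),
    ("television",          80, 4.0, 1),
    ("phone charger",        5, 3.0, 2),
    ("electric kettle",   1500, 0.5, 1),
]

def daily_kwh(power_w, hours, qty):
    """Daily consumption in kilowatt-hours for one appliance type."""
    return power_w * hours * qty / 1000.0

total = sum(daily_kwh(p, h, q) for _, p, h, q in appliances)
for name, p, h, q in appliances:
    print(f"{name:18s} {daily_kwh(p, h, q):5.2f} kWh/day")
print(f"{'total':18s} {total:5.2f} kWh/day  (~{total * 30:.0f} kWh/month)")
```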
Procedia PDF Downloads 133651 Multi-Criteria Nautical Ports Capacity and Services Planning
Authors: N. Perko, N. Kavran, M. Bukljas, I. Berbic
Abstract:
This paper is the result of implemented research on a proposed methodology for nautical port capacity planning, introducing a multi-criteria approach with defined criteria and impacts at the Adriatic Sea. The purpose was to analyse the determinants - the characteristics of the infrastructure and services of allocated nautical port capacity - as crucial for the successful operation of nautical ports, especially nowadays due to the COVID-19 pandemic. Given the importance of the defined priorities, short-term and long-term planning is essential not only in terms of the development of nautical tourism but also in terms of developing the maritime system; unfortunately, this is not always carried out. Evaluation of the use of resources should follow from a detailed analysis of all aspects of resources, bearing in mind that nautical tourism should use resources in a sustainable manner and generate effects in the tourism and maritime sectors. Consequently, the identified multiplier effect of nautical tourism, which should be defined and quantified in detail, should be one of the major competitive products on the Croatian Adriatic and the Mediterranean. Research on nautical tourism is necessary to quantify the effects and the required planning system development. In the future, the greatest threat to the long-term sustainable development of nautical tourism may be its further uncontrolled, unlimited and undirected development, especially under the pressure of markedly higher demand than supply for new moorings in the Mediterranean. The results of this implemented research are applicable to nautical port management and to decision-makers in maritime transport system development. This paper presents the implemented research and the obtained result - a developed methodology for nautical port capacity planning based on multi-criteria decision-making. The proposed methodological approach to multi-criteria capacity planning includes four criteria (spatial-transport, cost-infrastructure, ecological and organizational criteria, and additional services). The importance of the criteria and sub-criteria is evaluated and used as the basis for a sensitivity analysis of the importance of the criteria and sub-criteria. Based on the analysis of the identified and quantified importance of certain criteria and sub-criteria, as well as the sensitivity analysis and the analysis of changes in the quantified importance, scientific and applicable results are presented. These results have practical applicability for the management of nautical ports in planning capacity increases and further development, and for the adaptation of existing nautical ports. The research is applicable and replicable in other seas, and the results are especially important and useful in this challenging COVID-19 pandemic maritime development framework.Keywords: Adriatic Sea, capacity, infrastructures, maritime system, methodology, nautical ports, nautical tourism, service
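The paper's multi-criteria evaluation is not reproduced here; as a generic illustration of how weighted criteria can be aggregated and stress-tested, the sketch below computes a weighted-sum score over the four criteria groups named in the abstract and perturbs one weight in the spirit of the sensitivity analysis. All weights and port scores are hypothetical, not values from the study.

```python
# Minimal sketch: weighted-sum multi-criteria scoring for nautical ports,
# plus a simple weight-perturbation check. All numbers are hypothetical.

criteria_weights = {
    "spatial_transport": 0.35,
    "cost_infrastructure": 0.30,
    "ecological_organizational": 0.20,
    "additional_services": 0.15,
}

ports = {
    "Port A": {"spatial_transport": 0.8, "cost_infrastructure": 0.6,
               "ecological_organizational": 0.7, "additional_services": 0.5},
    "Port B": {"spatial_transport": 0.6, "cost_infrastructure": 0.8,
               "ecological_organizational": 0.5, "additional_services": 0.9},
}

def weighted_score(scores, weights):
    """Aggregate normalized criterion scores (0-1) with the given weights."""
    return sum(weights[c] * scores[c] for c in weights)

def perturb(weights, criterion, delta):
    """Shift one criterion weight by delta and renormalize so the weights sum to 1."""
    w = dict(weights)
    w[criterion] += delta
    total = sum(w.values())
    return {c: v / total for c, v in w.items()}

for name, scores in ports.items():
    base = weighted_score(scores, criteria_weights)
    shifted = weighted_score(scores, perturb(criteria_weights, "cost_infrastructure", 0.10))
    print(f"{name}: score = {base:.3f}, with +0.10 on cost-infrastructure = {shifted:.3f}")
```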
Procedia PDF Downloads 190650 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination
Authors: Gilberto Goracci, Fabio Curti
Abstract:
This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice to perform real-time orbit determination without the need to add additional sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models, in fact, can grasp nonlinear relations between the inputs, in this case the magnetometer data and the EKF state estimations, and the targets, namely the true position and velocity of the spacecraft. The model has been pre-trained on Sun-Synchronous Orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft to heavily reduce the onboard computational burden. Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune the parameters on the actual orbit onboard in real time and work autonomously during GPS outages. In this way, the provided module shows versatility, as it can be applied to any mission operating in SSO, while at the same time the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results provided by this study show an increase of one order of magnitude in the precision of the state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field
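As a concrete illustration of the baseline filter described above, the sketch below runs one EKF predict/update cycle on a six-element position-velocity state using a three-axis magnetometer measurement. The dynamics are simplified to two-body motion with a single Euler step and the field model to an axis-aligned dipole (no J2, no IGRF, no RNN correction), and all numerical values are hypothetical rather than taken from the study.

```python
# Minimal sketch: one EKF predict/update cycle with a magnetometer measurement.
# Simplified two-body dynamics and aligned-dipole field model; values are hypothetical.
import numpy as np

MU_EARTH = 3.986004418e14      # m^3/s^2
K_DIPOLE = 1e-7 * 7.94e22      # mu0/(4*pi) * Earth dipole moment (approximate), T*m^3

def f(x, dt):
    """Two-body point-mass propagation with a single Euler step."""
    r, v = x[:3], x[3:]
    a = -MU_EARTH * r / np.linalg.norm(r) ** 3
    return np.concatenate([r + v * dt, v + a * dt])

def h(x):
    """Aligned-dipole magnetic field at the spacecraft position (teslas)."""
    r = x[:3]
    rn = np.linalg.norm(r)
    m_hat = np.array([0.0, 0.0, 1.0])   # dipole axis along +z (simplification)
    return K_DIPOLE * (3.0 * r * (m_hat @ r) / rn**5 - m_hat / rn**3)

def jacobian(func, x, eps=1.0):
    """Numerical Jacobian by central differences."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros(x.size); dx[i] = eps
        J[:, i] = (func(x + dx) - func(x - dx)) / (2.0 * eps)
    return J

def ekf_step(x, P, z, dt, Q, R):
    # Predict
    F = jacobian(lambda s: f(s, dt), x)
    x_pred = f(x, dt)
    P_pred = F @ P @ F.T + Q
    # Update with the magnetometer measurement z
    H = jacobian(h, x_pred)
    y = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

# Example usage with a hypothetical state near a 700 km circular orbit.
x0 = np.array([7.078e6, 0.0, 0.0, 0.0, 7.5e3, 0.0])
P0 = np.diag([1e6] * 3 + [10.0] * 3)
Q = np.diag([1e2] * 3 + [1e-2] * 3)
R = np.eye(3) * (1e-7) ** 2          # ~100 nT measurement noise
z = h(f(x0, 10.0))                   # synthetic measurement from the propagated state
x1, P1 = ekf_step(x0, P0, z, dt=10.0, Q=Q, R=R)
print(x1[:3])                        # updated position estimate
```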
Procedia PDF Downloads 105649 Boiler Ash as a Reducer of Formaldehyde Emission in Medium-Density Fiberboard
Authors: Alexsandro Bayestorff da Cunha, Dpebora Caline de Mello, Camila Alves Corrêa
Abstract:
In the production of fiberboards, an adhesive based on urea-formaldehyde resin is used, which has the advantages of low cost, homogeneity of distribution, solubility in water, high reactivity in an acid medium, and high adhesion to wood. On the other hand, its disadvantages are low resistance to humidity and the release of formaldehyde. The objective of the study was to determine the viability of adding industrial boiler ash to the urea-formaldehyde-based adhesive for the production of medium-density fiberboard. The raw material used was composed of Pinus spp fibers, urea-formaldehyde resin, paraffin emulsion, ammonium sulfate, and boiler ash. The experimental plan, consisting of 8 treatments, was completely randomized with a factorial arrangement, with 0%, 1%, 3%, and 5% ash added to the adhesive, with and without the application of a catalyst. In each treatment, 4 panels were produced with a density of 750 kg.m⁻³, dimensions of 40 x 40 x 1.5 cm, 12% urea-formaldehyde resin, 1% paraffin emulsion, and hot pressing at a temperature of 180 ºC and a pressure of 40 kgf/cm² for 10 minutes. The different compositions of the adhesive were characterized in terms of viscosity, pH, gel time and solids content, and the panels by their physical and mechanical properties, in addition to evaluation using the IMAL DPX300 X-ray densitometer and formaldehyde emission by the perforator method. The results showed a significant reduction of all adhesive properties with the use of the catalyst, regardless of the treatment, while the increasing ash percentage led to an increase in the average values of viscosity, gel time, and solids and a reduction in pH for the panels with a catalyst; for panels without a catalyst, the behavior was the opposite, with the exception of solids. For the physical properties, the results for density, compaction ratio, and thickness were equivalent and in accordance with the standard, while the moisture content was significantly reduced with the use of the catalyst but without the influence of the percentage of ash. The density profile for all treatments was characteristic of medium-density fiberboard, with surfaces more compacted and denser than the central layer. Thickness swelling was not influenced by the catalyst or the use of ash, presenting average values within the normalized parameters. For the mechanical properties, the influence of ash on the adhesive was negative for the modulus of rupture from 1% ash onward and for the traction test from 3% onward; however, only the latter property, at 3% and 5%, fell below the minimum limit of the standard. The use of the catalyst and of ash at 3% and 5% reduced the formaldehyde emission of the panels; however, only the panels whose adhesive contained a catalyst presented emissions below 8 mg of formaldehyde per 100 g of panel. In this way, it can be said that boiler ash can be added, at up to 1%, to the adhesive with a catalyst without impairing the technological properties.Keywords: reconstituted wood panels, formaldehyde emission, technological properties of panels, perforator
Procedia PDF Downloads 72648 Evaluation of Sequential Polymer Flooding in Multi-Layered Heterogeneous Reservoir
Authors: Panupong Lohrattanarungrot, Falan Srisuriyachai
Abstract:
Polymer flooding is a well-known technique used for controlling the mobility ratio in heterogeneous reservoirs, leading to improvement of sweep efficiency as well as the wellbore profile. However, the low injectivity of a viscous polymer solution attenuates the oil recovery rate and consequently adds extra operating cost. This study attempts to improve the injectivity of the polymer solution while maintaining the recovery factor, enhancing the effectiveness of the polymer flooding method. The study is performed using a reservoir simulation program to modify a conventional single polymer slug into sequential polymer flooding, emphasizing increased injectivity and also a reduction of the polymer amount. The selection of operating conditions for the single polymer slug, including pre-injected water, polymer concentration and polymer slug size, is first performed for a layered heterogeneous reservoir with a Lorenz coefficient (Lk) of 0.32. The selected single-slug polymer flooding scheme is then modified into sequential polymer flooding with a reduction of polymer concentration in two different modes: constant polymer mass and reduction of polymer mass. The effect of the residual resistance factor (RRF) is also evaluated. From the simulation results, it is observed that the first polymer slug, with the highest concentration, mainly acts as a buffer between the displacing phase and the reservoir oil. Moreover, part of the polymer from this slug is also sacrificed to adsorption. The reduction of polymer concentration in the following slugs prevents bypassing due to an unfavorable mobility ratio. At the same time, the following slugs, with lower viscosity, can be injected easily through the formation, improving the injectivity of the whole process. Sequential polymer flooding with a reduction of polymer mass shows a great benefit by reducing the total production time and the amount of polymer consumed by up to 10% without any downside effect. The only advantage of using constant polymer mass is a slight increment of the recovery factor (up to 1.4%), while the total production time is almost the same. Increasing the residual resistance factor of the polymer solution benefits mobility control by reducing the effective permeability to water. Nevertheless, higher adsorption results in low injectivity, extending the total production time. Modifying a single polymer slug into a sequence of reduced polymer concentrations yields major benefits in reducing production time as well as polymer mass. With a certain design of the polymer flooding scheme, the recovery factor can even be further increased. This study shows that sequential polymer flooding can certainly be applied to reservoirs with a high degree of heterogeneity, since it requires nothing complex for real implementation, just a proper design of polymer slug size and concentration.Keywords: polymer flooding, sequential, heterogeneous reservoir, residual resistance factor
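The mobility ratio that polymer flooding is designed to control can be written as M = (krw / μw) / (kro / μo), with the residual resistance factor acting as an additional divisor on the water-phase mobility. A minimal sketch of this arithmetic is given below; the relative permeabilities, viscosities, and RRF values are hypothetical and are not taken from the simulated reservoir.

```python
# Minimal sketch: endpoint mobility ratio before and after polymer injection,
# including the residual resistance factor (RRF). All values are hypothetical.

def mobility_ratio(krw, mu_w, kro, mu_o, rrf=1.0):
    """M = (krw / (rrf * mu_w)) / (kro / mu_o); M <= 1 indicates favorable displacement."""
    return (krw / (rrf * mu_w)) / (kro / mu_o)

krw, kro = 0.3, 0.8   # endpoint relative permeabilities to water and oil
mu_o = 5.0            # oil viscosity, cP

print("waterflood:      M = %.2f" % mobility_ratio(krw, 1.0, kro, mu_o))
print("polymer (20 cP): M = %.2f" % mobility_ratio(krw, 20.0, kro, mu_o))
print("polymer + RRF=2: M = %.2f" % mobility_ratio(krw, 20.0, kro, mu_o, rrf=2.0))
```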
Procedia PDF Downloads 478647 Study of Mechanical Properties of Large Scale Flexible Silicon Solar Modules on the Various Substrates
Authors: M. Maleczek, Leszek Bogdan, Kazimierz Drabczyk, Agnieszka Iwan
Abstract:
Crystalline silicon (Si) solar cells are the main product in the market among the various photovoltaic technologies owing to such advantages as material abundance, high carrier mobilities, a broad spectral absorption range and established technology. However, photovoltaic devices on stiff substrates are heavier, more fragile and less cost-effective than devices on flexible substrates, which can be applied in special applications. The main goal of our work was to incorporate silicon solar cells into various fabrics without any change in the electrical and mechanical parameters of the devices. This work is realized for the GEKON project (No. GEKON2/O4/268473/23/2016) sponsored by The National Centre for Research and Development and The National Fund for Environmental Protection and Water Management. In our work, polyamide or polyester fabrics were used as flexible substrates in the created devices. The applied fabrics differ in tensile and tear strength. All investigated polyamide fabrics are resistant to weathering and UV, while the polyester ones are resistant to ozone, water and ageing. The examined fabrics are watertight under 100 cm of water for 2 hours. In our work, commercial silicon solar cells with a size of 156 × 156 mm were cut into nine parts (called single solar cells) by diamond saw and laser. The gap and edge after cutting of the solar cells were checked by transmission electron microscopy (TEM) to study the morphology and quality of the prepared single solar cells. Modules with a size of 160 × 70 cm (containing about 80 single solar cells) were created and investigated by electrical and mechanical methods. The weight of the constructed module is about 1.9 kg. Three types of solar cell architectures, namely: -fabric/EVA/Si solar cell/EVA/film for lamination, -backsheet PET/EVA/Si solar cell/EVA/film for lamination, -fabric/EVA/Si solar cell/EVA/tempered glass, were investigated, taking into consideration the type of fabric and the lamination process together with the size of the solar cells. In the investigated devices, EVA is ethylene-vinyl acetate, while PET is polyethylene terephthalate. Depending on the lamination process and the compatibility of the textile with the solar cell, the efficiency of the investigated flexible silicon solar cells was in the range of 9.44-16.64%. Repeated folding and unfolding of the flexible module has no impact on its efficiency, as detected by Instron equipment. The power (P) of the constructed solar module is 30 W, while the voltage is about 36 V. Finally, a solar panel containing five modules with the polyamide fabric and tempered glass will be produced commercially for different applications (dual use).Keywords: flexible devices, mechanical properties, silicon solar cells, textiles
Procedia PDF Downloads 174