Search results for: large dams
5659 Multidisciplinary Approach to Mio-Plio-Quaternary Aquifer Study in the Zarzis Region (Southeastern Tunisia)
Authors: Ghada Ben Brahim, Aicha El Rabia, Mohamed Hedi Inoubli
Abstract:
Climate change has exacerbated disparities in the distribution of water resources in Tunisia, resulting in significant degradation in quantity and quality over the past five decades. The Mio-Plio-Quaternary aquifer, the primary water source in the Zarzis region, is subject to climatic, geographical, and geological challenges, as well as human stress. The region is experiencing uneven distribution and growing threats from groundwater salinity and saltwater intrusion. Addressing this challenge is critical for the arid region’s socioeconomic development, and effective water resource management is required to combat climate change and reduce water deficits. This study uses a multidisciplinary approach to determine the groundwater potential of this aquifer, involving geophysics and hydrogeology data analysis. We used advanced techniques such as 3D Euler deconvolution and power spectrum analysis to generate detailed anomaly maps and estimate the depths of density sources, identifying significant Bouguer anomalies trending E-W, NW-SE, and NE-SW. Various techniques, such as wavelength filtering, upward continuation, and horizontal and vertical derivatives, were used to improve the gravity data, resulting in consistent results for anomaly shapes and amplitudes. The Euler deconvolution method revealed two prominent surface faults, trending NE-SW and NW-SE, that have a significant impact on the distribution of sedimentary facies and water quality within the Mio-Plio-Quaternary aquifer. Additionally, depth maxima greater than 1400 m to the north indicate the presence of a Cretaceous paleo-fault. Geoelectrical models and resistivity pseudo-sections were used to interpret the distribution of electrical facies in the Mio-Plio-Quaternary aquifer, highlighting lateral variation and depositional environment type. AI optimises the analysis and interpretation of exploration data, which is important for long-term management and water security. Machine learning algorithms and deep learning models analyse large datasets to provide precise interpretations of subsurface conditions, such as aquifer salinisation. However, AI has limitations, such as the requirement for large datasets, the risk of overfitting, and integration issues with traditional geological methods.
Keywords: mio-plio-quaternary aquifer, Southeastern Tunisia, geophysical methods, hydrogeological analysis, artificial intelligence
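A minimal numpy sketch of the windowed least-squares solve at the core of 3D Euler deconvolution, for readers unfamiliar with the method; the single-window simplification and variable names are illustrative, not the authors':

```python
import numpy as np

def euler_window(x, y, z, f, fx, fy, fz, n_index=1.0):
    """Solve Euler's homogeneity equation for one data window:
        (x - x0) fx + (y - y0) fy + (z - z0) fz = N (b - f),
    rearranged to A [x0, y0, z0, b]^T = rhs and solved by least
    squares. n_index is the structural index N; in practice this is
    repeated over a moving window across the gridded anomaly."""
    A = np.column_stack([fx, fy, fz, n_index * np.ones_like(f)])
    rhs = x * fx + y * fy + z * fz + n_index * f
    (x0, y0, z0, b), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x0, y0, z0, b  # estimated source position and regional field
```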
Procedia PDF Downloads 14
5658 Management of Gap Non-Union Following Tumour Resection of the Distal Femur
Authors: Rajendra Kumar Kanojia
Abstract:
The gap created by the resection of large juxta-articular tumours of the femur is difficult to manage. Several bone substitutes, bone grafts, and artificial bone granules have been tried, but the results were not as good as those of distraction osteogenesis with either an Ilizarov ring fixator or a mono-rail fixator. We present a small study of five cases of malignant tumours of the distal femur. The tumours were removed and a custom-made mega-prosthesis was applied, which failed twice in a span of five years. With no better option left, we applied a mono-rail fixator and started the process of distraction osteogenesis; union was achieved, the gap was filled with new bone, and the patient was walking within a few months.
Keywords: distal femur tumour, resection, defect non-union, mono-rail fixator
Procedia PDF Downloads 375
5657 Resource Assessment of Animal Dung for Power Generation: A Case Study
Authors: Gagandeep Kaur, Yadwinder Singh Brar, D. P. Kothari
Abstract:
This paper presents an aggregate analysis of animal dung as a renewable biomass fuel source that could help the Indian state of Punjab meet its rising power demand. Bathinda district in Punjab produces over 4567 tonnes of animal dung daily on a renewable basis. The biogas energy potential has been calculated using values for the daily per-head dung production and the total number of large animals in Bathinda. The district's 379540 animals could produce nearly 116918 m3/day of biogas as renewable energy; converting this biogas into electric energy could produce 89.8 GWh annually.
Keywords: livestock, animal dung, biogas, renewable energy
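As a quick plausibility check on these figures (not from the paper: the calorific value and conversion efficiency below are assumed), the implied conversion factors can be reproduced in a few lines:

```python
animals = 379540             # large animals in Bathinda district
dung_t_per_day = 4567        # tonnes of fresh dung per day
biogas_m3_per_day = 116918   # m3/day quoted in the abstract

dung_kg_per_head = dung_t_per_day * 1000 / animals        # ~12 kg/head/day
biogas_m3_per_tonne = biogas_m3_per_day / dung_t_per_day  # ~25.6 m3/tonne

# Electricity, assuming ~6 kWh/m3 biogas and ~35% conversion efficiency
gwh_per_year = biogas_m3_per_day * 6.0 * 0.35 * 365 / 1e6
print(round(dung_kg_per_head, 1), round(biogas_m3_per_tonne, 1),
      round(gwh_per_year, 1))   # ~89.6 GWh, close to the quoted 89.8
```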
Procedia PDF Downloads 511
5656 Proactive Approach to Innovation Management
Authors: Andrus Pedai, Igor Astrov
Abstract:
The focus of this paper is to compare common approaches to Systems of Innovation (SI) and identify proactive alternatives for driving innovation. Proactive approaches will also consider short- and medium-term perspectives with developments in the fields of computer technology and artificial intelligence. Concerning computer technology and large connected information systems, it is reasonable to predict that during the current or the next century, intelligence and innovation will be separated from the constraints of human-driven management. After this happens, humans will no longer be driving the innovation, and there is a possibility that the SI for new intelligent systems will set its own targets and exclude humans. Over a long time scale, these developments could result in a scenario which will lead to the development of a larger, cross-galactic (universal) proactive SI and intelligence.
Keywords: artificial intelligence, DARPA, Moore’s law, proactive innovation, singularity, systems of innovation
Procedia PDF Downloads 479
5655 A Research Analysis on the Source Technology and Convergence Types
Authors: Kwounghee Choi
Abstract:
Technological convergence between various sectors is expected to have a very large impact on future industry and the economy. This study takes an empirical approach to the classification of specific technologies. For technological convergence classification, it is necessary to set the target technology to be analyzed; this study selected target technologies from the national research and development plan. We first identified a source technology for analysis. Depending on the weight of the source technology, convergence was classified into NT-based, BT-based, IT-based, ET-based, and CS-based types. This study aims to show empirically the concept of convergence technology and convergence types. Using the source technology to classify convergence types will be useful for making practical strategies for convergence technology.
Keywords: technology convergence, source technology, convergence type, R&D strategy, technology classification
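As we read it, the classification step reduces to picking the dominant source-technology weight; a toy illustration (the weights are hypothetical, not from the study):

```python
# Hypothetical source-technology weights for one converged technology.
weights = {"NT": 0.15, "BT": 0.10, "IT": 0.55, "ET": 0.12, "CS": 0.08}

# The convergence type is named after the dominant source technology.
convergence_type = max(weights, key=weights.get) + "-based"
print(convergence_type)  # -> IT-based
```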
Procedia PDF Downloads 485
5654 Principles for the Realistic Determination of the in-situ Concrete Compressive Strength under Consideration of Rearrangement Effects
Authors: Rabea Sefrin, Christian Glock, Juergen Schnell
Abstract:
The preservation of existing structures is of great economic interest because it contributes to higher sustainability and resource conservation. In the case of existing buildings, in addition to repair and maintenance, modernization or reconstruction works often take place in the course of adjustments or changes in use. Since the structural framework and the associated load level are usually changed in the course of the structural measures, the stability of the structure must be verified in accordance with the currently valid regulations. The compressive strength of the existing structure's concrete and the derived mechanical parameters are of central importance for the recalculation and verification. However, the compressive strength of the existing concrete is usually set comparatively low and thus underestimated. The reasons for this are the small number and large scatter of material properties of the drill cores used for the experimental determination of the design value of the compressive strength. Within a structural component, the load is usually transferred over the areas with higher stiffness and consequently higher compressive strength. Therefore, existing strength variations within a component play only a subordinate role due to rearrangement effects. This paper deals with the experimental and numerical determination of such rearrangement effects in order to calculate the concrete compressive strength of existing structures more realistically and economically. The influence of individual parameters such as the specimen geometry (prism or cylinder) or the coefficient of variation of the concrete compressive strength is analyzed in small-scale experimental tests. The coefficients of variation commonly used in practice are adjusted by dividing the test specimens into several layers consisting of different concretes, which are monolithically connected to each other. From each combination, a sufficient number of test specimens is produced and tested to enable evaluation on a statistical basis. Based on the experimental tests, FE simulations are carried out to validate the test results. In the frame of a subsequent parameter study, a large number of combinations is considered that had not yet been investigated in the experimental tests. Thus, the influence of individual parameters on the size and characteristics of the rearrangement effect is determined and described in more detail. Based on the parameter study and the experimental results, a calculation model for a more realistic determination of the in-situ concrete compressive strength is developed and presented. By considering rearrangement effects in concrete during recalculation, a higher number of existing structures can be maintained without structural measures. The preservation of existing structures is not only decisive from an economic, sustainable, and resource-saving point of view but also represents an added value for cultural and social aspects.
Keywords: existing structures, in-situ concrete compressive strength, rearrangement effects, recalculation
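A toy Monte Carlo makes the statistical intuition concrete (our simplification, not the authors' FE model): if the load fully redistributes toward stiffer, stronger regions, the member strength tracks the mean of the regions rather than the weakest one, so scatter in the drill-core strengths matters less than a weakest-link view suggests.

```python
import numpy as np

rng = np.random.default_rng(0)

def strength_bounds(n_regions=5, mean=30.0, cov=0.20, n_sim=100_000):
    """Member built from n_regions load paths whose strengths (MPa)
    scatter with coefficient of variation `cov`. 'weakest link' =
    no redistribution; 'mean' = full redistribution."""
    f = rng.normal(mean, cov * mean, size=(n_sim, n_regions)).clip(min=0.1)
    return f.min(axis=1).mean(), f.mean(axis=1).mean()

weakest, redistributed = strength_bounds()
print(f"no redistribution: {weakest:.1f} MPa, "
      f"full redistribution: {redistributed:.1f} MPa")
```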
Procedia PDF Downloads 118
5653 Smart Grids Cyber Security Issues and Challenges
Authors: Imen Aouini, Lamia Ben Azzouz
Abstract:
The energy need is growing rapidly due to population growth and large-scale new uses of power. Several works have put considerable effort into making the electricity grid more intelligent, essentially to reduce energy consumption and to provide efficiency and reliability of power systems. The Smart Grid is a complex architecture that covers critical devices and systems vulnerable to significant attacks. Hence, security is a crucial factor for the success and the wide deployment of Smart Grids. In this paper, we present security issues of the Smart Grid architecture and we highlight open issues that will make Smart Grid security a challenging research area in the future.
Keywords: smart grids, smart meters, home area network, neighbor area network
Procedia PDF Downloads 424
5652 Critical Success Factors of Information Technology Projects
Authors: Athar Imtiaz, Abduljalil S. Al-Mudhary, Taha Mirhashemi, Roslina Ibrahim
Abstract:
Information Technology (IT) is used by almost all organizations throughout the world. However, its success at supporting and improving business is debatable. There is always the risk of IT project failure, and studies have shown that a large number of IT projects indeed do fail. There are many components that further the success of IT projects; these have been examined in previous research, which has identified the most necessary components for success in software development projects, executive information systems, etc. In this study, previous literature on these success-promoting factors has been critically reviewed and analyzed. Fifteen Critical Success Factors (CSFs) of IT projects were enlisted and examined. These factors can be applied to all IT projects and are not specific to a particular type of IT/IS project. A hypothesis was also generated after the evaluation of the factors.
Keywords: critical success factors, CSF, IT projects, IS projects, software development projects
Procedia PDF Downloads 400
5651 Influence of Sewage Sludge on Agricultural Land Quality and Crop
Authors: Catalina Iticescu, Lucian P. Georgescu, Mihaela Timofti, Gabriel Murariu
Abstract:
Since the accumulation of large quantities of sewage sludge produces serious environmental problems, numerous environmental specialists are looking for solutions. The sewage sludge obtained by treatment of municipal wastewater may be used as fertiliser on agricultural soils because such sludge contains large amounts of nitrogen, phosphorus, and organic matter. In many countries, sewage sludge is used instead of chemical fertilizers in agriculture, this being the most feasible method of reducing the increasingly large quantities of sludge. The use of sewage sludge on agricultural soils is allowed only with strict monitoring of its physical and chemical parameters, because heavy metals exist in varying amounts in sewage sludge. Exceeding the maximum permitted quantities of harmful substances may lead to pollution of agricultural soils and to their withdrawal from use, because plants may take up the heavy metals existing in the soil, and these metals will most probably reach humans and animals through food. The sewage sludge analyzed for the present paper was extracted from the Wastewater Treatment Plant (WWTP) of Galati, Romania. The physico-chemical parameters determined were: pH (upH), total organic carbon (TOC) (mg L⁻¹), N-total (mg L⁻¹), P-total (mg L⁻¹), N-NH₄ (mg L⁻¹), N-NO₂ (mg L⁻¹), N-NO₃ (mg L⁻¹), Fe-total (mg L⁻¹), Cr-total (mg L⁻¹), Cu (mg L⁻¹), Zn (mg L⁻¹), Cd (mg L⁻¹), Pb (mg L⁻¹), Ni (mg L⁻¹). The determination methods were electrometric (pH, C, TSD), with a portable HANNA HI 9828 multiparameter meter with dedicated electrodes, and spectrophotometric, with a Spectroquant NOVA 60 Merck spectrophotometer and specific Merck parameter kits. The tests showed that the sludge analysed is low in heavy metals, falling within the legal limits, the quantities of metals measured being much lower than the maximum allowed. The results of the tests made to determine the content of nutrients in the sewage sludge showed that the existing nutrients may be used to increase the fertility of agricultural soils. Other tests were carried out on lands where sewage sludge was applied in order to establish the maximum quantity of sludge that may be used so as not to constitute a source of pollution. The tests were made on three plots: a first batch with no sludge and no chemical fertilizers applied, a second batch on which only sewage sludge was applied, and a third batch on which small amounts of chemical fertilizers were applied in addition to sewage sludge. The results showed that production increases when the soil is treated with sludge and small amounts of chemical fertilizers. Based on the results of the present research, a fertilization plan has been suggested. This plan should be reconsidered each year based on the crops planned, the yields proposed, the agrochemical indications, the sludge analysis, etc.
Keywords: agricultural use, crops, physico-chemical parameters, sewage sludge
Procedia PDF Downloads 290
5650 Effect of Boric Acid Content on the Structural and Optical Properties of In2O3 Films Prepared by Spray Pyrolysis Technique
Authors: Mustafa Öztas, Metin Bedir, Yahya Özdemir
Abstract:
Boron-doped In2O3 films were prepared by the spray pyrolysis technique at a substrate temperature of 350 °C, a low-cost, large-area technique well suited to the manufacture of solar cells, using boric acid (H3BO3) as the dopant source, and their properties were investigated as a function of doping concentration. X-ray analysis showed that the films were polycrystalline, fitting well with a hexagonal structure, with preferred orientation in the (220) direction. The changes observed in the energy band gap and structural properties of the films in relation to the boric acid concentration are discussed in detail.
Keywords: spray pyrolysis, In2O3, boron, optical properties, boric acid
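As an aside (not part of the paper), crystallite size along such a preferred orientation is commonly estimated from the XRD peak width via the Scherrer equation; the peak width and angle below are placeholders, not measured values:

```python
import numpy as np

K = 0.9                       # Scherrer shape factor
lam = 1.5406e-10              # Cu K-alpha wavelength, m
beta = np.deg2rad(0.30)       # FWHM of the (220) peak, radians (assumed)
theta = np.deg2rad(24.9 / 2)  # (220) peak position 2-theta (assumed)

D = K * lam / (beta * np.cos(theta))
print(f"crystallite size ~ {D * 1e9:.0f} nm")  # ~27 nm for these inputs
```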
Procedia PDF Downloads 587
5649 Chinese Event Detection Technique Based on Dependency Parsing and Rule Matching
Authors: Weitao Lin
Abstract:
To quickly extract adequate information from large-scale unstructured text data, this paper studies the representation of events in Chinese scenarios and performs a regularized abstraction. It proposes a Chinese event detection technique based on dependency parsing and rule matching. The method first performs dependency parsing on the original utterance, then performs pattern matching at the word or phrase granularity based on the results of the dependency analysis, filters out the utterances with prominent non-event characteristics, and obtains the final results. The experimental results show the effectiveness of the method.
Keywords: natural language processing, Chinese event detection, rule matching, dependency parsing
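A minimal sketch of the parse-then-match idea (one plausible trigger rule, not the paper's rule set; assumes spaCy and its small Chinese pipeline zh_core_web_sm are installed):

```python
import spacy

nlp = spacy.load("zh_core_web_sm")  # python -m spacy download zh_core_web_sm

def detect_events(text):
    """Flag verbs governing both a subject and an object as event
    triggers -- a single illustrative rule at word granularity."""
    doc = nlp(text)
    events = []
    for token in doc:
        if token.pos_ == "VERB":
            deps = {child.dep_ for child in token.children}
            if "nsubj" in deps and deps & {"dobj", "obj"}:
                events.append(token.text)
    return events

print(detect_events("公司昨天发布了新产品。"))  # expect the verb 发布
```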
Procedia PDF Downloads 141
5648 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method
Authors: Dangut Maren David, Skaf Zakwan
Abstract:
Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance; the major challenge faced by businesses is the significant cost associated with a delay in service delivery due to system downtime. Most of those businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need for monitoring system activities and enhancing system-to-system or component-to-component interactions, and this has resulted in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to complexity inherent in the datasets, such as imbalanced classification problems, it becomes extremely difficult to build accurate, high-precision models. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large data generated from industrial processes inherently come with differing degrees of complexity, which poses a challenge for analytics. Thus, imbalanced classification problems exist pervasively in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy in model development. Misclassification of faults can result in unplanned breakdowns leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement is then developed to predict component replacement in advance by exploring aircraft historical data. The approach is based on a hybrid ensemble method which improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness, in terms of performance, of our approach using real-world aircraft operation and maintenance datasets, which span over 7 years. Our approach shows better performance compared to other similar approaches. We also validate our approach's strength in handling multiclass imbalanced datasets; the results likewise show good performance compared to other baseline classifiers.
Keywords: prognostics, data-driven, imbalance classification, deep learning
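In the same spirit, a sketch (not the paper's model; the aircraft data are proprietary, so a synthetic imbalanced set and imbalanced-learn's balanced ensemble stand in):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.ensemble import BalancedRandomForestClassifier

# 2% minority class, mimicking rare component-replacement events.
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.98, 0.02], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Each tree is grown on a bootstrap sample rebalanced by undersampling,
# which improves recall on the minority (replacement) class.
clf = BalancedRandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```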
Procedia PDF Downloads 174
5647 Robust Barcode Detection with Synthetic-to-Real Data Augmentation
Authors: Xiaoyan Dai, Hsieh Yisan
Abstract:
Barcode processing of captured images is a huge challenge, as different shooting conditions can result in different barcode appearances. This paper proposes a deep learning-based barcode detection using synthetic-to-real data augmentation. We first augment barcodes themselves; we then augment images containing the barcodes to generate a large variety of data that is close to the actual shooting environments. Comparisons with previous works and evaluations with our original data show that this approach achieves state-of-the-art performance in various real images. In addition, the system uses hybrid resolution for barcode “scan” and is applicable to real-time applications.
Keywords: barcode detection, data augmentation, deep learning, image-based processing
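One plausible reading of the synthetic-to-real augmentation, sketched with OpenCV (the warp and blur ranges are our choices, not the paper's recipe):

```python
import cv2
import numpy as np

def paste_synthetic_barcode(barcode, background, x=10, y=10):
    """Warp a rendered barcode with a random perspective, blur it,
    and paste it onto a real background image. `background` must be
    larger than `barcode`; the paste position is fixed for brevity."""
    h, w = barcode.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = np.random.uniform(-0.15, 0.15, (4, 2)) * [w, h]
    dst = (src + jitter).astype(np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(barcode, M, (w, h))
    warped = cv2.GaussianBlur(warped, (5, 5), sigmaX=1.0)
    out = background.copy()
    out[y:y + h, x:x + w] = warped   # composite onto the real scene
    return out
```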
Procedia PDF Downloads 169
5646 Green Organic Chemistry, a New Paradigm in Pharmaceutical Sciences
Authors: Pesaru Vigneshwar Reddy, Parvathaneni Pavan
Abstract:
Green organic chemistry, one of the latest and most researched topics nowadays, has been in demand since the 1990s. Organic chemicals are important starting materials for a great number of major chemical industries; their production as raw materials or reagents for other applications is a major sector, covering polymers, pharmaceuticals, pesticides, paints, artificial fibers, food additives, etc. Organic synthesis, on a large scale compared to the laboratory scale, involves the use of energy, basic chemical ingredients from the petrochemical sector, and catalysts; after the end of the reaction come separation, purification, storage, packing, distribution, etc. During these processes, there are many health and safety problems for workers, in addition to the environmental problems caused by their use and deposition as waste. Green chemistry, with its 12 principles, would like to see changes in the conventional ways that were used for decades to make synthetic organic chemicals, and the use of less toxic starting materials. Green chemistry would like to increase the efficiency of synthetic methods, use less toxic solvents, reduce the stages of synthetic routes, and minimize waste as far as practically possible. In this way, organic synthesis will be part of the effort for sustainable development. Green chemistry also encourages research and innovation on alternatives for many practical aspects of organic synthesis in university and institutional research laboratories. By changing the methodologies of organic synthesis, health and safety will be advanced at the small-scale laboratory level and also extended to large-scale industrial production through new techniques. Three key developments in green chemistry include the use of supercritical carbon dioxide as a green solvent, aqueous hydrogen peroxide as an oxidising agent, and the use of hydrogen in asymmetric synthesis. Green chemistry also focuses on replacing traditional methods of heating with modern methods such as microwaves, so that the carbon footprint is reduced as far as possible. A further benefit is that green chemistry reduces environmental pollution through the use of less toxic reagents, the minimization of waste, and more biodegradable byproducts. In this paper, some of the basic principles, approaches, and early achievements of green chemistry are considered, together with a summary of the green chemistry principles. A discussion of the E-factor, the old and new syntheses of ibuprofen, microwave techniques, and some recent advancements is also included.
Keywords: energy, e-factor, carbon footprint, micro-wave, sono-chemistry, advancement
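Since the abstract highlights the E-factor, here is its definition in code form (the masses are illustrative placeholders, not data from the paper):

```python
def e_factor(total_input_kg: float, product_kg: float) -> float:
    """Sheldon's E-factor: kg of waste generated per kg of product."""
    return (total_input_kg - product_kg) / product_kg

# A process consuming 120 kg of inputs to make 20 kg of product
# wastes 100 kg, i.e. E = 5 -- lower is greener.
print(e_factor(120.0, 20.0))  # -> 5.0
```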
Procedia PDF Downloads 306
5645 The Next Generation Neutrinoless Double-Beta Decay Experiment nEXO
Authors: Ryan Maclellan
Abstract:
The nEXO Collaboration is designing a very large detector for neutrinoless double beta decay of Xe-136. The nEXO detector is rooted in the current EXO-200 program, which has reached a sensitivity for the half-life of the decay of 1.9x10^25 years with an exposure of 99.8 kg-y. The baseline nEXO design assumes 5 tonnes of liquid xenon, enriched in the mass 136 isotope, within a time projection chamber. The detector is being designed to reach a half-life sensitivity of > 5x10^27 years covering the inverted neutrino mass hierarchy, with 5 years of data. We present the nEXO detector design, the current status of R&D efforts, and the physics case for the experiment.
Keywords: double-beta, Majorana, neutrino, neutrinoless
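For context, a textbook counting estimate (not nEXO's published sensitivity analysis) shows why exposure Mt drives the half-life reach:

```latex
T_{1/2} \;\gtrsim\; \ln 2 \cdot \varepsilon \cdot \frac{a\,N_A}{W} \cdot \frac{M\,t}{N_{\mathrm{up}}}
```

where a is the isotopic abundance, W the molar mass, ε the detection efficiency, M·t the exposure, and N_up the upper limit on signal counts; in the background-free case the reach grows linearly with exposure, while with non-negligible background it degrades to a √(Mt/(B·ΔE)) scaling.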
Procedia PDF Downloads 423
5644 Analytical Solution for Stellar Distance Based on Photon Dominated Cosmic Expansion Model
Authors: Xiaoyun Li, Suoang Longzhou
Abstract:
This paper derives the analytical solution for stellar distance as a function of redshift based on the photon-dominated universe expansion model. First, it calculates the stellar separation speed and the farthest distance of observable stars via simulation. Then the analytical solution for stellar distance as a function of redshift is derived. It shows that when the redshift is large, the stellar distance (and its separation speed) is not proportional to the redshift due to relativistic effects. It also reveals the relationship between stellar age and redshift. The correctness of the analytical solution is verified by the latest astronomical observations of type Ia supernovae in 2020.
Keywords: redshift, cosmic expansion model, analytical solution, stellar distance
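As a sketch of what such a closed form can look like, assume a flat, photon (radiation)-dominated FRW model with H(z) = H_0(1+z)^2 (our assumption, which may differ from the authors' exact model); the comoving distance then integrates analytically:

```latex
D_C(z) \;=\; c\int_0^z \frac{dz'}{H(z')}
       \;=\; \frac{c}{H_0}\int_0^z \frac{dz'}{(1+z')^{2}}
       \;=\; \frac{c}{H_0}\,\frac{z}{1+z}
```

This grows sub-linearly in z, matching the abstract's remark that distance is not proportional to redshift at large z, and it saturates at the finite horizon c/H_0 as z → ∞, consistent with a farthest observable distance.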
Procedia PDF Downloads 161
5643 ARGO: An Open Designed Unmanned Surface Vehicle Mapping Autonomous Platform
Authors: Papakonstantinou Apostolos, Argyrios Moustakas, Panagiotis Zervos, Dimitrios Stefanakis, Manolis Tsapakis, Nektarios Spyridakis, Mary Paspaliari, Christos Kontos, Antonis Legakis, Sarantis Houzouris, Konstantinos Topouzelis
Abstract:
For years, unmanned and remotely operated robots have been used as tools in industry, research, and education. The rapid development and miniaturization of sensors that can be attached to remotely operated vehicles in recent years has allowed industry leaders and researchers to utilize them as an affordable means for data acquisition in air, land, and sea. Despite the recent developments in ground and unmanned airborne vehicles, only a small number of Unmanned Surface Vehicle (USV) platforms are targeted at mapping and monitoring environmental parameters for research and industry purposes. The ARGO project developed an open-design USV equipped with a multi-level control hardware architecture and state-of-the-art sensors and payloads for the autonomous monitoring of environmental parameters in large sea areas. The proposed USV is a catamaran-type vehicle controlled over a wireless radio link (5G) for long-range mapping capabilities and control from a ground-based control station. The ARGO USV has propulsion control using 2x fully redundant electric trolling motors with active vector thrust for omnidirectional movement, navigation with an open-source autopilot system and a high-accuracy GNSS device, and communication over a 2.4 GHz digital link able to provide 20 km of Line-of-Sight (LoS) range. The 3-meter dual-hull design and composite structure offer well above 80 kg of usable payload capacity. Furthermore, sun and friction energy harvesting methods provide clean energy to the propulsion system. The design is highly modular: each component or payload can be replaced or modified according to the desired task (industrial or research). The system can be equipped with a multiparameter sonde measuring up to 20 water parameters simultaneously, such as conductivity, salinity, turbidity, dissolved oxygen, etc. Furthermore, a high-end multibeam echo sounder can be installed at a specific boat datum for shallow-water high-resolution seabed mapping. The system is designed to operate in the Aegean Sea. The developed USV is planned to be utilized as a system for autonomous data acquisition, mapping, and monitoring of bathymetry and various environmental parameters. The ARGO USV can operate in small or large ports, with high maneuverability and the endurance to map large geographical extents at sea. The system presents state-of-the-art solutions in the following areas: i) on-board/real-time data processing/analysis capabilities, ii) an energy-independent and environmentally friendly platform made entirely using the latest aeronautical and marine materials, iii) the integration of advanced-technology sensors, all in one system (photogrammetric and radiometric footprint, as well as its connection with various environmental and inertial sensors), and iv) the information management application. The ARGO web-based application enables the system to depict the results of the data acquisition process in near real-time. All the recorded environmental variables and indices are presented, allowing users to remotely access all the raw and processed information using the implemented web-based GIS application.
Keywords: monitor marine environment, unmanned surface vehicle, mapping bathymetry, sea environmental monitoring
Procedia PDF Downloads 139
5642 Integrated Geophysical Surveys for Sinkhole and Subsidence Vulnerability Assessment, in the West Rand Area of Johannesburg
Authors: Ramoshweu Melvin Sethobya, Emmanuel Chirenje, Mihlali Hobo, Simon Sebothoma
Abstract:
The recent surge in residential infrastructure development around the metropolitan areas of South Africa has necessitated thorough geotechnical assessments prior to site development to ensure human and infrastructure safety. This paper appraises the success of applying multi-method geophysical techniques to the delineation of sinkhole vulnerability in a residential landscape. ERT, MASW, VES, magnetic, and gravity surveys were conducted to assist in mapping sinkhole vulnerability, using an existing sinkhole as a constraint, at Venterspost town, west of Johannesburg. Combining different geophysical techniques and integrating their results proved useful for delineating the lithologic succession around the sinkhole locality and for determining the geotechnical characteristics of each layer and its contribution to the development of sinkholes, subsidence, and cavities in the vicinity of the site. The study results have also assisted in determining the possible depth extension of the currently existing sinkhole and the location of sites where other similar karstic features and sinkholes could form. Results of the ERT, VES, and MASW surveys have uncovered dolomitic bedrock at varying depths around the sites, exhibiting high resistivity values in the range 2500-8000 ohm.m and corresponding high velocities in the range 1000-2400 m/s. The dolomite layer was found to be overlain by a weathered, chert-poor dolomite layer, which has resistivities in the range 250-2400 ohm.m and velocities ranging from 500-600 m/s, and in which the large sinkhole collapsed. A 2.5D high-resolution shear wave velocity (Vs) map of the study area was compiled from 2D profiles of MASW data, offering insights into the prevailing lithological setup conducive to the formation of various types of karstic features around the site. 3D magnetic models of the site highlighted the regions of possible subsurface interconnection between the currently existing large sinkhole and the other subsidence feature at the site. A number of depth slices were used to detail the conditions near the sinkhole as depth increases. Gravity surveys mapped the possible formational pathways for the development of new karstic features around the site. The combination and correlation of different geophysical techniques proved useful in delineating the site's geotechnical characteristics and mapping the possible depth extent of the currently existing sinkhole.
Keywords: resistivity, magnetics, sinkhole, gravity, karst, delineation, VES
Procedia PDF Downloads 80
5641 Deployment of Attack Helicopters in Conventional Warfare: The Gulf War
Authors: Mehmet Karabekir
Abstract:
Attack helicopters (AHs) are usually deployed in conventional warfare to destroy the armored and mechanized forces of the enemy. In addition, AHs are able to perform various other tasks in deep and close operations: intelligence, surveillance, reconnaissance, air assault operations, and search and rescue operations. Apache helicopters were effectively employed in the Gulf Wars and contributed to the success of the campaign by destroying a large number of armored and mechanized vehicles of the Iraqi Army. The purpose of this article is to discuss the deployment of AHs in conventional warfare in the light of the Gulf Wars. First, the employment of AHs in deep and close operations is addressed with regard to doctrine. Second, the doctrinal and tactical usage of the US armed forces' AH-64 in the 1st and 2nd Gulf Wars is discussed.
Keywords: attack helicopter, conventional warfare, gulf wars
Procedia PDF Downloads 473
5640 Sparse Principal Component Analysis: A Least Squares Approximation Approach
Authors: Giovanni Merola
Abstract:
Sparse Principal Components Analysis aims to find principal components with few non-zero loadings. We derive such sparse solutions by adding a genuine sparsity requirement to the original Principal Components Analysis (PCA) objective function. This approach differs from others because it preserves PCA's original optimality: uncorrelatedness of the components and least squares approximation of the data. To identify the best subset of non-zero loadings, we propose a branch-and-bound search and an iterative elimination algorithm. The latter algorithm finds sparse solutions with large loadings and can be run without specifying in advance the cardinality of the loadings and the number of components to compute. We give thorough comparisons with the existing sparse PCA methods and several examples on real datasets.
Keywords: SPCA, uncorrelated components, branch-and-bound, backward elimination
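A compact sketch of the iterative (backward) elimination idea for a single component (our simplification of the algorithm described above, not the authors' exact procedure):

```python
import numpy as np

def sparse_pc_backward(X, k):
    """Start from the full least-squares first component and
    repeatedly zero the smallest-magnitude loading, refitting on the
    surviving variables, until only k non-zero loadings remain."""
    X = X - X.mean(axis=0)
    active = list(range(X.shape[1]))
    while len(active) > k:
        # Leading right singular vector on the active variables.
        _, _, Vt = np.linalg.svd(X[:, active], full_matrices=False)
        active.pop(int(np.argmin(np.abs(Vt[0]))))
    loadings = np.zeros(X.shape[1])
    _, _, Vt = np.linalg.svd(X[:, active], full_matrices=False)
    loadings[active] = Vt[0]
    return loadings

rng = np.random.default_rng(1)
print(sparse_pc_backward(rng.normal(size=(100, 8)), k=3).round(3))
```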
Procedia PDF Downloads 381
5639 Industrial Wastewater from Paper Mills Used for Biofuel Production and Soil Improvement
Authors: Karin M. Granstrom
Abstract:
Paper mills produce wastewater with a high content of organic substances. Treatment usually consists of sedimentation, biological treatment in activated sludge basins, and chemical precipitation. The resulting sludges are currently a waste problem, deposited in landfills or used as low-grade fuels for incineration. There is a growing awareness of the need for energy efficiency and environmentally sound management of sludge. A resource-efficient method would be to digest the wastewater sludges anaerobically to produce biogas, refine the biogas to biomethane for use in the transportation sector, and utilize the resulting digestate for soil improvement. The biomethane yield of pulp and paper wastewater sludge is comparable to that of straw or manure. As a bonus, the digestate has improved dewaterability compared to the feedstock biosludge. The limitations of this process are predominantly its weak economic viability, which requires sufficiently large-scale paper production to supply the necessary large amounts of wastewater sludge, as well as the unresolved questions on the certifiability of the digestate and thus its sales price. A way to improve the practical and economic feasibility of using paper mill wastewater for biomethane production and soil improvement is to co-digest it with other feedstocks. In this study, pulp and paper sludge were co-digested with (1) silage and manure, (2) municipal sewage sludge, (3) food waste, or (4) microalgae. Biomethane yield analysis was performed in 500 ml batch reactors, using an Automatic Methane Potential Test System at thermophilic temperature, with a 20-day test duration. The results show that (1) the harvesting season of grass silage and manure collection was an important factor for methane production, with spring feedstocks producing much more than autumn feedstock, and pulp mill sludge benefitting the most from co-digestion; (2) pulp and paper mill sludge is a suitable co-substrate to add when a high nitrogen content causes impaired biogas production due to ammonia inhibition; (3) the combination of food waste and paper sludge gave a higher methane yield than either of the substrates digested separately; (4) pure microalgae gave the highest methane yield. In conclusion, although pulp and paper mills are an almost untapped resource for biomethane production, their wastewater is a suitable feedstock for such a process. Furthermore, through co-digestion, pulp and paper mill wastewater and mill sludges can aid biogas production from more nutrient-rich waste streams from other industries. Such co-digestion also enhances the soil improvement properties of the residue digestate.
Keywords: anaerobic, biogas, biomethane, paper, sludge, soil
Procedia PDF Downloads 259
5638 A Concept in Addressing the Singularity of the Emerging Universe
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times has been studied; this is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion led to a reduction in its temperature and density. This is evidenced through the cosmic microwave background and the structure of the universe at large scales. However, extrapolating back further from this early state reaches a singularity which cannot be explained by modern physics, and there the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal a uniform temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a “neutral state”, with an energy level referred to as the “base energy”, capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture, the origin of the base energy should also be identified. This matter is the subject of the first study in the series, “A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing”, where it is discussed in detail. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy as one of the main building blocks of this universe.
Keywords: big bang, cosmic inflation, birth of universe, energy creation
Procedia PDF Downloads 89
5637 Efficient Model Selection in Linear and Non-Linear Quantile Regression by Cross-Validation
Authors: Yoonsuh Jung, Steven N. MacEachern
Abstract:
The check loss function is used to define quantile regression. In cross-validation, it is also employed as a validation function when the underlying truth is unknown. However, our empirical study indicates that validation with the check loss often leads to choosing overfitted models. In this work, we suggest a modified, or L2-adjusted, check loss which rounds the sharp corner in the middle of the check loss. To some extent, this has a large effect in guarding against overfitted models. Through various simulation settings of linear and non-linear regressions, the improvement of the check loss by the L2 adjustment is empirically examined. The adjustment is devised to shrink to zero as the sample size grows.
Keywords: cross-validation, model selection, quantile regression, tuning parameter selection
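Both losses are easy to state in code; the quadratic rounding below matches the check loss in value and slope at ±δ and is one natural reading of the L2 adjustment (the paper's exact form may differ):

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

def l2_adjusted_check_loss(u, tau, delta=0.1):
    """Round the corner at zero with a quadratic on |u| <= delta;
    delta is meant to shrink to zero as the sample size grows."""
    u = np.asarray(u, dtype=float)
    quad = u**2 / (4 * delta) + (tau - 0.5) * u + delta / 4
    return np.where(np.abs(u) <= delta, quad, check_loss(u, tau))
```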
Procedia PDF Downloads 438
5636 1/Sigma Term Weighting Scheme for Sentiment Analysis
Authors: Hanan Alshaher, Jinsheng Xu
Abstract:
Large amounts of data on the web can provide valuable information. For example, product reviews help business owners measure customer satisfaction. Sentiment analysis classifies texts into two polarities: positive and negative. This paper examines movie reviews and tweets using a new term weighting scheme, called one-over-sigma (1/sigma), on benchmark datasets for sentiment classification. The proposed method aims to improve the performance of sentiment classification. The results show that 1/sigma is more accurate than the popular term weighting schemes. In order to verify whether the entropy reflects the discriminating power of terms, we report a comparison of entropy values for different term weighting schemes.
Keywords: 1/sigma, natural language processing, sentiment analysis, term weighting scheme, text classification
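The scheme's name suggests weighting each term by the reciprocal of a dispersion measure; a sketch under that reading (the paper's exact definition of sigma may differ, e.g. dispersion over class-conditional frequencies):

```python
import numpy as np

def one_over_sigma_weights(tf, eps=1e-12):
    """tf: documents x terms frequency matrix. Each term's column is
    scaled by 1/sigma of its frequencies -- our reading of 1/sigma,
    offered only as an illustration of the weighting idea."""
    sigma = tf.std(axis=0)
    return tf / (sigma + eps)
```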
Procedia PDF Downloads 204
5635 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, it is evident that there is a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end, in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep, and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools, before using it for their own product development efforts. A complementary website is used to enhance the physical toolkit, providing more examples of the tools being used, as well as deeper discussions on each of the topics, allowing teams to adapt the process to their skills, preferences, and product type. Teams found the solution very useful and intuitive and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as more examples of how it can be used, creating a loop which helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers’ needs and wants.
Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer
Procedia PDF Downloads 108
5634 Detect QOS Attacks Using Machine Learning Algorithm
Authors: Christodoulou Christos, Politis Anastasios
Abstract:
A large majority of users favour wireless LAN connections since they are so simple to use. A wireless network, however, can be the target of numerous attacks. Class hijacking is a well-known attack that is fairly simple to execute and has significant repercussions for users. Statistical flow analysis based on machine learning (ML) techniques is a promising categorization methodology. In a given dataset, which in the context of this paper is a collection of components representing frames belonging to various flows, ML can offer a technique for identifying and characterizing structural patterns. It is possible to classify individual packets using these patterns, to identify fraudulent conduct such as class hijacking, and to take the necessary action as a result. In this study, we explore a way to use machine learning approaches to thwart this attack.
Keywords: wireless LAN, quality of service, machine learning, class hijacking, EDCA remapping
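A sketch of the flow-classification step (the data are a synthetic stand-in; the feature meanings and labels are invented, and real ones would come from captured EDCA access-category statistics):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 5000
# Columns stand in for per-flow stats: frame size, inter-arrival
# time, access category, retry rate.
X = rng.normal(size=(n, 4))
# Synthetic labels: hijacked flows skew two of the features.
y = (X[:, 0] - 0.8 * X[:, 2] + rng.normal(0, 0.5, n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # flow-level accuracy
```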
Procedia PDF Downloads 61
5633 Exploration of in-situ Product Extraction to Increase Triterpenoid Production in Saccharomyces Cerevisiae
Authors: Mariam Dianat Sabet Gilani, Lars M. Blank, Birgitta E. Ebert
Abstract:
Plant-derived lupane-type, pentacyclic triterpenoids are biologically active compounds that are highly interesting for applications in medical, pharmaceutical, and cosmetic industries. Due to the low abundance of these valuable compounds in their natural sources, and the environmentally harmful downstream process, alternative production methods, such as microbial cell factories, are investigated. Engineered Saccharomyces cerevisiae strains, harboring the heterologous genes for betulinic acid synthesis, can produce up to 2 g L-1 triterpenoids, showing high potential for large-scale production of triterpenoids. One limitation of the microbial synthesis is the intracellular product accumulation. It not only makes cell disruption a necessary step in the downstream processing but also limits productivity and product yield per cell. To overcome these restrictions, the aim of this study is to develop an in-situ extraction method, which extracts triterpenoids into a second organic phase. Such a continuous or sequential product removal from the biomass keeps the cells in an active state and enables extended production time or biomass recycling. After screening of twelve different solvents, selected based on product solubility, biocompatibility, as well as environmental and health impact, isopropyl myristate (IPM) was chosen as a suitable solvent for in-situ product removal from S. cerevisiae. Impedance-based single-cell analysis and off-gas measurement of carbon dioxide emission showed that cell viability and physiology were not affected by the presence of IPM. Initial experiments demonstrated that after the addition of 20 vol % IPM to cultures in the stationary phase, 40 % of the total produced triterpenoids were extracted from the cells into the organic phase. In future experiments, the application of IPM in a repeated batch process will be tested, where IPM is added at the end of each batch run to remove triterpenoids from the cells, allowing the same biocatalysts to be used in several sequential batch steps. Due to its high biocompatibility, the amount of IPM added to the culture can also be increased to more than 20 vol % to extract more than 40 % triterpenoids in the organic phase, allowing the cells to produce more triterpenoids. This highlights the potential for the development of a continuous large-scale process, which allows biocatalysts to produce intracellular products continuously without the necessity of cell disruption and without limitation of the cell capacity.
Keywords: betulinic acid, biocompatible solvent, in-situ extraction, isopropyl myristate, process development, secondary metabolites, triterpenoids, yeast
Procedia PDF Downloads 153
5632 Long-Term Variabilities and Tendencies in the Zonally Averaged TIMED-SABER Ozone and Temperature in the Middle Atmosphere over 10°N-15°N
Authors: Oindrila Nath, S. Sridharan
Abstract:
Long-term (2002-2012) temperature and ozone measurements by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) satellite, zonally averaged over 10°N-15°N, are used to study their long-term changes and their responses to the solar cycle, the quasi-biennial oscillation (QBO), and the El Niño Southern Oscillation (ENSO). The region is selected to provide more accurate long-term trends and variabilities, which were not possible earlier with lidar measurements over Gadanki (13.5°N, 79.2°E), as those are limited to cloud-free nights, whereas continuous data sets of SABER temperature and ozone are available. Regression analysis of temperature shows a cooling trend of 0.5 K/decade in the stratosphere and of 3 K/decade in the mesosphere. Ozone shows a statistically significant decreasing trend of 1.3 ppmv per decade in the mesosphere, although there is a small positive trend in the stratosphere at 25 km; other than this, no significant ozone trend is observed in the stratosphere. A negative ozone-QBO response (0.02 ppmv/QBO), a positive ozone-solar cycle response (0.91 ppmv/100 SFU), and a negative response to ENSO (0.51 ppmv/SOI) are found mainly in the mesosphere, whereas a positive ozone response to ENSO (0.23 ppmv/SOI) is pronounced in the stratosphere (20-30 km). The temperature response to the solar cycle is most positive (3.74 K/100 SFU) in the upper mesosphere, and its response to ENSO is negative around 80 km and positive around 90-100 km, while its response to the QBO is insignificant at most heights. The composite monthly mean of the ozone volume mixing ratio shows maximum values, around 10 ppmv, during the pre-monsoon and post-monsoon seasons in the middle stratosphere (25-30 km) and in the upper mesosphere (85-95 km). The composite monthly mean of temperature shows a semi-annual variation with large values (~250-260 K) in equinox months and lower values in solstice months in the upper stratosphere and lower mesosphere (40-55 km), whereas the SAO becomes weaker above 55 km. The semi-annual variation appears again at 80-90 km, with large values in spring equinox and winter months. In the upper mesosphere (90-100 km), low temperatures (~170-190 K) prevail in all months except September, when the temperature is slightly higher. The height profiles of the amplitudes of the semi-annual and annual oscillations in ozone show maximum values of 6 ppmv and 2.5 ppmv, respectively, in the upper mesosphere (80-100 km), whereas the SAO and AO in temperature show maximum values of 5.8 K and 4.6 K in the lower and middle mesosphere, around 60-85 km. The phase profiles of both the SAO and AO show downward progressions. These results are being compared with long-term lidar temperature measurements over Gadanki (13.5°N, 79.2°E), and the results obtained will be presented during the meeting.
Keywords: trends, QBO, solar cycle, ENSO, ozone, temperature
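Such responses are typically obtained by multiple linear regression of the deseasonalized series on a trend term and proxy indices; a schematic version (the arrays are placeholders for the SABER series and the F10.7, QBO, and SOI indices, not real data):

```python
import numpy as np

months = 132                       # 2002-2012
t = np.arange(months)
ozone = np.random.randn(months)    # stand-in for the ozone anomaly series
f107 = np.random.randn(months)     # solar proxy (SFU)
qbo = np.random.randn(months)      # QBO index
soi = np.random.randn(months)      # ENSO index

# Least-squares fit: intercept + trend (per decade) + proxy responses.
A = np.column_stack([np.ones(months), t / 120.0, f107, qbo, soi])
coef, *_ = np.linalg.lstsq(A, ozone, rcond=None)
print("trend/decade:", coef[1], "solar:", coef[2],
      "QBO:", coef[3], "ENSO:", coef[4])
```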
Procedia PDF Downloads 410
5631 Structural and Biochemical Characterization of Red and Green Emitting Luciferase Enzymes
Authors: Wael M. Rabeh, Cesar Carrasco-Lopez, Juliana C. Ferreira, Pance Naumov
Abstract:
Bioluminescence, the emission of light from a biological process, is found in various living organisms including bacteria, fireflies, beetles, fungi, and different marine organisms. Luciferase is an enzyme that catalyzes a two-step oxidation of luciferin in the presence of Mg2+ and ATP to produce oxyluciferin, releasing energy in the form of light. The luciferase assay is used in biological research and clinical applications for in vivo imaging, cell proliferation assays, and protein folding and secretion analysis. The luciferase enzyme consists of two domains: a large N-terminal domain (residues 1-436) connected to a small C-terminal domain (residues 440-544) by a flexible loop that functions as a hinge for opening and closing the active site. The two domains are separated by a large cleft housing the active site, which closes after binding the substrates, luciferin and ATP. Even though all insect luciferases catalyze the same chemical reaction and share 50% to 90% sequence homology and high structural similarity, they emit light of different colors, from green at 560 nm to red at 640 nm. Currently, the majority of structural and biochemical studies have been conducted on green-emitting firefly luciferases. To address the color emission mechanism, we expressed and purified two luciferase enzymes with blue-shifted green and red emission from the indigenous Brazilian species Amydetes fanestratus and Phrixothrix, respectively. The two enzymes naturally emit light of different colors, and they are an excellent system for studying the color-emission mechanism of luciferases, as the currently proposed mechanisms are based on mutagenesis studies. Using a vapor-diffusion method and a high-throughput approach, we crystallized both enzymes and solved their crystal structures at 1.7 Å and 3.1 Å resolution, respectively, using X-ray crystallography. The free enzyme adopted two open conformations in the crystallographic unit cell that are different from the previously characterized firefly luciferase. The blue-shifted green luciferase crystallized as a monomer, similar to other luciferases reported in the literature, while the red luciferase crystallized as an octamer and was also purified as an octamer in solution. The octamer conformation is the first of its kind for any insect luciferase and might be related to the red color emission. Structurally designed mutations confirmed the importance of the transition between the open and closed conformations in the fine-tuning of the color, and the characterization of other interesting mutants is underway.
Keywords: bioluminescence, enzymology, structural biology, x-ray crystallography
Procedia PDF Downloads 326
5630 Audio-Visual Entrainment and Acupressure Therapy for Insomnia
Authors: Mariya Yeldhos, G. Hema, Sowmya Narayanan, L. Dhiviyalakshmi
Abstract:
Insomnia is one of the most prevalent psychological disorders worldwide. Among the deficiencies of current treatments of insomnia are side effects in the case of sleeping pills and high costs in the case of psychotherapeutic treatment. In this paper, we propose a device which provides a combination of audio-visual entrainment and acupressure-based compression therapy for insomnia. It offers a drug-free treatment of insomnia through a user-friendly and portable device that enables relaxation of the brain and muscles, with advantages such as low cost and wide accessibility to a large number of people. The tools adapted for the treatment of insomnia are: audio (continuous exposure to binaural beats of a particular frequency in the audible range), visual (flashes of LED light), and acupressure points GB-20, GV-16, and B-10.
Keywords: insomnia, acupressure, entrainment, audio-visual entrainment
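The binaural-beat component is simple to illustrate: each ear receives a slightly detuned tone, and the brain perceives their difference frequency. A sketch (the 6 Hz theta-range beat is illustrative, not the device's specification):

```python
import numpy as np

fs = 44100
t = np.linspace(0, 10, fs * 10, endpoint=False)   # 10 s of audio
left = np.sin(2 * np.pi * 200 * t)                # 200 Hz, left ear
right = np.sin(2 * np.pi * 206 * t)               # 206 Hz, right ear
stereo = np.stack([left, right], axis=1).astype(np.float32)
# Save with e.g. soundfile: sf.write("beat_6hz.wav", stereo, fs)
```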
Procedia PDF Downloads 429