Search results for: fast motion estimation; low-complexity motion estimation
138 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks
Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo
Abstract:
In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located on several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of the products to the workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin.
The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the inner workload maximum is calculated across all workstations in the center and the outer minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we find and develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm
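The min-max balancing at the second echelon can be illustrated with a simple greedy load-balancing heuristic (a sketch with hypothetical demand figures, not the authors' actual algorithm): assign each product, largest demand first, to the currently least-loaded workstation, approximately minimizing the maximum workload.

```python
import heapq

def min_max_assign(demands, n_stations):
    """Greedy heuristic for the min-max load-balancing assignment:
    place each product (largest demand first) on the currently
    least-loaded workstation, approximately minimizing the maximum
    workload across stations (LPT-style scheduling heuristic)."""
    loads = [(0.0, s) for s in range(n_stations)]  # (load, station) min-heap
    heapq.heapify(loads)
    assignment = {}
    for prod, d in sorted(demands.items(), key=lambda kv: -kv[1]):
        load, s = heapq.heappop(loads)
        assignment[prod] = s
        heapq.heappush(loads, (load + d, s))
    max_load = max(l for l, _ in loads)
    return assignment, max_load

demands = {"A": 8, "B": 7, "C": 6, "D": 5, "E": 4}  # hypothetical demand rates
assignment, max_load = min_max_assign(demands, n_stations=2)
print(max_load)
```

For the demands above, the greedy makespan is 17 while the optimal split ({8, 7} vs. {6, 5, 4}) gives 15, illustrating why the paper resorts to improving local optima rather than stopping at a first heuristic solution.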
Procedia PDF Downloads 228
137 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where enemy forces are, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Geovisualisation has thus become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management for the war field, situational awareness, effective planning, monitoring, and other tasks. For example, a 3D visualization of war field data contributes to intelligence analysis, evaluation of post-mission outcomes, and creation of predictive models to enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth estimation based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images. It introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEM) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, together with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks.
One network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on non-annotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate decision-making with GIS for localizing troops, positioning the enemy, and assessing terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune their strategies and distribute resources proficiently.
Keywords: depth, deep learning, geovisualisation, satellite images
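The imagery-DEM fusion step can be sketched schematically (a NumPy toy with assumed feature shapes and weights standing in for the trained encoder-decoder branches, not the paper's network): feature maps from the two branches are concatenated along the channel axis and mixed by a learned 1x1 convolution.

```python
import numpy as np

def fuse_features(imagery_feat, dem_feat, w):
    """Schematic fusion of two encoder-decoder feature maps:
    concatenate along the channel axis, then mix channels with a
    learned 1x1 convolution (here a plain matrix multiply)."""
    fused = np.concatenate([imagery_feat, dem_feat], axis=0)  # (C1+C2, H, W)
    c, h, wdt = fused.shape
    out = np.tensordot(w, fused.reshape(c, -1), axes=1)  # (C_out, H*W)
    return out.reshape(-1, h, wdt)

rng = np.random.default_rng(0)
imagery = rng.normal(size=(16, 8, 8))   # radar/optical branch features
dem = rng.normal(size=(16, 8, 8))       # DEM branch features
w = rng.normal(size=(8, 32))            # hypothetical 1x1-conv weights
depth_feat = fuse_features(imagery, dem, w)
print(depth_feat.shape)  # (8, 8, 8)
```

In the actual model the 1x1-conv weights would be learned jointly with both branches; the sketch only shows the data flow of the fusion.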
Procedia PDF Downloads 131
136 Theoretical Study on the Visible-Light-Induced Radical Coupling Reactions Mediated by Charge Transfer Complex
Authors: Lishuang Ma
Abstract:
The charge transfer (CT) complex, also known as an electron donor-acceptor (EDA) complex, has received increasing attention in the synthetic chemistry community because it can absorb visible light through its intermolecular charge-transfer excited states, enabling various catalyst-free photochemical transformations under mild visible-light conditions. However, a number of fundamental questions remain ambiguous, such as the origin of the visible-light absorption, the photochemical and photophysical properties of the CT complex, and the detailed mechanism of the radical coupling pathways mediated by the CT complex. These are critical factors for the target-specific design and synthesis of new types of CT complexes. To this end, theoretical investigations were performed in our group to answer these questions based on multiconfigurational perturbation theory. The photo-induced fluoroalkylation reactions mediated by CT complexes, which are formed by the association of an acceptor perfluoroalkyl halide RF−X (X = Br, I) with a suitable donor molecule such as the β-naphtholate anion, were chosen as a paradigm example in this work. First, spectrum simulations were carried out with both CASPT2//CASSCF/PCM and TD-DFT/PCM methods. The computational results showed that the broad absorption in the visible range (360-550 nm) of the CT complexes originates from the 1(σπ*) excitation, accompanied by an intermolecular electron transfer, which was also found to be closely related to the aggregate states of the donor and acceptor. Moreover, charge translocation analysis showed that a CT complex with larger charge transfer in the ground state exhibits smaller charge transfer in the 1(σπ*) excited state, causing a relative blue shift. Then, the excited-state potential energy surface (PES) was calculated at the CASPT2//CASSCF(12,10)/PCM level of theory to explore the photophysical properties of the CT complexes.
The photo-induced C-X (X = I, Br) bond cleavage was found to occur in the triplet state, which is accessible through a fast intersystem crossing (ISC) process controlled by the strong spin-orbit coupling resulting from the heavy iodine and bromine atoms. Importantly, this rapid fragmentation process can compete with and suppress the backward electron transfer (BET) event, facilitating the subsequent effective photochemical transformations. Finally, the radical coupling pathways were also inspected, showing that the radical chain propagation pathway proceeds readily with a small energy barrier of no more than 3.0 kcal/mol, which is the key factor promoting the efficiency of the photochemical reactions induced by CT complexes. In conclusion, theoretical investigations were performed to explore the photophysical and photochemical properties of the CT complexes, as well as the mechanism of radical coupling reactions mediated by the CT complex. The computational results and findings in this work can provide critical insights into the mechanism-based design of new types of EDA complexes.
Keywords: charge transfer complex, electron transfer, multiconfigurational perturbation theory, radical coupling
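The significance of a ~3.0 kcal/mol propagation barrier can be put in perspective with transition-state theory (a back-of-the-envelope Eyring estimate, not a calculation from the paper):

```python
import math

def eyring_rate(dg_kcal, T=298.15):
    """Eyring rate constant k = (k_B*T/h) * exp(-dG_act / (R*T)),
    showing why a ~3 kcal/mol barrier makes chain propagation fast."""
    kB, h = 1.380649e-23, 6.62607015e-34  # SI units
    R = 1.987204e-3                        # gas constant in kcal/(mol*K)
    return (kB * T / h) * math.exp(-dg_kcal / (R * T))

k = eyring_rate(3.0)
print(f"{k:.2e} s^-1")  # on the order of 1e10 s^-1 at room temperature
```

A rate constant on the order of 10^10 s^-1 at room temperature illustrates why such a propagation step can outcompete slower deactivation channels.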
Procedia PDF Downloads 144
135 Conservation Detection Dogs to Protect Europe's Native Biodiversity from Invasive Species
Authors: Helga Heylen
Abstract:
With dogs saving wildlife in New Zealand since 1890 and governments in Africa, Australia, and Canada trusting them to deliver the best results, Conservation Dogs Ireland wants to introduce more detection dogs to protect Europe's native wildlife. Conservation detection dogs are fast, portable, and endlessly trainable. They are a cost-effective, highly sensitive, and non-invasive way to detect protected and invasive species and wildlife disease. Conservation dogs find targets up to 40 times faster than any other method. They give results instantly, with near-perfect accuracy. They can search for multiple targets simultaneously, with no reduction in efficacy. The European Red List indicates that the decline in biodiversity has been most rapid in the past 50 years, and the risk of extinction has never been higher. Two examples of major threats dogs are trained to tackle are: (i) Japanese knotweed (Fallopia japonica) is not only a serious threat to ecosystems, crops, and structures like bridges and roads - it can wipe out the entire value of a house. The property industry and homeowners are only just waking up to the full extent of the nightmare. When those working in construction on the roads move topsoil with a trace of Japanese knotweed, it suffices to start a new colony. Japanese knotweed grows up to 7 cm a day. It can stay dormant and resprout after 20 years. In the UK, the cost of removing Japanese knotweed from the London Olympic site in 2012 was around £70m (€83m). UK banks already no longer lend on a house that has Japanese knotweed on-site. Legally, landowners are now obliged to excavate Japanese knotweed and have it removed to a landfill. More and more, we see Japanese knotweed grow where a new house has been constructed and topsoil has been brought in. Conservation dogs are trained to detect small fragments of any part of the plant on sites and in topsoil. (ii) Zebra mussels (Dreissena polymorpha) are a threat to many waterways in the world.
They colonize rivers, canals, docks, lakes, reservoirs, water pipes, and cooling systems. They live up to 3 years and release up to one million eggs each year. Zebra mussels attach to surfaces like rocks, anchors, boat hulls, intake pipes, and boat engines. They cause changes in nutrient cycles, reduction of plankton, and increased plant growth around lake edges, leading to the decline of Europe's native mussel and fish populations. There is no solution, only costly measures to keep them at bay. With many interconnected networks of waterways, they have spread uncontrollably. Conservation detection dogs detect the zebra mussel from its early larval stage, which is still invisible to the human eye. Detection dogs are more thorough and cost-effective than any other conservation method and will greatly complement and speed up the work of biologists, surveyors, developers, ecologists, and researchers.
Keywords: native biodiversity, conservation detection dogs, invasive species, Japanese knotweed, zebra mussel
Procedia PDF Downloads 197
134 Polymer Matrices Based on Natural Compounds: Synthesis and Characterization
Authors: Sonia Kudlacik-Kramarczyk, Anna Drabczyk, Dagmara Malina, Bozena Tyliszczak, Agnieszka Sobczak-Kupiec
Abstract:
Introduction: In the preparation of polymer materials, compounds of natural origin are currently gaining more and more interest. This is particularly noticeable in the case of the synthesis of materials considered for biomedical use, where the selected material has to meet many requirements: it should be characterized by non-toxicity, biodegradability, and biocompatibility. Therefore, special attention is directed to substances such as polysaccharides, proteins, or the basic building components of proteins, i.e., amino acids such as cysteine or histidine. These compounds may be crosslinked with other reagents, which leads to the preparation of polymer matrices. On the other hand, the previously mentioned requirements may also be met by polymers obtained as a result of biosynthesis, e.g., polyhydroxybutyrate. This polymer belongs to the group of aliphatic polyesters synthesized by microorganisms (selected strains of bacteria) under specific conditions. It is possible to modify matrices based on a given polymer with substances of various origins. Such a modification may result in a change of their properties or/and in providing the material with new features desirable from the viewpoint of a specific application. The described materials are synthesized using UV radiation. The process of photopolymerization is fast, waste-free, and enables obtaining final products with favorable properties. Methodology: Polymer matrices have been prepared by means of photopolymerization. The first step involved the preparation of solutions of particular reagents and mixing them in the appropriate ratio. Next, a crosslinking agent and a photoinitiator were added to the reaction mixture, and the whole was poured into a Petri dish and treated with UV radiation. After the synthesis, polymer samples were dried at room temperature and subjected to numerous analyses aimed at determining their physicochemical properties.
Firstly, the sorption properties of the obtained polymer matrices were determined. Next, mechanical properties were characterized, i.e., tensile strength. The ability of all prepared polymer matrices to deform under applied stress was checked. Such a property is important from the viewpoint of the application of the analyzed materials, e.g., as wound dressings. Wound dressings have to be elastic because, depending on the location of the wound and its mobility, the dressing has to adhere properly to the wound. Furthermore, considering the use of the materials for biomedical purposes, it is essential to determine their behavior in environments simulating those occurring in the human body. Therefore, incubation studies using selected liquids have also been conducted. Conclusions: As a result of the photopolymerization process, polymer matrices based on natural compounds have been prepared. These exhibited favorable mechanical properties and swelling ability. Moreover, biocompatibility in relation to simulated body fluids has been demonstrated. Therefore, it can be concluded that the analyzed polymer matrices constitute interesting materials that may be considered for biomedical use and may be subjected to further, more advanced analyses using specific cell lines.
Keywords: photopolymerization, polymer matrices, simulated body fluids, swelling properties
Procedia PDF Downloads 128
133 On the Road towards Effective Administrative Justice in Macedonia, Albania and Kosovo: Common Challenges and Problems
Authors: Arlinda Memetaj
Abstract:
A sound system of administrative justice represents a vital element of democratic governance. The proper control of public administration consists not only of a sound civil service framework and legislative oversight, but also of empowering the public and the courts to hold public officials accountable for their decision-making through the application of fair administrative procedural rules and the use of appropriate administrative appeals processes and judicial review. The establishment of both an effective public administration and an administrative justice system has long been among the most ‘important and urgent’ strategic objectives of almost every country in the Balkans region, including Macedonia, Albania, and Kosovo. Closely related to this is their common strategic goal of membership in the European Union, which requires fulfilling many criteria and standards incorporated in the EU acquis communautaire. The latter is presently pursued within the framework of the Stabilization and Association Agreement that each of these countries has concluded with the EU. To these ends, each of the three countries has so far adopted a large number of legislative and strategic documents related to all aspects of its administrative justice system. ‘Changes and reforms’ in this field have thus been the most frequently used terms in each of these countries. The three countries have already established their own national administrative judiciaries, while permanently amending their laws on general administrative procedure and thereby introducing considerable innovations.
National administrative courts are expected to play a crucially important role within the broader judiciary-related reforms of these countries; they are designed to check the legality of decisions of the state administration with the aim of guaranteeing effective protection of the human rights and legitimate interests of private persons through a regular, lawful, fast, and reasonable judicial administrative process. Further improvements in this field are presently an integral part of all the relevant national strategic documents, including those on judiciary reform and public administration reform adopted by each of the three countries; those strategic documents are designed, among other things, to provide effective protection of citizens' rights to administrative justice. On this basis, the paper is finally aimed at highlighting selected common challenges and problems of the three countries on their European road, while claiming (among other things) that the current status quo in each of them may be overcome only if there is proper implementation of the administrative courts' decisions and a far stricter international monitoring process thereof. A new approach and strong political commitment from the highest political leadership are thus absolutely needed to ensure the principles of transparency, accountability, and merit in public administration. The main methods used in this paper are analytical and comparative, in line with the very character of the paper itself.
Keywords: administrative courts, administrative justice, administrative procedure, benefit, effective administrative justice, human rights, implementation, monitoring, reform
Procedia PDF Downloads 154
132 The Potential Impact of Big Data Analytics on Pharmaceutical Supply Chain Management
Authors: Maryam Ziaee, Himanshu Shee, Amrik Sohal
Abstract:
Big Data Analytics (BDA) in supply chain management has recently drawn the attention of academics and practitioners. Big data refers to a massive amount of data from different sources, in different formats, generated at high speed through transactions in business environments and supply chain networks. Traditional statistical tools and techniques find it difficult to analyse such massive data. BDA can assist organisations in capturing, storing, and analysing data, specifically in the field of supply chain. Currently, there is a paucity of research on BDA in the pharmaceutical supply chain context. In this research, the Australian pharmaceutical supply chain was selected as the case study. This industry is highly significant since the right medicine must reach the right patients, at the right time, in the right quantity, in good condition, and at the right price to save lives. However, drug shortages remain a substantial problem for hospitals across Australia, with implications for patient care, staff resourcing, and expenditure. Furthermore, a massive volume and variety of data is generated at high speed from multiple sources in the pharmaceutical supply chain, which needs to be captured and analysed to benefit operational decisions at every stage of supply chain processes. As the pharmaceutical industry lags behind other industries in using BDA, this raises the question of whether the use of BDA can improve transparency across the pharmaceutical supply chain by enabling the partners to make informed decisions across their operational activities. This presentation explores the impacts of BDA on supply chain management. An exploratory qualitative approach was adopted to analyse data collected through interviews. This study also explores the potential of BDA in the whole pharmaceutical supply chain rather than focusing on a single entity.
Twenty semi-structured interviews were undertaken with top managers in fifteen organisations (five pharmaceutical manufacturers, five wholesalers/distributors, and five public hospital pharmacies) to investigate their views on the use of BDA. The findings revealed that BDA can enable pharmaceutical entities to have improved visibility over the whole supply chain and the market; it enables entities, especially manufacturers, to monitor consumption and the demand rate in real time and make accurate demand forecasts, which reduces drug shortages. Timely and precise decision-making can allow the entities to source and manage their stocks more effectively. This can help address drug demand at hospitals and respond to unanticipated issues such as drug shortages. Earlier studies explored BDA in the context of clinical healthcare; this presentation, however, investigates the benefits of BDA in the Australian pharmaceutical supply chain. Furthermore, this research enhances managers' insight into the potential of BDA at every stage of supply chain processes and helps to improve decision-making in their supply chain operations. The findings will turn the rhetoric of data-driven decision-making into a reality where managers may opt for analytics for improved decision-making in supply chain processes.
Keywords: big data analytics, data-driven decision, pharmaceutical industry, supply chain management
Procedia PDF Downloads 108
131 Assessment of Food Safety Culture in Select Restaurants and a Produce Market in Doha, Qatar
Authors: Ipek Goktepe, Israa Elnemr, Hammad Asim, Hao Feng, Mosbah Kushad, Hee Park, Sheikha Alzeyara, Mohammad Alhajri
Abstract:
Food safety management in Qatar is under the shared oversight of multiple agencies in two government ministries (the Ministry of Public Health and the Ministry of Municipality and Environment). Despite the increasing number and diversity of food service establishments, no systematic food surveillance system is in place in the country, which creates a gap in terms of determining the food safety attitudes and practices applied in food service operations. Therefore, this study seeks to partially address this gap through the determination of food safety knowledge among food handlers, specifically with respect to food preparation and handling practices and the sanitation methods applied in food service providers (FSPs) and a major market in Doha, Qatar. The study covered a sample of 53 FSPs randomly selected out of 200 FSPs. Face-to-face interviews with managers at participating FSPs were conducted using a 40-question survey. Additionally, 120 produce handlers who are in direct contact with fresh produce at the major produce market in Doha were surveyed using a questionnaire containing 21 questions. Written informed consent was obtained from each survey participant. The survey data were analyzed using the chi-square test and correlation test. Significance was evaluated at p ˂ 0.05. The results from the FSP surveys indicated that the average age of the FSPs was 11 years, with the oldest and newest being established in 1982 and 2015, respectively. Most managers (66%) had a college degree, and 68% of them were trained in the food safety management system known as HACCP. These surveys revealed that FSP managers' training and education level were highly correlated with the probability of their employees receiving food safety training, while managers with a lower education level had no formal food safety training, neither for themselves nor for their employees.
Casual sit-in and fine dine-in restaurants consistently kept records (100%), followed by fast food (36%) and catering establishments (14%). The produce handlers' survey results showed that none of the workers had any training on safe produce handling practices. The majority of the workers were in the age range of 31-40 years (37%), and only 38% of them had a high-school degree. Over 64% of produce handlers claimed to wash their hands 4-5 times per day, but field observations pointed to limited handwashing, as no soap was available in the settings. This observation suggests potential food safety risks, since a significant correlation (p ˂ 0.01) between educational level and hand-washing practices was determined. This assessment of food safety culture through the determination of food and produce handlers' level of knowledge and practices, the first of its kind in Qatar, demonstrated that training and education are important factors that directly impact the food safety culture in FSPs and produce markets. These findings should help in identifying the need for on-site training of food handlers for effective food safety practices in food establishments in Qatar.
Keywords: food safety, food safety culture, food service providers, food handlers
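The chi-square analysis of such survey data can be sketched as follows (hypothetical counts, not the study's raw data; the 3.84 threshold is the critical value for df = 1 at the 0.05 level):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table,
    e.g. education level (rows) vs. regular handwashing (columns)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # closed-form expression for 2x2 tables
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# hypothetical counts: [with degree, without degree] x [washes regularly, does not]
table = [[40, 6], [33, 41]]
stat = chi_square_2x2(table)
print(round(stat, 2))
# df = 1; a statistic above the 0.05 critical value of 3.84 would be
# judged a significant association, as reported in the study
```

The study's p ˂ 0.01 result corresponds to the statistic clearing an even stricter critical value (6.63 for df = 1).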
Procedia PDF Downloads 342
130 Investigation of Alumina Membrane Coated Titanium Implants on Osseointegration
Authors: Pinar Erturk, Sevde Altuntas, Fatih Buyukserin
Abstract:
In order to obtain effective integration between an implant and a bone, implant surfaces should have properties similar to those of bone tissue surfaces. In particular, mimicry of the chemical, mechanical, and topographic properties of the bone by the implant is crucial for fast and effective osseointegration. Titanium-based biomaterials are preferred in clinical use, and there are studies on coating these implants with oxide layers whose chemical/nanotopographic properties stimulate cell interactions for enhanced osseointegration. Current implantations have low success rates, especially in craniofacial applications, which involve large and vital zones; an oxide layer coating increases bone-implant integration, providing long-lasting implants without requiring revision surgery. Our aim in this study is to examine bone-cell behavior on titanium implants coated with an anodic aluminum oxide (AAO) layer, assessing their potential for effective osseointegration in large defect zones with difficult spontaneous healing. In our study, aluminum-coated titanium surfaces were anodized in sulfuric, phosphoric, and oxalic acid, the most commonly used AAO anodization electrolytes. After morphologic, chemical, and mechanical tests on the AAO-coated Ti substrates, the viability, adhesion, and mineralization of adult bone cells on these substrates were analyzed. In addition, using atomic layer deposition (ALD), a sensitive and conformal technique, these surfaces were coated with pure alumina (5 nm); thus, cell studies were also performed on ALD-coated nanoporous oxide layers with suppressed ionic content. Lastly, in order to investigate the effect of topography on cell behavior, flat non-porous alumina layers formed on silicon wafers by ALD were compared with the porous ones. The cell viability ratio was similar among the anodized surfaces, but pure-alumina-coated titanium and anodized surfaces showed a higher viability ratio compared to bare titanium and bare anodized ones.
Alumina-coated titanium surfaces anodized in phosphoric acid showed significantly different mineralization ratios after 21 days compared with bare titanium and the titanium surfaces anodized in the other electrolytes. Bare titanium had the second-highest mineralization ratio. In contrast, titanium anodized in oxalic acid electrolyte showed the lowest mineralization. No significant difference was found between bare titanium and the anodized surfaces, except for the AAO titanium surface anodized in phosphoric acid. Currently, the osteogenic activities of these cells at the genetic level are being investigated by quantitative real-time polymerase chain reaction (qRT-PCR) analysis of the RUNX-2, VEGF, OPG, and osteopontin genes. Western blotting will also be used to detect the proteins expressed as a result of the activities of these genes. Acknowledgment: The project is supported by The Scientific and Technological Research Council of Turkey.
Keywords: alumina, craniofacial implant, MG-63 cell line, osseointegration, oxalic acid, phosphoric acid, sulphuric acid, titanium
Procedia PDF Downloads 131
129 Food Design as a University-Industry Collaboration Project: An Experience Design on Controlling Chocolate Consumption and Long-Term Eating Behavior
Authors: Büşra Durmaz, Füsun Curaoğlu
Abstract:
While technology-oriented developments in the modern world change our perceptions of time and speed, they also strain our food consumption patterns, undermining habits such as taking pleasure in what we eat and eating slowly. The habit of eating quickly and hastily causes not only a failure to appreciate the taste of the food eaten but also an inability to register the feeling of satiety in time, and therefore many health problems. In this context, especially in the last ten years, manufacturers of foods for healthy living and consumption have been collaborating with industrial designers on food design. The consumers of the new century, living under uncontrolled time pressure, turn to small snacks as a source of happiness and pleasure in the little time intervals they can spare. At this point, chocolate in particular has been a source of both happiness and pleasure for its consumers for hundreds of years. However, when the portions eaten cannot be controlled, a pleasure food such as chocolate can cause both health problems and many emotional problems, especially the feeling of guilt. Fast food, i.e., food that is prepared and consumed quickly, has been spreading rapidly around the world in recent years. This study covers the process and results of a chocolate design based on user experience, carried out as a university-industry cooperation project within the scope of Eskişehir Technical University graduation projects. The aim of the project is a creative product design that will enable the user to experience chocolate consumption with a healthy eating approach.
To this end, concepts such as pleasure, satiety, and taste are discussed. Within a user-oriented design approach, the experience design process combined a literature review on topics such as mouth anatomy, tongue structure, taste, the brain functions involved in eating, hormones, and chocolate; a survey of 151 people; semi-structured face-to-face interviews with 7 people; video analysis; and project diaries, all structured as a case study within the qualitative research paradigm. The research found that melting in the mouth is the users' preferred experience for prolonging pleasure-based chocolate eating in healthy portions. In this context, the study includes the production of sketches, mock-ups, and prototypes of the product. As a result, a product packaging design was developed that engages the senses of sight, smell, and hearing, where consumption begins, so that the chocolate is consumed by melting and the salivary glands, the most important stimulus, are actively triggered, providing healthy, long-lasting, pleasure-based consumption.
Keywords: chocolate, eating habit, pleasure, saturation, sense of taste
Procedia PDF Downloads 81
128 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security
Authors: D. Pugazhenthi, B. Sree Vidya
Abstract:
Cloud computing is one of the emerging technologies that enables end users to use cloud services on a ‘pay per usage’ basis. The technology is growing at a fast pace, and so are its security threats. Among the various services provided by the cloud is storage, where security is vital both for authenticating legitimate users and for protecting information. This paper presents efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. User-behaviour-based biometrics provides unique identification, is slow to intrude upon, and offers greater reliability than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here comprise not a single trait but multiple traits, viz., iris and fingerprints. The coordinating stage of the authentication system is based on an ensemble Support Vector Machine (SVM): the weights of the base SVMs are optimized for the ensemble after each individual SVM has been trained by the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template by a fusion process. The data security problem is averted, and an enhanced security architecture is proposed, using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing the cipher text from being broken back into the original text. The proposed double cryptographic key scheme is thus capable of providing better user authentication and better security, distinguishing between genuine and fake users.
There are thus three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The feature and texture properties are first extracted from the respective fingerprint and iris images. Finally, with the help of a fuzzy neural network and a symmetric cryptography algorithm, a double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if they have already been intercepted. The results show that the authentication process is optimal and the stored information is secured.
Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification
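The ensemble step described above — combining per-modality matchers with optimized weights — can be sketched in miniature. This is a hedged illustration only: the match scores are invented, the matchers are reduced to score lookups rather than trained SVMs, and a plain grid search stands in for the AFSA optimizer used in the paper.

```python
# Minimal sketch of weighted score-level fusion of two biometric matchers
# (iris and fingerprint). A grid search stands in for AFSA; all numbers
# are illustrative, not from the paper.

def fuse(scores, weights):
    """Weighted sum of per-modality match scores."""
    return sum(w * s for w, s in zip(weights, scores))

def accuracy(samples, weights, threshold=0.5):
    correct = 0
    for scores, genuine in samples:
        accept = fuse(scores, weights) >= threshold
        correct += (accept == genuine)
    return correct / len(samples)

def grid_search_weights(samples, step=0.1):
    """Stand-in for AFSA: try weight pairs summing to 1, keep the best."""
    best_w, best_acc = (0.5, 0.5), -1.0
    w1 = 0.0
    while w1 <= 1.0:
        w = (w1, 1.0 - w1)
        acc = accuracy(samples, w)
        if acc > best_acc:
            best_w, best_acc = w, acc
        w1 += step
    return best_w, best_acc

# ((iris_score, fingerprint_score), is_genuine) -- synthetic examples
samples = [
    ((0.9, 0.2), True),   # strong iris, weak fingerprint
    ((0.8, 0.3), True),
    ((0.2, 0.6), False),  # impostor fooling the fingerprint matcher
    ((0.1, 0.7), False),
]
weights, acc = grid_search_weights(samples)
print(weights, acc)
```

On this toy data, fusing the two scores separates genuine users from impostors even though the fingerprint matcher alone would not; this is the motivation for multimodal fusion in the abstract.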
Procedia PDF Downloads 260
127 Achieving Sustainable Agriculture with Treated Municipal Wastewater
Authors: Reshu Yadav, Himanshu Joshi, S. K. Tripathi
Abstract:
Fresh water is a scarce resource which is essential for humans and ecosystems, but its distribution is uneven. Agricultural production accounts for 70% of all surface water supplies. It is projected that, against an expansion of the area equipped for irrigation of 0.6% per year, global potential irrigation water demand will rise by 9.5% during 2021-25. This would, on one hand, have to compete against sharply rising urban water demand. On the other, it would also have to face the threat of climate change, as temperatures rise and crop yields could drop by 10-30% in many large areas. The huge demand for irrigation combined with fresh water scarcity encourages exploring the reuse of wastewater as a resource. However, the use of such wastewater is often linked to safety issues when it is used non-judiciously, or with poor safeguards, to irrigate food crops. Paddy is one of the major crops globally and among the most important in South Asia and Africa. In many parts of the world, the use of municipal wastewater has been promoted as a viable option in this regard. In developing and fast-growing countries like India, steadily increasing wastewater generation rates may allow this option to be considered quite seriously. In view of this, a pilot field study was conducted at the Jagjeetpur municipal sewage treatment plant in the town of Haridwar, Uttarakhand state, India. The objectives of the present study were to study the effect of treated wastewater on the production of various paddy varieties (Sharbati, PR-114, PB-1, Menaka, PB-1121 and PB-1509) and on the emission of greenhouse gases (CO2, CH4 and N2O), as compared to the same varieties grown in control plots irrigated with fresh water. Of late, the concept of water footprint assessment has emerged, which enumerates the various types of water footprints of an agricultural entity from its production to its processing stages.
Paddy, the most water-demanding staple crop of Uttarakhand state, displayed a high green water footprint of 2966.538 m³/ton. Most of the wastewater-irrigated varieties displayed up to a 6% increase in production, except Menaka and PB-1121, which showed reductions in production (6% and 3% respectively) due to pest and insect infestation. The treated wastewater was observed to be rich in nitrogen (55.94 mg/ml nitrate), phosphorus (54.24 mg/ml) and potassium (9.78 mg/ml), thus rejuvenating the soil and requiring no external nutritional supplements. The percentage increases in GHG emissions under treated municipal wastewater irrigation, compared to the control plots, were 0.4%-8.6% (CH4), 1.1%-9.2% (CO2), and 0.07%-5.8% (N2O). The variety Sharbati displayed the maximum production (5.5 ton/ha) and emerged as the most resistant variety against pests and insects. The emission values of CH4, CO2 and N2O were 729.31 mg/m²/d, 322.10 mg/m²/d and 400.21 mg/m²/d under water-stagnant conditions. This study highlighted a successful possibility of reusing wastewater for non-potable purposes, offering the potential to exploit a resource that can replace or reduce the existing use of fresh water in the agricultural sector.
Keywords: greenhouse gases, nutrients, water footprint, wastewater irrigation
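The green water footprint quoted above follows the standard definition WF_green = CWU_green / Y, i.e. the green (rainwater) crop water use divided by the yield. A minimal sketch, where the crop-water-use figure is illustrative and merely back-calculated to reproduce the paddy value reported in the abstract, not measured data:

```python
# Hedged sketch of the green water footprint calculation,
# WF_green = CWU_green (m^3/ha) / yield (ton/ha).
# The CWU number below is illustrative, chosen so that the result
# matches the ~2966.5 m^3/ton value reported for paddy.

def green_water_footprint(crop_water_use_m3_per_ha, yield_ton_per_ha):
    """Green water footprint in m^3 of rainwater per ton of crop."""
    return crop_water_use_m3_per_ha / yield_ton_per_ha

wf = green_water_footprint(16315.96, 5.5)  # 5.5 ton/ha was Sharbati's yield
print(round(wf, 3))  # m^3/ton
```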
Procedia PDF Downloads 321
126 The Effect of Framework Structure on N2O Formation over Cu-Based Zeolites during NH3-SCR Reactions
Authors: Ghodsieh Isapour Toutizad, Aiyong Wang, Joonsoo Han, Derek Creaser, Louise Olsson, Magnus Skoglundh, Hanna HaRelind
Abstract:
Nitrous oxide (N2O), which is generally formed as a byproduct of industrial chemical processes and fossil fuel combustion, has attracted considerable attention due to its destructive role in global warming and ozone layer depletion. Of the various technologies developed for lean NOx reduction, the selective catalytic reduction (SCR) of NOx with ammonia is presently the most widely applied. The development of catalysts that efficiently reduce lean NOx while forming no N2O in the process, or only to a very small extent, is therefore of crucial significance. One type of catalyst nowadays used for this purpose is zeolite-based, owing to its remarkable catalytic performance under practical reaction conditions, such as high thermal stability and high N2 selectivity. Among all zeolites, copper ion-exchanged zeolites with CHA, MFI, and BEA framework structures (SSZ-13, ZSM-5 and Beta, respectively) exhibit higher hydrothermal stability, high activity and high N2 selectivity. This work aims at investigating the effect of the zeolite framework structure on the formation of N2O under NH3-SCR reaction conditions over three Cu-based zeolites, ranging from small-pore to large-pore framework structures. In the zeolite framework, Cu exists in two cationic forms that can catalyze the SCR reaction by activating NO to form NO+ and/or surface nitrate species. The nitrate species can thereafter react with NH3 to form another intermediate, ammonium nitrate, which appears to be one source of N2O formation at low temperatures. The results from in situ diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) indicate that during the NO oxidation step, mainly NO+ and nitrate species are formed on the surface of the catalysts. The intensity of the absorption peak attributed to NO+ species is higher for the Cu-CHA sample than for the other two samples, indicating a higher stability of this species in small cages.
Furthermore, upon the addition of NH3 under standard SCR reaction conditions, absorption peaks assigned to N-H stretching and bending vibrations build up. At the same time, negative peaks evolve in the O-H stretching region, indicating blocking/replacement of surface OH-groups by NH3 and NH4+. On removing NH3 and adding NO2 to the inlet gas composition, the peaks in the N-H stretching and bending vibration regions decrease in intensity, the decrease being more pronounced with increasing pore size. This is probably owing to the higher accumulation of ammonia species in the small-pore zeolite compared to the other two samples. It is also worth noting that the ammonia surface species are strongly bonded to the CHA zeolite structure, which makes them more difficult to react with NO2. To conclude, the framework structure of the zeolite seems to play an important role in the formation and reactivity of surface species relevant to the SCR process. Here we discuss the connection between the zeolite structure, the surface species, and the formation of N2O during ammonia-SCR.
Keywords: fast SCR, nitrous oxide, NOx, standard SCR, zeolites
Procedia PDF Downloads 237
125 Assessment of Sleeping Patterns of Saudis with Type 2 Diabetes Mellitus in Ramadan and Non-Ramadan Periods Using a Wearable Device and a Questionnaire
Authors: Abdullah S. Alghamdi, Khaled Alghamdi, Richard O. Jenkins, Parvez I. Haris
Abstract:
Background: The quantity and quality of sleep have been reported to be significant risk factors for obesity and the development of metabolic disorders such as type 2 diabetes mellitus (T2DM). The relationship between diabetes and sleep quantity has been reported to be U-shaped, which means that either increased or decreased sleeping hours can increase the risk of diabetes. Plasma glucagon levels have been found to decrease continuously during night-time sleep in healthy individuals, independently of blood glucose and insulin levels. Disturbance of the circadian rhythm is also important and has been linked with an increased incidence of diabetes. There is a lack of research on the sleep patterns of Saudis with T2DM and on how these are affected by Ramadan fasting. Aim: To assess the sleeping patterns of Saudis with T2DM (before, during, and after Ramadan), using two different techniques, and relate these to their HbA1c levels. Method: This study recruited 82 Saudis with T2DM, who chose to fast during Ramadan, from the Endocrine and Diabetic Centre of Al Iman General Hospital, Riyadh, Saudi Arabia. Ethical approvals for the study were obtained from De Montfort University and the Saudi Ministry of Health. Sleeping patterns were assessed by a self-administered questionnaire (before, during, and after Ramadan). The assessment included the daily total sleeping hours (DTSH) and total night-time sleeping hours (TNTSH) of the participants. In addition, the sleeping patterns of 36 patients, randomly selected from the 82 participants, were further tracked during and after Ramadan using a Fitbit Flex 2™ accelerometer. Blood samples were collected in each period for measuring HbA1c. Results: Questionnaire analysis revealed that sleeping patterns changed significantly between the periods, with shorter hours during Ramadan (P < 0.001 for DTSH, and P < 0.001 for TNTSH).
These findings were confirmed by the Fitbit data, which also indicated significantly shorter sleeping hours for both the DTSH and the TNTSH during Ramadan (P < 0.001 and P < 0.001, respectively). Although there were no significant correlations between the questionnaire and Fitbit data, the TNTSH were shorter among the participants in all periods by both techniques. The mean HbA1c varied significantly between periods, with the lowest level during Ramadan. Although the statistical tests did not show significant differences in mean HbA1c between groups of participants with different hours of sleep, the lowest mean HbA1c was observed in the group who slept for 6-8 hours and had longer night-time sleeping hours. Conclusion: A short sleep duration and an absence of night-time sleep were observed among the majority of the study population during Ramadan, which could suppress the full benefits of Ramadan fasting for diabetic patients. This study showed good agreement between the findings of the questionnaire and the Fitbit device for evaluating sleeping patterns in a Saudi population. A larger study is needed in the future to investigate the impact of Ramadan fasting on sleep quality and quantity and its relationship with health and disease.
Keywords: Diabetes, Fasting, Fitbit, HbA1c, IPAQ, Ramadan, Sleep
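The before-vs-during comparison reported above (same participants measured in two periods) is the kind of analysis typically done with a paired t-test. A minimal sketch, using only the standard library and invented sleep-hour values, since the study's raw data are not given:

```python
# Hedged sketch: paired t-test on daily total sleeping hours (DTSH)
# before vs. during Ramadan. The eight subjects' values below are
# invented for illustration; in practice scipy.stats.ttest_rel would
# also return the p-value.
import math

def paired_t(before, during):
    """Return (mean difference in hours, paired t statistic)."""
    diffs = [b - d for b, d in zip(before, during)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((x - mean) ** 2 for x in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                               # standard error
    return mean, mean / se

before = [7.5, 8.0, 6.5, 7.0, 7.5, 8.5, 7.0, 6.5]  # hours per day
during = [6.0, 6.5, 5.5, 6.0, 6.0, 7.0, 6.0, 5.5]
mean_diff, t = paired_t(before, during)
print(round(mean_diff, 2), round(t, 2))
```

A large t statistic on a consistent per-subject reduction is what drives the very small p-values (P < 0.001) quoted in the abstract.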
Procedia PDF Downloads 114
124 Major Role of Social Media in Encouraging Public Interaction with Health Awareness: A Case Study of Successful Saudi Diabetes Campaign
Authors: Budur Almutairi
Abstract:
Introduction: There has been an alarming increase in the number of diabetic patients in Saudi Arabia over the last twenty years. The World Health Organization (WHO) reports that the country ranks seventh in the world for the rate of diabetes. It is also estimated that around 7 million of the population are diabetic and almost 3 million have pre-diabetes. The prevalence is higher in urban areas than in rural ones, and higher in women than in men, and it is closely associated with the parallel rise in obesity rates. Diabetes contributes to increasing mortality, morbidity and vascular complications, and is becoming a significant cause of medical complications and even death. The trends shown by the numbers are worrying, as the prevalence is steadily doubling every two decades; in Saudi Arabia in particular, it could soon reach 50% in those over 50 years of age. Economic growth and prosperity have brought notable changes in the lifestyle of the people. Most importantly, along with increased consumption of fast foods and sugar-rich carbonated soft drinks, eating habits became less healthy and the level of physical activity decreased. Simultaneous technological advancement and the introduction of new mechanical devices, such as elevators, escalators, remote controls and vehicles, pushed people toward a more sedentary life. This study attempts to evaluate the success of a campaign introduced through popular social media in the country. Methodology: The Ministry of Health (MoH) initiated a novel form of campaign to generate discussion about diabetes among the public. Mythical monsters, introduced through popular social media with disguised messages about the condition of diabetes, generated widespread discussion of the disease among the general public. About 600 retweets of the original post testified to the success of the Twitter campaign.
The second most successful form of the campaign was a video that adopted the very popular approach of dark comedy, in which diabetes was represented by a twisted negative character who talks about his meticulous plans for taking the common people into his clutches. This fictional character gained more popularity when introduced on Twitter, and people started interacting with him, raising various questions and challenging his anti-social activities. Major findings: The video generated more than 3,200,000 views, ranking 9th among YouTube’s most popular videos in Saudi Arabia, and was shared 7,000 times in a single week. The hashtag also gained over 4,500,000 impressions and over one million visits. Conclusion: Diabetes mellitus in Saudi Arabia is emerging as an epidemic of massive proportions, threatening to negate the benefits of modernization and economic revival. It is highly possible that healthy practices connected with the prevention and management of DM can be implemented in a manner that does not conflict with the cultural milieu of Saudi Arabia.
Keywords: campaign, diabetes, Saudi, social media
Procedia PDF Downloads 131
123 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications
Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky
Abstract:
InAs Quantum Dots (QDs) in a GaAs matrix is a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5ps), orders of magnitude better than currently used inorganic scintillators, such as LYSO or BaF2. The high refractive index of the GaAs matrix (n=3.4) ensures light emitted by the QDs is waveguided, which can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off to separate the scintillator film and transfer it to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, luminescence properties of very thick (4-20 microns) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, low temperature photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from QDs to the barrier of ~60 meV. 
Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened, with a FWHM of 28 meV (33 nm), and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized at about 3 cm⁻¹ after the light had travelled over 1 mm through the WG. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small area of integrated PD, the collected charge averaged 8.4×10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. The data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor
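The quoted ~3 cm⁻¹ attenuation coefficient implies an exponential loss of guided light with propagation distance, I(x) = I₀·exp(−αx). A minimal sketch of what that means for detector size, with the propagation distances chosen for illustration:

```python
# Hedged sketch: Beer-Lambert-style attenuation of guided scintillation
# light, I(x) = I0 * exp(-alpha * x), using the ~3 cm^-1 coefficient
# reported for the QD waveguide. Distances are illustrative.
import math

def transmitted_fraction(alpha_per_cm, distance_cm):
    """Fraction of guided light surviving a given propagation distance."""
    return math.exp(-alpha_per_cm * distance_cm)

# fraction surviving 1 mm and 2 mm of waveguide propagation
print(round(transmitted_fraction(3.0, 0.1), 3))
print(round(transmitted_fraction(3.0, 0.2), 3))
```

About a quarter of the light is lost per millimetre at this coefficient, which is why attenuation limits the lateral size of the detector.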
Procedia PDF Downloads 256
122 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems
Authors: Ramprasad Srinivasan
Abstract:
Engineers create inventions and put their ideas in concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For aerospace vehicles, the weight/strength ratio, strength, stiffness and stability are the important design drivers. A complex built-up structure is an assemblage of primitive structural forms of arbitrary shape, including 1D structures like beams and frames, 2D structures like membranes, plates and shells, and 3D solid structures. Justification through simulation involves a check of all the quantities of interest, namely stresses, deformation, frequencies, and buckling loads, and is normally achieved through the finite element (FE) method. Over the past few decades, fiber-reinforced composites have been fast replacing traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring, etc. Composite panel constructions are used in aircraft to design primary structural components like wings, empennage, ailerons, etc., while thin-walled composite beams (TWCB) are used to model slender structures like stiffened panels and helicopter and wind turbine rotor blades. TWCBs demonstrate many non-classical effects like torsional and constrained warping, transverse shear, coupling effects, heterogeneity, etc., which makes the analysis of composite structures far more complex. Conventional FE formulations for 1D structures suffer from many limitations, such as shear locking (particularly in slender beams), lower convergence rates due to material coupling in composites, and the inability to satisfy equilibrium in the domain and natural boundary conditions (NBC).
For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBCs explicitly and many pathological problems, such as shear and membrane locking, spurious modes, stress oscillations, and lower convergence due to mesh distortion. This mandates frequent re-meshing merely to achieve an acceptable mesh (satisfying stringent quality metrics) for analysis, leading to significant cycle time. Besides, separate formulations (u/p) are currently needed to model incompressible materials, and a single unified formulation is missing in the literature. Hence, coupled field formulation (CFF) is a unified formulation proposed by the author for the solution of complex 1D and 2D structures, addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over conventional methods are presented in this paper.
Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation
Procedia PDF Downloads 66
121 Real-Time Neuroimaging for Rehabilitation of Stroke Patients
Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge
Abstract:
Rehabilitation of stroke patients is dominated by classical physiotherapy. Nowadays, one field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option for treating the patient. Certain exercises, like imagining the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, so that the corresponding activity takes place in the parts neighboring the injured cortex. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the natural feedback missing due to the absence of movement of the affected limb may be replaced by a synthetic feedback based on motor-related brain function. To generate such a synthetic feedback, a system is needed that measures, detects, localizes and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback features high specificity and comes in real time without large delay. We describe such an approach, which offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3; CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set must not be in the same range as those of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes.
This spatial distribution is the input to an inverse method that provides the 3D distribution of the µ-activity within the brain, which is visualized as a color-coded 3D activity map. This approach mitigates the influence of eye-lid artifacts on the localization performance. First results from several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases the results match findings from visual EEG analysis. Frequent eye-lid artifacts have no influence on system performance. Furthermore, the system will be able to run in real time; due to the design of the frequency transformation, the processing delay is 500 ms. The first results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project ‘LiveSolo’ funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG–signal processing, rehabilitation
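The channel-selection and detection logic described above can be sketched as follows. This is a hedged illustration: the amplitudes are invented numbers standing in for alpha-band (8-13 Hz) power estimates from a real EEG, and the threshold and dominance factor are assumed parameters, not values from the paper.

```python
# Hedged sketch of the mu-rhythm detection step: pick the motor-set
# channel with the strongest alpha-band amplitude, then declare a
# mu-rhythm only if it exceeds a threshold and clearly dominates the
# channels outside the motor set (to reject posterior alpha activity).

MOTOR_SET = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]

def detect_mu(alpha_amplitudes, threshold=2.0, dominance=1.5):
    """alpha_amplitudes: dict channel -> alpha-band amplitude (a.u.).
    Returns the detected main channel, or None if no mu-rhythm."""
    motor = {ch: a for ch, a in alpha_amplitudes.items() if ch in MOTOR_SET}
    best_ch = max(motor, key=motor.get)
    best_amp = motor[best_ch]
    if best_amp < threshold:
        return None
    # posterior-alpha rejection: non-motor channels must be clearly weaker
    outside = [a for ch, a in alpha_amplitudes.items() if ch not in MOTOR_SET]
    if outside and best_amp < dominance * max(outside):
        return None
    return best_ch

amps = {"C3": 4.2, "CZ": 1.1, "C4": 1.0, "O1": 1.5, "O2": 1.3}
print(detect_mu(amps))
```

In a real pipeline the selected channel's signal would then serve as the template for the spatial-distribution and inverse-localization steps described in the abstract.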
Procedia PDF Downloads 388
120 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics
Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca
Abstract:
The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Mixtures were then prepared by adding different concentrations of HFCS to samples of the honey pool. 237 samples were used, 108 of which were authentic honey and 129 of which were honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface was used, in MATLAB version 5.3, to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A combined method of Potential Functions (PF) was chosen together with Partial Least Squares Linear Discriminant Analysis (PLS-DA).
Different estimators of the predictive capacity of the model were compared; they were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples. With the model calibrated on the training samples, the validation samples were then studied. The calibrated model combining the potential function method and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved on the training samples. By use of Potential Functions (PF) and Partial Least Squares Linear Discriminant Analysis (PLS-DA) classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey, with high sensitivity and power of discrimination.
Keywords: adulteration, multivariate analysis, potential functions, regression
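The SNV pretreatment named above has a simple, standard definition: each spectrum is centred by its own mean and scaled by its own standard deviation, removing multiplicative scatter effects between samples. A minimal pure-Python sketch (normally done with numpy; the four-point spectrum is illustrative):

```python
# Hedged sketch: Standard Normal Variate (SNV) pretreatment of a single
# spectrum, as applied to the NIR spectra before classification.
import math

def snv(spectrum):
    """Center a spectrum by its own mean, scale by its own std dev."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in spectrum) / (n - 1))
    return [(x - mean) / sd for x in spectrum]

s = snv([0.2, 0.4, 0.6, 0.8])  # illustrative absorbance values
print([round(x, 3) for x in s])
```

After SNV, every spectrum has zero mean and unit standard deviation, so differences between samples reflect band shape rather than baseline or path-length effects.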
Procedia PDF Downloads 126
119 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section
Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert
Abstract:
Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets since characterization measurements can be expensive and time consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell’s equations. These methods are very accurate but are computationally very intensive and time consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate but require less computational resources and time. Asymptotic techniques can thus be very valuable for the prediction of bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques to predict bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range, at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz. Fixed bistatic angles of β = 30.8°, 45° and 90° were used. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results together with the measured data were used as reference for validation. 
The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences relating to amplitude, lobing structure, and null positions were observed between the asymptotic, full-wave, and measured data. PO and GO were more accurate at angles close to the specular scattering directions, and the accuracy seemed to decrease as the bistatic angle increased. At large bistatic angles, PO did not perform well due to the shadow regions not being treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction, but these inaccuracies tended to decrease as the electrical size of objects increased. It was evident that neither asymptotic technique properly accounts for bistatic structural shadowing. Specular scattering was calculated accurately even if targets did not meet the electrically large criteria. It was evident that the bistatic RCS prediction performance of PO and GO depends on incident angle, frequency, target shape, and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement in the accuracy of these asymptotic techniques.
Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics
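The "electrically large" criterion invoked above can be made concrete with a back-of-envelope sketch. This is not from the study itself: the tenfold-wavelength threshold and the optical-region sphere RCS formula below are textbook rules of thumb, and the 1 m target dimension is a hypothetical value chosen to sit inside the 2-6 GHz measurement band.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def sphere_rcs_optical(radius_m):
    """Optical-region (high-frequency) RCS of a perfectly conducting sphere: sigma = pi * a^2."""
    return math.pi * radius_m ** 2

def is_electrically_large(dimension_m, freq_hz, factor=10.0):
    """Rule-of-thumb check: the target dimension spans at least `factor` wavelengths."""
    wavelength = C / freq_hz
    return dimension_m >= factor * wavelength

# A hypothetical 1 m target across the 2-6 GHz band used in the measurements:
low_end = is_electrically_large(1.0, 2e9)   # 1 m vs. 10 * ~0.15 m wavelengths
high_end = is_electrically_large(1.0, 6e9)  # 1 m vs. 10 * ~0.05 m wavelengths
```

Such a check illustrates why asymptotic accuracy can vary across a single wideband sweep: the same target may fail the criterion at the low end of the band and satisfy it at the high end.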
Procedia PDF Downloads 260
118 Biomass Waste-To-Energy Technical Feasibility Analysis: A Case Study for Processing of Wood Waste in Malta
Authors: G. A. Asciak, C. Camilleri, A. Rizzo
Abstract:
The waste management in Malta is a national challenge. Coupled with Malta’s recent economic boom, which has seen massive growth in several sectors, especially the construction industry, drastic actions need to be taken. Wood waste, currently being dumped in landfills, is one type of waste which has increased astronomically. This research study aims to carry out a thorough examination on the possibility of using this waste as a biomass resource and adopting a waste-to-energy technology in order to generate electrical energy. This study is composed of three distinct yet interdependent phases, namely, data collection from the local SMEs, thermal analysis using the bomb calorimeter, and generation of energy from wood waste using a micro biomass plant. Data collection from SMEs specializing in wood works was carried out to obtain information regarding the available types of wood waste, the annual weight of imported wood, and to analyse the manner in which wood shavings are used after wood is manufactured. From this analysis, it resulted that the five most common types of wood available in Malta which would be suitable for generating energy are Oak (hardwood), Beech (hardwood), Red Beech (softwood), African Walnut (softwood) and Iroko (hardwood). Subsequently, based on the information collected, a thermal analysis using a 6200 Isoperibol calorimeter on the five most common types of wood was performed. This analysis was done so as to give a clear indication with regard to the burning potential, which will be valuable when testing the wood in the biomass plant. The experiments carried out in this phase provided a clear indication that the African Walnut generated the highest gross calorific value. This means that this type of wood released the highest amount of heat during the combustion in the calorimeter. This is due to the high presence of extractives and lignin, which accounts for a slightly higher gross calorific value. This is followed by Red Beech and Oak.
Moreover, based on the findings of the first phase, both the African Walnut and Red Beech are highly imported in the Maltese Islands for use in various purposes. Oak, which has the third highest gross calorific value, is the most imported and commonly used wood. From the five types of wood, three were chosen for use in the power plant on the basis of their popularity and their heating values. The PP20 biomass plant was used to burn the three types of shavings in order to compare results related to the estimated feedstock consumed by the plant, the high temperatures generated, the time taken by the plant to produce gasification temperatures, and the projected electrical power attributed to each wood type. From the experiments, it emerged that whilst all three types reached the required gasification temperature and are thus feasible for electrical energy generation, African Walnut was deemed to be the most suitable fast-burning fuel. This is followed by Red Beech and Oak, which required a longer period of time to reach the required gasification temperatures. The results obtained provide a clear indication that wood waste can be treated and used for energy generation instead of being dumped in landfills.
Keywords: biomass, isoperibol calorimeter, waste-to-energy technology, wood
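As a numerical illustration of the bomb-calorimeter evaluation described above, the gross calorific value follows from the calorimeter's effective heat capacity and the corrected temperature rise. This is a minimal sketch; the calibration constant, temperature rise, and sample mass below are hypothetical values, not the study's measured data.

```python
def gross_calorific_value(c_cal_kj_per_k, delta_t_k, sample_mass_g):
    """Gross calorific value in kJ/g (numerically equal to MJ/kg).

    c_cal_kj_per_k: effective heat capacity of the calorimeter and bath (kJ/K)
    delta_t_k:      corrected temperature rise observed after combustion (K)
    sample_mass_g:  mass of the combusted wood sample (g)
    """
    return c_cal_kj_per_k * delta_t_k / sample_mass_g

# Hypothetical run: 10 kJ/K calorimeter, 1.9 K rise, 1.0 g of shavings
gcv = gross_calorific_value(10.0, 1.9, 1.0)  # 19.0 MJ/kg, typical order for dry wood
```

Comparing such per-species values is what ranks African Walnut, Red Beech, and Oak by heating potential before the biomass-plant trials.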
Procedia PDF Downloads 243
117 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest for asset owners to plan timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, structural performance. The determination of the quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure, but the data obtained can also be used for the prediction of its future development and associated risks. At present, wet chemical analysis of ground concrete samples by a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentrations, such as in a crack. The results are correlated directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples for the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example for the visualization of Li transport in concrete is also shown. These examples show the potential of the method for a fast, reliable, and automated two-dimensional investigation of transport processes.
Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows a two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure - the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.
Keywords: chemical analysis, concrete, LIBS, spectroscopy
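The "model calculations" that LIBS chloride profiles feed are commonly based on the error-function solution of Fick's second law of diffusion. The sketch below is a minimal illustration, not the authors' model: the apparent diffusion coefficient and surface chloride content are assumed, illustrative values.

```python
import math

def chloride_profile(depth_mm, time_years, c_surface, c_initial=0.0,
                     d_app_mm2_per_year=30.0):
    """Chloride content at depth from the erfc solution of Fick's second law.

    c_surface / c_initial are in % of binder mass; d_app_mm2_per_year is an
    illustrative apparent diffusion coefficient, not a measured value.
    """
    arg = depth_mm / (2.0 * math.sqrt(d_app_mm2_per_year * time_years))
    return c_initial + (c_surface - c_initial) * math.erfc(arg)

# Profile sampled at 1 mm steps - the depth resolution LIBS offers,
# versus the 10 mm intervals of wet chemical analysis.
profile = [chloride_profile(x, 10.0, 1.0) for x in range(0, 31)]
```

With ten times finer depth sampling, the fitted diffusion coefficient and surface concentration are constrained by far more points per profile, which is the practical payoff of the 1 mm resolution noted above.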
Procedia PDF Downloads 105
116 Plasmonic Biosensor for Early Detection of Environmental DNA (eDNA) Combined with Enzyme Amplification
Authors: Monisha Elumalai, Joana Guerreiro, Joana Carvalho, Marta Prado
Abstract:
The popularity of DNA biosensors has been increasing over the past few years. Traditional analytical techniques tend to require complex steps and expensive equipment; DNA biosensors, however, have the advantage of being simple, fast, and economical. Additionally, the combination of DNA biosensors with nanomaterials offers the opportunity to improve the selectivity, sensitivity, and overall performance of the devices. DNA biosensors are based on oligonucleotides as sensing elements. These oligonucleotides are highly specific to complementary DNA sequences, resulting in the hybridization of the strands. DNA biosensors are advantageous not only in the clinical field but also in numerous research areas such as food analysis or environmental control. Zebra mussels (ZM), Dreissena polymorpha, are an invasive species responsible for enormous negative impacts on the environment and ecosystems. Generally, ZM are detected when adults or macroscopic larvae are observed; however, at this stage it is too late to avoid the harmful effects. Therefore, there is a need to develop an analytical tool for the early detection of ZM. Here, we present a portable plasmonic biosensor for the detection of environmental DNA (eDNA) released to the environment by this invasive species. The plasmonic DNA biosensor combines gold nanoparticles, as transducer elements, due to their great optical properties and high sensitivity. The detection strategy is based on the immobilization of a short base-pair DNA sequence on the nanoparticle surface, followed by specific hybridization in the presence of a complementary target DNA. The hybridization events are tracked by the optical response provided by the nanospheres and their surrounding environment. The DNA sequences (synthetic target and probes) used to detect zebra mussels were designed using Geneious software in order to maximize specificity.
Moreover, to increase the optical response, enzyme amplification of DNA might be used. The gold nanospheres were synthesized and characterized by UV-visible spectrophotometry and transmission electron microscopy (TEM). The obtained nanospheres present a maximum localized surface plasmon resonance (LSPR) peak position around 519 nm and a diameter of 17 nm. The DNA probes, modified with a sulfur group at one end of the sequence, were then loaded on the gold nanospheres at different ionic strengths and DNA probe concentrations. The optimal DNA probe loading will be selected based on the stability of the optical signal, followed by the hybridization study. The hybridization process leads to either nanoparticle dispersion or aggregation based on the presence or absence of the target DNA. Finally, this detection system will be integrated into an optical sensing platform. Considering that the developed device will be used in the field, it should fulfill cost and portability requirements. Sensing devices based on specific DNA detection hold great potential and can be exploited for sensing applications in-loco.
Keywords: ZM DNA, DNA probes, nicking enzyme, gold nanoparticles
Procedia PDF Downloads 248
115 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens
Authors: R. Tamborrino, F. Rinaudo
Abstract:
Digital technologies offer possibilities to effectively convert historical archives into instruments of knowledge able to provide a guide for the interpretation of historical phenomena. Digital conversion and management of those documents allow the possibility to add other sources in a unique and coherent model that permits the intersection of different data able to open new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g. cadastres, censuses, etc.). The geographic localisation of that information inside cartographic supports allows for the comprehension and visualisation of specific relationships between different historical realities registering both the urban space and the people living there. These links that merge the different nature of data and documentation through a new organisation of the information can suggest a new interpretation of other related events. In all these kinds of analysis, the use of GIS platforms today represents the most appropriate answer. The design of the related databases is the key to realise the ad-hoc instrument to facilitate the analysis and the intersection of data of different origins. Moreover, GIS has become the digital platform where it is possible to add other kinds of data visualisation. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories realized just prior to WWI provides the opportunity to test the potentialities of GIS platforms for the analysis of urban landscape modifications during the first industrial development of the town. The inventory includes data about location, activities, and people. GIS is shaped in a creative way, linking different sources and digital systems, aiming to create a new type of platform conceived as an interface integrating different kinds of data visualisation.
The data processing allows linking this information to an urban space, and also visualising the growth of the city at that time. The sources, related to the urban landscape development in that period, are of a different nature. The emerging necessity to build, enlarge, modify and join different buildings to boost the industrial activities, according to their fast development, is recorded by different official permissions delivered by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, which are reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved such as the owner, the investor, and the engineer or architect designing the industrial building. All these collected data offer the possibility to first re-build the process of change of the urban landscape by using GIS and 3D modelling technologies, thanks to the access to the drawings (2D plans, sections and elevations) that show the previous and the planned situation. Furthermore, they give access to information for different queries of the linked dataset that could be useful for different research targets such as economic, biographical, architectural, or demographic ones. By superimposing a layer of the present city, the past meets the present industrial heritage, and people meet urban history.
Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities
Procedia PDF Downloads 191
114 4D Monitoring of Subsurface Conditions in Concrete Infrastructure Prior to Failure Using Ground Penetrating Radar
Authors: Lee Tasker, Ali Karrech, Jeffrey Shragge, Matthew Josh
Abstract:
Monitoring for the deterioration of concrete infrastructure is an important assessment tool for an engineer, and difficulties can be experienced when monitoring for deterioration within a structure. If a failure crack, or fluid seepage through such a crack, is observed from the surface, the source location of the deterioration is often not known. Geophysical methods are used to assist engineers with assessing the subsurface conditions of materials. Techniques such as Ground Penetrating Radar (GPR) provide information on the location of buried infrastructure such as pipes and conduits, positions of reinforcements within concrete blocks, and regions of voids/cavities behind tunnel lining. This experiment underlines the application of GPR as an infrastructure-monitoring tool to highlight and monitor regions of possible deterioration within a concrete test wall due to an increase in the generation of fractures; in particular, during a time period of applied load to a concrete wall up to and including structural failure. A three-point load was applied to a concrete test wall of dimensions 1700 x 600 x 300 mm³ in increments of 10 kN, until the wall structurally failed at 107.6 kN. At each increment of applied load, the load was kept constant and the wall was scanned using GPR along profile lines across the wall surface. The measured radar amplitude responses of the GPR profiles, at each applied load interval, were reconstructed into depth-slice grids and presented at fixed depth-slice intervals. The corresponding depth-slices were subtracted from each data set to compare the radar amplitude response between datasets and monitor for changes in the radar amplitude response. At lower values of applied load (i.e., 0-60 kN), few changes were observed in the difference of radar amplitude responses between data sets.
At higher values of applied load (i.e., 100 kN), closer to structural failure, larger differences in radar amplitude response between data sets were highlighted in the GPR data; up to a 300% increase in radar amplitude response at some locations between the 0 kN and 100 kN radar datasets. Distinct regions were observed in the 100 kN difference dataset (i.e., 100 kN-0 kN) close to the location of the final failure crack. The key regions observed were a conical feature located between approximately 3.0-12.0 cm depth from the surface and a vertical linear feature located approximately 12.1-21.0 cm depth from the surface. These key regions have been interpreted as locations exhibiting an increased change in pore-space due to increased mechanical loading, locations displaying an increase in the volume of micro-cracks, or locations showing the development of a larger macro-crack. The experiment showed that GPR is a useful geophysical monitoring tool to assist engineers with highlighting and monitoring regions of large changes in radar amplitude response that may be associated with locations of significant internal structural change (e.g., crack development). GPR is a non-destructive technique that is fast to deploy in a production setting. GPR can assist with reducing risk and costs in future infrastructure maintenance programs by highlighting and monitoring locations within the structure exhibiting large changes in radar amplitude over calendar time.
Keywords: 4D GPR, engineering geophysics, ground penetrating radar, infrastructure monitoring
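The depth-slice differencing described above amounts to a per-cell subtraction and percent-change map between co-registered amplitude grids. The sketch below is a minimal illustration of that bookkeeping, not the authors' processing chain; the grid values and the epsilon guard against division by zero are assumptions.

```python
import numpy as np

def amplitude_change_percent(baseline, loaded, eps=1e-12):
    """Per-cell percent change in radar amplitude between two co-registered
    depth slices (e.g. the 0 kN baseline and a 100 kN dataset)."""
    baseline = np.asarray(baseline, dtype=float)
    loaded = np.asarray(loaded, dtype=float)
    return 100.0 * (loaded - baseline) / (np.abs(baseline) + eps)

# A cell whose amplitude grows from 1.0 to 4.0 shows the kind of 300% increase
# reported at some locations between the 0 kN and 100 kN datasets.
change = amplitude_change_percent([[1.0, 2.0]], [[4.0, 2.2]])
```

Thresholding such a map per load increment is what isolates the conical and vertical linear features near the eventual failure crack.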
Procedia PDF Downloads 180
113 Selling Electric Vehicles: Experiences from Car Salesmen in Sweden
Authors: Jens Hagman, Jenny Janhager Stier, Ellen Olausson, Anne Y. Faxer, Ana Magazinius
Abstract:
Sweden has the second highest electric vehicle (plug-in hybrid and battery electric vehicle) sales per capita in Europe, but in relation to sales of internal combustion engine vehicles, electric vehicle sales are still minuscule (< 4%). Much research effort has been placed on various technical and user-focused barriers and enablers for the adoption of electric vehicles. Less effort has been placed on investigating the retail (dealership-customer) sales process of vehicles in general and electric vehicles in particular. Arguably, no one ought to be better informed about the needs and desires of potential electric vehicle buyers than car salesmen, owing to their daily encounters with customers at the dealership. The aim of this paper is to explore the conditions of selling electric vehicles from a car salesman’s perspective. This includes identifying barriers and enablers for electric vehicle sales originating from internal (dealership and brand) and external (customer, government) sources. In this interview study, five car brands (manufacturers) that sell both electric and internal combustion engine vehicles have been investigated. A total of 15 semi-structured interviews have been conducted (three per brand, in rural and urban settings and at different dealerships). Initial analysis reveals several barriers and enablers, experienced by car salesmen, which influence electric vehicle sales. Examples of barriers reported by car salesmen are: -Electric vehicles earn car salesmen less commission on average compared to internal combustion engine vehicles. -It takes more time to sell and deliver an electric vehicle than an internal combustion engine vehicle. -Current leasing contracts entail relatively low second-hand value estimations for electric vehicles and thus a high leasing fee, which negatively affects the attractiveness of electric vehicles for private consumers in particular.
-A high purchasing price discourages many consumers from considering electric vehicles. -The education and knowledge level regarding electric vehicles differs between car salesmen, which could affect their self-confidence in meeting well-prepared and question-prone electric vehicle buyers. Examples of identified enablers are: -Company car tax regulation promotes sales of electric vehicles; in particular, plug-in hybrid electric vehicles are sold extensively to companies (up to 95 % of sales). -The low operating cost of electric vehicles, such as fuel and service, is an advantage when understood by consumers. -The drive performance of electric vehicles (quick, silent and fun to drive) is attractive to consumers. -Environmental aspects are considered important for certain consumer groups. -Fast technological improvements, such as increased range, are opening up a wider market for electric vehicles. -For one of the brands, attractive private lease campaigns have proved effective in promoting sales. This paper gives insights into an important but often overlooked aspect of the diffusion of electric vehicles (and durable products in general): the interaction between car salesmen and customers at the critical acquiring moment, extracted through interviews with multiple car salesmen. The results illuminate untapped potential for sellers (salesmen, dealerships and brands) to mitigate sales barriers and strengthen sales enablers and thus become a more important actor in the electric vehicle diffusion process.
Keywords: customer barriers, electric vehicle promotion, sales of electric vehicles, interviews with car salesmen
Procedia PDF Downloads 229
112 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution
Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko
Abstract:
Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which results in the mapping of the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based Light Detection and Ranging systems (LIDARs) suffer from two main disadvantages: a slow and unreliable mechanical spatial scan, and the rather wide linewidth of conventional lasers, which limits speed measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz, allowing for in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, pump lasers with EDFA amplifiers made the device bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity. Backscattering can tune the laser to the eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on semiconductor lasers self-injection locked to high-quality integrated Si3N4 microresonators. We managed to obtain robust 1400-1700 nm comb generation with a 150 GHz or 1 THz line spacing and measured Lorentzian linewidths of less than 1 kHz for stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection locked laser.
A deep investigation of the SIL dynamics allowed us to find a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as a pump. Importantly, such lasers are usually more powerful than the DFB ones, which were also tested in our experiments. In order to test the advantages of the proposed techniques, we experimentally measured the minimum detectable speed of a reflective object. It has been shown that the narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, showing velocity resolution down to 16 nm/s, while the non-SIL diode laser only allowed 160 nm/s with good accuracy. The results obtained are in agreement with the estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking
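The link between laser linewidth and velocity resolution quoted above follows from the Doppler relation v = λ·Δf/2. The sketch below is a back-of-envelope illustration, not the authors' calculation; the 1550 nm wavelength and the 0.02 Hz resolvable Doppler shift are illustrative assumptions chosen to land near the reported order of magnitude.

```python
def doppler_velocity_resolution(wavelength_m, freq_resolution_hz):
    """Smallest resolvable line-of-sight velocity: v = lambda * delta_f / 2.

    The factor of 2 accounts for the round trip to the reflective target.
    """
    return wavelength_m * freq_resolution_hz / 2.0

# At a 1550 nm telecom wavelength, resolving a 0.02 Hz Doppler shift
# corresponds to ~15.5 nm/s, the order of the 16 nm/s reported here.
v_res = doppler_velocity_resolution(1.55e-6, 0.02)
```

The relation makes plain why a narrower locked-laser line (hence finer resolvable Doppler shift) translates directly into the tenfold velocity-resolution gain over the free-running diode laser.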
Procedia PDF Downloads 73
111 Mycophenolate-Induced Disseminated TB in a PPD-Negative Patient
Authors: Megan L. Srinivas
Abstract:
Individuals with underlying rheumatologic diseases such as dermatomyositis may not adequately respond to tuberculin (PPD) skin tests, creating false negative results. These illnesses are frequently treated with immunosuppressive therapy, making proper identification of TB infection imperative. A 59-year-old Filipino man was diagnosed with dermatomyositis on the basis of rash, electromyography, and muscle biopsy. He was initially treated with IVIG infusions and transitioned to oral prednisone and mycophenolate. The patient’s symptoms improved on this regimen. Six months after starting mycophenolate, the patient began having fevers, night sweats, and a productive cough without hemoptysis. He had moved from the Philippines 5 years prior to the dermatomyositis diagnosis, denied sick contacts, and was PPD negative both at immigration and immediately prior to starting mycophenolate treatment. A third PPD was negative following the onset of these new symptoms. He was treated for community-acquired pneumonia, but symptoms worsened over 10 days, and he developed watery diarrhea and a growing non-tender, non-mobile mass on the left side of his neck. A chest x-ray demonstrated a cavitary lesion in the right upper lobe suspicious for TB that had not been present one month earlier. Chest CT corroborated this finding, also exhibiting necrotic hilar and paratracheal lymphadenopathy. Neck CT demonstrated the left-sided mass as cervical chain lymphadenopathy. Expectorated sputum and stool samples contained acid-fast bacilli (AFB), with cultures growing TB bacteria. Fine-needle biopsy of the neck mass (scrofula) also exhibited AFB. Brain MRI showed nodular enhancement suspected to be a tuberculoma. Mycophenolate was discontinued, and dermatomyositis treatment was switched to oral prednisone with a 3-day course of IVIG. The patient’s infection showed sensitivity to standard RIPE (rifampin, isoniazid, pyrazinamide, and ethambutol) treatment.
Within a week of starting RIPE, the patient’s diarrhea subsided, the scrofula diminished, and symptoms significantly improved. By the end of treatment week 3, the patient’s sputum no longer contained AFB; he was removed from isolation and was discharged to continue RIPE at home. He was discharged on oral prednisone, which effectively addressed his dermatomyositis. This case illustrates the unreliability of PPD tests in patients with long-term inflammatory diseases such as dermatomyositis. Other immunosuppressive therapies (adalimumab, etanercept, and infliximab) have been associated with conversion of latent TB to disseminated TB. Mycophenolate is another immunosuppressive agent with similar mechanistic properties. Thus, it is imperative that patients with long-term inflammatory diseases and high-risk TB factors initiating immunosuppressive therapy receive a TB blood test (such as a QuantiFERON Gold assay) prior to the initiation of therapy to ensure that latent TB is unmasked before it can evolve into a disseminated form of the disease.
Keywords: dermatomyositis, immunosuppressant medications, mycophenolate, disseminated tuberculosis
Procedia PDF Downloads 208
110 New Findings on the Plasma Electrolytic Oxidation (PEO) of Aluminium
Authors: J. Martin, A. Nominé, T. Czerwiec, G. Henrion, T. Belmonte
Abstract:
The plasma electrolytic oxidation (PEO) is a particular electrochemical process to produce protective oxide ceramic coatings on light-weight metals (Al, Mg, Ti). When applied to aluminum alloys, the resulting PEO coatings exhibit improved wear and corrosion resistance because thick, hard, compact and adherent crystalline alumina layers can be achieved. Several investigations have been carried out to improve the efficiency of the PEO process, and one particular way consists in tuning a suitable electrical regime. Despite the considerable interest in this process, there is still no clear understanding of the underlying discharge mechanisms that make possible metal oxidation up to hundreds of µm through the ceramic layer. A key parameter that governs the PEO process is the numerous short-lived micro-discharges (micro-plasmas in liquid) that occur continuously over the processed surface when the high applied voltage exceeds the critical dielectric breakdown value of the growing ceramic layer. By using a bipolar pulsed current to supply the electrodes, we previously observed that micro-discharges are delayed with respect to the rising edge of the anodic current. Nevertheless, the origin of such phenomena is still not clear and needs more systematic investigation. The aim of the present communication is to identify the relationship that exists between this delay and the mechanisms responsible for the oxide growth. For this purpose, the delay of micro-discharge ignition is investigated as a function of various electrical parameters, such as the current density (J), the current pulse frequency (F), and the anodic to cathodic charge quantity ratio (R = Qp/Qn) delivered to the electrodes. The PEO process was conducted on Al2214 aluminum alloy substrates in a solution containing potassium hydroxide [KOH] and sodium silicate diluted in deionized water.
The light emitted from micro-discharges was detected by a photomultiplier, and the micro-discharge parameters (number, size, lifetime) were measured during the process by means of ultra-fast video imaging (125 kframes/s). SEM observations and roughness measurements were performed to characterize the morphology of the elaborated oxide coatings, while XRD was carried out to evaluate the amount of the corundum α-Al2O3 phase. Results show that, whatever the applied current waveform, the delay of micro-discharge appearance increases as the process goes on. Moreover, the delay is shorter when the current density J (A/dm2), the current pulse frequency F (Hz) and the ratio of charge quantity R are high. It also appears that shorter delays are associated with stronger micro-discharges (localized, long and large micro-discharges), which have a detrimental effect on the elaborated oxide layers (thin and porous). On the basis of the results, a model for the growth of the PEO oxide layers will be presented and discussed. Experimental results support that a mechanism of electrical charge accumulation at the oxide surface / electrolyte interface takes place until the dielectric breakdown occurs and thus until micro-discharges appear.
Keywords: aluminium, micro-discharges, oxidation mechanisms, plasma electrolytic oxidation
Procedia PDF Downloads 264
109 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data
Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora
Abstract:
Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour, and it is the primary indicator of drilling efficiency. Maximizing the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to pre-train the model, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Afterward, the top-rated wells, ranked by their highest observed ROP, are identified. Those wells are filtered based on NPT incidents, and a cross-plot of the controllable dynamic drilling parameters per ROP value is generated. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. 
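The two building blocks named above can be sketched compactly. The snippet below shows an IDW conditioned mean of (WOB, RPM, GPM) over offset wells, and a single random deviation of WOB and RPM within the 0-5% band as a stand-in for one CRS move; the well coordinates and parameter values are hypothetical, and the full CRS methodology would additionally track and constrain candidates by the observed ROP response.

```python
import random
import numpy as np

def idw_parameters(target_xy, offset_xy, offset_params, power=2.0):
    """Inverse Distance Weighting: estimate [WOB, RPM, GPM] at a target
    location as the distance-weighted mean of offset-well values.

    target_xy     : (2,) planned-well coordinates
    offset_xy     : (n, 2) offset-well coordinates
    offset_params : (n, 3) per-well [WOB, RPM, GPM] at the ROP of interest
    """
    d = np.linalg.norm(offset_xy - target_xy, axis=1)
    if np.any(d == 0):                 # target coincides with an offset well
        return offset_params[np.argmin(d)]
    w = 1.0 / d**power
    return (w[:, None] * offset_params).sum(axis=0) / w.sum()

def crs_step(wob, rpm, max_frac=0.05, rng=random):
    """One random-search move: deviate WOB and RPM by a random
    fraction in [-max_frac, +max_frac] (the 0-5% band)."""
    return (wob * (1 + rng.uniform(-max_frac, max_frac)),
            rpm * (1 + rng.uniform(-max_frac, max_frac)))

# Hypothetical field: three offset wells around a planned well at (1.0, 1.0)
offsets = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])
params  = np.array([[25.0, 120.0, 800.0],
                    [30.0, 140.0, 900.0],
                    [28.0, 130.0, 850.0]])   # [WOB (klbf), RPM, GPM]
wob0, rpm0, gpm0 = idw_parameters(np.array([1.0, 1.0]), offsets, params)
wob1, rpm1 = crs_step(wob0, rpm0)  # candidate setpoints for one drill-off test
```

Each (wob1, rpm1) candidate would be drilled briefly, its ROP logged into the heat-map, and the best-performing region fed back into the model.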
These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data is then consolidated into a heat-map as a function of ROP. A more optimal ROP performance is identified through the heat-map, and the model is amended accordingly. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments improved ROP efficiency by over 20%, translating to at least 10% savings in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field. Keywords: drilling optimization, geological formations, machine learning, rate of penetration
Procedia PDF Downloads 133