Search results for: geometrical attack
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1001

101 Improvement of Electric Aircraft Endurance through an Optimal Propeller Design Using Combined BEM, Vortex and CFD Methods

Authors: Jose Daniel Hoyos Giraldo, Jesus Hernan Jimenez Giraldo, Juan Pablo Alvarado Perilla

Abstract:

Range and endurance are the main limitations of electric aircraft due to the nature of their power source. Improving the efficiency of such systems is therefore highly meaningful, encouraging aircraft operation with less environmental impact. Propeller efficiency strongly affects the overall efficiency of the propulsion system; hence its optimization can have an outstanding effect on aircraft performance. An optimization method is applied to an aircraft propeller in order to maximize range and endurance by estimating the best combination of geometrical parameters, such as diameter, airfoil, and chord and pitch distributions, for a specific aircraft design at a given cruise speed; the rotational speed at which the propeller operates at minimum current consumption is then estimated. The optimization is based on the Blade Element Momentum (BEM) method, corrected to account for tip and hub losses and for Mach-number and rotational effects. Furthermore, an approximation of the airfoil lift and drag coefficients, obtained from Computational Fluid Dynamics (CFD) simulations supported by preliminary studies of grid independence and of the suitability of different turbulence models, feeds the BEM method with the aim of achieving more reliable results. Additionally, Vortex Theory is employed to find the optimum pitch and chord distributions for a minimum-induced-loss propeller design. Moreover, the optimization takes into account the well-known brushless motor model, thrust constraints from take-off runway limitations, the maximum allowable propeller diameter given the aircraft height, and the maximum motor power. The BEM-CFD method is validated by comparing its predictions for a known APC propeller against both available experimental tests and the APC performance curves, which are based on Vortex Theory fed with the NASA Transonic Airfoil code; the method fits the experimental data adequately, even better than the reported APC data.
Optimal propeller predictions are validated by wind tunnel tests, CFD propeller simulations, and a study of how the propeller would perform if it replaced that of a known aircraft. Trend charts relating a wide range of parameters, such as diameter, voltage, pitch, rotational speed, current, and propeller and electric efficiencies, are obtained and discussed. The implementation of CFD tools improves the accuracy of the BEM predictions. Results also show that a propeller exhibits higher efficiency peaks when operating at high rotational speed, owing to the higher Reynolds numbers at which the airfoils present lower drag. On the other hand, the behavior of the current consumption relative to the propulsive efficiency shows counterintuitive results: the best range and endurance are not necessarily achieved at an efficiency peak.
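As a rough illustration of the BEM core on which the optimization rests, the sketch below iterates the axial and swirl induction factors for a single propeller blade element. It is a minimal textbook form (no tip/hub-loss, Mach, or rotational corrections, and constant lift and drag coefficients) rather than the authors' implementation, and all numerical values in the usage line are assumptions.

```python
import math

def bem_section(V, omega, r, chord, B, Cl, Cd, tol=1e-8, itmax=500):
    """Fixed-point iteration of the axial (a) and swirl (ap) induction
    factors for one propeller blade element (no tip/hub-loss, Mach or
    rotational corrections; Cl, Cd held constant)."""
    sigma = B * chord / (2.0 * math.pi * r)           # local solidity
    a, ap = 0.0, 0.0
    for _ in range(itmax):
        # Inflow angle seen by the blade section.
        phi = math.atan2(V * (1.0 + a), omega * r * (1.0 - ap))
        cn = Cl * math.cos(phi) - Cd * math.sin(phi)  # thrust-wise coefficient
        ct = Cl * math.sin(phi) + Cd * math.cos(phi)  # torque-wise coefficient
        # Momentum / blade-element balance for a propeller.
        a_new = sigma * cn / (4.0 * math.sin(phi) ** 2 - sigma * cn)
        ap_new = sigma * ct / (4.0 * math.sin(phi) * math.cos(phi) + sigma * ct)
        converged = abs(a_new - a) < tol and abs(ap_new - ap) < tol
        a, ap = a_new, ap_new
        if converged:
            break
    return a, ap, phi

# Assumed example section: V = 20 m/s, 300 rad/s, r = 0.1 m, 2 blades.
a, ap, phi = bem_section(V=20.0, omega=300.0, r=0.1, chord=0.02, B=2,
                         Cl=0.8, Cd=0.02)
```

In a full BEM code this loop runs for every radial station, and thrust and torque are integrated along the blade; the CFD-derived polars described in the abstract would replace the constant Cl and Cd here.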

Keywords: BEM, blade design, CFD, electric aircraft, endurance, optimization, range

Procedia PDF Downloads 84
100 Effect of Enzymatic Hydrolysis and Ultrasounds Pretreatments on Biogas Production from Corn Cob

Authors: N. Pérez-Rodríguez, D. García-Bernet, A. Torrado-Agrasar, J. M. Cruz, A. B. Moldes, J. M. Domínguez

Abstract:

The world economy is based on non-renewable fossil fuels such as petroleum and natural gas, which entails their rapid depletion and environmental problems. In EU countries, the objective is that at least 20% of total energy supplies in 2020 should be derived from renewable resources. Biogas, a product of the anaerobic degradation of organic substrates, represents an attractive green alternative for meeting partial energy needs. Nowadays, the trend towards a circular-economy model involves the efficient use of residues by transforming them from waste into a new resource. In this sense, the characteristics of agricultural residues (which are available in plenty, renewable, and eco-friendly) favour their valorisation as substrates for biogas production. Corn cob is a by-product of maize processing, representing 18% of the total maize mass. Its importance lies in the high production of this cereal (more than 1 × 10⁹ tons in 2014). Due to its lignocellulosic nature, corn cob contains three main polymers: cellulose, hemicellulose, and lignin. The crystalline, highly ordered structures of cellulose and lignin hinder microbial attack and subsequent biogas production. For optimal lignocellulose utilization and to enhance gas production in anaerobic digestion, the materials are usually submitted to different pretreatment technologies. In the present work, enzymatic hydrolysis, ultrasound, and the combination of both technologies were assayed as pretreatments of corn cob for biogas production. The enzymatic hydrolysis pretreatment was started by adding 0.044 U of Ultraflo® L feruloyl esterase per gram of dry corn cob. Hydrolyses were carried out in 50 mM sodium phosphate buffer, pH 6.0, at a solid:liquid ratio of 1:10 (w/v), at 150 rpm and 40 °C, in darkness, for 3 hours. The ultrasound pretreatment was performed by subjecting corn cob, in 50 mM sodium phosphate buffer, pH 6.0, at a solid:liquid ratio of 1:10 (w/v), to a power of 750 W for 1 minute.
In order to observe the effect of combining both pretreatments, some samples were first sonicated and then enzymatically hydrolysed. In terms of methane production, anaerobic digestion of corn cob pretreated by enzymatic hydrolysis was positive, achieving 290 L CH₄ kg MV⁻¹ (compared with 267 L CH₄ kg MV⁻¹ obtained with untreated corn cob). Although the use of ultrasound as the only pretreatment proved detrimental (gas production decreased to 244 L CH₄ kg MV⁻¹ after 44 days of anaerobic digestion), its combination with enzymatic hydrolysis was beneficial, reaching the highest value (300.9 L CH₄ kg MV⁻¹). Consequently, the combination of both pretreatments improved biogas production from corn cob.
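The yields above can be put on a common relative scale. A minimal sketch of the improvements implied by the abstract's figures (only the yield values come from the study):

```python
# Methane yields reported in the abstract (L CH4 per kg MV).
yields = {
    "untreated": 267.0,
    "enzymatic hydrolysis": 290.0,
    "ultrasound only": 244.0,
    "ultrasound + enzymatic": 300.9,
}

def improvement_pct(pretreated, baseline):
    """Relative change in methane yield versus the untreated control, in %."""
    return 100.0 * (pretreated - baseline) / baseline

relative = {name: improvement_pct(y, yields["untreated"])
            for name, y in yields.items()}
```

This makes the abstract's conclusion explicit: enzymatic hydrolysis alone gives a modest gain, ultrasound alone a loss of similar magnitude, and the combination the largest gain.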

Keywords: biogas, corn cob, enzymatic hydrolysis, ultrasound

Procedia PDF Downloads 239
99 Spontaneous Rupture of Splenic Artery Pseudoaneurysm; A Rare Presentation of Acute Abdominal Pain in the Emergency Department: Case Report

Authors: Zainab Elazab, Azhar Aziz

Abstract:

Background: Spontaneous splenic artery pseudoaneurysm rupture is a rare, potentially life-threatening condition if not detected and managed early. We report a case of abdominal pain with intraperitoneal free fluid that turned out to be a spontaneous rupture of a splenic artery pseudoaneurysm and was treated with arterial embolization. Case presentation: A 28-year-old, previously healthy male presented to the ED with sudden-onset upper abdominal pain and a fainting attack. The patient denied any history of trauma or prior similar attacks. On examination, the patient had tachycardia and a low-normal BP (HR 110, BP 106/66), but his other vital signs were normal (temp. 37.2, RR 18 and SpO₂ 100%). His abdomen was initially soft, with mild tenderness in the upper region. Blood tests showed leukocytosis of 12.3 × 10⁹/L, Hb of 12.6 g/dl and lactic acid of 5.9 mmol/L. Ultrasound showed a trace of free fluid in the perihepatic and perisplenic areas, and a splenic hypoechoic lesion. The patient remained stable; however, his abdomen became increasingly tender, with guarding. We made a provisional diagnosis of a perforated viscus, and the patient was started on IV fluids and IV antibiotics. An erect abdominal X-ray did not show any free air under the diaphragm, so a CT of the abdomen was requested. Meanwhile, bedside ultrasound was repeated, showing an increased amount of free fluid and suggesting intra-abdominal bleeding as the most probable aetiology. The CT of the abdomen revealed a splenic injury with multiple lacerations and a focal intrasplenic enhancing area on the venous-phase scan, suggesting a pseudoaneurysm with associated splenic intraparenchymal, subcapsular and perisplenic haematomas. Free fluid in the subhepatic and intraperitoneal regions along the small bowel was also detected. An angiogram confirmed the diagnosis of a pseudoaneurysm of an intrasplenic arterial branch, and angio-embolization was performed to control the bleeding.
The patient was later discharged in good condition with surgical follow-up. Conclusion: Splenic artery pseudoaneurysm rupture is a rare cause of abdominal pain that should be considered in any case of abdominal pain with intraperitoneal bleeding. Early management is crucial, as the condition carries a high mortality. Bedside ultrasound is a useful tool for the early diagnosis of such cases.

Keywords: abdominal pain, pseudo aneurysm, rupture, splenic artery

Procedia PDF Downloads 275
98 A Theoretical Approach of Tesla Pump

Authors: Cristian Sirbu-Dragomir, Stefan-Mihai Sofian, Adrian Predescu

Abstract:

This paper studies Tesla pumps for circulating biofluids; the aim is to make a small pump for biofluid circulation. This type of pump is studied because it has the following characteristics: it has no blades, which results in very low friction forces; low production cost; increased adaptability to different types of fluids; very low cavitation (close to zero); low shocks due to the lack of blades; rare maintenance due to the low cavitation; very small turbulence in the fluid; few changes in the direction of the fluid (compared to bladed rotors); increased efficiency at low power; fast acceleration; the need for only a low torque; and no blade shocks at sudden starts and stops. All these elements are necessary to make a pump small enough to be inserted into the thoracic cavity. The pump will be designed to combat myocardial infarction. Because the pump must be inserted into the thoracic cavity, elements such as low friction forces, shocks as low as possible, low cavitation, and as little maintenance as possible are very important. The operation should be performed once, without having to replace the rotor after a certain time. Given the very small size of the pump, the blades of a classic rotor would be very thin, and sudden starts and stops could cause considerable damage or require a very expensive material. At the same time, as this is a medical procedure, a low cost is important so that the device is easily accessible to the population. The lack of turbulence and vortices caused by a classic rotor is again a key element, because when it comes to blood circulation the flow must be laminar, not turbulent; turbulent flow can even cause a heart attack. Due to these aspects, Tesla's model could be ideal for this work. Usually, the pump is considered to reach an efficiency of 40%, being used for very high powers.
However, the author of this type of pump claimed that the maximum efficiency it can achieve is 98%. The key element that could help reach this efficiency, or one as close to it as possible, is the fact that the pump will be used at low volumes and pressures. The key design variables for the best efficiency in this model are the number of rotor discs placed in parallel and the distance between them. The distance must be small, which also helps keep the pump as small as possible. The operating principle of such a rotor is to stack several parallel discs with openings cut near their centres: the space between the discs creates a suction effect, pulling the liquid through the holes in the rotor and throwing it outwards. A very important element is also the viscosity of the liquid, which dictates the distance between the discs needed to achieve a low-loss power transfer.
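The link between viscosity and disc spacing can be illustrated with a commonly cited Tesla-rotor rule of thumb that sets the gap on the order of the laminar boundary-layer thickness, b ≈ π√(ν/ω). This is not the authors' design method, and the blood viscosity and rotational speed below are assumed values for illustration only.

```python
import math

def optimal_gap(nu, omega):
    """Rule-of-thumb Tesla-rotor disc spacing, about pi * sqrt(nu / omega):
    on the order of the laminar boundary-layer thickness at angular
    speed omega, so the viscous layers on facing discs just merge."""
    return math.pi * math.sqrt(nu / omega)

nu_blood = 3.3e-6                       # kinematic viscosity of blood, m^2/s (assumed)
omega = 5000.0 * 2.0 * math.pi / 60.0   # assumed 5000 rpm, in rad/s
gap_m = optimal_gap(nu_blood, omega)    # roughly a quarter of a millimetre
```

The sub-millimetre gap this predicts is consistent with the paper's point that the spacing must be small, which in turn keeps the whole rotor stack compact.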

Keywords: lubrication, temperature, tesla-pump, viscosity

Procedia PDF Downloads 156
97 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there remain many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization are well suited to an evolutionary algorithmic approach, since the antenna parameters depend directly on weighted geometric characteristics. An evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions (elements of the function's domain) and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too varied to fit into a single function. So, weight coefficients are obtained for all feasible antenna electrical parameters and geometries, and their variation is learnt by mining the data obtained from the optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtaining the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all feasible conditions to obtain maxima and minima over the given frequency band; the boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that arise during simulation. HFSS is chosen for simulations and results.
MATLAB is used to generate the computations and combinations and for data logging; it is also used to apply machine-learning algorithms and to plot the data for designing the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters across the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as the slot-line characteristic impedance, stripline impedance, slot-line width, flare aperture size, and dielectric constant; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine-learning approach to automated antenna optimization for the Vivaldi antenna.
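The evolutionary loop described above (random initial population, fitness-based selection, recombination, mutation) can be sketched as follows. The two geometry genes, their bounds, and the stand-in fitness function are hypothetical placeholders for the HFSS-evaluated quality function, not values from the paper.

```python
import random

random.seed(42)

# Hypothetical geometry genes: flare aperture (mm) and slot-line width (mm).
BOUNDS = [(20.0, 60.0), (0.2, 2.0)]

def fitness(ind):
    """Stand-in quality function to be maximized; a real run would score
    each candidate by querying HFSS through its API."""
    aperture, slot = ind
    return -(aperture - 45.0) ** 2 - 100.0 * (slot - 0.8) ** 2

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(p1, p2):
    # Uniform recombination: each gene taken from either parent.
    return [random.choice(genes) for genes in zip(p1, p2)]

def mutate(ind, rate=0.2):
    # Gaussian perturbation, clipped back into the allowed bounds.
    return [min(hi, max(lo, g + random.gauss(0.0, 0.05 * (hi - lo))))
            if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

def evolve(pop_size=30, generations=40):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # rank by fitness
        parents = pop[:pop_size // 2]         # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because each fitness evaluation is an expensive full-wave simulation in the real workflow, the population evaluations are exactly what the paper parallelizes across HFSS instances.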

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 84
96 Using Passive Cooling Strategies to Reduce Thermal Cooling Load for Coastal High-Rise Buildings of Jeddah, Saudi Arabia

Authors: Ahmad Zamzam

Abstract:

With the development of its economy in recent years, Saudi Arabia has maintained high economic growth, and its energy consumption has therefore increased dramatically. This economic growth is reflected in the expansion of high-rise tower construction: the Jeddah coastal strip (corniche) has many high-rise buildings planned to start in the next few years. These projects require a massive amount of electricity that the old infrastructure was not planned to supply. This research studies the effect of the building envelope on thermal performance. It follows a parametric simulation methodology, using Ecotect software to analyze the effect of building envelope design on the cooling energy load of a high-rise office building in Jeddah, Saudi Arabia, including building geometrical form, massing treatments, orientation, and glazing type. The research describes an integrated passive design approach to reduce the cooling requirement of high-rise buildings through an improved building envelope design. Four simulation studies were made in Ecotect. The first compares the thermal performance of five high-rise buildings representing basic plan shapes, all with the same plan area and floor height, to find the shape with the best thermal performance. The second studies the effect of orientation by rotating the same building model to find the best and worst angles for thermal performance. The third studies the effect of massing treatment on the total cooling load, comparing five models with different massing treatments but the same total built-up area. The last studies the effect of glazing type by comparing the total cooling load of the same building using five different glass types, and also examines the feasibility of these glass types by considering their cost.
The results indicate that using a circular plan shape could reduce the thermal cooling load by 40%, and that shading devices could reduce the cooling load by 5%. The study finds that massing grooves, recesses, or any treatment that increases the exposed outer surface is not preferred and decreases the building's thermal performance. The results also show that the best direction for glazing and openings from a thermal performance viewpoint in Jeddah is north, while the worst is east; the best direction angle for openings is 15° west and the worst is 250° west (110° east). Regarding glazing type, taking air-filled double glazing as the reference case, double glazing with a low-E coating saves 14% of the required annual thermal cooling load, while argon-filled double glazing and triple glazing save 16% and 17% of the total thermal cooling load, respectively; however, on glass-cost grounds, the argon fill and triple glazing are not feasible.
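The reported glazing savings can be turned into absolute figures for any given building. Only the percentages below come from the study; the baseline annual cooling load is a hypothetical round number used purely for illustration.

```python
# Percentage savings in annual cooling load reported for each glazing type,
# relative to air-filled double glazing.
savings_pct = {
    "double, air (baseline)": 0,
    "double, air, low-E": 14,
    "double, argon": 16,
    "triple": 17,
}

# Hypothetical baseline annual cooling load, kWh (assumed for illustration).
BASELINE_KWH = 1_000_000

saved_kwh = {glazing: BASELINE_KWH * pct / 100
             for glazing, pct in savings_pct.items()}
```

Seen this way, the study's cost argument is clear: the step from low-E to argon or triple glazing buys only 2-3 extra percentage points of saving, which the glass-cost analysis found not to be worthwhile.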

Keywords: passive cooling, reduce thermal load, Jeddah, building shape, energy

Procedia PDF Downloads 102
95 Enhancement of Shelflife of Malta Fruit with Active Packaging

Authors: Rishi Richa, N. C. Shahi, J. P. Pandey, S. S. Kautkar

Abstract:

Citrus fruits rank third in area and production after banana and mango in India, and sweet oranges are the second-largest citrus fruit cultivated in the country. Andhra Pradesh, Maharashtra, Karnataka, Punjab, Haryana, Rajasthan, and Uttarakhand are the main sweet-orange-growing states. Citrus fruits occupy a leading position in the fruit trade of Uttarakhand, covering about 14.38% of the total area under fruits and contributing nearly 17.75% of total fruit production. Malta is grown in most of the hill districts of Uttarakhand. Malta common enjoys high acceptability due to its attractive colour, distinctive flavour, and taste. The excellent-quality fruits are generally available for only one or two months; owing to its short shelf life, Malta cannot be stored for a long time under ambient conditions or transported to distant places. Continuous loss of water adversely affects the quality of Malta during storage and transportation, and the methods of picking, packaging, and cold storage have detrimental effects on moisture loss. Climatic conditions such as ambient temperature, relative humidity, wind (aeration), and microbial attack greatly influence the rate of moisture loss and quality; therefore, different agro-climatic zones will have different moisture-loss patterns. The rate of moisture loss can be taken as a quality parameter in combination with one or more parameters such as RH and aeration. The moisture content of fruits and vegetables determines their freshness; hence, it is important to maintain the initial moisture status of fruits and vegetables for a prolonged period after harvest. Keeping all these points in view, an effort was made to store Malta under ambient conditions. In this study, response surface methodology and experimental design were applied to optimize the independent variables and extend the shelf life of Malta stored for four months.
A Box-Behnken design, with 12 factorial points and 5 replicates at the centre point, was used to build a model for predicting and optimizing the storage process parameters. The independent parameters, viz. scavenger (3, 4 and 5 g), polythene thickness (75, 100 and 125 gauge), and fungicide concentration (100, 150 and 200 ppm), were selected and analyzed. A 5 g scavenger, 125-gauge film, and 200 ppm fungicide solution are the optimized values for storage, which may enhance shelf life up to 4 months.
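For reference, a three-factor Box-Behnken design of the kind described (12 factorial points plus 5 centre replicates) can be generated as follows, using the factor levels from the abstract; the dictionary key names are ours.

```python
from itertools import combinations

def box_behnken(levels, n_centre=5):
    """Box-Behnken design: for every pair of factors, a 2x2 factorial at
    their low/high levels with all remaining factors held at the centre
    level, plus replicated centre points.
    `levels` maps factor name -> (low, mid, high)."""
    names = list(levels)
    runs = []
    for i, j in combinations(range(len(names)), 2):
        for a in (0, 2):              # low / high of factor i
            for b in (0, 2):          # low / high of factor j
                run = {n: levels[n][1] for n in names}   # all at centre
                run[names[i]] = levels[names[i]][a]
                run[names[j]] = levels[names[j]][b]
                runs.append(run)
    centre = {n: levels[n][1] for n in names}
    runs.extend(dict(centre) for _ in range(n_centre))
    return runs

# Factor levels taken from the abstract.
design = box_behnken({
    "scavenger_g": (3, 4, 5),
    "thickness_gauge": (75, 100, 125),
    "fungicide_ppm": (100, 150, 200),
})
```

With three factors this yields exactly the 12 + 5 = 17 runs the study reports; a second-order response surface is then fitted to the measured shelf-life responses at these points.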

Keywords: Malta fruit, scavenger, packaging, shelf life

Procedia PDF Downloads 256
94 Electrochemical Performance of Femtosecond Laser Structured Commercial Solid Oxide Fuel Cells Electrolyte

Authors: Mohamed A. Baba, Gazy Rodowan, Brigita Abakevičienė, Sigitas Tamulevičius, Bartlomiej Lemieszek, Sebastian Molin, Tomas Tamulevičius

Abstract:

Solid oxide fuel cells (SOFCs) efficiently convert hydrogen to energy without producing any disturbances or contaminants. The core of the cell is the electrolyte. To improve the performance of electrolyte-supported cells, it is desirable to extend the available exchange surface area by micro-structuring the electrolyte with laser-based micromachining. This study investigated the electrochemical performance of cells micromachined using a femtosecond laser. A commercial ceramic SOFC (Elcogen AS) with a total thickness of 400 μm was structured with a 1030 nm wavelength Yb:KGW fs laser, Pharos (Light Conversion), using a 100 kHz repetition rate and 290 fs pulses, scanned with a galvanometer scanner (ScanLab) and focused with an f-theta telecentric lens (Sill Optics). The sample height was positioned using a motorized z-stage. The microstructures were formed by laser spiral trepanning in the Ni/YSZ anode-supported membrane, at the central part of the ceramic piece, over a 5.5 mm diameter at the active area of the cell. The whole surface was drilled with 275 µm diameter holes spaced by 275 µm. The machining processes were carried out under ambient conditions. The microstructural effects of the femtosecond laser treatment on the electrolyte surface were investigated prior to the electrochemical characterisation using a Quanta 200 FEG (FEI) scanning electron microscope (SEM). A Novocontrol Alpha-A analyser was used for electrochemical impedance spectroscopy in a symmetrical cell configuration, with an excitation amplitude of 25 mV and a frequency range of 1 MHz to 0.1 Hz. The fuel-cell characterization was performed on an open-flanges test setup by Fiaxell. The cell was electrically connected using nickel mesh on the anode side and Au mesh on the cathode side, and was placed in a Kittec furnace with a Process IDentifier temperature controller. The wires were connected to a Solartron 1260/1287 frequency analyzer for impedance and current-voltage characterization.
In order to determine the impact of the anode's microstructure on the performance of the commercial cells, the results were compared to cells with an unstructured anode. Geometrical studies verified that the depth of the holes increased linearly with laser energy and number of scans, and decreased as the scanning speed increased. The electrochemical analysis demonstrates that the open-circuit voltage (OCV) values of the two cells are equal. Further, the modified cell's initial slope decreases to 0.209 from the 0.253 of the unmodified cell, revealing that the surface modification considerably decreases the energy loss. Moreover, the maximum power densities for the cell with the microstructure and the reference cell are 1.45 and 1.16 W cm⁻², respectively.

Keywords: electrochemical performance, electrolyte-supported cells, laser micro-structuring, solid oxide fuel cells

Procedia PDF Downloads 40
93 Left Atrial Appendage Occlusion vs Oral Anticoagulants in Atrial Fibrillation and Coronary Stenting. The DESAFIO Registry

Authors: José Ramón López-Mínguez, Estrella Suárez-Corchuelo, Sergio López-Tejero, Luis Nombela-Franco, Xavier Freixa-Rofastes, Guillermo Bastos-Fernández, Xavier Millán-Álvarez, Raúl Moreno-Gómez, José Antonio Fernández-Díaz, Ignacio Amat-Santos, Tomás Benito-González, Fernando Alfonso-Manterola, Pablo Salinas-Sanguino, Pedro Cepas-Guillén, Dabit Arzamendi, Ignacio Cruz-González, Juan Manuel Nogales-Asensio

Abstract:

Background and objectives: The treatment of patients with non-valvular atrial fibrillation (NVAF) who need coronary stenting is challenging. The objective of the study was to determine whether left atrial appendage occlusion (LAAO) could be a feasible option and benefit these patients. To this end, we studied the impact of LAAO plus antiplatelet drugs vs oral anticoagulants (OAC) (including direct OAC) plus antiplatelet drugs on these patients' long-term outcomes. Methods: The results of 207 consecutive patients with NVAF who underwent coronary stenting were analyzed. A total of 146 patients were treated with OAC (75 with acenocoumarol, 71 with direct OAC), while 61 underwent LAAO. The median follow-up was 35 months. Patients also received antiplatelet therapy as prescribed by their cardiologist. The study received the proper ethical oversight. Results: Age (mean 75.7 years) and past medical history of stroke were similar in both groups. However, the LAAO group had more unfavorable characteristics (more frequent history of coronary artery disease and significant bleeding [BARC ≥ 2], and higher CHA2DS2-VASc and HAS-BLED scores). The occurrence of major adverse events (death, stroke/transient ischemic attack, major bleeding) and of major cardiovascular events (cardiac death, stroke/transient ischemic attack, and myocardial infarction) was significantly higher in the OAC group than in the LAAO group: 19.75% vs 9.06% (HR, 2.18; P = .008) and 6.37% vs 1.91% (HR, 3.34; P = .037), respectively. Conclusions: In patients with NVAF undergoing coronary stenting, LAAO plus antiplatelet therapy produced better long-term outcomes than treatment with OAC plus antiplatelet therapy, despite the unfavorable baseline characteristics of the LAAO group.

Keywords: stents, atrial fibrillation, anticoagulants, left atrial appendage occlusion

Procedia PDF Downloads 23
92 Long Term Survival after a First Transient Ischemic Attack in England: A Case-Control Study

Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski

Abstract:

Transient ischaemic attacks (TIAs) are warning signs of future strokes: TIA patients are at increased risk of stroke and cardiovascular events after a first episode. Most studies of TIA have focused on the occurrence of these ancillary events after a TIA, while long-term mortality after TIA has received only limited attention. We undertook this study to determine the long-term hazards of all-cause mortality following a first episode of TIA, using anonymised electronic health records (EHRs). We performed a retrospective case-control study using electronic primary health care records from The Health Improvement Network (THIN) database. Patients born in or before 1960, resident in England, with a first diagnosis of TIA between January 1986 and January 2017, were each matched to three controls on age, sex, and general medical practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a time-varying Weibull-Cox survival model that included both scale and shape effects and a random frailty effect of GP practice. 20,633 cases and 58,634 controls were included. Cases aged 39 to 60 years at the first TIA event had the highest hazard ratio (HR) of mortality compared to matched controls (HR = 3.04, 95% CI 2.91-3.18). The HRs for cases aged 61-70 years, 71-76 years and 77+ years were 1.98 (1.55-2.30), 1.79 (1.20-2.07) and 1.52 (1.15-1.97), respectively, compared to matched controls. Aspirin provided long-term survival benefits to cases: cases aged 39-60 years on aspirin had HRs of 0.93 (0.84-1.00), 0.90 (0.82-0.98) and 0.88 (0.80-0.96) at 5, 10 and 15 years, respectively, compared to cases in the same age group who were not on antiplatelets. Similar beneficial effects of aspirin were observed in the other age groups. There were no significant survival benefits with other antiplatelet options, and no survival benefits of antiplatelet drugs were observed in controls.
Our study highlights the excess long-term risk of death of TIA patients and cautions that TIA should not be treated as a benign condition. It further suggests aspirin as a better option for secondary prevention in TIA patients than the clopidogrel recommended by NICE guidelines. Management of risk factors and treatment strategies remain important challenges in reducing the burden of disease.
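A Weibull model with covariate effects on both scale and shape, as used above, has hazard h(t|x) = (k/λ)(t/λ)^(k-1), with λ and k depending log-linearly on the covariates. The sketch below illustrates this parameterisation only; the baseline values and coefficients are invented for illustration and are not the authors' fitted estimates.

```python
import math

def weibull_cox_hazard(t, beta_scale, beta_shape, x, lam0=10.0, k0=1.2):
    """Hazard h(t | x) of a Weibull model in which covariates act on both
    the scale (lam) and the shape (k) parameter through log-linear links.
    lam0, k0 and the coefficients passed in are hypothetical."""
    lam = lam0 * math.exp(-sum(b * xi for b, xi in zip(beta_scale, x)))
    k = k0 * math.exp(sum(b * xi for b, xi in zip(beta_shape, x)))
    return (k / lam) * (t / lam) ** (k - 1.0)

# Hazard ratio of a hypothetical case (x = 1) vs control (x = 0) at 5 years.
hr_5y = (weibull_cox_hazard(5.0, [0.8], [0.1], [1])
         / weibull_cox_hazard(5.0, [0.8], [0.1], [0]))
```

Because the shape parameter also depends on the covariates, the hazard ratio changes with t, which is why such a model can report different HRs at 5, 10 and 15 years rather than a single proportional-hazards constant.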

Keywords: dual antiplatelet therapy (DAPT), General Practice, Multiple Imputation, The Health Improvement Network(THIN), hazard ratio (HR), Weibull-Cox model

Procedia PDF Downloads 119
91 Analytical, Numerical, and Experimental Research Approaches to Influence of Vibrations on Hydroelastic Processes in Centrifugal Pumps

Authors: Dinara F. Gaynutdinova, Vladimir Ya Modorsky, Nikolay A. Shevelev

Abstract:

The problem under research is that of unpredictable modes occurring in a two-stage centrifugal hydraulic pump as a result of hydraulic processes caused by vibrations of structural components. Numerical, analytical, and experimental approaches are considered. A hypothesis was developed that the unpredictable pressure decrease at the second stage of centrifugal pumps is caused by cavitation effects occurring under vibration. The problem has been studied both experimentally and theoretically. The theoretical study was conducted numerically and analytically: hydroelastic processes in the dynamic "liquid - deformed structure" system were numerically modelled and analysed. Using the ANSYS CFX engineering analysis package and the computing capacity of a supercomputer, the cavitation parameters were shown to depend on the vibration parameters, and the domain of influence of vibration amplitudes and frequencies on the concentration of cavitation bubbles was established. The numerical solution was verified using the CFM program package developed at PNRPU. The package is based on a system of differential equations in hyperbolic and elliptic partial derivatives, solved with one of the finite-difference method variants, the particle-in-cell method, which defines the solution algorithm. The numerical solution was also verified analytically by calculating model problems with known analytical solutions: in-pipe piston movement and cantilever-rod end-face impact. To verify the numerical solutions experimentally, an infrastructure was created consisting of an installation for researching fast hydrodynamic processes and a supercomputer connected by a high-speed network. The physical experiments included the measurement, recording, processing, and analysis of data on fast processes using a National Instruments signal measurement system and LabVIEW software.
During the physical experiments, the end face of the model chamber oscillated, loading the hydraulic volume. The loading frequency varied from 0 to 5 kHz, the length of the operating chamber from 0.4 to 1.0 m, and the height of the liquid column from 0.4 to 1 m; additional loads weighed from 2 to 10 kg. The liquid pressure history was recorded. The experiment showed the dependence of the forced system oscillation amplitude on the loading frequency for various values of the operating-chamber geometrical dimensions, liquid-column height, and structure weight. The maximum pressure-oscillation amplitudes (in the baseline variant) were found at loading frequencies of approximately 1.5 kHz. These results match the analytical and numerical solutions in ANSYS and CFM.
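For orientation only, a closed-pipe quarter-wave estimate, f₁ = c/(4L), gives the order of magnitude of a liquid column's first acoustic resonance for the chamber lengths used; the sound speed is an assumed water value, and the real coupled hydroelastic system (moving end face, structural compliance, cavitation) is far richer than this estimate.

```python
# Quarter-wave estimate of the first acoustic resonance of a liquid column
# closed at one end: f1 = c / (4 L).
C_WATER = 1450.0   # m/s, approximate speed of sound in water (assumed)

def quarter_wave_resonance(length_m, c=C_WATER):
    """First resonance frequency, Hz, of a column of length length_m."""
    return c / (4.0 * length_m)

# Chamber lengths used in the experiments, m.
first_modes = {L: quarter_wave_resonance(L) for L in (0.4, 0.7, 1.0)}
```

The estimate lands in the sub-kHz to kHz range, i.e. the same order as the ~1.5 kHz amplitude maxima reported, which is consistent with the observed dependence of the resonance on chamber length.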

Keywords: computing experiment, hydroelasticity, physical experiment, vibration

Procedia PDF Downloads 226
90 Corrosion Analysis of a 3-1/2” Production Tubing of an Offshore Oil and Gas Well

Authors: Suraj Makkar, Asis Isor, Jeetendra Gupta, Simran Bareja, Maushumi K. Talukdar

Abstract:

During the exploratory testing phase of an offshore oil and gas well, when the tubing string was pulled out after production testing, visible corrosion/pitting was observed in a few of the 3-1/2” API 5 CT L-80 Grade tubing. The corrosion was at the same location in all the tubing, i.e., just above the pin end. Since the corrosion was observed within two months of installation, it was a matter of concern, as it could lead to premature failures resulting in leakages and production loss, thus affecting the integrity of the asset. Therefore, the tubing was analysed to ascertain the mechanism of the corrosion occurring on its surface. Visual inspection showed that the corrosion was entirely external, near the pin end, with no significant internal corrosion. Chemical compositional analysis and mechanical properties (tensile and impact) showed that the tubing material conformed to the API 5 CT L-80 specification. Metallographic analysis of the tubing revealed a tempered martensitic microstructure, with a grain size at the pin end different from that of the base metal. The microstructure of the corroded area near the threads was clearly oriented; this oriented, cold-worked zone and the difference in microstructure indicate inappropriate heat treatment after cold work. This was substantiated by hardness test results as well, which showed higher hardness at the pin end than at the base metal. Scanning Electron Microscope (SEM) analysis revealed round, deep pits and cracks on the corroded surface of the tubing. The cracks were stress corrosion cracks formed in a corrosive environment under the residual stress that had not been relieved after cold working, as mentioned above.
Energy Dispersive Spectroscopy (EDS) analysis indicated the presence of mainly Fe₂O₃, chlorides, sulphides, and silica in the corroded part, evidence of interaction between the tubing and the well completion fluid and wellbore environment. Thus, it was concluded that residual stress remaining after the cold working of the male pins during threading and the corrosive environment acted in synergy to cause this pitting corrosion attack on the highly stressed zone along the circumference of the tubing just below the threaded area. Accordingly, the following recommendations were given to avoid the recurrence of such corrosion problems in the wells: (i) after any kind of hot or cold work, tubing should be normalized over its full length to achieve a uniform microstructure throughout; (ii) heat treatment requirements (as per API 5 CT) should be included in the technical specifications at the procurement stage.

Keywords: pin end, microstructure, grain size, stress corrosion cracks

Procedia PDF Downloads 50
89 Evaluation of Cardiac Rhythm Patterns after Open Surgical Maze-Procedures from Three Years' Experiences in a Single Heart Center

Authors: J. Yan, B. Pieper, B. Bucsky, H. H. Sievers, B. Nasseri, S. A. Mohamed

Abstract:

Since the clinical introduction of cardiac implantable electronic monitoring devices (CIMD), regular follow-up with long-term continuous monitoring of heart rhythm patterns has become feasible, helping to optimize the efficacy of medications. Extensive analysis of circadian rhythm properties can disclose the distribution of arrhythmic events, which may support appropriate medication according to a rate- or rhythm-control strategy and minimize consequent afflictions. 348 patients (69 ± 0.5 years, male 61.8%) with predisposed atrial fibrillation (AF), undergoing primary ablative therapy combined with coronary or valve operations and secondary implantation of CIMDs, were involved and divided into 3 groups: PAAF (paroxysmal AF) (n=99, male 68.7%), PEAF (persistent AF) (n=94, male 62.8%), and LSPEAF (long-standing persistent AF) (n=155, male 56.8%). All patients participated in a three-year ambulatory follow-up (3, 6, 9, 12, 18, 24, 30 and 36 months). The burden of atrial fibrillation recurrence was assessed using the cardiac monitoring devices, and attack frequencies and their circadian patterns were systematically analyzed. Anticoagulants and regular anti-arrhythmic medications were evaluated, the latter classified into rate-control and rhythm-control regimens. Patients in the PEAF group showed the least AF burden after surgical ablative procedures compared to the other two subtypes (p < 0.05). Regardless of AF subtype, recurrent AF episodes predominantly lasted less than one hour, mostly within 10 minutes (p < 0.05). Concerning the circadian distribution of recurrence, frequent AF attacks were mostly recorded in the morning in the PAAF group (p < 0.05), while patients with predisposed PEAF reported less attack-induced discomfort in the latter half of the night, and those with LSPEAF only when they were not physically active after the primary surgical ablation.
The AF subtypes presented distinct therapeutic efficacies after appropriate surgical ablative procedures, and distinct recurrence properties in terms of circadian distribution. Optimizing the medical regimen and drug dosages to maintain the therapeutic success requires closer attention to detailed assessment during long-term follow-up. The rate-control strategy plays a much more important role than rhythm control in the ongoing follow-up examinations.

Keywords: atrial fibrillation, CIMD, MAZE, rate-control, rhythm-control, rhythm patterns

Procedia PDF Downloads 129
88 Effects of Probiotic Pseudomonas fluorescens on the Growth Performance, Immune Modulation, and Histopathology of African Catfish (Clarias gariepinus)

Authors: Nelson R. Osungbemiro, O. A. Bello-Olusoji, M. Oladipupo

Abstract:

This study was carried out to determine the effects of the probiotic Pseudomonas fluorescens on the growth performance, histological examination and immune modulation of African catfish (Clarias gariepinus) challenged with Clostridium botulinum. P. fluorescens and C. botulinum were isolated from the gut, gill and skin of adult samples of Clarias gariepinus procured from commercial fish farms in Akure, Ondo State, Nigeria. Physical and biochemical tests were performed on the bacterial isolates using standard microbiological techniques for their identification. Antibacterial activity tests on P. fluorescens showed an inhibition zone with a mean value of 3.7 mm, which indicates a high level of antagonism. The experimental diets were prepared at different probiotic bacterial concentrations, comprising five treatments of different bacterial suspensions: the control (T1), T2 (10³), T3 (10⁵), T4 (10⁷) and T5 (10⁹). Three replicates were prepared for each treatment. Growth performance and nutrient utilization indices were calculated, and the proximate analysis of fish carcass and experimental diet was carried out using standard methods. After feeding for 70 days, haematological values and histological tests were obtained following standard methods; a subgroup from each experimental treatment was also challenged by intraperitoneal (I/P) inoculation with different concentrations of pathogenic C. botulinum. Statistically, there were significant differences (P < 0.05) in the growth performance and nutrient utilization of C. gariepinus. The best weight gain and feed conversion ratio were recorded in fish fed T4 (10⁷) and the poorest values in the control. Haematological analyses of C. gariepinus fed the experimental diets indicated that all fish fed diets with P. fluorescens had significantly (p < 0.05) higher white blood cell counts than the control. The results of the challenge test showed that fish fed the control diet had the highest mortality rate.
Histological examination of the gill, intestine, and liver showed several histopathological alterations in fish fed the control diet compared with those fed the P. fluorescens diets. The study indicated that the optimum level of P. fluorescens required for C. gariepinus growth and white blood cell formation is 10⁷ CFU g⁻¹, while carcass protein deposition required a concentration of 10⁵ CFU g⁻¹. The study also confirmed P. fluorescens as an efficient probiotic capable of improving the immune response of C. gariepinus against attack by a virulent fish pathogen, C. botulinum.
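The growth-performance indices mentioned in the abstract are standard in feeding trials and can be computed as below. The input figures are hypothetical, chosen only to illustrate the formulas; the abstract does not report the raw values:

```python
import math

def growth_indices(initial_g: float, final_g: float, feed_g: float, days: int):
    """Standard growth-performance indices used in fish feeding trials."""
    weight_gain = final_g - initial_g
    fcr = feed_g / weight_gain                                        # feed conversion ratio
    sgr = 100.0 * (math.log(final_g) - math.log(initial_g)) / days    # specific growth rate, %/day
    return weight_gain, fcr, sgr

# Hypothetical example: 12 g fingerlings reaching 48 g on 54 g of feed in 70 days.
wg, fcr, sgr = growth_indices(initial_g=12.0, final_g=48.0, feed_g=54.0, days=70)
print(f"gain = {wg:.1f} g, FCR = {fcr:.2f}, SGR = {sgr:.2f} %/day")
```

A lower FCR and higher SGR indicate better feed utilization, which is the sense in which the T4 (10⁷) diet outperformed the control.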

Keywords: Clarias gariepinus, Clostridium botulinum, probiotics, Pseudomonas fluorescens

Procedia PDF Downloads 119
87 Structural Health Assessment of a Masonry Bridge Using Wireless Sensors

Authors: Nalluri Lakshmi Ramu, C. Venkat Nihit, Narayana Kumar, Dillep

Abstract:

Masonry bridges are iconic heritage transportation infrastructure throughout the world. The continuous increase in traffic loads and speeds has left engineers in doubt about their structural performance and capacity; hence, the research community urgently needs to propose an effective assessment methodology and validate it on real bridges. The presented research assesses the structural health of an eighty-year-old masonry railway bridge in India using wireless accelerometer sensors. The bridge consists of 44 spans of 24.2 m each, with 13 m tall piers laid on well foundations. To calculate the dynamic characteristics of the bridge, ambient vibrations were recorded from moving traffic at various speeds and compared with a three-dimensional numerical model developed in finite element-based software. Conclusions about weaker or deteriorated piers are drawn from the comparison of frequencies obtained from the experimental tests conducted on alternate spans. Masonry, a heterogeneous anisotropic material made up of incoherent components (such as bricks, stones, and blocks), is most likely the earliest widely used construction material. Masonry bridges, typically constructed of brick and stone, are still a key feature of the world's highway and railway networks. There are 147,523 railway bridges across India, and about 15% of them are built of masonry and are around 80 to 100 years old. The cultural significance of masonry bridges cannot be overstated. Structurally, these bridges are complicated due to the presence of arches, spandrel walls, piers, foundations, and soils.
Traffic loads and vibrations, wind, rain, frost attack, high/low temperature cycles, moisture, earthquakes, river overflows, floods, scour, and soil movement under the foundations may cause material deterioration, opening of joints and ring separation in arch barrels, cracks in piers, loss of brick-stones and mortar joints, and distortion of the arch profile. A few NDT tests, such as the flat jack test, are employed to assess the homogeneity and durability of masonry structures; however, these tests have many drawbacks. A modern approach to the structural health assessment of masonry structures through vibration analysis, frequencies and stiffness properties is explored in this paper.
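The core signal-processing step, extracting a span's dominant modal frequency from an ambient-vibration record, can be sketched as follows. This is our own illustration with a synthetic signal, not the authors' processing chain; the 4.2 Hz mode, 100 Hz sampling rate and noise level are assumptions:

```python
import numpy as np

# Estimate the dominant modal frequency of a span from an ambient-vibration
# accelerometer record via the FFT magnitude spectrum.
def dominant_frequency(accel: np.ndarray, fs: float) -> float:
    """Return the frequency (Hz) of the largest spectral peak, DC excluded."""
    spectrum = np.abs(np.fft.rfft(accel - accel.mean()))  # remove mean to kill DC
    freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])

# Synthetic 60 s record: a 4.2 Hz mode buried in noise, sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 4.2 * t) + 0.5 * rng.standard_normal(t.size)
print(dominant_frequency(signal, fs))
```

Comparing such identified frequencies across alternate spans, and against the finite element model, is what allows weaker or deteriorated piers to be flagged.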

Keywords: masonry bridges, condition assessment, wireless sensors, numerical analysis modal frequencies

Procedia PDF Downloads 140
86 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. 
First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
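The alternating scheme described above can be sketched as a minimal Python toy. The grid size, learning schedule and example data below are our own assumptions; the authors' missSOM is implemented in R and optimizes a dedicated criterion, so this sketch only conveys the alternation between fitting on observed entries and imputing from the best-matching unit:

```python
import numpy as np

def masked_som_impute(X, grid=(3, 3), epochs=30, lr0=0.5, sigma0=1.0, seed=0):
    """Fit a Kohonen map on observed entries only, imputing NaNs from BMU prototypes."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mask = ~np.isnan(X)                      # True where observed
    units = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    W = rng.standard_normal((len(units), d))  # prototype vectors
    Xf = np.where(mask, X, 0.0)              # working copy, missing entries start at 0
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = max(sigma0 * (1 - epoch / epochs), 0.3)
        for idx in rng.permutation(n):
            obs = mask[idx]
            # best-matching unit over observed coordinates only
            dists = np.sum((W[:, obs] - Xf[idx, obs]) ** 2, axis=1)
            bmu = int(np.argmin(dists))
            # neighbourhood update; missing coordinates contribute zero delta
            g = np.exp(-np.sum((units - units[bmu]) ** 2, axis=1) / (2 * sigma**2))
            W += lr * g[:, None] * (np.where(obs, Xf[idx], W) - W)
            # impute this item's missing entries from its BMU prototype
            Xf[idx, ~obs] = W[bmu, ~obs]
    return Xf, W

# Two well-separated groups with one missing entry each (toy data).
X = np.array([[0.0, 0.1], [0.1, np.nan], [5.0, 5.1], [np.nan, 5.0]])
X_imputed, prototypes = masked_som_impute(X)
print(X_imputed)
```

Each missing entry ends up near the centroid of the cluster its observed coordinates place it in, which is the behaviour the criterion-based missSOM formalizes.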

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

Procedia PDF Downloads 126
85 Design of Ultra-Light and Ultra-Stiff Lattice Structure for Performance Improvement of Robotic Knee Exoskeleton

Authors: Bing Chen, Xiang Ni, Eric Li

Abstract:

With population ageing, the number of patients suffering from chronic diseases is increasing, among which stroke has a high incidence in the elderly. In addition, there is a gradual increase in the number of patients with orthopedic or neurological conditions such as spinal cord injuries, nerve injuries, and other knee injuries. These diseases are chronic, with high recurrence and complication rates, and normal walking is difficult for such patients. Robotic knee exoskeletons have been developed for individuals with knee impairments. However, the currently available robotic knee exoskeletons are generally heavy, which makes them uncomfortable for patients to wear, causes wearing fatigue, shortens the wearing time, and reduces the efficiency of the exoskeleton. Lightweight materials, such as carbon fiber and titanium alloy, have been used in the development of robotic knee exoskeletons, but they increase the cost. This paper presents the design of a new ultra-light and ultra-stiff truss-type lattice structure. The lattice structures are arranged in a fan shape, which fits well with circular arc surfaces such as circular holes, and can be utilized in the design of rods, brackets, and other parts of a robotic knee exoskeleton to reduce weight. The metamaterial is formed by the continuous arrangement and combination of small truss-structure unit cells, varying the strut cross-section diameter, geometrical size, and relative density of each unit cell. It can be made quickly through additive manufacturing techniques such as metal 3D printing. Because the truss unit cell is small, machined parts of the robotic knee exoskeleton, such as connectors, rods, and bearing brackets, can be filled and replaced by gradient and non-uniform cell arrangements.
Provided the mechanical requirements of the robotic knee exoskeleton are satisfied, the weight of the exoskeleton is reduced; hence, the patient's wearing fatigue is relieved and the wearing time is increased, improving the efficiency, wearing comfort, and safety of the exoskeleton. In this paper, a brief description of the hardware design of the prototype robotic knee exoskeleton is first presented. Next, the design of the ultra-light and ultra-stiff truss-type lattice structure is proposed, and the mechanical analysis of the single unit cell is performed by establishing a theoretical model. Additionally, simulations are performed to evaluate the maximum stress-bearing capacity and compressive performance of uniform and gradient arrangements of the cells. Finally, static analyses are performed for a cell-filled rod and an unmodified rod, and the simulation results demonstrate the effectiveness and feasibility of the designed ultra-light and ultra-stiff truss-type lattice structures. In future studies, experiments will be conducted to further evaluate the performance of the designed lattice structures.
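The relative density that the design varies per unit cell has a simple first-order geometric definition: total strut volume over cell volume. The sketch below uses a hypothetical cell (not the paper's geometry) and ignores node overlaps; for stretch-dominated lattices, effective stiffness scales roughly linearly with this quantity:

```python
import math

def relative_density(strut_d: float, strut_lengths: list, cell_volume: float) -> float:
    """Relative density = total strut volume / unit-cell volume (node overlaps ignored)."""
    area = math.pi * (strut_d / 2.0) ** 2   # strut cross-section area
    return area * sum(strut_lengths) / cell_volume

# Hypothetical cell: a 1 mm cube filled with six 1 mm struts of 0.1 mm diameter.
rho = relative_density(strut_d=0.1, strut_lengths=[1.0] * 6, cell_volume=1.0)
print(f"relative density = {rho:.3f}")
```

Grading the strut diameter across the part, as the abstract describes, amounts to varying `strut_d` cell by cell to trade stiffness against weight.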

Keywords: additive manufacturing, lattice structures, metamaterial, robotic knee exoskeleton

Procedia PDF Downloads 76
84 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved over the past years as an important means of data authentication and ownership protection. Image and video watermarking is well established in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and video, where frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations that may be hard to tackle, which makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it remains difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; it can act on either the topology or the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has proven useful for hiding data; doing so with minimal surface distortion, however, has attracted significant research in the field. A blind 3D mesh watermarking technique is proposed in this research. The method depends on modifying the vertices' positions with respect to the center of the object.
An optimal method will be developed to reduce the errors, minimizing the distortion that the 3D object may experience due to the watermarking process, and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimizations were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated; receiver operating characteristic (ROC) curves were drawn to validate the test cases and confirmed robustness in this respect as well. 3D watermarking is still a new, but promising, field.
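The geometric quantities the method manipulates, vertex norms measured from the object's center and their per-bin statistics, can be computed as below. The embedding rule itself is the paper's contribution and is not reproduced here; this sketch only shows the raw quantities:

```python
import numpy as np

def vertex_norms(vertices: np.ndarray) -> np.ndarray:
    """Distance of every vertex from the mesh centroid."""
    center = vertices.mean(axis=0)
    return np.linalg.norm(vertices - center, axis=1)

def bin_variances(norms: np.ndarray, n_bins: int) -> np.ndarray:
    """Variance of the norms falling in each of n_bins equal-width bins."""
    edges = np.linspace(norms.min(), norms.max(), n_bins + 1)
    idx = np.clip(np.digitize(norms, edges) - 1, 0, n_bins - 1)
    return np.array([norms[idx == b].var() if np.any(idx == b) else 0.0
                     for b in range(n_bins)])

# Toy mesh: the 8 vertices of a unit cube are all equidistant from the centroid.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)
norms = vertex_norms(cube)
print(norms)
```

A watermark bit would then be embedded by nudging vertices so that the variance of the norms in a chosen bin moves above or below a reference value; norms are invariant to rotation and translation, which is one reason such schemes resist common geometric attacks.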

Keywords: watermarking, mesh objects, local roughness, Laplacian Smoothing

Procedia PDF Downloads 137
83 Endocrine Therapy Resistance and Epithelial to Mesenchymal Transition Inhibited by INT3 & Quercetin in MCF7 Cell Lines

Authors: D. Pradhan, G. Tripathy, S. Pradhan

Abstract:

Objectives: Resistance to estrogen therapies is a major cause of disease relapse and mortality in estrogen receptor alpha (ERα)-positive breast cancers. Tamoxifen or estrogen withdrawal increases the dependence of breast cancer cells on INT3 signalling. Here, we investigated the contribution of Quercetin and INT3 signalling in endocrine-resistant breast cancer cells. Methods: We used two models of endocrine-therapy-resistant (ETR) breast cancer: Tamoxifen-resistant (TamR) and long-term estrogen-deprived (LTED) MCF7 cells. We assessed the migratory and invasive capacity of these cells by Transwell assays. Expression of epithelial-to-mesenchymal transition (EMT) regulators as well as INT3 receptors and targets was evaluated by real-time PCR and western blot analysis. Furthermore, we tested in-vitro anti-Quercetin monoclonal antibodies (mAbs) and gamma-secretase inhibitors (GSIs) as potential EMT-reversal therapeutic agents. Finally, we generated stable Quercetin-overexpressing MCF7 cells and evaluated their EMT features and response to Tamoxifen. Results: We found that ETR cells acquired an epithelial-to-mesenchymal transition (EMT) phenotype and displayed increased levels of Quercetin and INT3 targets. Interestingly, we detected higher levels of INT3 but lower levels of INT1 and INT3, suggesting a switch to signalling through different INT3 receptors after acquisition of resistance. Anti-Quercetin monoclonal antibodies and the GSI PF03084014 were effective in blocking the Quercetin/INT3 axis and partially inhibiting the EMT process. As a consequence, cell migration and invasion were attenuated, and the stem cell-like population was significantly reduced. Genetic silencing of Quercetin and INT3 led to equivalent effects. Finally, stable overexpression of Quercetin was sufficient to render MCF7 cells unresponsive to Tamoxifen through INT3 activation.
Conclusions: ETR cells express high levels of Quercetin and INT3, whose activation ultimately drives invasive behaviour. Anti-Quercetin mAbs and the GSI PF03084014 reduce the expression of EMT molecules, decreasing cell invasiveness. Quercetin overexpression induces Tamoxifen resistance linked to acquisition of an EMT phenotype. Our findings suggest that targeting Quercetin and INT3 warrants further clinical evaluation as a valid therapeutic strategy in endocrine-resistant breast cancer.

Keywords: endocrine, epithelial, mesenchymal, INT3, quercetin, MCF7

Procedia PDF Downloads 279
82 Integrating System-Level Infrastructure Resilience and Sustainability Based on Fractal: Perspectives and Review

Authors: Qiyao Han, Xianhai Meng

Abstract:

Urban infrastructures are the fundamental facilities and systems that serve cities. Due to global climate change and human activities in recent years, many urban areas around the world face enormous challenges from natural and man-made disasters, such as floods, earthquakes and terrorist attacks. For this reason, urban resilience to disasters has attracted increasing attention from researchers and practitioners. Given the complexity of infrastructure systems and the uncertainty of disasters, this paper suggests that studies of resilience could focus on urban functional sustainability (in its social, economic and environmental dimensions) as supported by infrastructure systems under disturbance. Urban infrastructure systems with high resilience should be able to reconfigure themselves without significant declines in critical functions (services), such as primary productivity, hydrological cycles, social relations and economic prosperity. Although some methods have been developed to integrate the resilience and sustainability of individual infrastructure components, more work is needed to enable system-level integration. This research presents a conceptual analysis framework for integrating resilience and sustainability based on fractal theory. The ability of an ecological system to maintain structure and function in the face of disturbance, and to reorganize following disturbance-driven change, is largely dependent on its self-similar and hierarchical fractal structure, in which cross-scale resilience is produced by the replication of ecosystem processes dominating at different levels. Urban infrastructure systems are analogous to ecological systems: they are complex and adaptive, are composed of interconnected components, and exhibit characteristic scaling properties.
Therefore, analyzing the resilience of ecological systems provides a better understanding of the dynamics and interactions of infrastructure systems. This paper discusses the fractal characteristics of ecosystem resilience, reviews literature on system-level infrastructure resilience, identifies resilience criteria associated with the sustainability dimensions, and develops a conceptual analysis framework. Exploring the relevance of the identified criteria to fractal characteristics reveals great potential for analyzing infrastructure systems on a fractal basis. In the conceptual analysis framework, it is proposed that, in order to be resilient, an urban infrastructure system needs to be capable of “maintaining” and “reorganizing” multi-scale critical functions under disasters. Finally, the paper identifies areas where further research efforts are needed.

Keywords: fractal, urban infrastructure, sustainability, system-level resilience

Procedia PDF Downloads 244
81 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today's websites host very interesting applications, but only a few methodologies exist to analyze user navigation through a website and determine whether the website is being put to its intended use. Web logs are typically consulted only when a major attack or malfunction occurs, yet they contain a wealth of information about the users of a system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is difficult because of the size and distribution of the logs and the importance of minor details in each entry. Web logs thus contain very important data about users and the site that is not being put to good use: retrieving interesting information from logs gives an idea of what users need, groups users according to their various needs, and helps improve the site to build an effective and efficient one. The model we built is able to detect attacks or malfunctioning of the system and perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the web site grow. Unsupervised techniques are used in this fully automated solution; expert knowledge is used only for validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices; the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to obtain the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance and the silhouette coefficient as parameters, these algorithms self-evaluate to choose better parametric values for the next run. If a cluster is found to be too large, micro-clustering is used.
Using the Cluster Signature module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Association Rule Learning module; if it outputs confidence and support of value 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used in anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
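The clustering-with-self-evaluation loop can be sketched with scikit-learn. The session vectors and candidate `eps` values below are made up for illustration; the paper derives sessions from real web logs and also iterates with EM and further metrics:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

# Synthetic "session vectors" (e.g. per-URL access counts) forming two
# behaviour groups; real sessions would come from the Web Sessions file.
rng = np.random.default_rng(42)
sessions = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(40, 5)),
    rng.normal(loc=3.0, scale=0.3, size=(40, 5)),
])

best = None
for eps in (0.5, 1.0, 1.5):                     # self-evaluation over parameters
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(sessions)
    if len(set(labels) - {-1}) < 2:             # silhouette needs >= 2 clusters
        continue
    score = silhouette_score(sessions, labels)
    if best is None or score > best[0]:
        best = (score, eps, labels)

score, eps, labels = best
print(f"best eps = {eps}, silhouette = {score:.2f}")
```

In the paper's pipeline, the winning clustering would then be passed to the Cluster Signature module for fingerprint extraction.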

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 258
80 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load-monitoring method used for disaggregation purposes: it identifies individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection and feature extraction, and then general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps, using unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and facilitates the extraction of the specific features used for general appliance modeling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated on both the data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics: accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
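As a sketch of how DTW can match an extracted event window against general appliance templates, the classic dynamic-programming recurrence can be written in a few lines. The templates and the observed window below are invented for illustration; they are not taken from the paper's LPG or REDD data.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D power traces."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Hypothetical appliance templates: mean active power (W) sampled at 1/60 Hz.
templates = {
    "fridge": [0, 90, 95, 92, 90, 0],
    "kettle": [0, 1800, 1850, 0, 0, 0],
}

# An observed event window cut around a detected state transition.
observed = [0, 85, 93, 90, 88, 0]

# Classify the event as the template with the smallest DTW distance.
label = min(templates, key=lambda k: dtw_distance(observed, templates[k]))
print(label)  # fridge
```

Because DTW aligns sequences elastically in time, the match tolerates the slight stretching and level shifts that low-sampling-rate traces exhibit.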

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 47
79 Deployment of Armed Soldiers in European Cities as a Source of Insecurity among Czech Population

Authors: Blanka Havlickova

Abstract:

In the last ten years, growing numbers of troops with machine guns have been serving on the streets of European cities. We can see them around government buildings, major transport hubs, synagogues, galleries and main tourist landmarks. Authorities declare that the main purposes of the armed soldiers' presence in European cities are the prevention of terrorist attacks and psychological support for tourists and the domestic population. The main objective of the following study is to find out whether the deployment of armed soldiers in European cities has a calming and reassuring effect on Czech citizens (whether the presence of armed soldiers makes the Czech population feel more secure) or rather becomes a stress factor (the presence of soldiers standing guard in full military fatigues recalls serious criminality and terrorist attacks, which is reflected in the fears and insecurity of the Czech population). The initial hypothesis of this study is connected with priming theory: the idea that exposure to an image (an armed soldier) makes us unconsciously focus on a topic connected with this image (terrorism). This paper is based on a quantitative public survey, carried out as an electronic questionnaire among citizens of the Czech Republic. Respondents answered 14 questions about two European cities, London and Paris. Besides general questions investigating the respondents' awareness of these cities, some of the questions focused on the fear the respondents felt when picturing themselves leaving next Monday for the given city (London or Paris). The questions about the respondents' travel fears and concerns were accompanied by different photos. 
When answering the question about fear, some respondents were presented with a photo of the Westminster Palace or the Eiffel Tower with ordinary citizens only, while other respondents were presented with a picture of the same landmark showing not only ordinary citizens but also one soldier holding a machine gun. The main goal of this paper is to analyse and compare the data on the concerns of these two groups of respondents (presented with different pictures) and to find out if and how an armed soldier with a machine gun in front of the Westminster Palace or the Eiffel Tower affects the public's concerns about visiting the site. In other words, the aim of this paper is to confirm or rebut the hypothesis that the sight of a soldier with a machine gun in front of the Eiffel Tower or the Westminster Palace automatically triggers an association with a terrorist attack, leading to an increase in fear and insecurity among the Czech population.
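The planned between-group comparison can be sketched with a Welch two-sample t statistic on fear ratings; the abstract does not specify a test or a rating scale, so the 1-5 ratings and group sizes below are entirely hypothetical.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical 1-5 fear ratings from the two respondent groups
# (photo without / with an armed soldier); all values are invented.
no_soldier   = [2, 1, 3, 2, 2, 1, 3, 2]
with_soldier = [3, 4, 3, 2, 4, 3, 4, 3]

# Welch's two-sample t statistic: does the soldier photo raise reported fear?
m1, m2 = mean(no_soldier), mean(with_soldier)
v1, v2 = stdev(no_soldier) ** 2, stdev(with_soldier) ** 2
t = (m2 - m1) / sqrt(v1 / len(no_soldier) + v2 / len(with_soldier))
print(round(m1, 2), round(m2, 2), round(t, 2))
```

A clearly positive t with these made-up numbers would point toward the priming hypothesis; the actual survey data could of course go either way.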

Keywords: terrorism, security measures, priming, risk perception

Procedia PDF Downloads 225
78 A Battle of Identity(ies): Deconstructing Spaces of Belonging in Saleem Haddad’s Guapa and Hasan Namir’s God in Pink

Authors: Nour Aladdin

Abstract:

This paper explores the interconnectedness of belonging, space, and identity in Anglo-Arab literature, particularly Saleem Haddad's Guapa and Hasan Namir's God in Pink. This paper suggests that Rasa and Ramy, the queer Arab protagonists of the respective novels, belong in neither the Middle East nor the West. Using Amin Maalouf's analysis of Arab identity, specifically his argument that an individual identifies most strongly with the aspect of their identity that is under attack, this paper argues that all of Rasa's and Ramy's spaces are politically charged - a term denoting that all values and beliefs instilled in Arabs and their spaces are heavily influenced by Arab politics, culture and, oftentimes, religion. Therefore, the politically charged environments Rasa and Ramy inhabit will always be set against one part of their identity, which is why they cannot identify as queer and Arab simultaneously. For Rasa, the unnamed Middle Eastern country, his home environment, as well as the so-called safe-space nightclub, condemn his queerness, leading him to connect more with his sexual orientation. However, Rasa associates himself with his Arab roots when he migrates to America, a different form of politically charged space that minoritizes his ethnicity. Similarly, Ramy's spaces became religiopolitical after the influence of Islam heightened in Iraq during the Iraq War; as a result, Ramy's home environment, Sheikh Ammar's house, the mosque and the nightclub are shaped by religiopolitics and obstruct his ability to identify not only as a queer Arab but as a queer Arab Muslim. Ultimately, because Rasa and Ramy are constantly in movement, their identity attributes are also in movement. This paper is divided into three sections. The first section focuses on Guapa and the politics of the Arab Spring, mainly its influence on queer Arabs in and around the Middle East. 
Drawing on a number of queer and Arab gender theories, I analyze all of Rasa's spaces as politically charged spaces that deny him the means to be queer and Arab. The second section examines God in Pink in close connection with the 2003 invasion of Iraq. Ramy's spaces are religiopolitically charged and prevent him from embracing all of his identity attributes – nationality, ethnicity, sexual orientation, and religious affiliation – concomitantly. The last section considers the rapid spread of technology and social media in the Middle East as a means of providing deviant heterotopic spaces for queer Arabs. With the rise of subtle and covert queer heterotopias, there is a slow and steady shift toward queer tolerance in the Arab world.

Keywords: belonging, identity, spaces, queer, arabness, middle east, orientalism

Procedia PDF Downloads 79
77 Analytical Technique for Definition of Internal Forces in Links of Robotic Systems and Mechanisms with Statically Indeterminate and Determinate Structures Taking into Account the Distributed Dynamical Loads and Concentrated Forces

Authors: Saltanat Zhilkibayeva, Muratulla Utenov, Nurzhan Utenov

Abstract:

Distributed inertia forces of a complex nature appear in the links of rod mechanisms during motion. Such loads raise a number of problems, such as destruction caused by large inertia forces; the elastic deformation of the mechanism can also be considerable and can put the mechanism out of action. In this work, a new analytical approach is proposed for the definition of internal forces in the links of robotic systems and mechanisms with statically indeterminate and determinate structures, taking into account distributed inertial and concentrated forces. The relations between the intensity of the distributed inertia forces and the link weight, on the one hand, and the geometrical, physical and kinematic characteristics, on the other, are determined in this work. The distribution laws of the inertia forces and dead weight make it possible, at each position of the links, to deduce the laws of distribution of the internal forces along the axis of the link, from which the loads are found at any point of the link. The approximation matrices of the forces of an element under the action of distributed inertia loads with trapezoidal intensity are defined. The obtained approximation matrices establish the dependence between the force vector in any cross-section of the element and the force vectors in the calculated cross-sections, and also allow defining the physical characteristics of the element, i.e., the compliance matrix of the discrete elements. Hence, the compliance matrices of an element under the action of distributed inertial loads of trapezoidal shape along the axis of the element are determined. The internal loads of each continual link are unambiguously determined by a set of internal loads in its separate cross-sections and by the approximation matrices. Therefore, the task is reduced to the calculation of the internal forces in a finite number of cross-sections of the elements, which leads to a discrete model of the elastic calculation of the links of rod mechanisms. 
The discrete model of the elements of mechanisms and robotic systems, and their discrete model as a whole, are constructed. The dynamic equilibrium equations for the discrete model of the elements are also derived in this work, as are the equilibrium equations of the pin and rigid joints, expressed through the required parameters of the internal forces. The obtained systems of dynamic equilibrium equations are sufficient for the definition of the internal forces in the links of mechanisms whose structure is statically determinate. For the determination of the internal forces of statically indeterminate mechanisms, it is necessary to build a compliance matrix for the entire discrete model of the rod mechanism, which is achieved in this work. As a result, programs were developed in the MAPLE 18 system by means of the proposed technique, and animations were obtained of the motion of fourth-class mechanisms of statically determinate and statically indeterminate structures, with the intensity of the transverse and axial distributed inertial loads, the bending moments, and the transverse and axial forces plotted along the links as functions of the kinematic characteristics of the links.
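As a minimal illustration of the kind of cross-sectional force laws the approximation matrices encode, consider a single cantilever link under a trapezoidal distributed load. The geometry and load intensities below are invented, and the closed-form shear and moment follow simply from equilibrium of the free part of the link; the paper's method generalizes this to full mechanisms with joints.

```python
# Cantilever link fixed at x = 0, free at x = L, under a trapezoidal
# distributed load q(x) = q0 + (q1 - q0) * x / L  (N/m). Values illustrative.
L, q0, q1 = 1.0, 100.0, 300.0
c = (q1 - q0) / L  # slope of the load intensity

def shear(x):
    """Internal shear force V(x) = integral of q(s) over [x, L]."""
    return q0 * (L - x) + c * (L * L - x * x) / 2.0

def moment(x):
    """Internal bending moment M(x) = integral of q(s) * (s - x) over [x, L]."""
    return q0 * (L - x) ** 2 / 2.0 + c * (L**3 / 3.0 - x * L**2 / 2.0 + x**3 / 6.0)

# At the fixed end the whole trapezoidal resultant is felt; at the free end
# both internal forces vanish, as equilibrium requires.
print(shear(0.0), round(moment(0.0), 3))  # 200.0 116.667
```

Evaluating these laws at a finite set of calculated cross-sections, as the abstract describes, is what reduces the elastic calculation to a discrete model.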

Keywords: distributed inertial forces, internal forces, statically determinate mechanisms, statically indeterminate mechanisms

Procedia PDF Downloads 192
76 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks

Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba

Abstract:

The ability of vehicles to communicate with other vehicles (V2V), with the physical (V2I) and network (V2N) infrastructures, with pedestrians (V2P), etc. – collectively known as V2X (Vehicle-to-Everything) – will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion and supporting autonomous driving. The telecommunication research and industry communities and the standardization bodies (notably 3GPP) have approved, in Release 14, cellular connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it essential to protect V2X messages from attacks that can result in catastrophically wrong decisions or actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM) and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. 
However, this protocol suffers from technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control-plane signaling overloads, as well as privacy preservation issues, and it therefore cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways in which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, allowing security operations with the minimal overhead cost that is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services under the LTE network while meeting the V2X security requirements.
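The mutual-authentication idea behind AKA can be sketched as a toy challenge-response exchange over a pre-shared key. Real LTE AKA uses the MILENAGE algorithm set, sequence numbers and a USIM-held key, so the HMAC construction below is only a stand-in to show the two-way check (UE verifies the network via AUTN before answering with RES).

```python
import hmac, hashlib, os

# Toy AKA-style mutual authentication between a vehicle UE and the core
# network, both holding a pre-shared key K. Labels and key size are
# illustrative, not the 3GPP parameters.
K = os.urandom(32)

def network_challenge():
    """Network side: issue RAND, a network-authentication token and XRES."""
    rand = os.urandom(16)
    autn = hmac.new(K, b"net-auth" + rand, hashlib.sha256).digest()
    xres = hmac.new(K, b"ue-resp" + rand, hashlib.sha256).digest()
    return rand, autn, xres

def ue_respond(rand, autn):
    """UE side: authenticate the network first, then compute the response."""
    expected = hmac.new(K, b"net-auth" + rand, hashlib.sha256).digest()
    if not hmac.compare_digest(autn, expected):
        raise ValueError("network authentication failed")
    return hmac.new(K, b"ue-resp" + rand, hashlib.sha256).digest()

rand, autn, xres = network_challenge()
res = ue_respond(rand, autn)
print(hmac.compare_digest(res, xres))  # True
```

The signaling cost the abstract criticizes comes from running such an exchange against the home network on every attach or handover, which is what the proposed distributed mechanism aims to avoid.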

Keywords: authentication, long term evolution, security, vehicle-to-everything

Procedia PDF Downloads 143
75 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), it is a very important step to analyze the vulnerability (or survivability) of the AGCV against an enemy's attack. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and to check whether a bullet can penetrate the armor of the AGCV, which would cause damage to internal components or the crew. The penetration equations are derived from penetration experiments, which require a long time and great effort; moreover, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating the penetration depth. However, the targets must be modeled and the input parameters selected carefully in order to obtain an accurate penetration depth. This paper performs a sensitivity analysis of the effect of the ANSYS input parameters on the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters for ANSYS was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary condition, material properties and target diameter, were tested and selected to minimize the error between the simulation results and the experimental data taken from papers on the penetration equations. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to optimize the overall performance. The analysis found the following: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 
2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a greater penetration depth than the one with both the side and rear surfaces fixed. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiment. With the simulation tool ANSYS and carefully tuned input parameters, penetration analysis can be done on a computer without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and the published papers provide them only for a limited set of target materials. The next step of this research is to generalize this approach to predict the penetration depth by interpolating from the known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early modelling-and-simulation stage of the AGCV design process.
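The accuracy-versus-time trade-off described above can be sketched as a small parameter sweep: compute the error of each run against the experimental depth, then keep the coarsest mesh that stays within a tolerance. The depths, CPU times and tolerance below are invented for illustration, not the paper's ANSYS results.

```python
from math import sqrt

# Illustrative sensitivity table: simulated penetration depth (mm) and
# CPU time (s) per mesh size, against one experimental depth. Invented data.
experiment = 42.0
runs = {            # mesh size (mm) -> (simulated depth mm, CPU time s)
    0.9: (38.1, 120),
    0.7: (40.2, 310),
    0.5: (41.6, 990),
}

def rms_error(sim_values, exp_value):
    """RMS error of a set of simulated depths against the experiment."""
    return sqrt(sum((s - exp_value) ** 2 for s in sim_values) / len(sim_values))

rms = rms_error([depth for depth, _ in runs.values()], experiment)

# Pick the coarsest mesh whose individual error stays within a tolerance,
# trading accuracy against calculation time.
tol = 1.0
ok = [h for h, (depth, _) in runs.items() if abs(depth - experiment) <= tol]
best = max(ok)  # coarsest acceptable mesh size
print(round(rms, 2), best)  # 2.49 0.5
```

With these made-up numbers only the finest mesh meets the tolerance; a real study would sweep the other parameters (boundary condition, yield stress, diameter) the same way.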

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 362
74 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on its power demand, and then on detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are generated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of the people inside a house in order to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. 
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated on both the data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics: accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
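The confusion-matrix evaluation mentioned above can be sketched for a binary "appliance on/off" detector; the ground-truth and predicted label vectors below are invented for illustration, not taken from the LPG or REDD data.

```python
# Confusion-matrix-based metrics for a binary appliance on/off detector.
# Labels are illustrative only (1 = appliance on, 0 = off).
truth = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
pred  = [1, 0, 0, 0, 1, 1, 1, 0, 0, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))  # true positives
tn = sum(t == 0 and p == 0 for t, p in zip(truth, pred))  # true negatives
fp = sum(t == 0 and p == 1 for t, p in zip(truth, pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(truth, pred))  # false negatives

accuracy   = (tp + tn) / len(truth)
precision  = tp / (tp + fp)
recall     = tp / (tp + fn)
error_rate = (fp + fn) / len(truth)

print(accuracy, precision, recall, error_rate)  # 0.8 0.8 0.8 0.2
```

For multi-appliance identification the same counts are taken per class from the full confusion matrix and then averaged.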

Keywords: general appliance model, non intrusive load monitoring, events detection, unsupervised techniques

Procedia PDF Downloads 49
73 Understanding the Role of Nitric Oxide Synthase 1 in Low-Density Lipoprotein Uptake by Macrophages and Implication in Atherosclerosis Progression

Authors: Anjali Roy, Mirza S. Baig

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the formation of lipid-rich plaque enriched with a necrotic core, modified lipid accumulation, smooth muscle cells, endothelial cells, leucocytes and macrophages. Macrophage foam cells play a critical role in the occurrence and development of inflammatory atherosclerotic plaque. Foam cells are the fat-laden macrophages present at the initial stage of atherosclerotic lesion formation. Foam cells are an indication of plaque build-up, or atherosclerosis, which is commonly associated with an increased risk of heart attack and stroke as a result of arterial narrowing and hardening. The mechanisms that drive atherosclerotic plaque progression remain largely unknown. Dissecting the molecular mechanism involved in the process of macrophage foam cell formation will help to develop therapeutic interventions for atherosclerosis. To investigate the mechanism, we studied the role of nitric oxide synthase 1 (NOS1)-mediated nitric oxide (NO) in low-density lipoprotein (LDL) uptake by bone-marrow-derived macrophages (BMDM). Using confocal microscopy, we found that incubation of macrophages with the NOS1 inhibitor TRIM (1-(2-trifluoromethylphenyl) imidazole) or L-NAME (N-omega-nitro-L-arginine methyl ester) prior to LDL treatment significantly reduces LDL uptake by BMDM. Further, addition of an NO donor (DEA NONOate) to NOS1-inhibitor-treated macrophages recovers the LDL uptake. Our data strongly suggest that NOS1-derived NO regulates LDL uptake by macrophages and foam cell formation. Moreover, we also measured proinflammatory cytokine mRNA expression by real-time PCR in BMDM treated with LDL and copper-oxidized LDL (OxLDL) in the presence and absence of the inhibitor. Normal LDL did not evoke cytokine expression, whereas OxLDL induced proinflammatory cytokine expression that was significantly reduced in the presence of the NOS1 inhibitor. 
Rapid formation of NOS1-derived NO and its stable derivatives acts as a signal for inducible NOS2 expression in endothelial cells, leading to disruption and dysfunction of the endothelial lining of the vascular wall. This study highlights the role of NOS1 as a critical player in foam cell formation and reveals much about the key molecular proteins involved in atherosclerosis. Thus, targeting NOS1 would be a useful strategy for reducing LDL uptake by macrophages at an early stage of the disease and hence dampening atherosclerosis progression.

Keywords: atherosclerosis, NOS1, inflammation, oxidized LDL

Procedia PDF Downloads 98
72 Gluten Intolerance, Celiac Disease, and Neuropsychiatric Disorders: A Translational Perspective

Authors: Jessica A. Hellings, Piyushkumar Jani

Abstract:

Background: Systemic autoimmune disorders are increasingly implicated in neuropsychiatric illness, especially in the setting of treatment resistance, in individuals of all ages. Gluten allergy in its fullest extent results in celiac disease, affecting multiple organs including the central nervous system (CNS). Clinicians often lack awareness of the association between neuropsychiatric illness and gluten allergy, partly because many such research studies are published in immunology and gastroenterology journals. Methods: Following a PubMed literature search and online searches on celiac disease websites, 40 articles were critically reviewed in detail. This work reviews celiac disease and gluten intolerance and the current evidence of their relationship to neuropsychiatric and systemic illnesses. The review also covers current work-up and diagnosis, as well as dietary interventions, gluten restriction outcomes, and future research directions. Results: In susceptible individuals, gluten allergy damages the small intestine, producing a leaky gut and a malabsorption state, and allowing antibodies into the bloodstream, where they attack major organs. The lack of amino acid precursors for neurotransmitter synthesis, together with antibody-associated brain changes and hypoperfusion, may result in neuropsychiatric illness. This is well documented; however, studies in neuropsychiatry are often small. In the large CATIE trial, subjects with schizophrenia had significantly increased antibodies to tissue transglutaminase (TTG) and antigliadin antibodies, both significantly greater than in control subjects. On later follow-up, TTG-6 antibodies were identified in these subjects' brains but not in their intestines. Significant evidence, mostly from small studies, also exists linking gluten allergy and celiac disease to depression, anxiety disorders, attention-deficit/hyperactivity disorder, autism spectrum disorders, ataxia, and epilepsy. 
Dietary restriction of gluten resulted in remission in several published cases, including cases of treatment-resistant schizophrenia. Conclusions: Ongoing and larger studies are needed of the diagnosis of gluten allergy and of the efficacy of the gluten-free diet in neuropsychiatric illness. Clinicians should ask about a patient history of anemia, hypothyroidism and irritable bowel syndrome and a family history of benefit from the gluten-free diet, not only but especially in cases of treatment resistance. Obtaining gluten antibodies by a simple blood test, with referral for gastrointestinal work-up in positive cases, should be considered.

Keywords: celiac, gluten, neuropsychiatric, translational

Procedia PDF Downloads 134