Search results for: grid synchronization technique
6913 Signal Processing of Barkhausen Noise Signal for Assessment of Increasing Down Feed in Surface Ground Components with Poor Micro-Magnetic Response
Authors: Tanmaya Kumar Dash, Tarun Karamshetty, Soumitra Paul
Abstract:
The Barkhausen Noise Analysis (BNA) technique has been utilized to assess the surface integrity of steels, but it is not very successful in evaluating the surface integrity of ground steels that exhibit a poor micro-magnetic response. A new approach has been proposed for processing the BN signal with the Fast Fourier Transform (FFT), while wavelet transforms, with a judicious choice of the 'threshold' value, have been used to remove noise from the BN signal when the micro-magnetic response of the work material is poor. In the present study, the effect of down feed induced by conventional plunge surface grinding of hardened bearing steel has been investigated, with an ultrasonically cleaned, wet-polished sample and a sample ground with the spark-out technique used for benchmarking. Moreover, the FFT analysis has been carried out at different sets of applied voltages and applied frequencies, and the pattern of the BN signal in the frequency domain is analyzed. The study also presents the wavelet transform technique, with different levels of decomposition and different mother wavelets, used to reduce the noise in the BN signal of materials with a poor micro-magnetic response, in order to standardize the procedure for all BN signals depending on the frequency of the applied voltage.
Keywords: barkhausen noise analysis, grinding, magnetic properties, signal processing, micro-magnetic response
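Editor's note: the wavelet-thresholding step described above can be sketched in a few lines. The following is a minimal illustration only, using a one-level Haar transform and the universal soft threshold; the paper's actual mother wavelets, decomposition levels, and threshold choice are not specified here, and the signal is synthetic, not a BN measurement.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(signal, threshold):
    """Soft-threshold the detail coefficients, then reconstruct."""
    a, d = haar_dwt(signal)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    return haar_idwt(a, d)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)                 # stand-in for a BN envelope
noisy = clean + 0.2 * rng.standard_normal(t.size)
# A common heuristic: the universal threshold sigma * sqrt(2 * ln N)
sigma = 0.2
thr = sigma * np.sqrt(2 * np.log(noisy.size))
denoised = denoise(noisy, thr)
```

In practice a multi-level decomposition with a smoother mother wavelet (e.g. Daubechies) would replace the single Haar level used here.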
Procedia PDF Downloads 667
6912 The Effect of Parameters on Production of NiO/Al2O3/B2O3/SiO2 Composite Nanofibers by Using Sol-Gel Processing and Electrospinning Technique
Authors: F. Sevim, E. Sevimli, F. Demir, T. Çalban
Abstract:
For the first time, nanofibers of a PVA/nickel nitrate/silica/aluminium isopropoxide/boric acid composite were prepared by using sol-gel processing and the electrospinning technique. By high-temperature calcination of the above precursor fibers, nanofibers of the NiO/Al2O3/B2O3/SiO2 composite with diameters of 500 nm could be successfully obtained. The fibers were characterized by TG/DTA, FT-IR, XRD and SEM analyses.
Keywords: nanofibers, NiO/Al2O3/B2O3/SiO2 composite, sol-gel processing, electrospinning
Procedia PDF Downloads 337
6911 Multilayer Neural Network and Fuzzy Logic Based Software Quality Prediction
Authors: Sadaf Sahar, Usman Qamar, Sadaf Ayaz
Abstract:
In the software development lifecycle, quality prediction techniques hold prime importance in order to minimize future design errors and expensive maintenance. Many techniques have been proposed by various researchers, but with the increasing complexity of the software lifecycle model, it is crucial to develop a flexible system which can cater for the factors that ultimately have an impact on the quality of the end product. These factors include properties of the software development process and of the product, along with its operating conditions. In this paper, a neural network (perceptron) based software quality prediction technique is proposed. Using this technique, stakeholders can predict the quality of the resulting software during the early phases of the lifecycle, saving time and resources on future elimination of design errors and costly maintenance. This technique can be brought into practical use through successful training.
Keywords: software quality, fuzzy logic, perceptron, prediction
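Editor's note: the core of a perceptron-based predictor like the one described is very small. Below is a toy sketch; the feature names, training data, and learning rate are illustrative inventions, not the paper's dataset or fuzzy-logic component.

```python
import numpy as np

# Hypothetical quality indicators per module: e.g. normalized complexity,
# code churn, and review coverage (feature names are illustrative only).
X = np.array([[0.9, 0.8, 0.1],
              [0.2, 0.1, 0.9],
              [0.8, 0.7, 0.2],
              [0.1, 0.2, 0.8],
              [0.7, 0.9, 0.3],
              [0.3, 0.1, 0.7]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = defect-prone, 0 = acceptable

w = np.zeros(X.shape[1])
b = 0.0
for _ in range(20):                      # perceptron learning rule
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += 0.1 * (target - pred) * xi  # update weights only on mistakes
        b += 0.1 * (target - pred)

predictions = [1 if xi @ w + b > 0 else 0 for xi in X]
```

A real system would train on historical project metrics and combine the perceptron output with the fuzzy-logic stage the title mentions.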
Procedia PDF Downloads 317
6910 Application of the Global Optimization Techniques to the Optical Thin Film Design
Authors: D. Li
Abstract:
Optical thin films are used in a wide variety of optical components, and many software tools have been programmed for advancing multilayer thin film design. The available software packages for designing thin film structures may not provide optimum designs. Normally, almost all current software programs obtain their final designs either by optimizing a starting guess or by a technique, which may or may not involve a pseudorandom process, that gives different answers every time depending upon the initial conditions. With the increasing power of personal computers, functional methods for the optimization and synthesis of optical multilayer systems have been developed, such as DGL Optimization, Simulated Annealing, Genetic Algorithms, Needle Optimization, Inductive Optimization and Flip-Flop Optimization. Among these, DGL Optimization has proved its efficiency in optical thin film design. The application of the DGL optimization technique to the design of optical coatings is presented. A DGL optimization technique is provided, and its main features are discussed. Guidelines on the application of the DGL optimization technique to various types of design problems are given. The innovative global optimization strategies used in a software tool, OnlyFilm, to optimize multilayer thin film designs through different filter designs are outlined. OnlyFilm is a powerful, versatile, and user-friendly thin film software package on the market, which combines optimization and synthesis design capabilities with powerful analytical tools for optical thin film designers. It is also the only thin film design software that offers a true global optimization function.
Keywords: optical coatings, optimization, design software, thin film design
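Editor's note: of the global methods listed, simulated annealing is the easiest to sketch. The snippet below minimizes a toy one-dimensional stand-in for a thin-film merit function; the merit function, cooling schedule, and step size are all illustrative assumptions, not the DGL algorithm or OnlyFilm's implementation.

```python
import math
import random

random.seed(1)

def merit(x):
    # Toy multimodal merit function standing in for the deviation of a
    # computed spectral response from its target (illustrative only).
    return (x - 2.0) ** 2 + 1.5 * math.sin(5.0 * x) + 1.5

x = 10.0                      # starting design parameter (poor guess)
best_x, best_f = x, merit(x)
T = 5.0                       # initial "temperature"
while T > 1e-3:
    cand = x + random.uniform(-0.5, 0.5)
    delta = merit(cand) - merit(x)
    # Accept downhill moves always, uphill moves with Boltzmann probability,
    # which lets the search escape local minima of the merit function.
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
    if merit(x) < best_f:
        best_x, best_f = x, merit(x)
    T *= 0.995                # geometric cooling schedule
```

Real thin-film optimizers perturb many layer thicknesses at once and evaluate a full spectral merit function at each step.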
Procedia PDF Downloads 316
6909 Evaluation of Transfer Capability Considering Uncertainties of System Operating Condition and System Cascading Collapse
Authors: Nur Ashida Salim, Muhammad Murtadha Othman, Ismail Musirin, Mohd Salleh Serwan
Abstract:
Over the past few decades, the power system industry in many developing and developed countries has gone through a restructuring process and is moving towards a deregulated power industry. This situation will lead to competition among the generation and distribution companies to achieve a common objective: to provide quality and efficient production of electric energy, which will reduce the price of electricity. It is therefore important to obtain an accurate value of the Available Transfer Capability (ATC) and Transmission Reliability Margin (TRM) in order to ensure effective power transfer between areas when uncertainties occur in the system. In this paper, the TRM and ATC are determined by taking into consideration the uncertainties of the system operating condition and system cascading collapse by applying the bootstrap technique. A case study of the IEEE RTS-79 is employed to verify the robustness of the proposed technique in the determination of TRM and ATC.
Keywords: available transfer capability, bootstrap technique, cascading collapse, transmission reliability margin
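Editor's note: the bootstrap step can be illustrated compactly. The sample values, the 95% confidence choice, and the simplified TRM/ATC relations below are assumptions for illustration, not the paper's IEEE RTS-79 results.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sample of total transfer capability values (MW) computed
# under randomly sampled operating conditions / collapse scenarios.
ttc_samples = rng.normal(loc=500.0, scale=40.0, size=200)

B = 2000                      # number of bootstrap resamples
boot_means = np.empty(B)
for i in range(B):
    # Resample with replacement and record the statistic of interest.
    resample = rng.choice(ttc_samples, size=ttc_samples.size, replace=True)
    boot_means[i] = resample.mean()

# TRM taken as the margin covering estimation uncertainty at 95% confidence;
# ATC = TTC - TRM (existing transmission commitments omitted here).
ttc_estimate = ttc_samples.mean()
trm = ttc_estimate - np.percentile(boot_means, 5)
atc = ttc_estimate - trm
```

The bootstrap distribution of the mean gives the confidence bound without assuming any parametric form for the underlying uncertainty.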
Procedia PDF Downloads 408
6908 Performance of Segmented Thermoelectric Materials Using 'Open-Short Circuit' Technique under Different Polarity
Authors: N. H. S. Mustafa, N. M. Yatim
Abstract:
Thermoelectric materials arranged in a segmented design can increase heat-to-electricity conversion performance. This is because each material performs at its peak over a narrow temperature range. The performance of the materials is determined by the dimensionless figure of merit, ZT, which combines the thermoelectric properties, namely the Seebeck coefficient, electrical resistivity, and thermal conductivity. Since different materials are arranged in segments, ZT cannot be measured using the conventional approach. Therefore, this research used the 'open-short circuit' technique to measure the segmented performance. The segmented thermoelectric material, consisting of bismuth telluride and lead telluride, was joined using the cold press technique. The results show that the measured thermoelectric properties are comparable with values calculated from the commercially available individual materials. The performance of the segmented sample under different polarity also indicates the dependence of the material on position and temperature. The segmented materials were successfully measured under real conditions, and optimization of the segmented design can be developed from the study of polarity change.
Keywords: thermoelectric, segmented, ZT, polarity, performance
Procedia PDF Downloads 202
6907 Wall Shear Stress Under an Impinging Planar Jet Using the Razor Blade Technique
Authors: A. Ritcey, J. R. Mcdermid, S. Ziada
Abstract:
Wall shear stress was experimentally measured under a planar impinging air jet as a function of jet Reynolds number (Rejet = 5000, 8000, 11000) and different normalized impingement distances (H/D = 4, 6, 8, 10, 12) using the razor blade technique to complete a parametric study. The wall pressure, wall pressure gradient, and wall shear stress information were obtained.
Keywords: experimental fluid mechanics, impinging planar jets, skin friction factor, wall shear stress
Procedia PDF Downloads 322
6906 Nighttime Power Generation Using Thermoelectric Devices
Authors: Abdulrahman Alajlan
Abstract:
While the sun serves as a robust energy source, the frigid conditions of outer space present promising prospects for nocturnal power generation due to their continuous accessibility during nighttime hours. This investigation illustrates a proficient methodology facilitating uninterrupted energy capture throughout the day. The method involves the utilization of water-based heat storage and radiative thermal emitters implemented across thermoelectric devices. Remarkably, this approach permits nighttime power generation exceeding 1 W m-2, which is unattainable by alternative methodologies. Outdoor experiments conducted at the King Abdulaziz City for Science and Technology (KACST) have demonstrated unparalleled performance, surpassing prior experimental benchmarks by nearly an order of magnitude. Furthermore, the developed device exhibits the capacity to concurrently supply power to multiple light-emitting diodes, thereby showcasing practical applications for nighttime power generation. This research unveils opportunities for the creation of scalable and efficient 24-hour power generation systems based on thermoelectric devices. Central findings from this study encompass the realization of continuous 24-hour power generation from clean and sustainable energy sources. Theoretical analyses indicate the potential for nighttime power generation reaching up to 1 W m-2, while experiments have achieved nighttime power generation at a density of 0.5 W m-2. Additionally, the efficiency of multiple light-emitting diodes (LEDs) has been evaluated when powered by the nighttime output of the integrated thermoelectric generator (TEG). This methodology therefore exhibits promise for practical applications, particularly in lighting, marking a pivotal advancement in the utilization of renewable energy for both on-grid and off-grid scenarios.
Keywords: nighttime power generation, thermoelectric devices, radiative cooling, thermal management
Procedia PDF Downloads 60
6905 LIFirr as an Indicator of Microbial Activity in Paraffinic Oil
Authors: M. P. Casiraghi, C. M. Quintella, P. Almeida
Abstract:
Paraffinic oils were submitted to microbial action. The microorganisms consisted of bacteria of the genera Pseudomonas sp. and Bacillus licheniformis. The alterations in interfacial tension were determined using a tensiometer and the hanging drop technique at room temperature (299 K). The alteration in the constitution of the paraffins was evaluated by means of gas chromatography. The microbial activity was observed to reduce interfacial tension by 54 to 78%, as well as to consume the paraffins C19 to C29 and produce paraffins C36 to C44. The LIFirr technique made it possible to determine the microbial action quickly.
Keywords: paraffins, biosurfactants, LIFirr, microbial activity
Procedia PDF Downloads 525
6904 Optimization of Doubly Fed Induction Generator Equivalent Circuit Parameters by Direct Search Method
Authors: Mamidi Ramakrishna Rao
Abstract:
The doubly-fed induction generator (DFIG) is currently the choice for many wind turbines. These generators, when connected to the grid through a converter, are subjected to varied power system conditions such as voltage variation, frequency variation, and short-circuit fault conditions. Further, many countries, such as Canada, Germany, the UK, and Scotland, have distinct grid codes relating to wind turbines. Accordingly, following network faults, wind turbines have to supply a definite reactive current. To satisfy these requirements, including reactive current capability, an optimum electrical design becomes a mandate for the DFIG to function. This paper intends to optimize the equivalent circuit parameters of an electrical design for satisfactory DFIG performance. The direct search method has been used for optimization of the parameters. The variables selected include electromagnetic core dimensions (diameters and stack length), slot dimensions, the radial air gap between stator and rotor, and winding copper cross-section area. Optimization for a 2 MW DFIG has been executed separately for three objective functions: maximum reactive power capability (Case I), maximum efficiency (Case II) and minimum weight (Case III). In the optimization analysis program, voltage variations (10%), leading and lagging power factor (0.95), and speeds corresponding to slips of -0.3 to +0.3 have been considered. The optimum designs obtained for the objective functions were compared. It can be concluded that the direct search method of optimization helps in determining an optimum electrical design for each objective function, whether efficiency, reactive power capability or weight minimization.
Keywords: direct search, DFIG, equivalent circuit parameters, optimization
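Editor's note: "direct search" covers derivative-free methods such as compass/pattern search. The sketch below shows the pattern on a toy two-variable quadratic; the objective, variable names, and parameters are illustrative stand-ins, not the paper's DFIG equivalent-circuit model.

```python
def direct_search(f, x0, step=1.0, shrink=0.5, tol=1e-6):
    """Compass direct search: probe +/- step along each coordinate,
    move to any improving point, otherwise shrink the step size."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink   # no direction improved: refine the mesh
    return x, fx

# Toy stand-in objective in two normalized "design variables"
# (e.g. air gap and stack length in scaled units; purely illustrative).
obj = lambda v: (v[0] - 1.2) ** 2 + (v[1] + 0.7) ** 2
x_opt, f_opt = direct_search(obj, [0.0, 0.0])
```

A real DFIG optimization would evaluate efficiency, reactive power capability, or weight from the equivalent circuit at each probe point, subject to the grid-code constraints.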
Procedia PDF Downloads 256
6903 Molecular Diagnosis of Influenza Strains Carried Out on Patients of the Social Security Clinic in Karaj Using the RT-PCR Technique
Authors: A. Ferasat, S. Rostampour Yasouri
Abstract:
Seasonal flu is a highly contagious infection caused by influenza viruses. These viruses undergo genetic changes that result in new epidemics across the globe. Medical attention is crucial in severe cases, particularly for the elderly, the frail, and those with chronic illnesses, as their immune systems are often weaker. The purpose of this study was to rapidly detect new subtypes of the influenza A virus using a specific RT-PCR method based on the HA (hemagglutinin) gene. In the winter and spring of 2022-2023, 120 samples suspected of seasonal influenza were cultured in embryonated eggs. RNA extraction, followed by cDNA synthesis, was performed. Finally, the PCR technique was applied using a pair of specific primers designed based on the HA gene. The PCR product was identified after purification, and the nucleotide sequence of the purified PCR products was compared with the sequences in GenBank. The results showed a high similarity between the sequences of the positive samples isolated from the patients and the sequences of the new strains isolated in recent years. The RT-PCR technique used in this study is entirely specific, enabling the detection and amplification of influenza and its subtypes from clinical samples. The RT-PCR technique based on the HA gene, along with sequencing, is a fast, specific, and sensitive diagnostic method for those infected with influenza viruses and their new subtypes. Rapid molecular diagnosis of influenza in suspected patients is essential to control and prevent the spread of the disease to others. It also prevents the occurrence of secondary (sometimes fatal) pneumonia that results from influenza and pathogenic bacteria. The critical role of rapid diagnosis of new strains of influenza is to prepare a vaccine against the latest viruses, which did not exist in the community last year and are entirely new.
Keywords: influenza, molecular diagnosis, patients, RT-PCR technique
Procedia PDF Downloads 74
6902 Bio-Grouting Applications in Caprock Sealing for Geological CO2 Storage
Authors: Guijie Sang, Geo Davis, Momchil Terziev
Abstract:
Geological CO2 storage has been regarded as a promising strategy to mitigate the emission of greenhouse gases generated by traditional power stations and energy-intensive industry. Caprocks with very low permeability and ultra-fine pores create viscous and capillary barriers that guarantee CO2 sealing efficiency. However, caprock fractures, either naturally existing or artificially induced by injection, can provide preferential paths for CO2 escape. Seeking an efficient technique to seal and strengthen caprock fractures is crucial. We apply the microbial-induced-calcite-precipitation (MICP) technique for sealing and strengthening caprock fractures at the laboratory scale. The MICP bio-grouting technique has several advantages over conventional cement grouting methods, including its low viscosity, micron-size microbes (which can access fine apertures), and low carbon footprint, among others. Different injection strategies are tested to achieve relatively homogeneous calcite precipitation along the fractures, which is monitored dynamically using a laser ultrasonic technique. The MICP process in caprock fractures, which integrates coupled flow and bio-chemical precipitation, is also modeled and validated through the experiment. The study could provide an effective bio-mediated grouting strategy for caprock sealing, thus ensuring long-term safe geological CO2 storage.
Keywords: caprock sealing, geological CO2 storage, grouting strategy, microbial induced calcite precipitation
Procedia PDF Downloads 189
6901 Fabrication of Wearable Antennas through Thermal Deposition
Authors: Jeff Letcher, Dennis Tierney, Haider Raad
Abstract:
Antennas are devices for transmitting and/or receiving signals, which makes them a necessary component of any wireless system. In this paper, a thermal deposition technique is utilized as a method to fabricate antenna structures on substrates. Thin-film deposition is achieved by evaporating a source material (metals in our case) in a vacuum, which allows vapor particles to travel directly to the target substrate, encased in a mask that outlines the desired structure. The material then condenses back to the solid state. This method is compared to screen printing, chemical etching, and inkjet printing to indicate its advantages and disadvantages. The antenna created underwent testing over various frequency ranges, conductivity tests, and a series of flexing tests to indicate the effectiveness of the thermal deposition technique. A single-band antenna operating at 2.45 GHz, intended for wearable and flexible applications, was successfully fabricated through this method and tested. It is concluded that thermal deposition presents a feasible technique for producing such antennas.
Keywords: thermal deposition, wearable antennas, bluetooth technology, flexible electronics
Procedia PDF Downloads 282
6900 New Method to Increase Contrast of Electron Micrographs of Rat Tissue Sections
Authors: Lise Paule Labéjof, Raíza Sales Pereira Bizerra, Galileu Barbosa Costa, Thaísa Barros dos Santos
Abstract:
Since the beginning of microscopy, improving image quality has always been a concern of its users. Especially for transmission electron microscopy (TEM), the problem is even more important due to the complexity of the sample preparation technique and the many variables that can affect the preservation of structures, the proper operation of the equipment used, and hence the quality of the images obtained. Animal tissues being transparent, it is necessary to apply a contrast agent in order to identify the elements of their ultrastructural morphology. Several methods of contrasting tissues for TEM imaging have already been developed. The most used are 'en bloc' contrasting and 'in situ' contrasting. This report presents an alternative technique of applying the contrast agent in vivo, i.e., before sampling. With this new method, the electron micrographs of the tissue sections have better contrast compared to in situ contrasting and present no artefacts from precipitation of the contrast agent. Another advantage is that only a small amount of contrast agent is needed to get a good result, given that most contrast agents are expensive and extremely toxic.
Keywords: image quality, microscopy research, staining technique, ultrathin section
Procedia PDF Downloads 432
6899 Phishing Detection: Comparison between Uniform Resource Locator and Content-Based Detection
Authors: Nuur Ezaini Akmar Ismail, Norbazilah Rahim, Norul Huda Md Rasdi, Maslina Daud
Abstract:
Web applications are the most targeted by attackers because they are accessible to end users. This has become even more advantageous to attackers since not all end users are aware of what sensitive data they have already leaked through the Internet, especially via social networks for the sake of 'sharing'. The attacker can use this information, such as personal details, favourite artists, actors or actresses, music, politics, and medical records, to customize a phishing attack and thus trick the user into clicking on malware-laced attachments. The phishing attack is one of the most popular social engineering attacks against web applications. There are several methods to detect phishing websites, such as blacklist/whitelist-based detection, heuristic-based detection, and visual similarity-based detection. This paper presents a comparison between the heuristic-based technique, which uses features of a uniform resource locator (URL), and visual similarity-based detection techniques, which compare the content of a suspected phishing page with the legitimate one in order to detect new phishing sites, based on papers reviewed from the past few years. The comparison focuses on three indicators: false positives and negatives, the accuracy of the method, and the time consumed to detect a phishing website.
Keywords: heuristic-based technique, phishing detection, social engineering, visual similarity-based technique
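Editor's note: URL-based heuristic detection typically extracts lexical features like the ones below and feeds them to a classifier. This feature set is a generic illustration, not the exact set used in any of the papers the comparison reviews.

```python
import re
from urllib.parse import urlparse

def url_features(url):
    """A few classic lexical heuristics used in URL-based phishing detection."""
    host = urlparse(url).netloc
    return {
        "length": len(url),                       # phishing URLs tend to be long
        "has_at": "@" in url,                     # '@' can hide the real host
        "has_ip_host": bool(re.fullmatch(r"[0-9.]+", host)),
        "num_hyphens": host.count("-"),
        "num_subdomains": max(host.count(".") - 1, 0),
        "uses_https": url.startswith("https://"),
    }

legit = url_features("https://www.example.com/login")
suspect = url_features("http://192.168.10.5/secure-login@example.com")
```

In a full system these features would be scored by a trained classifier or a hand-tuned rule set, with the accuracy/latency trade-offs the abstract compares.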
Procedia PDF Downloads 177
6898 Accuracy of Peak Demand Estimates for Office Buildings Using Quick Energy Simulation Tool
Authors: Mahdiyeh Zafaranchi, Ethan S. Cantor, William T. Riddell, Jess W. Everett
Abstract:
The New Jersey Department of Military and Veterans Affairs (NJ DMAVA) operates over 50 facilities throughout the state of New Jersey, U.S. NJ DMAVA is under a mandate to move toward decarbonization, which will eventually include eliminating the use of natural gas and other fossil fuels for heating. At the same time, the organization requires increased resiliency against electric grid disruption. These competing goals necessitate adopting on-site renewables such as photovoltaic and geothermal power, as well as implementing power control strategies through microgrids. Planning for these changes requires a detailed understanding of current and future electricity use on yearly, monthly, and shorter time scales, as well as a breakdown of consumption by heating, ventilation, and air conditioning (HVAC) equipment. This paper discusses case studies of two buildings that were simulated using the QUick Energy Simulation Tool (eQUEST). Both buildings use electricity from the grid and photovoltaics. One building also uses natural gas. While electricity use data are available in hourly intervals and natural gas data are available in monthly intervals, the simulations were developed using monthly and yearly totals. This approach was chosen to reflect the information available for most NJ DMAVA facilities. Once completed, simulation results are compared to metrics recommended by several organizations to validate energy use simulations. In addition to yearly and monthly totals, the simulated peak demands are compared to actual monthly peak demand values. The simulations resulted in monthly peak demand values that were within 30% of the measured values. These benchmarks will help to assess future energy planning efforts for NJ DMAVA.
Keywords: building energy modeling, eQUEST, peak demand, smart meters
Procedia PDF Downloads 68
6897 MBES-CARIS Data Validation for the Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf
Authors: Abderrazak Bannari, Ghadeer Kadhem
Abstract:
The objectives of this paper are the validation and evaluation of MBES-CARIS BASE surface data performance for bathymetric mapping of shallow water in the Kingdom of Bahrain. The latter is an archipelago with a total land area of about 765.30 km², approximately 126 km of coastline and 8,000 km² of marine area, located in the Arabian Gulf, east of Saudi Arabia and west of Qatar (26° 00' N, 50° 33' E). To achieve our objectives, bathymetric attributed grid files (X, Y, and depth) generated from the coverage of ship-track MBES data with 300 x 300 m cells, processed with CARIS-HIPS, were downloaded from the General Bathymetric Chart of the Oceans (GEBCO). They were then brought into ArcGIS and converted into raster format in five steps: exporting the GEBCO BASE surface data to an ASCII file; converting the ASCII file to a point shapefile; extracting the points covering the water boundary of the Kingdom of Bahrain; multiplying the depth values by -1 to get negative values; and, finally, using the simple kriging method in the ArcMap environment to generate a new raster bathymetric grid surface of 30 x 30 m cells, which was the basis of the subsequent analysis. For validation purposes, 2,200 bathymetric points were extracted from a medium-scale nautical map (1:100,000) covering different depths over the Bahrain national water boundary. The nautical map was scanned, georeferenced and overlaid on the MBES-CARIS generated raster bathymetric grid surface (step 5 above), and then homologous depth points were selected. Statistical analysis, expressed as a linear error at the 95% confidence level, showed a strong correlation coefficient (R² = 0.96) and a low RMSE (± 0.57 m) between the nautical map and derived MBES-CARIS depths if we consider only the shallow areas with depths of less than 10 m (about 800 validation points).
When we consider only the deeper areas (> 10 m), the correlation coefficient is equal to 0.73 and the RMSE is equal to ± 2.43 m, while if we consider the totality of the 2,200 validation points, including all depths, the correlation coefficient is still significant (R² = 0.81) with a satisfactory RMSE (± 1.57 m). This variation is likely caused by the MBES not completely covering the bottom in several of the deeper pockmarks because of the rapid change in depth. In addition, steep slopes and the rough seafloor probably affect the acquired MBES raw data, and the interpolation of missing values between MBES acquisition swath lines (ship-tracked sounding data) may not reflect the true depths of these areas. However, globally the MBES-CARIS results are very appropriate for bathymetric mapping of shallow water areas.
Keywords: bathymetry mapping, multibeam echosounder systems, CARIS-HIPS, shallow water
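Editor's note: the validation statistics quoted above (RMSE and R² between chart depths and interpolated grid depths) are straightforward to compute. The depth pairs below are invented for illustration, not the paper's 2,200 validation points.

```python
import numpy as np

def validation_stats(reference, estimated):
    """RMSE and coefficient of determination between reference (chart)
    depths and estimated (interpolated grid) depths."""
    reference = np.asarray(reference, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    rmse = np.sqrt(np.mean((estimated - reference) ** 2))
    ss_res = np.sum((reference - estimated) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, r2

# Illustrative homologous depth pairs (m, negative down).
chart = [-2.0, -4.5, -6.0, -8.5, -9.5]
grid = [-2.3, -4.2, -6.4, -8.1, -9.9]
rmse, r2 = validation_stats(chart, grid)
```

Splitting the point set by depth class, as the paper does for < 10 m and > 10 m, just means calling the same function on each subset.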
Procedia PDF Downloads 381
6896 The Effectiveness of Energy Index Technique in Bearing Condition Monitoring
Authors: Faisal Alshammari, Abdulmajid Addali, Mosab Alrashed, Taihiret Alhashan
Abstract:
The application of acoustic emission techniques is gaining popularity, as they can monitor the condition of gears and bearings and detect early symptoms of a defect in the form of pitting, wear, and flaking of surfaces. Early detection of these defects is essential, as it helps to avoid major failures and the associated catastrophic consequences. Signal processing techniques are required for early defect detection; in this article, a time-domain technique called the Energy Index (EI) is used. This article presents an investigation into the Energy Index's effectiveness in detecting early-stage defect initiation and deterioration, and compares it with the common r.m.s. index, kurtosis, and the Kolmogorov-Smirnov statistical test. It is concluded that EI is a more effective technique for monitoring defect initiation and development than the other statistical parameters.
Keywords: acoustic emission, signal processing, kurtosis, Kolmogorov-Smirnov test
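Editor's note: one common formulation of the Energy Index is the ratio of each windowed segment's mean-square energy to the overall mean-square energy of the record, so transient defect bursts stand out. The formulation, window size, and synthetic signal below are illustrative assumptions, not the article's exact definition or AE data.

```python
import numpy as np

def energy_index(signal, window):
    """Energy Index per window: segment mean-square energy divided by the
    overall mean-square energy of the whole record."""
    signal = np.asarray(signal, dtype=float)
    overall = np.mean(signal ** 2)
    n = signal.size // window
    segments = signal[: n * window].reshape(n, window)
    return np.mean(segments ** 2, axis=1) / overall

rng = np.random.default_rng(3)
x = rng.standard_normal(1000) * 0.1         # background AE noise
x[600:650] += 2.0 * rng.standard_normal(50)  # simulated defect burst
ei = energy_index(x, 50)
```

A healthy, statistically stationary signal gives EI values near 1 in every window; a localized defect pushes the EI of its window well above the rest, which is what makes the index sensitive to defect initiation.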
Procedia PDF Downloads 366
6895 Teaching–Learning-Based Optimization: An Efficient Method for Chinese as a Second Language
Authors: Qi Wang
Abstract:
In the classroom, teachers are expected to complete the target task within the limited lecture time while learners must take in a great deal of new knowledge; however, most of the time the learners arrive without the proper pre-class preparation needed to efficiently absorb the content taught in class. Under these circumstances, teachers have no time to check whether the learners fully understand the content, or how the learners communicate in different contexts, until the learners are tested. In the past decade, the teaching of Chinese has followed a new trend: teaching focuses less on the use of proper grammatical terms and punctuation and places a heavier focus on materials from real-life contexts. As a result, it has become a greater challenge for teachers, as this requires them to fully understand and prepare what they teach and to explain the content to learners in simple and understandable words. On the other hand, the same challenge also applies to the learners, who come from different countries, as they have to use what they have learnt, based on their personal understanding of the material, to communicate effectively with others in the classroom and in day-to-day communication. To reach this win-win stage, Feynman's Technique plays a very important role. This practical report presents how Feynman's Technique has been applied to Chinese courses, both written and oral, to motivate learners to practice more writing, reading and speaking over the past few years. Part 1 analyses different teaching styles and different types of learners to find the most efficient approach for both teachers and learners. Part 2 shows, based on the theory of Feynman's Technique, how to let learners build knowledge from knowing the name of something to truly knowing something, via different designed target tasks. Part 3 presents the outcomes, which show that Feynman's Technique is the interaction of learning style and teaching style, the double-edged sword of teaching and learning Chinese as a second language.
Keywords: Chinese, Feynman's technique, learners, teachers
Procedia PDF Downloads 154
6894 Polymorphic Positions, Haplotypes, and Mutations Detected in the Mitochondrial DNA Coding Region by the Sanger Sequencing Technique
Authors: Imad H. Hameed, Mohammad A. Jebor, Ammera J. Omer
Abstract:
The aim of this research is to study the mitochondrial coding region using the Sanger sequencing technique and to establish the degree of variation characteristic of a fragment. FTA® technology (FTA™ paper DNA extraction) was utilized to extract DNA. A portion of the coding region encompassing positions 11719-12384 was amplified in accordance with the Anderson reference sequence. PCR products were purified by EZ-10 spin column, then sequenced and detected using the ABI 3730xL DNA Analyzer. Five new polymorphic positions, 11741, 11756, 11878, 11887 and 12133, are described and may be suitable markers for identification purposes in the future. The calculated genetic diversity D = 0.95 and random match probability RMP = 0.048 should be understood as high in the context of the coding function of the analysed DNA fragment. A relatively high gene diversity and a relatively low random match probability were observed in the Iraqi population. The obtained data can be used to identify the variable nucleotide positions characterized by frequent occurrence, which is most promising for various identification purposes.
Keywords: coding region, Iraq, mitochondrial DNA, polymorphic positions, Sanger technique
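Editor's note: the core comparison step, calling polymorphic positions against a reference numbered from the fragment start, reduces to a position-wise diff. The sequences below are toy stand-ins, not the actual rCRS/Anderson reference bases.

```python
def polymorphic_positions(reference, sample, offset):
    """Report positions where the sample differs from the reference,
    numbered in reference coordinates starting at `offset` (11719 for
    the fragment analysed in this study)."""
    return [(offset + i, ref_base, smp_base)
            for i, (ref_base, smp_base) in enumerate(zip(reference, sample))
            if ref_base != smp_base]

ref = "ACGTACGTAC"     # toy stand-in for the Anderson reference fragment
smp = "ACGTACATAC"     # one substitution at the seventh base
variants = polymorphic_positions(ref, smp, 11719)
```

In real use the sample sequence comes from the ABI trace after base calling and alignment, and each variant is then checked against database frequencies to assess its value for identification.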
Procedia PDF Downloads 437
6893 A Unique Exact Approach to Handle a Time-Delayed State-Space System: The Extraction of Juice Process
Authors: Mohamed T. Faheem Saidahmed, Ahmed M. Attiya Ibrahim, Basma GH. Elkilany
Abstract:
This paper discusses the application of a Time Delay Control (TDC) compensation technique to the juice extraction process in a sugar mill. The objective is to improve the control performance of the process and increase extraction efficiency. The paper presents the mathematical model of the juice extraction process and the design of the TDC compensation controller. Simulation results show that the TDC compensation technique can effectively suppress the time-delay effect in the process and improve control performance. Extraction efficiency is also significantly increased with the application of the TDC compensation technique. The proposed approach, implemented in MATLAB, provides a practical solution for improving the juice extraction process in sugar mills. Keywords: time delay control (TDC), exact and unique state-space model, delay compensation, Smith predictor
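As the keywords suggest, the delay compensation is in the Smith-predictor family. A hedged sketch of the idea on a hypothetical first-order discrete plant (the model, PI gains and delay below are illustrative, not the paper's juice-extraction model): the predictor feeds back the undelayed model output plus the model-mismatch correction, which removes the dead time from the control loop when the model is exact.

```python
# First-order plant y[k+1] = a*y[k] + b*u[k-d], closed with a PI law
# acting on the Smith-predictor feedback signal. All parameters are
# illustrative (hand-tuned), not taken from the paper.
a, b, d = 0.9, 0.1, 10          # plant pole, gain, delay (samples)
kp, ki = 2.0, 0.15              # PI gains
r = 1.0                         # set point

y = 0.0                         # real (delayed) plant output
ym = 0.0                        # model output without delay
buf_u = [0.0] * d               # delay line for plant input
buf_ym = [0.0] * d              # delay line for model output
integ = 0.0
history = []
for k in range(300):
    # Smith predictor: feed back y + (ym - delayed ym); with an exact
    # model this equals ym, so the PI loop never "sees" the delay
    feedback = y + ym - buf_ym[0]
    e = r - feedback
    integ += ki * e
    u = kp * e + integ
    # advance the real plant (delayed input) and the delay-free model
    y = a * y + b * buf_u[0]
    ym = a * ym + b * u
    buf_u = buf_u[1:] + [u]
    buf_ym = buf_ym[1:] + [ym]
    history.append(y)
```

With the delay compensated, the PI gains can be tuned on the delay-free model alone, which is the practical appeal of the scheme for slow processes such as juice extraction.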
Procedia PDF Downloads 926892 Mining Scientific Literature to Discover Potential Research Data Sources: An Exploratory Study in the Field of Haemato-Oncology
Authors: A. Anastasiou, K. S. Tingay
Abstract:
Background: Discovering suitable datasets is an important part of health research, particularly for projects working with clinical data from patients organized in cohorts (cohort data), but with the proliferation of so many national and international initiatives, it is becoming increasingly difficult for research teams to locate the real-world datasets most relevant to their project objectives. We present a method for identifying healthcare institutes in the European Union (EU) which may hold haemato-oncology (HO) data. A key enabler of this research was the bibInsight platform, a scientometric data management and analysis system developed by the authors at Swansea University. Method: A PubMed search was conducted using HO clinical terms taken from previous work. The resulting XML file was processed using the bibInsight platform, linking affiliations to the Global Research Identifier Database (GRID). GRID is an international, standardized list of institutions, including the city and country in which each institution exists, as well as a category for its main business type, e.g., Academic, Healthcare, Government, Company. Countries were limited to the 28 current EU members, and institute type to 'Healthcare'. An article was considered valid if at least one author was affiliated with an EU-based healthcare institute. Results: The PubMed search produced 21,310 articles, consisting of 9,885 distinct affiliations with correspondence in GRID. Of these affiliations, 760 were from EU countries, and 390 of these were healthcare institutes. One affiliation was excluded as being a veterinary hospital. Two EU countries did not have any publications in our analysis dataset. The results were analysed by country and by individual healthcare institute. Networks both within the EU and internationally show institutional collaborations, which may suggest a willingness to share data for research purposes. Geographical mapping can ensure that data has broad population coverage.
Collaborations with industry or government may exclude healthcare institutes that have embargoes or additional costs associated with data access. Conclusions: Data reuse is becoming increasingly important, both for ensuring the validity of results and for economy of available resources. The ability to identify potential, specific data sources from over twenty thousand articles in less than an hour could assist in improving knowledge of, and access to, data sources. As our method has not yet established whether these healthcare institutes hold data, or merely publish on the topic, future work will involve text mining of data-specific concordant terms to identify numbers of participants, demographics, study methodologies, and sub-topics of interest. Keywords: data reuse, data discovery, data linkage, journal articles, text mining
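The affiliation-filtering step described in the Method can be sketched as follows; the records, institution names and GRID categories below are toy stand-ins for the real parsed PubMed XML and the GRID lookup:

```python
# Toy records stand in for parsed PubMed XML; the GRID lookup is
# reduced to a hard-coded dict. Names and categories are illustrative.
EU_MEMBERS = {"DE", "FR", "IT", "SE", "NL"}   # subset for the sketch

grid = {  # affiliation -> (country code, GRID institution type)
    "Karolinska Institutet": ("SE", "Education"),
    "Charite Berlin":        ("DE", "Healthcare"),
    "Mayo Clinic":           ("US", "Healthcare"),
    "Hopital Saint-Louis":   ("FR", "Healthcare"),
}

articles = [
    {"pmid": "1", "affiliations": ["Charite Berlin", "Mayo Clinic"]},
    {"pmid": "2", "affiliations": ["Karolinska Institutet"]},
    {"pmid": "3", "affiliations": ["Hopital Saint-Louis"]},
    {"pmid": "4", "affiliations": ["Mayo Clinic"]},
]

def eu_healthcare(article):
    """True if at least one author affiliation resolves to an
    EU-based institution whose GRID type is 'Healthcare'."""
    for aff in article["affiliations"]:
        country, kind = grid.get(aff, ("", ""))
        if country in EU_MEMBERS and kind == "Healthcare":
            return True
    return False

valid = [a["pmid"] for a in articles if eu_healthcare(a)]
```

Article 2 drops out because its only EU affiliation is academic rather than healthcare, and article 4 because its healthcare affiliation is outside the EU, mirroring the validity rule stated in the Method.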
Procedia PDF Downloads 1156891 Ultrasound Therapy: Amplitude Modulation Technique for Tissue Ablation by Acoustic Cavitation
Authors: Fares A. Mayia, Mahmoud A. Yamany, Mushabbab A. Asiri
Abstract:
In recent years, non-invasive Focused Ultrasound (FU) has been utilized for generating bubbles (cavities) to ablate target tissue by mechanical fractionation. Intensities >10 kW/cm² are required to generate the inertial cavities. The generation, rapid growth, and collapse of these inertial cavities cause tissue fractionation and the process is called Histotripsy. The ability to fractionate tissue from outside the body has many clinical applications including the destruction of the tumor mass. The process of tissue fractionation leaves a void at the treated site, where all the affected tissue is liquefied to particles at sub-micron size. The liquefied tissue will eventually be absorbed by the body. Histotripsy is a promising non-invasive treatment modality. This paper presents a technique for generating inertial cavities at lower intensities (< 1 kW/cm²). The technique (patent pending) is based on amplitude modulation (AM), whereby a low frequency signal modulates the amplitude of a higher frequency FU wave. Cavitation threshold is lower at low frequencies; the intensity required to generate cavitation in water at 10 kHz is two orders of magnitude lower than the intensity at 1 MHz. The Amplitude Modulation technique can operate in both continuous wave (CW) and pulse wave (PW) modes, and the percentage modulation (modulation index) can be varied from 0 % (thermal effect) to 100 % (cavitation effect), thus allowing a range of ablating effects from Hyperthermia to Histotripsy. Furthermore, changing the frequency of the modulating signal allows controlling the size of the generated cavities. Results from in vitro work demonstrate the efficacy of the new technique in fractionating soft tissue and solid calcium carbonate (Chalk) material. 
The technique, when combined with MR or ultrasound imaging, will provide a precise treatment modality for ablating diseased tissue without affecting the surrounding healthy tissue. Keywords: focused ultrasound therapy, histotripsy, inertial cavitation, mechanical tissue ablation
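The AM drive signal itself is straightforward to write down; a sketch with illustrative carrier and modulation frequencies, where the modulation index m sweeps between the thermal (m = 0, pure CW carrier) and full-cavitation (m = 1, 100 % modulation) regimes described above:

```python
import math

def am_sample(t, fc, fm, m):
    """Amplitude-modulated FU drive signal:
        s(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t)
    m = 0 gives an unmodulated carrier (thermal regime);
    m = 1 gives 100 % modulation (cavitation regime)."""
    return (1.0 + m * math.sin(2 * math.pi * fm * t)) * \
           math.sin(2 * math.pi * fc * t)

fc, fm = 1.0e6, 10.0e3          # 1 MHz carrier, 10 kHz modulation (illustrative)
fs = 20.0e6                     # sampling rate: 20 samples per carrier cycle
signal = [am_sample(k / fs, fc, fm, m=1.0) for k in range(2000)]
peak = max(abs(s) for s in signal)   # envelope peaks near 2x the carrier
```

Changing fm changes the envelope period and hence, per the abstract, the size of the generated cavities, while the carrier fc sets the focal geometry.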
Procedia PDF Downloads 3196890 A Study of Non-Coplanar Imaging Technique in INER Prototype Tomosynthesis System
Authors: Chia-Yu Lin, Yu-Hsiang Shen, Cing-Ciao Ke, Chia-Hao Chang, Fan-Pin Tseng, Yu-Ching Ni, Sheng-Pin Tseng
Abstract:
Tomosynthesis is an imaging system that generates a 3D image by scanning over a limited angular range. It provides more depth information than a traditional 2D X-ray single projection, and its radiation dose is lower than that of computed tomography (CT). Because of the limited angular range, many image properties depend on the scanning direction; the non-coplanar imaging technique was therefore developed to improve image quality over traditional tomosynthesis. The purpose of this study was to establish the non-coplanar imaging technique for a tomosynthesis system and to evaluate it by the reconstructed image. The INER prototype tomosynthesis system contains an X-ray tube, a flat-panel detector, and a motion machine, and can move the X-ray tube in multiple directions during acquisition. In this study, we investigated three different imaging techniques: 2D X-ray single projection, traditional tomosynthesis, and non-coplanar tomosynthesis. An anthropomorphic chest phantom containing three lesion sizes (3 mm, 5 mm, and 8 mm diameter) was used to evaluate image quality. The traditional tomosynthesis acquired 61 projections over a 30-degree angular range in one scanning direction; the non-coplanar tomosynthesis acquired 62 projections over a 30-degree angular range in two scanning directions. A 3D image was reconstructed by an iterative image reconstruction algorithm (ML-EM). Our qualitative method was to evaluate artifacts in the tomosynthesis reconstructed image; the quantitative method was to calculate a peak-to-valley ratio (PVR), the intensity ratio of the lesion to the background, and to use PVRs to evaluate lesion contrast. The qualitative results showed that in the reconstructed image of the non-coplanar scan, the anatomic structures of the chest and the lesions could be identified clearly, and no significant scanning-direction-dependent artifacts were found.
In the 2D X-ray single projection, anatomic structures overlapped and the lesions could not be detected. In the traditional tomosynthesis image, anatomic structures and lesions could be identified clearly, but there were many scanning-direction-dependent artifacts. The quantitative PVR results showed no significant differences between non-coplanar and traditional tomosynthesis; the PVRs of the non-coplanar technique were slightly higher than those of the traditional technique for the 5 mm and 8 mm lesions. In non-coplanar tomosynthesis, scanning-direction-dependent artifacts were reduced while lesion PVRs were not decreased, and the reconstructed image was more isotropically uniform than in traditional tomosynthesis. In the future, scan strategy and scan time will be the challenges of the non-coplanar imaging technique. Keywords: image reconstruction, non-coplanar imaging technique, tomosynthesis, X-ray imaging
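The PVR metric used in the quantitative evaluation is a simple ratio of mean intensities; a minimal sketch on a toy reconstructed slice (the ROI coordinates and intensities below are illustrative, not the phantom data):

```python
def peak_to_valley_ratio(image, lesion, background):
    """PVR = mean intensity inside the lesion ROI divided by the
    mean intensity of the surrounding background ROI."""
    lesion_mean = sum(image[r][c] for r, c in lesion) / len(lesion)
    bg_mean = sum(image[r][c] for r, c in background) / len(background)
    return lesion_mean / bg_mean

# toy 4x4 reconstructed slice: bright 2x2 lesion on a dim background
img = [
    [10, 10, 10, 10],
    [10, 40, 40, 10],
    [10, 40, 40, 10],
    [10, 10, 10, 10],
]
lesion_roi = [(1, 1), (1, 2), (2, 1), (2, 2)]
bg_roi = [(0, 0), (0, 3), (3, 0), (3, 3)]
pvr = peak_to_valley_ratio(img, lesion_roi, bg_roi)
```

A higher PVR means higher lesion contrast, so comparing PVRs across the three acquisition modes quantifies what the qualitative inspection of artifacts shows.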
Procedia PDF Downloads 3666889 A Systematic Review of Business Strategies Which Can Make District Heating a Platform for Sustainable Development of Other Sectors
Authors: Louise Ödlund, Danica Djuric Ilic
Abstract:
Sustainable development includes many challenges related to energy use, such as (1) developing flexibility on the demand side of electricity systems due to an increased share of intermittent electricity sources (e.g., wind and solar power), (2) overcoming economic challenges related to an increased share of renewable energy in the transport sector, (3) increasing the efficiency of biomass use, and (4) increasing the utilization of industrial excess heat (approximately two thirds of the energy currently used in the EU is lost in the form of excess and waste heat). The European Commission has recognized DH technology as being of essential importance for reaching sustainability. Flexibility in the fuel mix, the possibility of industrial waste heat utilization, combined heat and power (CHP) production, and energy recovery through waste incineration are only some of the benefits that characterize DH technology. The aim of this study is to provide an overview of the business strategies which would enable DH to play an important role in future sustainable energy systems. The methodology used in this study is a systematic literature review. The study takes a systems approach in which DH is seen as part of an integrated system that also includes the transport, industrial, and electricity sectors. DH technology can play a decisive role in overcoming the sustainability challenges related to our energy use. The introduction of biofuels in the transport sector can be facilitated by integrating biofuel and DH production in local DH systems. This would enable the development of local biofuel supply chains and reduce biofuel production costs; in this way, DH can also promote biofuel production technologies that are not yet developed.
Converting the energy used to run industrial processes from fossil fuels and electricity to DH (above all biomass- and waste-based DH), and delivering excess heat from industrial processes to local DH systems, would make industry less dependent on fossil fuels and fossil-fuel-based electricity, increase the energy efficiency of the industrial sector, and reduce production costs. The electricity sector would also benefit from these measures. Reducing electricity use in the industrial sector while increasing CHP production in local DH systems would replace fossil-based electricity production with electricity from biomass- or waste-fueled CHP plants and reduce the capacity requirements on the national electricity grid (i.e., it would reduce pressure on the bottlenecks in the grid). Furthermore, by operating their centrally controlled heat pumps and CHP plants according to the variation in intermittent electricity production, DH companies may enable an increased share of intermittent electricity production in the national electricity grid. Keywords: energy system, district heating, sustainable business strategies, sustainable development
Procedia PDF Downloads 1696888 Parametric Appraisal of Robotic Arc Welding of Mild Steel Material by Principal Component Analysis-Fuzzy with Taguchi Technique
Authors: Amruta Rout, Golak Bihari Mahanta, Gunji Bala Murali, Bibhuti Bhusan Biswal, B. B. V. L. Deepak
Abstract:
The use of industrial robots for welding operations is one of the chief signs of contemporary welding. Modeling of weld joint parameters and weld process parameters is one of the most crucial aspects of robotic welding. As the weld process parameters affect the weld joint parameters in different ways, a multi-objective optimization technique has to be utilized to obtain an optimal setting of the weld process parameters. In this paper, a hybrid optimization technique, i.e., Principal Component Analysis (PCA) combined with fuzzy logic, has been proposed to obtain an optimal setting of weld process parameters such as wire feed rate, welding current, gas flow rate, welding speed, and nozzle-tip-to-plate distance. The weld joint parameters considered for optimization are the depth of penetration, yield strength, and ultimate strength. PCA is an efficient multi-objective technique for converting correlated and dependent parameters, such as the weld joint parameters, into uncorrelated and independent variables. In this approach there is also no need to check the correlation among responses, as no individual weights are assigned to the responses. A fuzzy inference engine can efficiently fold these aspects into an internal hierarchy, thereby overcoming various limitations of existing optimization approaches. Finally, the Taguchi method is used to obtain the optimal setting of weld process parameters. It is therefore concluded that the hybrid technique has its own advantages and can be used for quality improvement in industrial applications. Keywords: robotic arc welding, weld process parameters, weld joint parameters, principal component analysis, fuzzy logic, Taguchi method
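The final Taguchi step ranks candidate parameter settings by signal-to-noise ratio; for larger-the-better responses such as depth of penetration or yield strength, a minimal sketch (the trial readings and setting names below are illustrative, not the study's measurements):

```python
import math

def sn_larger_the_better(values):
    """Taguchi larger-the-better signal-to-noise ratio:
        S/N = -10 * log10( (1/n) * sum(1 / y_i^2) )
    Higher S/N means a larger, more consistent response."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)

# toy depth-of-penetration readings (mm) from repeated runs of two
# candidate weld-process settings; numbers are purely illustrative
trials = {
    "setting_A": [4.1, 4.3, 4.2],
    "setting_B": [3.2, 3.0, 3.1],
}
best = max(trials, key=lambda k: sn_larger_the_better(trials[k]))
```

In the hybrid scheme, the PCA-fuzzy stage would first fuse the three correlated joint responses into one composite index, and a ranking like the one above would then be applied to that index rather than to a single raw response.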
Procedia PDF Downloads 1796887 Performance Comparison of Droop Control Methods for Parallel Inverters in Microgrid
Authors: Ahmed Ismail, Mustafa Baysal
Abstract:
Although the world's energy supply is still mainly based on fossil fuels, there is a need for alternative generation systems that are more economic and environmentally friendly, due to the continuously increasing demand for electric energy and limited power resources and networks. Distributed Energy Resources (DERs) such as fuel cells, wind and solar power have recently become widespread as alternative generation. In order to solve several problems that might be encountered when integrating DERs into the power system, the microgrid concept has been proposed. A microgrid can operate in both grid-connected and island mode, benefiting both the utility and customers. Most distributed energy resources connected in parallel in the LV grid, such as micro-turbines, wind plants, fuel cells and PV cells, generate electrical power as direct current (DC), which is converted to alternating current (AC) by inverters; inverters are therefore primary components in a microgrid. There are many control techniques for parallel inverters to manage active and reactive load sharing, some of which are based on the droop method. The literature usually focuses on improving the transient performance of inverters. In this study, the performance of two different controllers based on the droop control method is compared for inverters operated in parallel without any communication feedback. To this end, a microgrid is designed in which the inverters are controlled by a conventional droop controller and by a modified droop controller, the latter obtained by adding a PID term to the conventional droop control. Active and reactive power sharing performance and the voltage and frequency responses of these control methods are measured in several operational cases. The study cases have been simulated in MATLAB-Simulink. Keywords: active and reactive power sharing, distributed generation, droop control, microgrid
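The conventional droop law shares load among parallel inverters without any communication by letting frequency sag with active power (f = f0 - kp*P) and voltage with reactive power. A sketch of the steady-state sharing that results (the gains and load figures are illustrative):

```python
def share_load(P_total, gains, f0=50.0):
    """Steady-state active power sharing of parallel droop-controlled
    inverters. All units settle at one common frequency f, and since
    each obeys P_i = (f0 - f) / kp_i, power splits inversely to the
    droop gain (i.e., in proportion to each unit's rating)."""
    inv_sum = sum(1.0 / k for k in gains)
    f = f0 - P_total / inv_sum
    return f, [(f0 - f) / k for k in gains]

# 3 kW load on two inverters; unit 2 has half the droop gain
# (twice the rating) and therefore takes twice the power
f, shares = share_load(3000.0, [0.001, 0.0005])
```

The PID term in the modified controller acts on top of this static law to shape the transient, while the steady-state sharing above is what both controllers must preserve.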
Procedia PDF Downloads 5926886 Estimate Robert Gordon University's Scope Three Emissions by Nearest Neighbor Analysis
Authors: Nayak Amar, Turner Naomi, Gobina Edward
Abstract:
Scottish Higher Education Institutions must report their scope 1 and 2 emissions, whereas reporting scope 3 is optional. Scope 3 covers indirect emissions, which embody a significant component of the total carbon footprint, and it is therefore important to record, measure and report them accurately. Robert Gordon University (RGU) reported only business travel, grid transmission and distribution, water supply and transport, and recycling under scope 3. This study estimates RGU's total scope 3 emissions by comparison with an HEI of similar scale. The scope 3 reporting of sixteen Scottish HEIs was studied, and Glasgow Caledonian University was identified as the nearest neighbour by comparing student full-time equivalents, staff full-time equivalents, research-teaching split, budget, and foundation year. Apart from the peer, data were also collected from the Higher Education Statistics Agency database. This study estimated RGU's scope 3 emissions from procurement, student and staff commuting, and international student trips. The results showed that RGU's reporting covered only 11% of its scope 3 emissions. The major contributors to scope 3 emissions were procurement (48%), student commuting (21%), international student trips (16%), and staff commuting (4%); the estimated scope 3 emissions were more than 14 times the reported emissions. This study shows the relative importance of each scope 3 emission source, which provides a guideline for HEIs on where to focus their attention to capture maximum scope 3 emissions. Moreover, it demonstrates that it is possible to estimate scope 3 emissions with limited data. Keywords: HEI, university, emission calculations, scope 3 emissions, emissions reporting
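The nearest-neighbour estimate amounts to scaling the peer's per-category emissions by a size ratio such as student FTEs; a sketch with purely illustrative figures (not RGU's or Glasgow Caledonian's actual data):

```python
# Nearest-neighbour scaling sketch: a peer HEI's reported scope 3
# emissions (tCO2e) are scaled by the ratio of student FTEs to
# estimate the target's unreported categories. Figures are invented.
peer = {
    "student_fte": 15000,
    "scope3": {"procurement": 4800.0, "student_commute": 2100.0,
               "international_trips": 1600.0, "staff_commute": 400.0},
}
target_student_fte = 12000

scale = target_student_fte / peer["student_fte"]
estimate = {cat: val * scale for cat, val in peer["scope3"].items()}
total = sum(estimate.values())
```

In practice the scaling factor would differ per category (e.g., budget for procurement, staff FTE for staff commuting), which is why the study compares the peer across several size measures before adopting it.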
Procedia PDF Downloads 1006885 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been used extensively in the investigation of turbulence. LES computes the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and to the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of SFS dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is 1, the DDM models cannot accurately reconstruct the SFS stress due to insufficient resolution of the SFS dynamics. Prediction capabilities are notably enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction except for cases involving the Helmholtz I and II filters, and a precision close to 100% is achieved at an FGR of 4 for all DDM models. Second, the exploration extends to filter anisotropy and its impact on the SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated.
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models, though these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. As filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. These findings emphasize the DDM framework's potential as a valuable tool for the development of sophisticated SFS models for LES of turbulence. Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
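The deconvolution idea behind the DDM can be illustrated in one dimension: approximate the inverse of an invertible filter G by the truncated series sum_k (I - G)^k, evaluated iteratively (van Cittert iteration). The simple 3-point filter below is an illustrative stand-in for the paper's filter family, not any specific filter from the study:

```python
import math

def apply_filter(u):
    """Periodic 1-D low-pass filter G: a simple 3-point stencil."""
    n = len(u)
    return [0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[(i + 1) % n]
            for i in range(n)]

def van_cittert_deconvolve(f, iters=20):
    """Approximate G^-1 by the truncated Neumann series
    sum_k (I - G)^k, evaluated iteratively as
        u_{n+1} = u_n + (f - G u_n),
    the classic building block of approximate-deconvolution SFS models."""
    u = list(f)
    for _ in range(iters):
        Gu = apply_filter(u)
        u = [ui + (fi - gi) for ui, fi, gi in zip(u, f, Gu)]
    return u

n = 64
truth = [math.sin(2 * math.pi * i / n) for i in range(n)]
filtered = apply_filter(truth)            # large scale is damped by G
recovered = van_cittert_deconvolve(filtered)
err = max(abs(a - b) for a, b in zip(recovered, truth))
```

The series converges only for well-resolved scales, which is one way to see why the FGR matters: at FGR = 1 the smallest SFS scales are not represented on the grid at all, and no amount of deconvolution can recover them.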
Procedia PDF Downloads 756884 Spatiotemporal Variation Characteristics of Soil pH around the Balikesir City, Turkey
Authors: Çağan Alevkayali, Şermin Tağil
Abstract:
Determining the surface distribution of soil pH in urban areas is essential for sustainable development. Changes in soil properties occur due to activities in agriculture, industry and other urban functions. Soil pH is important because of its effect on soil productivity, which depends on the sensitive and complex relation between plant and soil. Furthermore, measuring the spatial variability of soil reaction is necessary to assess the effects of urbanization. The objective of this study was to explore the spatial variation of soil pH and the influence of human land use on soil pH around Balikesir City using data for 2015 and Geographic Information Systems (GIS). Soil samples were taken from 40 different locations by systematic random sampling, from pits at 0-20 cm depth, because anthropogenic pollutants accumulate in the upper layers of soil. The study area was divided into a 750 x 750 m grid. GPS was used to determine the sampling locations, and the Inverse Distance Weighting (IDW) interpolation technique was used to analyze the spatial distribution of pH in the study area and to predict values at unsampled locations from the sampled ones. Natural soil acidity and alkalinity depend on the interaction between climate, vegetation, and soil geological properties; nonetheless, analyzing soil pH is also an indirect way to evaluate soil pollution caused by urbanization and industrialization. The results of this study showed that soil pH around Balikesir City was generally neutral, with values between 6.5 and 7.0. On the other hand, slight changes were observed around the open dump areas and the small industrial sites.
The results obtained from this study can be an indicator of important soil problems, and the data can be used by ecologists, planners and managers to protect soil resources around Balikesir City. Keywords: Balikesir, IDW, GIS, spatial variability, soil pH, urbanization
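The IDW interpolation used here predicts each unsampled point as a distance-weighted mean of the observations, with weights 1/d^p; a minimal sketch (the coordinates and pH readings below are illustrative, not the study's samples):

```python
def idw(x, y, samples, power=2.0):
    """Inverse Distance Weighting: the value at (x, y) is the mean of
    the sampled values weighted by 1/d^power. An exact hit on a
    sample point returns that sample's value (weight would diverge)."""
    num = den = 0.0
    for sx, sy, sv in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return sv
        w = d2 ** (-power / 2.0)
        num += w * sv
        den += w
    return num / den

# toy pH readings on a 750 m grid: (easting, northing, pH)
obs = [(0.0, 0.0, 6.5), (750.0, 0.0, 6.8),
       (0.0, 750.0, 7.0), (750.0, 750.0, 6.7)]
ph_centre = idw(375.0, 375.0, obs)   # equidistant, so a plain mean
```

The power parameter controls how local the prediction is: a larger power lets the nearest pits dominate, which matters when contamination around dumps and industrial sites is highly localized.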
Procedia PDF Downloads 322