Search results for: Michael Delves
37 Magneto-Thermo-Mechanical Analysis of Electromagnetic Devices Using the Finite Element Method
Authors: Michael G. Pantelyat
Abstract:
Fundamental principles of pure and applied research in the area of magneto-thermo-mechanical numerical analysis and design of innovative electromagnetic devices (modern induction heaters, novel thermoelastic actuators, rotating electrical machines, induction cookers, electrophysical devices) are elaborated. Mathematical models of magneto-thermo-mechanical processes in electromagnetic devices, taking into account the main interactions between the interrelated phenomena, are developed. In addition, a graphical representation of the coupled (multiphysics) phenomena under consideration is proposed, and numerical techniques for the solution of nonlinear problems are developed. On this basis, effective numerical algorithms for the solution of problems of practical interest are proposed, validated and implemented in the applied 2D and 3D computer codes developed. Many applied problems of practical interest regarding modern electrical engineering devices are numerically solved. The influences of various interrelated physical phenomena (temperature dependences of material properties, thermal radiation, conditions of convective heat transfer, contact phenomena, etc.) on the accuracy of the electromagnetic, thermal and structural analyses are investigated. Important practical recommendations on the choice of rational structures, materials and operation modes of the electromagnetic devices under consideration are proposed and implemented in industry.
Keywords: Electromagnetic devices, multiphysics, numerical analysis, simulation and design.
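The coupled field system behind such models can be written schematically as below; this is the standard eddy-current, heat-conduction and thermoelasticity formulation, assumed here as an illustration rather than reproduced from the paper.

```latex
% Schematic coupled magneto-thermo-mechanical system (standard form, assumed)
\begin{align*}
\nabla \times (\nu\,\nabla \times \mathbf{A}) + \sigma\,\partial_t \mathbf{A} &= \mathbf{J}_s
  && \text{electromagnetics (magnetic vector potential } \mathbf{A}) \\
\rho c\,\partial_t T - \nabla\cdot(\lambda \nabla T) &= \sigma\,\lvert \partial_t \mathbf{A} \rvert^2
  && \text{heat conduction driven by Joule losses} \\
\nabla\cdot\boldsymbol{\sigma}(\mathbf{u}) &= \mathbf{0}, \quad
  \boldsymbol{\sigma} = \mathbf{C} : \bigl(\boldsymbol{\varepsilon}(\mathbf{u}) - \boldsymbol{\alpha}\,\Delta T\bigr)
  && \text{thermoelastic equilibrium}
\end{align*}
```

The interactions mentioned in the abstract enter through the temperature dependence of the material properties ν, σ, λ and c, through the Joule losses driving the thermal problem, and through the thermal strains loading the mechanical one.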
36 Impact of Urbanization Growth on Disease Spread and Outbreak Response: Exploring Strategies for Enhancing Resilience
Authors: Raquel Vianna Duarte Cardoso, Eduarda Lobato Faria, José Jorge Boueri
Abstract:
Rapid urbanization has transformed the global landscape, presenting significant challenges to public health. This article delves into the impact of urbanization on the spread of infectious diseases in cities and identifies crucial strategies to enhance urban community resilience. Massive urbanization over recent decades has created environments conducive to the rapid spread of diseases due to population density, mobility, and unequal living conditions. Urbanization has been observed to increase exposure to pathogens and foster conditions conducive to disease outbreaks, including seasonal flu, vector-borne diseases, and respiratory infections. To tackle these issues, a range of cross-disciplinary approaches is suggested. These encompass the enhancement of urban healthcare infrastructure, emphasizing the need for robust investment in hospitals, clinics, and healthcare systems to keep pace with the burgeoning healthcare requirements of urban environments. Moreover, the establishment of disease monitoring and surveillance mechanisms is indispensable, as it allows for the timely detection of outbreaks and enables swift responses. Additionally, community engagement and education advocate for personal hygiene, vaccination, and preventive measures, playing a pivotal role in diminishing disease transmission. Lastly, the promotion of sustainable urban planning, which includes the creation of cities with green spaces, access to clean water, and proper sanitation, can significantly mitigate the risks associated with waterborne and vector-borne diseases. The article is based on an analysis of the scientific literature and offers a comprehensive insight into the complex relationship between urbanization and health. It places a strong emphasis on the urgent need for integrated approaches to improve urban resilience in the face of health challenges.
Keywords: Infectious diseases dissemination, public health, urbanization impacts, urban resilience.
35 Climate Related Financial Risk for Automobile Industry and Impact to Financial Institutions
Authors: S. Mahalakshmi, B. Senthil Arasu
Abstract:
As per recent changes in global policies, climate related changes and the impacts they cause across every sector are viewed as green swan events – in essence, climate related changes can happen often and lead to risk and a lot of uncertainty, but need to be mitigated instead of being considered black swan events. This raises the question of how this risk can be computed, so that financial institutions can plan to mitigate it. Climate related changes impact all risk types – credit risk, market risk, operational risk, liquidity risk, reputational risk and others. The models required to compute this risk have to consider the different industrial needs of the counterparty, as well as the contributing factors – be it the different risk drivers, the different transmission channels, the different approaches, or the granularity of available data. This suggests that climate related change, though it affects Pillar I risks, will be a Pillar II risk. It has to be modeled specifically based on the financial institution's actual exposure to different industries, instead of generalizing the risk charge, and it will have to be considered as additional capital to be held by the financial institution on top of its Pillar I risks as well as its existing Pillar II risks. In this paper, we present a risk assessment framework to model and assess climate change risks for both credit and market risk. This framework helps in assessing the different scenarios and how the different transition risks affect the risk associated with the different parties. The paper first delves into the increasing concentration of greenhouse gases, which in turn causes global warming. It then considers various scenarios in which different risk drivers impact the credit and market risk of an institution, by understanding the transmission channels and also considering the transition risk. The paper then focuses on an industry that is fast seeing disruption: the automobile industry. The framework is used to show how climate changes and changes to the relevant policies have impacted an entire financial institution. Appropriate statistical models for forecasting, anomaly detection and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also covers the climate risk calculation for the Pillar II capital calculations, and why it makes sense for the bank to maintain this capital in addition to its regular Pillar I and Pillar II capital.
Keywords: Capital calculation, climate risk, credit risk, pillar II risk, scenario modeling.
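One concrete way a transition scenario can enter the credit-risk numbers is through the standard expected-loss identity; the scenario conditioning shown below is an illustrative assumption, not the paper's exact model.

```latex
\mathrm{EL} = \mathrm{PD}\times\mathrm{LGD}\times\mathrm{EAD},
\qquad
\Delta\mathrm{EL}(s) = \mathrm{PD}(s)\,\mathrm{LGD}(s)\,\mathrm{EAD} \;-\; \mathrm{PD}_0\,\mathrm{LGD}_0\,\mathrm{EAD}
```

Here s indexes a transition scenario (for instance, a policy-driven fall in combustion-engine vehicle sales), PD and LGD are the probability of default and loss given default, and EAD is the exposure at default; aggregating ΔEL(s) over an institution's automobile-sector exposures gives a first approximation of the Pillar II add-on argued for above.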
34 Building Information Modeling and Its Application in the State of Kuwait
Authors: Michael Gerges, Ograbe Ahiakwo, Martin Jaeger, Ahmad Asaad
Abstract:
Recent advances in Building Information Modeling (BIM), especially in the Middle East, have been remarkable. Dubai has taken a lead by making BIM mandatory for all projects that involve complex architectural designs. This is because BIM is a dynamic process that assists all stakeholders in monitoring the project status throughout the different project phases with great transparency. It focuses on utilizing information technology to improve collaboration among project participants during the entire life cycle of the project, from the initial design to the supply chain, resource allocation, construction and all productivity requirements. In view of this trend, the paper examines the extent to which BIM is applied in the State of Kuwait by exploring practitioners' perspectives on BIM, especially on its main barriers and main advantages. To this end, structured interviews were carried out based on questionnaires with a range of different construction professionals. The results revealed that practitioners perceive improved communication and mitigated project risks through the collaboration encouraged between project participants. However, it was also observed that the full implementation of BIM in the State of Kuwait requires concerted efforts to make clients demand BIM, to counteract resistance to change among construction professionals, and to offer more training for design team members. This paper forms part of an on-going research effort on BIM and its application in the State of Kuwait, and it is on this basis that further research on the topic is proposed.
Keywords: Building Information Modeling, BIM, construction industry, Kuwait.
33 Data Mining for Cancer Management in Egypt Case Study: Childhood Acute Lymphoblastic Leukemia
Authors: Nevine M. Labib, Michael N. Malek
Abstract:
Data Mining aims at discovering knowledge out of data and presenting it in a form that is easily comprehensible to humans. One useful application in Egypt is cancer management, especially the management of Acute Lymphoblastic Leukemia (ALL), the most common type of cancer in children. This paper discusses the process of designing a prototype that can help in the management of childhood ALL, which has great significance in the health care field. Besides, it has a social impact by decreasing the rate of infection in children in Egypt. It also provides valuable information about the distribution and segmentation of ALL in Egypt, which may be linked to possible risk factors. Undirected Knowledge Discovery is used since, in the case of this research project, there is no target field, as the data provided are mainly subjective. This is done in order to quantify the subjective variables. Therefore, the computer will be asked to identify significant patterns in the provided medical data about ALL. This may be achieved by collecting the data necessary for the system, determining the data mining technique to be used, and choosing the most suitable implementation tool for the domain. The research makes use of a data mining tool, Clementine, to apply the Decision Trees technique. We feed it with data extracted from real-life cases taken from specialized cancer institutes. Relevant medical case details such as patient medical history and diagnosis are analyzed, classified, and clustered in order to improve disease management.
Keywords: Data Mining, Decision Trees, Knowledge Discovery, Leukemia.
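A minimal sketch of the decision-tree step, using scikit-learn as a modern stand-in for the Clementine workflow described above; the features and case records below are invented placeholders, not the institutes' data.

```python
# Hypothetical illustration of the decision-tree step (scikit-learn stands in for Clementine).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder case records; real inputs would come from the cancer institutes' files.
data = pd.DataFrame({
    "age":         [4, 7, 2, 10, 5, 8],
    "wbc_count":   [12.0, 85.0, 6.5, 140.0, 30.0, 55.0],  # x10^9/L
    "region_code": [1, 2, 1, 3, 2, 3],
    "risk_group":  ["standard", "high", "standard", "high", "standard", "high"],
})
X = data.drop(columns="risk_group")
y = data["risk_group"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# The induced rules can be inspected, mirroring the pattern-discovery goal of the study.
print(export_text(tree, feature_names=list(X.columns)))
print("held-out accuracy:", tree.score(X_test, y_test))
```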
32 A Machine Learning Approach for Earthquake Prediction in Various Zones Based on Solar Activity
Authors: Viacheslav Shkuratskyy, Aminu Bello Usman, Michael O’Dea, Mujeeb Ur Rehman, Saifur Rahman Sabuj
Abstract:
This paper examines relationships between solar activity and earthquakes by applying four machine learning techniques: K-nearest neighbour, support vector regression, random forest regression, and a long short-term memory network. Data from the SILSO World Data Center, the NOAA National Center, the GOES satellite, NASA OMNIWeb, and the United States Geological Survey were used for the experiment. The 23rd and 24th solar cycles, daily sunspot number, solar wind velocity, proton density, and proton temperature were all included in the dataset. The study also examined sunspots, solar wind, and solar flares, which all reflect solar activity, and the earthquake frequency distribution by magnitude and depth. The findings showed that the long short-term memory network predicts earthquakes more accurately than the other models applied in the study, and that solar activity is more likely to affect earthquakes of lower magnitude and shallow depth than earthquakes of magnitude 5.5 or larger at intermediate and deep depths.
Keywords: K-nearest neighbour, support vector regression, random forest regression, long short-term memory network, earthquakes, solar activity, sunspot number, solar wind, solar flares.
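A minimal sketch of the model comparison, assuming a feature matrix of daily solar-activity measures and an earthquake-related target; scikit-learn covers three of the four models (an LSTM would additionally require a deep-learning framework), and all data below are synthetic placeholders.

```python
# Synthetic stand-in for the solar-activity features described above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # sunspot number, wind velocity, proton density/temperature
y = X[:, 0] * 0.3 + rng.normal(scale=0.5, size=500)  # placeholder daily earthquake measure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "k-NN": KNeighborsRegressor(n_neighbors=5),
    "SVR": SVR(kernel="rbf"),
    "Random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: MAE = {mean_absolute_error(y_te, model.predict(X_te)):.3f}")
```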
31 Fault Tolerant (n, k)-Star Power Network Topology for Multi-Agent Communication in Automated Power Distribution Systems
Authors: Ning Gong, Michael Korostelev, Qiangguo Ren, Li Bai, Saroj Biswas, Frank Ferrese
Abstract:
This paper investigates the joint effect of the interconnected (n,k)-star network topology and Multi-Agent automated control on the restoration and reconfiguration of power systems. With the increasing trend of applying Multi-Agent control technologies to power system reconfiguration in the presence of faulty components or nodes, fault tolerance is becoming an important challenge in the design of distributed power system topologies. Since the reconfiguration of a power system is performed by agent communication, the (n,k)-star interconnected network topology is studied and modeled in this paper to optimize the process of power reconfiguration. We discuss the recently proposed (n,k)-star topology and examine its properties and advantages compared to traditional multi-bus power topologies. We design and simulate the topology model for distributed power system test cases. A related lemma based on the fault tolerance and conditional diagnosability properties is presented and proved both theoretically and practically. The conclusion is reached that the (n,k)-star topology model has measurable advantages over standard bus power systems while exhibiting fault tolerance in power restoration, as well as efficiency when applied to power system route discovery.
Keywords: (n, k)-star Topology, Fault Tolerance, Conditional Diagnosability, Multi-Agent System, Automated Power System.
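A small sketch constructing the (n,k)-star graph under its usual definition from the interconnection-network literature (vertices are the k-permutations of n symbols; a vertex is adjacent to those obtained by swapping its first symbol with its i-th, or by replacing the first symbol with an unused one); this definition is assumed to match the paper's usage.

```python
from itertools import permutations

def nk_star(n: int, k: int):
    """Build the (n,k)-star graph as an adjacency dict over k-permutations of 1..n."""
    symbols = range(1, n + 1)
    vertices = list(permutations(symbols, k))
    adj = {v: [] for v in vertices}
    for v in vertices:
        # "i-edges": swap the first symbol with the i-th symbol (2 <= i <= k).
        for i in range(1, k):
            u = (v[i],) + v[1:i] + (v[0],) + v[i + 1:]
            adj[v].append(u)
        # "1-edges": replace the first symbol with any symbol not in v.
        for s in symbols:
            if s not in v:
                adj[v].append((s,) + v[1:])
    return adj

adj = nk_star(4, 2)
print(len(adj), "vertices; degree =", len(next(iter(adj.values()))))  # 12 vertices, degree n-1 = 3
```

Every vertex has degree n-1, and the graph's known fault-tolerance properties (e.g., connectivity n-1) are what the lemma in the paper builds on.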
30 Cheiloscopy and Dactylography in Relation to ABO Blood Groups: Egyptian vs. Malay Populations
Authors: Manal Hassan Abdel Aziz, Fatma Mohamed Magdy Badr El Dine, Nourhan Mohamed Mohamed Saeed
Abstract:
Establishing an association between lip print patterns and those of fingerprints, as well as blood groups, is of fundamental importance in the forensic identification domain. The first aim of the current study was to determine the prevalent types of ABO blood groups, lip prints and fingerprint patterns in both studied populations. The second was to analyze any relation found between the different print patterns and the blood groups, which would be valuable for identification purposes. The present study was conducted on 60 healthy volunteers (30 males and 30 females) from each of the studied populations. Lip prints and fingerprints were obtained and classified according to Tsuchihashi's classification and Michael Kuchen’s classification, respectively. The results show that the ulnar loop was the most frequent pattern in both populations. Blood group A was the most frequent among Egyptians, while blood groups O and B were predominant among Malaysians. Significant relations were observed between lip print and fingerprint patterns (in the second quadrant for Egyptian males and the first for Malaysians). For Malaysian females, a statistically significant association was proved in the fourth quadrant. Regarding the blood groups, 89.5% of ulnar loops were significantly related to blood group A among Egyptian males. The results proved an association between the fingerprint pattern and the lip prints, as well as between the ABO blood group and the pattern of fingerprints. However, further research with larger sample sizes needs to be conducted to confirm the current results.
Keywords: ABO, cheiloscopy, dactylography, Egyptians, Malaysians.
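Associations of this kind are conventionally tested with a chi-square test of independence on the pattern-by-blood-group contingency table; a minimal sketch follows, in which the counts are invented placeholders rather than the study's data.

```python
# Hypothetical contingency table: fingerprint pattern vs. ABO blood group.
import numpy as np
from scipy.stats import chi2_contingency

#                 A   B   AB  O
table = np.array([[17, 6,  3, 9],    # ulnar loop
                  [ 4, 5,  2, 6],    # whorl
                  [ 2, 3,  1, 2]])   # arch
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p < 0.05 would indicate association
```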
29 An Approach to Secure Mobile Agent Communication in Multi-Agent Systems
Authors: Olumide Simeon Ogunnusi, Shukor Abd Razak, Michael Kolade Adu
Abstract:
The inter-agent communication manager facilitates communication among mobile agents via a message passing mechanism. To date, all Foundation for Intelligent Physical Agents (FIPA) compliant agent systems are capable of exchanging messages following the standard format of sending and receiving messages. Previous works tend to secure the messages exchanged among a community of collaborative agents commissioned to perform specific tasks by using cryptosystems. However, that approach is characterized by computational complexity due to the encryption and decryption processes required at the two ends. The approach to secure agent communication proposed here allows only agents created by the host agent server to communicate via the agent communication channel provided by the host agent platform. These agents are assumed to be harmless. Therefore, to secure the communication of legitimate agents from intrusion by external agents, a 2-phase policy enforcement system was developed. The first phase constrains the external agent to run only on the network server, while the second phase confines the activities of the external agent to its execution environment. To implement the proposed policy, a controller agent was charged with the task of screening any external agent entering the local area network and preventing it from migrating to the agent execution host where the legitimate agents are running. On arrival of the external agent at the host network server, an introspector agent was charged to monitor and restrain its activities. This approach secures legitimate agent communication from Man-in-the-Middle and replay attacks.
Keywords: Agent communication, introspective agent, isolation of agent, policy enforcement system.
28 Context Detection in Spreadsheets Based on Automatically Inferred Table Schema
Authors: Alexander Wachtel, Michael T. Franzen, Walter F. Tichy
Abstract:
Programming requires years of training. With natural language and end user development methods, programming could become available to everyone. It enables end users to program their own devices and extend the functionality of the existing system without any knowledge of programming languages. In this paper, we describe an Interactive Spreadsheet Processing Module (ISPM), a natural language interface to spreadsheets that allows users to address ranges within the spreadsheet based on an inferred table schema. Using the ISPM, end users are able to search for values in the schema of the table and to address the data in spreadsheets implicitly. Furthermore, it enables them to select and sort the spreadsheet data using natural language. The ISPM uses a machine learning technique to automatically infer areas within a spreadsheet, including different kinds of headers and data ranges. Since ranges can be identified from natural language queries, end users can query the data using natural language. During the evaluation, 12 undergraduate students were asked to perform operations (sum, sort, group and select) using the system as well as Excel without the ISPM interface, and the time taken for task completion was compared across the two systems. Only for the selection task did users take less time in Excel (since they directly selected the cells using the mouse) than in ISPM. These results point to natural language as a way of bringing end user software engineering past the present bottleneck of professional developers.
Keywords: Natural language processing, end user development, natural language interfaces, human computer interaction, data recognition, dialog systems, spreadsheet.
27 Validation of SWAT Model for Prediction of Water Yield and Water Balance: Case Study of Upstream Catchment of Jebba Dam in Nigeria
Authors: Adeniyi G. Adeogun, Bolaji F. Sule, Adebayo W. Salami, Michael O. Daramola
Abstract:
Estimation of water yield and water balance in a river catchment is critical to the sustainable management of water resources at the watershed level in any country. Therefore, in the present study, the Soil and Water Assessment Tool (SWAT) interfaced with a Geographical Information System (GIS) was applied as a tool to predict the water balance and water yield of a catchment area in Nigeria. The catchment area, 12,992 km², is located upstream of the Jebba hydropower dam in the north-central part of Nigeria. In this study, observed flow data were collected and compared with the flow simulated by SWAT. The correlation between the two data sets was evaluated using statistical measures such as the Nash-Sutcliffe Efficiency (NSE) and the coefficient of determination (R²). The model output shows good agreement between observed and simulated flow, as indicated by NSE and R² values greater than 0.7 for both the calibration and validation periods. A total of 42,733 mm of water was predicted by the calibrated model as the water yield potential of the basin for the simulation period from 1985 to 2010. This performance suggests that SWAT could be a promising tool for predicting water balance and water yield in the sustainable management of water resources. In addition, SWAT could be applied to other basins in Nigeria as a decision support tool for sustainable water management.
Keywords: GIS, Modeling, Sensitivity Analysis, SWAT, Water Yield, Watershed level.
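For reference, the two goodness-of-fit measures named above have the standard definitions below, with O_i the observed flows, S_i the simulated flows, and overbars denoting means.

```latex
\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{n}(O_i - S_i)^2}{\sum_{i=1}^{n}(O_i - \bar{O})^2},
\qquad
R^2 = \left(\frac{\sum_{i=1}^{n}(O_i-\bar{O})(S_i-\bar{S})}
{\sqrt{\sum_{i=1}^{n}(O_i-\bar{O})^2}\,\sqrt{\sum_{i=1}^{n}(S_i-\bar{S})^2}}\right)^{\!2}
```

An NSE of 1 indicates a perfect match between simulated and observed flows; values above 0.7, as reported here, are generally regarded as good hydrological model performance.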
26 DWT-SATS Based Detection of Image Region Cloning
Authors: Michael Zimba
Abstract:
A duplicated image region may be subjected to a number of attacks such as noise addition, compression, reflection, rotation, and scaling, with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, inclusive of those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm first performs a discrete wavelet transform (DWT) of a suspicious image. However, unlike most existing copy move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low frequency subband of the DWT of the suspicious image and thereby leave valuable information in the other three subbands, the proposed algorithm simultaneously extracts features from all four subbands. The extracted features are not only a more accurate representation of the image regions but are also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition (PCA-EVD) is applied to reduce the dimension of the features. The extracted features are then sorted using the more computationally efficient radix sort algorithm. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect the duplicated regions. The proposed algorithm is not only fast but also more robust to attacks than related CMIF detection algorithms. The experimental results show high detection rates.
Keywords: Affine Transformation, Discrete Wavelet Transform, Radix Sort, SATS.
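A minimal sketch of the feature-extraction stage, assuming PyWavelets and scikit-learn are available; the block size is arbitrary, NumPy's lexicographic sort stands in for the radix sort, and the SATS verification step is only indicated in a comment.

```python
# Sketch: per-block features from all four DWT subbands, then PCA-reduced and sorted.
import numpy as np
import pywt
from sklearn.decomposition import PCA

image = np.random.rand(256, 256)             # placeholder grayscale image
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")  # all four subbands, unlike low-band-only methods

B = 8  # block size in subband coordinates
blocks, positions = [], []
for r in range(0, cA.shape[0] - B + 1, 4):
    for c in range(0, cA.shape[1] - B + 1, 4):
        feat = np.concatenate([s[r:r+B, c:c+B].ravel() for s in (cA, cH, cV, cD)])
        blocks.append(feat)
        positions.append((r, c))

features = PCA(n_components=8).fit_transform(np.array(blocks))  # PCA-EVD dimension reduction
order = np.lexsort(features.T[::-1])         # stand-in for the radix sort of feature rows
# Adjacent rows in 'order' with near-identical features are candidate duplicated blocks,
# which SATS would then verify by checking that they share the same affine transformation.
```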
25 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications
Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber
Abstract:
The availability of high dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate between predefined classes, such as patients with a particular disease versus healthy controls. However, most of the existing research only focuses on a specific dataset; there is a lack of generic comparisons between classifiers that might provide a guideline for biologists or bioinformaticians in selecting the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, on mock datasets. We mimic common biological scenarios by simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by SVM's ability to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, strongly depend on the ratio of discriminators and perform better with a higher number of discriminators.
Keywords: Classification, high dimensional data, machine learning.
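A minimal sketch of such a comparison, assuming scikit-learn; the mock-data generator below (many features, few informative ones) stands in for the paper's biomarker simulations.

```python
# Compare classifier AUCs on mock data with few real discriminators among many features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# 1000 features, only 10 informative: mimics sparse discriminating biomarkers.
X, y = make_classification(n_samples=200, n_features=1000, n_informative=10,
                           n_redundant=0, random_state=0)
classifiers = {
    "SVM": SVC(probability=True),
    "Logistic regression": LogisticRegression(max_iter=5000),
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```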
24 Acid Attack on Cement Mortars Modified with Rubber Aggregates and EVA Polymer Binder
Authors: Konstantinos Sotiriadis, Michael Tupý, Nikol Žižková, Vít Petránek
Abstract:
The acid attack on cement mortars modified with rubber aggregates and EVA polymer binder was studied. Mortar specimens were prepared using a type CEM I 42.5 Portland cement and siliceous sand, as well as by substituting 25% of the sand with shredded used automobile tires and by adding EVA polymer in two percentages (5% and 10% of cement mass). Some specimens were only air cured, at laboratory conditions, and their compressive strength and water absorption were determined. The remaining specimens were stored in acid solutions (HCl, H2SO4, HNO3) after 28 days of initial curing, and kept at laboratory temperature. Compressive strength tests, mass measurements and visual inspection took place for 28 days. The compressive strength and water absorption of the air-cured specimens were significantly decreased when rubber aggregates were used. The addition of EVA polymer further reduced water absorption, while it had no significant impact on strength. Compressive strength values were affected to a greater extent by the hydrochloric acid solution, followed by the sulfuric and nitric acid solutions. The addition of EVA polymer decreased the compressive strength loss of the specimens with rubber aggregates stored in the hydrochloric and nitric acid solutions. The specimens without polymer binder showed similar mass loss, which was highest in the sulfuric acid solution, followed by the hydrochloric and nitric acid solutions. The use of EVA polymer delayed mass loss, while its content did not affect it significantly.
Keywords: Acid attack, mortar, EVA polymer, rubber aggregates.
23 On the Design of Shape Memory Alloy Locking Mechanism: A Novel Solution for Laparoscopic Ligation Process
Authors: Reza Yousefian, Michael A. Kia, Mehrdad Hosseini Zadeh
Abstract:
Blood vessels must be occluded to avoid loss of blood during laparoscopic surgeries. This paper presents a locking mechanism to be used in a ligation laparoscopic procedure (LigLAP I) as an alternative to a stapling procedure. Currently, stapling devices are used to occlude vessels. Using these devices may result in some problems, including injury of the bile duct, taking up a great deal of space behind the vessel, and bile leak. In this new procedure, a two-layer suture occludes a vessel. A locking mechanism is also required to hold the suture. Since there is limited space at the device tip, a Shape Memory Alloy (SMA) actuator is used in this mechanism. Suitability for cleanroom applications, small size, and silent performance are among the advantages of SMA actuators in biomedical applications. An experimental study was conducted to examine the function of the locking mechanism. To set up the experiment, a prototype of the locking mechanism was built using nitinol, a nickel-titanium shape memory alloy. The locking mechanism successfully locked a polymer suture in all runs of the experiment. In addition, the effects of various surface materials on the applied pulling forces were studied. Various materials were mounted at the mechanism tip to compare the maximum pulling forces applied to the suture for each material. The results show that the various surface materials on the device tip produce large differences in the applied pulling forces.
Keywords: Laparoscopic surgery, ligation process, locking mechanism, Shape Memory Alloy (SMA) actuator.
22 Digital Automatic Gain Control Integrated on WLAN Platform
Authors: Emilija Miletic, Milos Krstic, Maxim Piz, Michael Methfessel
Abstract:
In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compatible with the IEEE 802.11a/g standards. Those standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver that we used enables gain control of a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). The control of those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC in this process is used to automatically control the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate), and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor is performed in a few steps: measuring the power levels of baseband samples of an RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting the gain factor of the accumulation, and applying the adjusted gain factor to the baseband values. Based on the measurement results of the RSSI signal dependence on input power, we have concluded that this digital AGC can be implemented by applying a simple linearization of the RSSI. This solution is very simple but effective, and reduces the complexity and power consumption of the DAGC. This DAGC was implemented and tested both in FPGA and in ASIC as part of our WLAN baseband processor. Finally, we integrated this circuit into a compact WLAN PCMCIA board based on MAC and baseband ASIC chips that we designed.
Keywords: WLAN, AGC, RSSI, baseband processor.
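A minimal sketch of the four-step gain-control loop described above; the set point, loop gain, and sample source are assumed placeholder values, not the chip's actual parameters.

```python
# Sketch of a digital AGC loop: measure power, accumulate error, adjust gain, apply it.
import numpy as np

rng = np.random.default_rng(0)
set_point_db = -20.0   # desired average baseband power (assumed)
loop_gain = 0.1        # accumulation step size (assumed)
gain_db = 0.0          # current VGA/LNA gain setting

for frame in range(50):
    samples = rng.normal(scale=0.05, size=256) * 10 ** (gain_db / 20)  # placeholder capture
    power_db = 10 * np.log10(np.mean(samples ** 2))     # step 1: measure power level
    error_db = set_point_db - power_db                  # step 2: difference vs. set point
    gain_db += loop_gain * error_db                     # step 3: adjust accumulated gain
    # step 4: the adjusted gain is applied to the next block of baseband values

print(f"converged gain ≈ {gain_db:.1f} dB")
```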
21 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images
Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj
Abstract:
Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates with the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescent microscopy FISH images. In this work, initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods by allowing the subtraction of spurious signals and non-biological fluorescent substrata. The method is a robust and user-friendly approach which enables users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescent images for quantitative analysis of biofilm heterogeneity.
Keywords: Image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization.
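A minimal sketch of an automated thresholding step for such images, assuming scikit-image; Otsu's method and the clean-up steps below are a generic stand-in for the paper's detection approach, not its actual algorithm.

```python
# Segment candidate biofilm regions from a fluorescence image by global thresholding.
import numpy as np
from skimage import filters, morphology, measure

image = np.random.rand(512, 512)           # placeholder epifluorescence FISH image
threshold = filters.threshold_otsu(image)  # automatic intensity threshold
mask = image > threshold

# Clean spurious signal: drop small specks, fill small holes in the biofilm mask.
mask = morphology.remove_small_objects(mask, min_size=64)
mask = morphology.remove_small_holes(mask, area_threshold=64)

# Per-region mean intensity supports downstream heterogeneity analysis.
for region in measure.regionprops(measure.label(mask), intensity_image=image):
    print(region.label, region.area, f"{region.mean_intensity:.3f}")
```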
20 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements
Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating
Abstract:
Large-scale products are often assembled according to the job-site principle, meaning that during assembly the product is located at a fixed position, while its area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other hand, varying areas for storage, machines or working areas are temporarily required. This is an important factor when arranging products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Intended to be large enough to include the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all areas required by the product. With this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process; instead, a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products which complies with the dynamic area requirements will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects assuming dynamic, competing area requirements. First, the problem situation is extensively explained, and existing research on associated topics is described and evaluated regarding the possibility of an adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas, considering logical and practical constraints. Finally, in order to quantify the potential of the developed method, some test series results are presented, showing the possible increase in area utilization.
Keywords: Dynamic area requirements, facility layout problem, optimization model, product assembly.
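A compact sketch of how such a model can be formulated; this simplified assignment form is an assumption for illustration, not the authors' exact model. Let x_{ps} ∈ {0,1} assign product p to site s, and let a_p(t) be its time-varying area requirement; capacity must then hold at every time step rather than for the static maximum.

```latex
\max \sum_{p}\sum_{s} x_{ps}
\quad\text{s.t.}\quad
\sum_{p} a_p(t)\,x_{ps} \le A_s \;\;\forall s,\,\forall t,
\qquad
\sum_{s} x_{ps} \le 1 \;\;\forall p
```

Here A_s is the capacity of site s. Replacing a_p(t) by max_t a_p(t) recovers the conventional static reservation and shows exactly where the unused-area slack arises.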
19 Modeling of Surface Roughness for Flow over a Complex Vegetated Surface
Authors: Wichai Pattanapol, Sarah J. Wakes, Michael J. Hilton, Katharine J.M. Dickinson
Abstract:
Turbulence modeling of large-scale flow over a vegetated surface is complex. Such problems involve large computational domains, while the characteristics of the flow near the surface are also important. In modeling large scale flow, surface roughness, including vegetation, is generally taken into account by means of roughness parameters in the modified law of the wall. However, the turbulence structure within the canopy region cannot be captured with this method; an alternative method, which applies source/sink terms to model plant drag, can be used instead. These models have been developed and tested intensively, but only with simple surface geometries. This paper aims to compare the use of roughness parameters and of additional source/sink terms in modeling the effect of plant drag on wind flow over a complex vegetated surface. The RNG k-ε turbulence model with the non-equilibrium wall function was tested in both cases. In addition, the k-ω turbulence model, which is claimed to be computationally stable, was also investigated with the source/sink terms. All numerical results were compared to the experimental results obtained at the study site, Mason Bay, Stewart Island, New Zealand. In the near-surface region, the results obtained using the source/sink terms are found to be more accurate than those using roughness parameters. The k-ω turbulence model with source/sink terms is more appropriate, as it is more accurate and more computationally stable than the RNG k-ε turbulence model. In the higher region, there is no significant difference among the results obtained from all simulations.
Keywords: CFD, canopy flow, surface roughness, turbulence models.
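The source/sink approach referred to above conventionally adds a canopy drag term to the momentum equations; this is the standard formulation, assumed to correspond to the one used here.

```latex
S_{u_i} = -\rho\, C_d\, a(z)\, \lvert \mathbf{U} \rvert\, u_i
```

Here C_d is the canopy drag coefficient, a(z) the leaf area density profile, |U| the local wind speed and u_i the velocity component; corresponding source/sink terms for k and ε (or ω) account for the turbulence generated and dissipated within the canopy.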
18 Opportunities for Precision Feed in Apiculture for Managing the Efficacy of Feed and Medicine
Authors: John Michael Russo
Abstract:
Honeybees are important to our food system and continue to suffer from high rates of colony loss. Precision feed has brought many benefits to livestock cultivation, and these should transfer to apiculture. However, apiculture has unique challenges. The objective of this research is to understand how principles of precision agriculture, applied to apiculture and feed specifically, might effectively improve state-of-the-art cultivation. The methodology surveys apicultural practice to build a model for assessment. First, a review of apicultural motivators is made. Feed methods are then evaluated. Finally, precision feed methods are examined as accelerants with the potential to advance the effectiveness of feed practice. Six important motivators emerge: colony loss, disease, climate change, site variance, operational costs, and competition. Feed practice itself is used to compensate for environmental variables. The research finds that the current state-of-the-art in apiculture feed focuses on critical challenges in the management of feed schedules which satisfy the requirements of the bees, preserve potency, optimize environmental variables, and manage costs. Many of the challenges are most acute when feed is used to dispense medication. Technologies such as RNA treatments have even more rigorous demands. Precision feed solutions focus on strategies which accommodate the specific needs of individual livestock. A major component is data; precision feed integrates precise data with methods that respond to individual needs. There is enormous opportunity for precision feed to improve apiculture through the integration of precision data with policies that translate data into optimized action in the apiary, particularly through automation.
Keywords: Apiculture, precision apiculture, RNA varroa treatment, honeybee feed applications.
17 Scatterer Density in Edge and Coherence Enhancing Nonlinear Anisotropic Diffusion for Medical Ultrasound Speckle Reduction
Authors: Ahmed Badawi, J. Michael Johnson, Mohamed Mahfouz
Abstract:
This paper proposes new enhancements to nonlinear anisotropic diffusion methods that greatly reduce speckle and preserve image features in medical ultrasound images. By incorporating a local physical characteristic of the image, in this case scatterer density, in addition to the gradient, into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filtering methods, namely edge enhancing (EE) and coherence enhancing (CE) diffusion. The new enhancement methods were tested using various ultrasound images, including phantom and some clinical images, to determine the amount of speckle reduction, edge enhancement, and coherence enhancement. Scatterer density weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts that use the gradient alone to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. SDWNAD's superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), adaptive weighted median filtering (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes these methods ideal preprocessing steps for automatic segmentation in ultrasound imaging.
Keywords: Nonlinear anisotropic diffusion, ultrasound imaging, speckle reduction, scatterer density estimation, edge based enhancement, coherence enhancement.
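The evolution underlying such tensor-based schemes is the nonlinear anisotropic diffusion equation; weighting the diffusivity by scatterer density in addition to the gradient, shown schematically below, is the enhancement described, though the paper's exact weighting function is not reproduced here.

```latex
\frac{\partial I}{\partial t} = \nabla\cdot\bigl(D(\nabla I_\sigma,\;\rho_s)\,\nabla I\bigr)
```

Here I is the image, D the diffusion tensor built from the smoothed gradient ∇I_σ, and ρ_s the locally estimated scatterer density that additionally modulates the diffusivity.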
16 Effect of Cladding Direction on Residual Stress Distribution in Laser Cladded Rails
Authors: Taposh Roy, Anna Paradowska, Ralph Abrahams, Quan Lai, Michael Law, Peter Mutton, Mehdi Soodi, Wenyi Yan
Abstract:
In this investigation, a laser cladding process with powder feeding was used to deposit stainless steel 410L (high strength, excellent resistance to abrasion and corrosion, and great laser compatibility) onto a railhead (a higher-strength, heat-treated hypereutectoid rail grade manufactured in accordance with the requirements of European standard EN 13674 Part 1 for grade R400HT), to investigate the development and controllability of process-induced residual stress in the cladding, heat-affected zone (HAZ) and substrate, and to analyse their correlation with the hardness profile for two different laser cladding directions (across and along the track). Residual stresses were analysed by neutron diffraction at the OPAL reactor, ANSTO. Neutron diffraction was carried out on the samples in the longitudinal (parallel to the rail), transverse (perpendicular to the rail) and normal (through-thickness) directions with high spatial resolution through the thickness. Because of the thick rail and thin cladding, 4 mm thick reference samples were prepared from every specimen by electric discharge machining (EDM). Metallography across the laser cladded sample revealed four distinct zones: the clad zone, the dilution zone, the HAZ and the substrate. Compressive residual stresses were found in the clad zone, and tensile residual stresses in the dilution zone and HAZ. Cladding longitudinally induced higher tensile stress in the HAZ, whereas cladding transversely across the rail produced lower tensile stresses.
Keywords: Laser cladding, residual stress, neutron diffraction, HAZ.
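Neutron diffraction stress analysis rests on two standard relations: lattice strain from the shift in plane spacing, and triaxial Hooke's law to convert the three measured strain components into stresses.

```latex
\varepsilon_i = \frac{d_i - d_0}{d_0},
\qquad
\sigma_i = \frac{E}{1+\nu}\left[\varepsilon_i + \frac{\nu}{1-2\nu}\bigl(\varepsilon_L + \varepsilon_T + \varepsilon_N\bigr)\right],
\quad i \in \{L, T, N\}
```

Here d_i is the lattice spacing measured in direction i, d_0 the stress-free spacing (the role of the EDM-cut reference samples), and L, T, N the longitudinal, transverse and normal directions.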
15 Battery Energy Storage System Economic Benefits Assessment on a Network Frequency Control
Authors: Kréhi Serge Agbli, Samuel Portebos, Michaël Salomon
Abstract:
Here, a methodology is presented for evaluating the economic benefit of providing primary frequency control with a Battery Energy Storage System (BESS). In this methodology, two control types (basic and hysteresis) are implemented, and the corresponding minimum energy storage system power allowing the frequency drop to be maintained inside a given threshold under a given contingency is identified and compared using DigSilent's PowerFactory software. Following this step, the corresponding energy storage capacity (in MWh) is calculated. As PowerFactory is dedicated to dynamic simulation for transient analysis, a first-order model of the IEEE 9-bus grid used for the PowerFactory analysis is characterized and implemented in MATLAB-Simulink. Primary frequency control is simulated using the two control types over one month of grid frequency deviation data on this Simulink model. This simulation yields the energy throughput of both the basic and hysteresis BESSs. It emerges that a 15-minute operating band of the battery capacity allocated to frequency control is sufficient under the considered disturbances. A sensitivity analysis on the width of the control deadband is then performed for the two control types. Varying the deadband width leads to identical sizing, with the hysteresis control showing better frequency control at the cost of a higher delivered throughput compared to the basic control. An economic analysis comparing the cost of the sized BESS to the potential revenues is then performed.
Keywords: Battery Energy Storage System, electrical network frequency stability, frequency control unit, PowerFactory.
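A minimal sketch contrasting the two control types on a frequency-deviation trace; the deadband width, droop slope and synthetic signal below are assumed placeholders, not the study's PowerFactory or Simulink models.

```python
# Basic (deadband + droop) vs. hysteresis primary frequency control, schematically.
import numpy as np

rng = np.random.default_rng(1)
f_dev = np.cumsum(rng.normal(scale=2e-3, size=3600))  # synthetic frequency deviation [Hz], 1 s steps
deadband, droop = 0.02, 5.0   # Hz, MW per Hz (assumed values)

def basic(dev):
    out = np.zeros_like(dev)
    active = np.abs(dev) > deadband
    out[active] = -droop * (dev[active] - np.sign(dev[active]) * deadband)
    return out

def hysteresis(dev, release=0.01):
    out, on = np.zeros_like(dev), False
    for t, d in enumerate(dev):
        if abs(d) > deadband:   on = True    # engage once past the deadband
        elif abs(d) < release:  on = False   # disengage only below a lower threshold
        out[t] = -droop * d if on else 0.0
    return out

for name, p in (("basic", basic(f_dev)), ("hysteresis", hysteresis(f_dev))):
    print(f"{name}: throughput ≈ {np.trapz(np.abs(p)) / 3600:.2f} MWh")
```

The hysteresis variant keeps responding inside the deadband once triggered, which is exactly the better-regulation/higher-throughput trade-off reported above.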
14 A National Survey of Clinical Psychology Graduate Student Attitudes toward Psychotherapy Treatment Manuals: A Replication Study
Authors: B. Bergström, A. Ladd, A. Jones, L. Rosso, P. Michael
Abstract:
Attitudes toward treatment manuals serve as a meaningful predictor of general attitudes toward evidence-based practice. Despite demonstrating high effectiveness in treating many mental disorders, manualized treatments have been underutilized by practitioners. Thus, one can assess the state of the field regarding the adoption of evidence-based practices by surveying practitioner attitudes toward manualized treatments. This study is an adapted replication that assesses psychology graduate student attitudes toward manualized treatments, as a general marker of attitudes toward evidence-based practice. Training programs provide future clinicians with the foundation for critical skills in clinical practice. Research demonstrates that post-graduate continuing education has little to no effect on clinical practice; thus, graduate programs serve as the primary, and often final, platform for all future practice. However, there are few empirical data identifying the attitudes and training of graduate students in utilizing manualized treatments. The empirical analysis of this study indicates an increase in positive attitudes toward manualized treatments among graduate students (within the United States) when compared to past surveys of professional psychologists. Findings from this study may inform graduate programs of barriers students face in developing positive attitudes toward manualized treatments and evidence-based practice. This study also serves as a preliminary predictor of the state of the field with regard to professional psychologists' attitudes toward evidence-based practice, if attitudes remain stable. This study indicates that attitudes toward utilizing evidence-based practices, such as treatment manuals, have become more positive since the year 2000.
Keywords: Evidence based treatment, Future of clinical science, Manualized treatment, Student attitudes towards evidence based treatments.
13 The Underestimation of Cultural Risk in the Execution of Megaprojects
Authors: Alan Walsh, Peter Walker, Michael Ellis
Abstract:
There is a real danger that both practitioners and researchers considering risks associated with megaprojects ignore or underestimate the impacts of cultural risk. The paper investigates the potential impacts of a failure to achieve cultural unity between the principal actors executing a megaproject. The principal relationships include those between the principal contractors and the project stakeholders, or between the project stakeholders and their principal advisors, Western consultants. This study confirms that cultural dissonance between these parties can delay or disrupt megaproject execution, and it examines why cultural issues should be prioritized as a significant risk factor in megaproject delivery. The paper addresses the practical impacts and potential mitigation measures which may reduce cultural dissonance in a megaproject's delivery. This information is drawn from on-going case studies of live infrastructure megaprojects in Europe and the Middle East's GCC states, from the Western consultants' perspective. The collaborating researchers each have at least 30 years of construction experience and are engaged in architecture, project management and contracts management, dealing with megaprojects in Europe or the GCC. After examining the cultural interfaces they have observed during the execution of megaprojects, they conclude that, globally, culture significantly influences efficient delivery. The study finds that cultural risk is ever-present where different nationalities co-manage megaprojects, and that cultural conflict poses a real threat to the timely delivery of megaprojects. The study indicates that the higher the cultural distance between the principal actors, the more pronounced the risk, with the risk of cultural dissonance more prominent in GCC megaprojects. The findings support a more culturally aware and cohesive team approach and recommend cross-cultural training to mitigate the effects of cultural disparity.
Keywords: Cultural risk underestimation, cultural distance, megaproject characteristics, megaproject execution.
12 Hydraulic Optimization of an Adjustable Spiral-Shaped Evaporator
Authors: Matthias Feiner, Francisco Javier Fernández García, Michael Arneman, Martin Kipfmüller
Abstract:
To ensure reliability in miniaturized devices or processes with increased heat fluxes, very efficient cooling methods have to be employed in order to cope with small available cooling surfaces. To address this problem, a certain type of evaporator/heat exchanger was developed: it is called a swirl evaporator due to its flow characteristic. The swirl evaporator consists of a concentrically eroded screw geometry in which a capillary tube is guided; the assembly is inserted into a pocket (blind) hole in components with high heat load. The liquid refrigerant R32 is sprayed through the capillary tube (a small tube with an inner diameter between one and three millimeters, aligned in the center of the bore hole) onto the end face of the blind hole, and is sucked off against the injection direction through the screw geometry. Since the refrigerant is sucked off along a helical path (twisted flow), it is accelerated against the hot wall (centrifugal acceleration). This results in an increase in the critical heat flux of up to 40%, so more heat can be dissipated on the same surface/available installation space. This enables a wide range of technical applications. To optimize the design for the needs of various fields of industry, such as internal tool cooling when machining nickel-base alloys like Inconel 718, a correlation-based model of the swirl evaporator was developed. The model is separated into 3 subgroups with 5 regimes overall. The pressure drop and heat transfer are calculated separately. An approach to determine the location of the phase change in the capillary and the swirl was implemented. A test stand has been developed to verify the simulation.
Keywords: Helically-shaped, oil-free, R32, swirl-evaporator, twist flow.
11 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)
Authors: Mingren Shi, Michael Renton
Abstract:
There is a world-wide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Concentration and time of exposure are both important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes in R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available, so it became possible to explicitly include the mortalities of the different genotypes in the model. In this paper we describe how we used two generalized linear models (GLMs), probit and logistic, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than the traditional maximum likelihood estimation, to estimate the model parameters. The results show that both the probit and logistic models fit the data sets well, but the former is much better in terms of smaller least squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, but is less time consuming and computationally demanding.
Keywords: Mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation.
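The two GLMs fit mortality as a function of concentration C and exposure time t; in the usual dose-response form (the exact covariate structure used in the paper may differ):

```latex
\text{probit:}\quad p = \Phi\bigl(\beta_0 + \beta_1 \ln C + \beta_2 \ln t\bigr),
\qquad
\text{logistic:}\quad \ln\frac{p}{1-p} = \beta_0 + \beta_1 \ln C + \beta_2 \ln t
```

where p is the expected mortality, Φ the standard normal cumulative distribution function, and the β_j the parameters, here estimated via the generalized inverse (least-squares) solution of the linearized system rather than by maximum likelihood.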
10 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling
Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow
Abstract:
Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across the imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results of fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available in the R package Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic checks. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals' ambulatory physiological measures and self-reported affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
Keywords: Dynamic modeling, missing data, multiple imputation, physiological measures.
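The pooling step described above follows Rubin's rules, the standard way MI combines the m per-data-set estimates into one inference:

```latex
\bar{Q} = \frac{1}{m}\sum_{j=1}^{m}\hat{Q}_j,
\qquad
T = \bar{U} + \Bigl(1+\frac{1}{m}\Bigr)B,
\quad
\bar{U} = \frac{1}{m}\sum_{j=1}^{m} U_j,
\quad
B = \frac{1}{m-1}\sum_{j=1}^{m}\bigl(\hat{Q}_j - \bar{Q}\bigr)^2
```

where \hat{Q}_j and U_j are the estimate and its sampling variance from the j-th imputed data set, B the between-imputation variance, and T the total variance used for standard errors and significance tests.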
9 Rigorous Electromagnetic Model of Fourier Transform Infrared (FT-IR) Spectroscopic Imaging Applied to Automated Histology of Prostate Tissue Specimens
Authors: Rohith K Reddy, David Mayerich, Michael Walsh, P Scott Carney, Rohit Bhargava
Abstract:
Fourier transform infrared (FT-IR) spectroscopic imaging is an emerging technique that provides both chemically and spatially resolved information. The rich chemical content of the data may be utilized for computer-aided determination of structure and pathologic state (cancer diagnosis) in histological tissue sections of the prostate. FT-IR spectroscopic imaging of prostate tissue has shown that tissue type (histological) classification can be performed to a high degree of accuracy [1], and cancer diagnosis can be performed with an accuracy of about 80% [2] on a microscopic (≈ 6 μm) length scale. In performing these analyses, it has been observed that there is large variability (more than 60%) between spectra from different points on tissue that is expected to consist of the same essential chemical constituents. Spectra at the edges of tissues are characteristically and consistently different from chemically similar tissue in the middle of the same sample. Here, we explain these differences using a rigorous electromagnetic model of light-sample interaction. Spectra from FT-IR spectroscopic imaging of chemically heterogeneous samples differ from bulk spectra of the individual chemical constituents of the sample, because spectra depend not only on chemistry but also on the shape of the sample. Using coupled wave analysis, we characterize and quantify the nature of spectral distortions at the edges of tissues. Furthermore, we present a method of performing histological classification of tissue samples. Since the mid-infrared spectrum is typically assumed to be a quantitative measure of chemical composition, classification results can vary widely due to spectral distortions. However, we demonstrate that the selection of localized metrics based on chemical information can make our data robust to the spectral distortions caused by scattering at the tissue boundary.
Keywords: Infrared, spectroscopy, imaging, tissue classification.
8 A Convolutional Neural Network-Based Vehicle Theft Detection, Location, and Reporting System
Authors: Michael Moeti, Khuliso Sigama, Thapelo Samuel Matlala
Abstract:
One of the principal challenges that the world is confronted with is insecurity. The crime rate is increasing exponentially, and protecting our physical assets, especially in the motorist sector, is becoming impossible through our own strength alone. The need to develop technological solutions that detect and report theft without any human interference is inevitable. This is critical, especially for vehicle owners, to ensure theft detection and speedy identification toward recovery efforts in cases where a vehicle is missing or attempted theft is taking place. The vehicle theft detection system uses a Convolutional Neural Network (CNN) to recognize the driver's face, captured using an installed mobile phone device. The location identification function uses the Global Positioning System (GPS) to determine the real-time location of the vehicle. Upon identification of the location, Global System for Mobile Communications (GSM) technology is used to report or notify the vehicle owner of the whereabouts of the vehicle. The mobile app was implemented in Python, which allows easy access to machine learning algorithms through its widely developed library ecosystem. The graphical user interface was developed in Java, as it is better suited for mobile development. Google's online database (Firebase) was used as the means of storage for the application. The system integration test was performed using a simple percentage analysis: 60 vehicle owners participated in this study as a sample, and questionnaires were used to establish the acceptability of the system developed. The results indicate the efficiency of the proposed system, and consequently the paper proposes that the system can effectively monitor a vehicle at any given place, even if it is driven outside its normal jurisdiction. Moreover, the system can be used as a database to detect, locate and report missing vehicles to different security agencies.
Keywords: Convolutional Neural Network, CNN, location identification, tracking, GPS, GSM.
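A minimal sketch of the detect-locate-report flow, with the CNN reduced to a face-embedding distance check; the threshold, embedding source, GPS reader and GSM sender are all hypothetical stubs, not the system's actual components.

```python
# Hypothetical end-to-end flow: verify the driver, and if unknown, report the location via SMS.
import numpy as np

MATCH_THRESHOLD = 0.6  # assumed distance cutoff for "same person"

def is_registered_driver(face_embedding, owner_embeddings):
    """Compare a CNN face embedding against enrolled owners (Euclidean distance)."""
    dists = [np.linalg.norm(face_embedding - e) for e in owner_embeddings]
    return min(dists) < MATCH_THRESHOLD

def read_gps():               # stub: would query the phone's GPS module
    return -25.7479, 28.2293  # placeholder latitude/longitude

def send_sms(number, text):   # stub: would hand the message to the GSM modem
    print(f"SMS to {number}: {text}")

def on_ignition(face_embedding, owner_embeddings, owner_number):
    if not is_registered_driver(face_embedding, owner_embeddings):
        lat, lon = read_gps()
        send_sms(owner_number, f"Possible theft: vehicle at {lat:.4f}, {lon:.4f}")

# Example with random placeholder embeddings (an unknown face triggers the report):
owners = [np.zeros(128)]
on_ignition(np.random.rand(128), owners, "+27000000000")
```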