Search results for: battery grading algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4346

356 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores

Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi

Abstract:

In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial to leverage synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, neural network predictions and synergy scores of the two drugs on others within their clusters are used to predict the synergy score of the considered drug pair. This approach facilitates comparative analysis with clustering and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods like DeepSynergy and DeepDDS on diverse datasets such as O'Neil and ALMANAC. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.
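
As a rough illustration of the clustering step described above, the following is a minimal NumPy sketch of Markov clustering (alternating expansion and inflation of a column-stochastic matrix) on a tiny synergy-weighted drug graph. The adjacency values, the expansion/inflation settings, and the attractor-based cluster read-out are all hypothetical and are not taken from ClusterSyn.

```python
import numpy as np

# Toy synergy-weighted adjacency matrix for 4 drugs (hypothetical values,
# with self-loops on the diagonal as is customary for MCL).
A = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.9, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.8],
              [0.0, 0.1, 0.8, 1.0]])

def mcl(matrix, expansion=2, inflation=2.0, iters=50):
    """Basic Markov clustering: alternate matrix expansion and inflation."""
    M = matrix / matrix.sum(axis=0)                 # column-normalise to a stochastic matrix
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)    # expansion: let flow spread along paths
        M = M ** inflation                          # inflation: strengthen strong flows
        M = M / M.sum(axis=0)                       # re-normalise columns
    return M

M = mcl(A)
# Rows that keep non-negligible mass act as "attractors"; drugs sharing an
# attractor row belong to the same cluster.
clusters = {}
for attractor in np.where(M.sum(axis=1) > 1e-6)[0]:
    clusters[int(attractor)] = list(np.where(M[attractor] > 1e-6)[0])
print(clusters)   # e.g. drugs {0, 1} and {2, 3} end up in separate clusters for this toy graph
```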

Keywords: drug synergy, clustering, prediction, machine learning, deep learning

Procedia PDF Downloads 79
355 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder

Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh

Abstract:

In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for the identification of activities in the human brain remains a major challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that are distinctive across different activities yet consistent within the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Further, a larger number of features results in high computational complexity, while fewer features compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and the need for dataset normalization, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, four hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with the regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with the other two methods recently reported in the literature, which reveals that the proposed method is far better than the other two methods in terms of classification accuracy.

Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization

Procedia PDF Downloads 114
354 Different Types of Amyloidosis Revealed with Positive Cardiac Scintigraphy with Tc-99M DPD-SPECT

Authors: Ioannis Panagiotopoulos, Efstathios Kastritis, Anastasia Katinioti, Georgios Efthymiadis, Argyrios Doumas, Maria Koutelou

Abstract:

Introduction: Transthyretin amyloidosis (ATTR) is a rare but serious infiltrative disease. Myocardial scintigraphy with DPD has emerged as the most effective, non-invasive, highly sensitive, and highly specific diagnostic method for cardiac ATTR amyloidosis. However, there are cases in which additional laboratory investigations reveal AL amyloidosis or other diseases despite a positive DPD scintigraphy. We describe the experience from the Onassis Cardiac Surgery Center and the monitoring center for infiltrative myocardial diseases of the cardiology clinic at AHEPA. Materials and Methods: All patients with clinical suspicion of cardiac or extracardiac amyloidosis undergo a myocardial scintigraphy scan with Tc-99m DPD. In this way, over 500 patients have been examined. Further diagnostic approach based on clinical and imaging findings includes laboratory investigation and invasive techniques (e.g., biopsy). Results: Out of 76 patients in total with positive myocardial scintigraphy Grade 2 or 3 according to the Perugini scale, 8 were proven to suffer from AL Amyloidosis during the investigation of paraproteinemia. Among these patients, 3 showed Grade 3 uptake, while the rest were graded as Grade 2, or 2 to 3. Additionally, one patient presented diffuse and unusual radiopharmaceutical uptake in soft tissues throughout the body without cardiac involvement. These findings raised suspicions, leading to the analysis of κ and λ light chains in the serum, as well as immunostaining of proteins in the serum and urine of these specific patients. The final diagnosis was AL amyloidosis. Conclusion: The value of DPD scintigraphy in the diagnosis of cardiac amyloidosis from transthyretin is undisputed. However, positive myocardial scintigraphy with DPD should not automatically lead to the diagnosis of ATTR amyloidosis. Laboratory differentiation between ATTR and AL amyloidosis is crucial, as both prognosis and therapeutic strategy are dramatically altered. Laboratory exclusion of paraproteinemia is a necessary and essential step in the diagnostic algorithm of ATTR amyloidosis for all positive myocardial scintigraphy with diphosphonate tracers since >20% of patients with Grade 3 and 2 uptake may conceal AL amyloidosis.

Keywords: AL amyloidosis, amyloidosis, ATTR, myocardial scintigraphy, Tc-99m DPD

Procedia PDF Downloads 81
353 Analysis of NMDA Receptor 2B Subunit Gene (GRIN2B) mRNA Expression in the Peripheral Blood Mononuclear Cells of Alzheimer's Disease Patients

Authors: Ali Bayram, Semih Dalkilic, Remzi Yigiter

Abstract:

The N-methyl-D-aspartate (NMDA) receptor is a subtype of glutamate receptor and plays a pivotal role in learning, memory, neuronal plasticity, neurotoxicity, and synaptic mechanisms. Animal experiments have suggested that glutamate-induced excitotoxic injury and NMDA receptor blockade lead to amnesia and other neurodegenerative diseases, including Alzheimer's disease (AD), Huntington's disease, and amyotrophic lateral sclerosis. The aim of this study is to investigate the association between the expression level of the NMDA receptor coding gene GRIN2B and Alzheimer's disease. The study was approved by the local ethics committees and was conducted according to the principles of the Declaration of Helsinki and guidelines for Good Clinical Practice. Peripheral blood was collected from 50 patients diagnosed with AD and 49 healthy control individuals. Total RNA was isolated with the RNeasy Midi Kit (Qiagen) according to the manufacturer's instructions. After checking RNA quality and quantity with a spectrophotometer, GRIN2B expression levels were detected by quantitative real-time PCR (qRT-PCR). Statistical analyses were performed; the two groups were compared with the Mann-Whitney U test in GraphPad InStat with a 95% confidence interval and p < 0.05. After statistical analyses, we determined that GRIN2B expression levels were downregulated in the AD patient group with respect to the control group. However, the expression level of this gene showed high variability within each group. In this study, we determined that the expression level of the NMDA receptor coding gene GRIN2B was downregulated in AD patients compared with healthy control individuals. According to our results, we speculate that the GRIN2B expression level is associated with AD, but it is necessary to validate these results with a bigger sample size.
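
A minimal sketch of the group-comparison step described above, using SciPy's Mann-Whitney U test; the expression values below are hypothetical placeholders, not the study's qRT-PCR data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical relative GRIN2B expression values (arbitrary units).
ad_patients = [0.42, 0.55, 0.31, 0.60, 0.48, 0.37, 0.29, 0.51]
controls    = [0.80, 0.95, 0.70, 1.10, 0.88, 0.76, 0.99, 0.85]

# Two-sided Mann-Whitney U test, as used for the AD vs. control comparison.
stat, p_value = mannwhitneyu(ad_patients, controls, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is significant at the 5% level.")
```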

Keywords: Alzheimer’s disease, N-methyl-d-aspartate receptor, NR2B, GRIN2B, mRNA expression, RT-PCR

Procedia PDF Downloads 394
352 D-Wave Quantum Computing Ising Model: A Case Study for Forecasting of Heat Waves

Authors: Dmytro Zubov, Francesco Volponi

Abstract:

In this paper, the D-Wave quantum computing Ising model is used for the forecasting of positive extremes of daily mean air temperature. Forecast models are designed with two to five qubits, which represent 2-, 3-, 4-, and 5-day historical data respectively. The Ising model's real-valued weights and dimensionless coefficients are calculated using daily mean air temperatures from 119 places around the world, as well as sea level (Aburatsu, Japan). In comparison with current methods, this approach is better suited to predict heat wave values because it does not require the estimation of a probability distribution from scarce observations. The proposed quantum computing forecast algorithm is simulated on a traditional computer architecture with combinatorial optimization of the Ising model parameters for the Ronald Reagan Washington National Airport dataset with 1-day lead time on the learning sample (1975-2010). Analysis of the forecast accuracy (ratio of successful predictions to total number of predictions) on the validation sample (2011-2014) shows that the Ising model with three qubits has 100% accuracy, which is quite significant as compared to other methods. However, the number of identified heat waves is small (only one out of nineteen in this case). The other models with 2, 4, and 5 qubits have 20%, 3.8%, and 3.8% accuracy respectively. The presented three-qubit forecast model is applied for the prediction of heat waves at five other locations: Aurel Vlaicu, Romania – accuracy is 28.6%; Bratislava, Slovakia – accuracy is 21.7%; Brussels, Belgium – accuracy is 33.3%; Sofia, Bulgaria – accuracy is 50%; Akhisar, Turkey – accuracy is 21.4%. These predictions are not ideal, but they are not zero. They can be used independently or together with other predictions generated by different methods. The loss of human life, as well as environmental, economic, and material damage, from extreme air temperatures could be reduced if some heat waves are predicted. Even a small success rate implies a large socio-economic benefit.
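
To illustrate the combinatorial-optimization core of an Ising model, here is a toy classical sketch that finds the lowest-energy spin configuration by brute force. The three spins loosely mirror the three-qubit setting above, but the couplings J and fields h are hypothetical, not the fitted forecast coefficients, and a real D-Wave annealer would perform this minimization in hardware.

```python
import itertools
import numpy as np

# Ising energy E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i, spins s_i in {-1, +1}.
J = np.array([[0.0, 0.8, -0.3],
              [0.0, 0.0,  0.5],
              [0.0, 0.0,  0.0]])   # upper-triangular couplings (hypothetical)
h = np.array([0.2, -0.1, 0.4])     # local fields (hypothetical)

def ising_energy(spins):
    s = np.array(spins)
    return -(s @ J @ s) - h @ s

# Exhaustive search over the 2^3 spin configurations.
best = min(itertools.product([-1, 1], repeat=3), key=ising_energy)
print("ground state:", best, "energy:", round(float(ising_energy(best)), 3))
```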

Keywords: heat wave, D-wave, forecast, Ising model, quantum computing

Procedia PDF Downloads 500
351 Improvement of the Geometric of Dental Bridge Framework through Automatic Program

Authors: Rong-Yang Lai, Jia-Yu Wu, Chih-Han Chang, Yung-Chung Chen

Abstract:

The dental bridge is one of the clinical methods for the treatment of missing teeth. The dental bridge is generally designed with two layers, consisting of an inner framework layer (zirconia) and an outer layer of porcelain fused to the framework. The design of a conventional bridge is generally based on the antagonist tooth profile, so that the framework is evenly offset by an equal thickness from the outer contour. All-ceramic dental bridges made of zirconia have demonstrated remarkable potential to withstand a higher physiological occlusal load in the posterior region, but it was found that there is still a risk of all-ceramic bridge failure within five years. Thus, how to reduce the incidence of failure is still a problem to be solved. Therefore, the objective of this study is to develop mechanical designs for the all-ceramic dental bridge framework that reduce stress and enhance fracture resistance under given loading conditions, using the finite element method. In this study, dental design software is used to design the dental bridge based on tooth CT images. After building the model, the Bi-directional Evolutionary Structural Optimization (BESO) algorithm implemented in the finite element software was employed to analyze the finite element results and determine the distribution of the materials in the dental bridge; BESO searches for the optimum distribution of two different materials, namely porcelain and zirconia. Based on the calculated stress value of each element, when the element stress value is higher than the threshold value, the element is replaced by the framework material; when the difference in the maximum stress peak value is less than 0.1%, the calculation is complete. After completing the design of the dental bridge, the stress distribution of the whole structure is changed. BESO reduces the peak values of principal stress by 10% in the outer-layer porcelain and avoids tensile stress failure.

Keywords: dental bridge, finite element analysis, framework, automatic program

Procedia PDF Downloads 282
350 Pharmacokinetic Modeling of Valsartan in Dog following a Single Oral Administration

Authors: In-Hwan Baek

Abstract:

Valsartan is a potent and highly selective antagonist of the angiotensin II type 1 receptor and is widely used for the treatment of hypertension. The aim of this study was to investigate the pharmacokinetic properties of valsartan in dogs following oral administration of a single dose using quantitative modeling approaches. Forty beagle dogs were randomly divided into two groups. Group A (n=20) was administered a single oral dose of valsartan 80 mg (Diovan® 80 mg), and group B (n=20) was administered a single oral dose of valsartan 160 mg (Diovan® 160 mg) in the morning after an overnight fast. Blood samples were collected into heparinized tubes before and at 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12 and 24 h following oral administration. The plasma concentrations of valsartan were determined using LC-MS/MS. Non-compartmental pharmacokinetic analyses were performed using WinNonlin Standard Edition software, and modeling approaches were performed using maximum-likelihood estimation via the expectation maximization (MLEM) algorithm with sampling using ADAPT 5 software. After a single dose of valsartan 80 mg, the mean value of the maximum concentration (Cmax) was 2.68 ± 1.17 μg/mL at 1.83 ± 1.27 h. The area under the plasma concentration-versus-time curve from time zero to the last measurable concentration (AUC24h) was 13.21 ± 6.88 μg·h/mL. After dosing with valsartan 160 mg, the mean Cmax was 4.13 ± 1.49 μg/mL at 1.80 ± 1.53 h, and the AUC24h was 26.02 ± 12.07 μg·h/mL. The Cmax and AUC values increased in proportion to the increment in valsartan dose, while the pharmacokinetic parameters of elimination rate constant, half-life, apparent total clearance, and apparent volume of distribution were not significantly different between the doses. Valsartan pharmacokinetics fit a one-compartment model with first-order absorption and elimination following a single dose of valsartan 80 mg and 160 mg. In addition, high inter-individual variability was identified in the absorption rate constant. In conclusion, valsartan displays dose-dependent pharmacokinetics in dogs, and subsequent quantitative modeling approaches provided detailed pharmacokinetic information on valsartan. The current findings provide useful information in dogs that will aid the future development of improved formulations or fixed-dose combinations.
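
A short sketch of the model form reported above, a one-compartment model with first-order absorption and elimination; the rate constants and volume term are illustrative placeholders, not the fitted dog parameters.

```python
import numpy as np

def one_compartment_oral(t, dose_mg, ka, ke, v_over_f):
    """Plasma concentration C(t) = D*ka / (V/F*(ka-ke)) * (exp(-ke*t) - exp(-ka*t))."""
    return (dose_mg * ka) / (v_over_f * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Hypothetical parameters for illustration only.
t = np.array([0.0, 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, 24])   # sampling times (h)
conc = one_compartment_oral(t, dose_mg=80, ka=1.2, ke=0.25, v_over_f=25.0)  # roughly ug/mL scale

cmax = conc.max()
tmax = t[conc.argmax()]
auc_24h = np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(t))   # trapezoidal AUC over 0-24 h
print(f"Cmax = {cmax:.2f}, Tmax = {tmax:.1f} h, AUC0-24 = {auc_24h:.1f}")
```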

Keywords: dose-dependent, modeling, pharmacokinetics, valsartan

Procedia PDF Downloads 297
349 Geometric Nonlinear Dynamic Analysis of Cylindrical Composite Sandwich Shells Subjected to Underwater Blast Load

Authors: Mustafa Taskin, Ozgur Demir, M. Mert Serveren

Abstract:

The precise study of the impact of underwater explosions on structures is of great importance in the design and engineering calculations of floating structures, especially those used for military purposes, as well as power generation facilities such as offshore platforms that can become a target in case of war. Considering that ship and submarine structures are mostly curved surfaces, it is extremely important and interesting to examine the destructive effects of underwater explosions on curvilinear surfaces. In this study, geometric nonlinear dynamic analysis of cylindrical composite sandwich shells subjected to an instantaneous pressure load is performed. The instantaneous pressure load is defined as an underwater explosion, and the effects of the liquid medium are taken into account. There are equations in the literature for pressure due to underwater explosions, but these equations have been obtained for flat plates. For this reason, the instantaneous pressure load equations are adapted to be suitable for curvilinear structures before proceeding with the analyses. Fluid-solid interaction is defined by using Taylor's Plate Theory. The lower and upper layers of the cylindrical composite sandwich shell are modeled as composite laminates, and the middle layer consists of a soft core. The geometric nonlinear dynamic equations of the shell are obtained by Hamilton's principle, taking into account the von Kármán theory of large displacements. Then, the time-dependent geometric nonlinear equations of motion are solved with the help of the generalized differential quadrature method (GDQM), and the dynamic behavior of cylindrical composite sandwich shells exposed to underwater explosion is investigated. An algorithm that can work parametrically for the solution has been developed within the scope of the study.

Keywords: cylindrical composite sandwich shells, generalized differential quadrature method, geometric nonlinear dynamic analysis, underwater explosion

Procedia PDF Downloads 192
348 Exploring the Connectedness of Ad Hoc Mesh Networks in Rural Areas

Authors: Ibrahim Obeidat

Abstract:

Reaching a fully-connected network of mobile nodes in rural areas has received great attention among network researchers. This attention arose due to the complexity and high cost of setting up the needed infrastructure for these networks, in addition to the low transmission range these nodes have. Terranet technology, as an example, employs an ad-hoc mesh network where each node has a transmission range not exceeding one kilometer; this means that two nodes are able to communicate with each other if they are at most one kilometer apart, otherwise a third party must play the role of the "relay". In Terranet, and as an idea to reduce network setup cost, every node in the network is considered a router responsible for forwarding data between other nodes, which results in a decentralized collaborative environment. Most research on Terranet presents ideas on how to encourage mobile nodes to become more cooperative by keeping their devices in the "ON" state as long as possible while accepting the role of relay (router). This research addresses the issue of finding the percentage of nodes in an ad-hoc mesh network within rural areas that should play the role of relay at every time slot, in relation to the actual area coverage of nodes, in order for the network to reach full connectivity. To the best of our knowledge, no previous research has discussed this issue. The research is carried out through an implementation that builds an adjacency matrix as an indicator of the connectivity between network members. This matrix is continually updated until each value in it refers to the number of hops needed to reach from one node to another. After repeating the algorithm on different area sizes, different coverage percentages for each size, and different relay percentages several times, the extracted results show that for area coverage of less than 5%, 40% of the nodes need to be relays, whereas 10% is enough for areas with node coverage greater than 5%.
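
A minimal sketch of the kind of connectivity check the abstract relies on (not the paper's simulator): nodes are scattered over a square area, any pair within a 1 km transmission range is connected, and full connectivity is tested with a breadth-first search over the adjacency list. The area size, node count, and seed are hypothetical.

```python
import random
from collections import deque

random.seed(42)
AREA_KM, RANGE_KM, N_NODES = 10.0, 1.0, 300   # hypothetical parameters

# Scatter nodes and build the adjacency structure from the 1 km range rule.
nodes = [(random.uniform(0, AREA_KM), random.uniform(0, AREA_KM)) for _ in range(N_NODES)]
adj = {i: [] for i in range(N_NODES)}
for i in range(N_NODES):
    for j in range(i + 1, N_NODES):
        dx, dy = nodes[i][0] - nodes[j][0], nodes[i][1] - nodes[j][1]
        if dx * dx + dy * dy <= RANGE_KM ** 2:
            adj[i].append(j)
            adj[j].append(i)

# Breadth-first search from node 0; the mesh is fully connected
# exactly when every node is reachable.
seen, queue = {0}, deque([0])
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if v not in seen:
            seen.add(v)
            queue.append(v)

print(f"fully connected: {len(seen) == N_NODES} ({len(seen)}/{N_NODES} reachable)")
```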

Keywords: ad-hoc mesh networks, network connectivity, mobile ad-hoc networks, Terranet, adjacency matrix, simulator, wireless sensor networks, peer to peer networks, vehicular Ad hoc networks, relay

Procedia PDF Downloads 282
347 Multi-Biometric Personal Identification System Based on Hybrid Intelligence Method

Authors: Laheeb M. Ibrahim, Ibrahim A. Salih

Abstract:

Biometrics is a technology that has been widely used in many official and commercial identification applications. Increased concerns about security in recent years (especially during the last decades) have resulted in more attention being given to biometric-based verification techniques. Here, a novel fusion approach of palmprint and dental traits has been suggested. These traits have been employed in a range of biometric applications and can identify a person both postmortem (PM) and antemortem (AM). Besides improving accuracy, the fusion of biometrics has several advantages, such as deterring spoofing activities and reducing enrolment failure. In this paper, unimodal biometric systems are first built using the palmprint and dental traits, with classification for each performed by an artificial neural network and by a hybrid technique that combines swarm intelligence and a neural network; an attempt is then made to combine the palmprint and dental biometrics. Principally, the fusion of palmprint and dental biometrics and their potential application as biometric identifiers have been explored. To address this issue, investigations have been carried out into the relative performance of several statistical data fusion techniques for integrating the information in both unimodal and multimodal biometrics. The results of the multimodal approach have also been compared with each of the two single-trait authentication approaches. This paper studies feature-level and decision-level fusion in multimodal biometrics. To determine the genuine acceptance rate (GAR), parallel decision-level fusion rules (AND, OR, and majority voting) have been used. The backpropagation method used for classification achieved GAR results of (92%, 99%, 97%), respectively, while the hybrid technique used for classification achieved (95%, 99%, 98%), respectively. For feature-level fusion of the multibiometric system, the same preceding methods have been used for classification, with results of (98%, 99%), respectively, while feature-level fusion with different methods achieved a GAR of (98%).
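
For reference, a tiny sketch of the parallel decision-level fusion rules named above (AND, OR, majority voting); the boolean accept/reject decisions are hypothetical matcher outputs, not the paper's palmprint/dental results.

```python
# Decision-level fusion of independent matcher decisions (True = accept).

def fuse_and(decisions):
    return all(decisions)                        # accept only if every matcher accepts

def fuse_or(decisions):
    return any(decisions)                        # accept if at least one matcher accepts

def fuse_majority(decisions):
    return sum(decisions) > len(decisions) / 2   # accept on a strict majority

# Example: palmprint matcher accepts, dental matcher rejects, a third matcher accepts.
decisions = [True, False, True]
print("AND:", fuse_and(decisions),
      "OR:", fuse_or(decisions),
      "majority:", fuse_majority(decisions))
```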

Keywords: backpropagation neural network (BP ANN), multibiometric system, parallel system decision-fusion, particle swarm optimization (PSO)

Procedia PDF Downloads 533
346 Effects of Cacao Agroforestry and Landscape Composition on Farm Biodiversity and Household Dietary Diversity

Authors: Marlene Yu Lilin Wätzold, Wisnu Harto Adiwijoyo, Meike Wollni

Abstract:

Land-use conversion from tropical forests to cash crop production in the form of monocultures has drastic consequences for biodiversity. Meanwhile, high dependence on cash crop production is often associated with a decrease in other food crop production, thereby affecting household dietary diversity. Additionally, deforestation rates have been found to reduce households' dietary diversity, as forests often offer various food sources. Agroforestry systems are seen as a potential solution to improve local biodiversity as well as provide a range of provisioning ecosystem services, such as timber and other food crops. While a number of studies have analyzed the effects of agroforestry on biodiversity, as well as on household livelihood indicators, little is understood about potential trade-offs or synergies between the two. This interdisciplinary study aims to fill this gap by assessing cacao agroforestry's role in enhancing local bird diversity, as well as farm household dietary diversity. Additionally, we will take a landscape perspective and investigate in what ways the landscape composition, such as the proximity to forests and forest patches, is able to contribute to the local bird diversity, as well as households' dietary diversity. Our study will take place in two agro-ecological zones in Ghana, based on household surveys of 500 cacao farm households. Using a subsample of 120 cacao plots, we will assess the degree of shade tree diversity and density using drone flights and a computer vision tree detection algorithm. Bird density and diversity will be assessed using sound recorders that will be kept in the cacao plots for 24 hours. Landscape compositions will be assessed via remote sensing images. The results of our study are of high importance, as they will allow us to understand the effects of agroforestry and landscape composition in improving simultaneous ecosystem services.

Keywords: agroforestry, biodiversity, landscape composition, nutrition

Procedia PDF Downloads 113
345 VeriFy: A Solution to Implement Autonomy Safely and According to the Rules

Authors: Michael Naderhirn, Marco Pavone

Abstract:

Problem statement, motivation, and aim of work: So far, the development of control algorithms has been done by control engineers in such a way that the controller fits a specification by testing. When it comes to the certification of an autonomous car in highly complex scenarios, the challenge is much higher, since such a controller must mathematically guarantee that it implements the rules of the road while on the other side guaranteeing aspects like safety and real-time executability. What if it becomes reality to solve this demanding problem by combining Formal Verification and System Theory? The aim of this work is to present a workflow to solve the above-mentioned problem. Summary of the presented results / main outcomes: We show the usage of an English-like language to transform the rules of the road into system specifications for an autonomous car. The language-based specifications are used to define system functions and interfaces. Based on that, a formal model is developed which formally and correctly models the specifications. On the other side, a mathematical model describing the system dynamics is used to calculate the system's reachability set, which is further used to determine the system input boundaries. Then a motion planning algorithm is applied inside the system boundaries to find an optimized trajectory in combination with the formal specification model while satisfying the specifications. The result is a control strategy which can be applied in real time independent of the scenario, with a mathematical guarantee to satisfy a predefined specification. We demonstrate the applicability of the method in simulated driving scenarios and a potential certification. Originality, significance, and benefit: To the authors' best knowledge, it is the first time that it is possible to show an automated workflow which combines a specification in an English-like language and a mathematical model in a formally verified way to synthesize a controller for potential real-time applications like autonomous driving.

Keywords: formal system verification, reachability, real time controller, hybrid system

Procedia PDF Downloads 241
344 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies

Authors: Chao-Ton Su, Li-Fei Chen

Abstract:

The parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design aims to determine the optimal settings for the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, parameter design aims to determine the operating settings of the manufacturing processes so that non-uniformity in manufacturing processes can be minimized. Parameter design, which tries to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies, which play key roles in the global economy. Quality remains the most important factor that enables these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge is related to root causes and defect patterns. In the R&D stage, root causes are often unknown, and defect patterns are difficult to classify. Additionally, data collection is not easy. Even when high-volume data can be collected, data interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools, such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing the parameter settings for the manufacturing process in Taiwan's tech companies will be presented to illustrate the proposed approach's effectiveness. Finally, a discussion of using traditional experimental design versus the proposed approach for process optimization will be made.

Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design

Procedia PDF Downloads 145
343 Size Optimization of Microfluidic Polymerase Chain Reaction Devices Using COMSOL

Authors: Foteini Zagklavara, Peter Jimack, Nikil Kapur, Ozz Querin, Harvey Thompson

Abstract:

The invention and development of the Polymerase Chain Reaction (PCR) technology have revolutionised molecular biology and molecular diagnostics. There is an urgent need to optimise the performance of those devices while reducing the total construction and operation costs. The present study proposes a CFD-enabled optimisation methodology for continuous flow (CF) PCR devices with a serpentine-channel structure, which enables the trade-offs between the competing objectives of DNA amplification efficiency and pressure drop to be explored. This is achieved by using a surrogate-enabled optimisation approach accounting for the geometrical features of a CF μPCR device, performing a series of simulations at a relatively small number of Design of Experiments (DoE) points with the use of COMSOL Multiphysics 5.4. The values of the objectives are extracted from the CFD solutions, and response surfaces are created using polyharmonic splines and neural networks. After creating the respective response surfaces, a genetic algorithm and a multi-level coordinate search optimisation function are used to locate the optimum design parameters. Both optimisation methods produced similar results for both the neural network and the polyharmonic spline response surfaces. The results indicate that there is the possibility of improving the DNA efficiency by ∼2% in one PCR cycle when doubling the width of the microchannel to 400 μm while maintaining the height at the value of the original design (50 μm). Moreover, the increase in the width of the serpentine microchannel is combined with a decrease in its total length in order to obtain the same residence times in all the simulations, resulting in a smaller total substrate volume (32.94% decrease). A multi-objective optimisation is also performed with the use of a Pareto front plot. Such knowledge will enable designers to maximise the amount of DNA amplified or to minimise the time taken throughout thermal cycling in such devices.
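
A compact sketch of the surrogate-enabled idea described above, under simplifying assumptions: a cheap analytic function stands in for the COMSOL evaluations, a radial-basis-function response surface (a polyharmonic thin-plate spline) is fitted to a handful of DoE points, and SciPy's differential evolution is used as a stand-in for the genetic algorithm search. All design variables, bounds, and the objective are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

# Hypothetical objective standing in for a CFD-derived quantity (e.g. pressure
# drop) as a function of two design variables (e.g. channel width and height, in mm).
def expensive_objective(x):
    w, h = x
    return (w - 0.4) ** 2 + 2.0 * (h - 0.05) ** 2 + 0.1 * np.sin(10 * w)

# "Design of Experiments": evaluate the objective at a small number of points.
rng = np.random.default_rng(0)
doe = rng.uniform([0.1, 0.02], [0.8, 0.10], size=(30, 2))
values = np.array([expensive_objective(p) for p in doe])

# Fit a thin-plate-spline response surface to the DoE data.
surrogate = RBFInterpolator(doe, values, kernel="thin_plate_spline")

# Search the cheap surrogate with a population-based optimiser.
result = differential_evolution(lambda x: float(surrogate(np.atleast_2d(x))[0]),
                                bounds=[(0.1, 0.8), (0.02, 0.10)], seed=0)
print("optimum (surrogate):", result.x, "value:", result.fun)
```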

Keywords: PCR, optimisation, microfluidics, COMSOL

Procedia PDF Downloads 161
342 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures

Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani

Abstract:

Our study is about elaborating a potential solution for a search engine that involves semantic technology to retrieve information and display it significantly. Semantic search engines are not widely used over the web, as the majority are still in beta stage or under construction. Many problems face current applications in semantic search; the major problem is to analyze and calculate the meaning of a query in order to retrieve relevant information. Another problem is the ontology-based index and its updates. Ranking results according to concept meaning and its relation to the query is another challenge. In this paper, we are offering a light meta-engine (QESM) which uses Google search, and therefore Google's index, with some adaptations to its returned results by adding multi-query expansion. The mission was to find a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. At the beginning, the engine finds synonyms of each query term entered by the user based on a lexical database. Then, query expansion is applied to generate different semantically analogous sentences. These are generated randomly by combining the found synonyms and the original query terms. Our model suggests the use of semantic similarity measures between two sentences. Practically, we used this method to calculate the semantic similarity between each query and the description of each page's content generated by Google. The generated sentences are sent to the Google engine one by one and ranked again all together with the adapted ranking method (QESM). Finally, our system will place Google pages with higher similarities at the top of the results. We have conducted experiments with 6 different queries. We have observed that most results ranked with QESM were altered with respect to Google's originally generated pages. With our experimental queries, QESM frequently generates better accuracy than Google. In some worst cases, it behaves like Google.
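
A minimal sketch of the expansion-and-rerank idea, under assumptions: NLTK's WordNet stands in for the lexical database, a simple Jaccard token overlap stands in for the paper's semantic similarity measure, and the candidate page descriptions are hypothetical rather than live Google results.

```python
import itertools
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def expand_query(query):
    """Generate semantically analogous queries by swapping in WordNet synonyms."""
    options = []
    for term in query.split():
        syns = {l.name().replace('_', ' ') for s in wn.synsets(term) for l in s.lemmas()}
        options.append([term] + sorted(syns - {term})[:2])   # original term plus a few synonyms
    return [' '.join(combo) for combo in itertools.product(*options)]

def jaccard(a, b):
    """Token-overlap similarity, a stand-in for the semantic similarity measure."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Hypothetical page descriptions returned by the underlying engine.
pages = {
    "page1": "fast car rental services in the city",
    "page2": "buy a quick automobile online",
    "page3": "history of railway transport",
}

expanded = expand_query("quick car")
scores = {p: max(jaccard(q, desc) for q in expanded) for p, desc in pages.items()}
for page, score in sorted(scores.items(), key=lambda kv: -kv[1]):   # re-ranked results
    print(page, round(score, 2))
```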

Keywords: semantic search engine, Google indexing, query expansion, similarity measures

Procedia PDF Downloads 425
341 Agile Software Effort Estimation Using Regression Techniques

Authors: Mikiyas Adugna

Abstract:

Effort estimation is among the activities carried out in software development processes. An accurate estimation model leads to project success. Agile effort estimation is a complex task because of the dynamic nature of software development. Researchers are still conducting studies on agile effort estimation to enhance prediction accuracy. For these reasons, we investigated and proposed a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, setting the training set at 80% and the testing set at 20%. We chose two different phases for training the two regression algorithms (Elastic Net and LASSO) following the train-test split. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, the grid search technique (the grid is used to tune and select the optimum parameters) and 5-fold cross-validation are applied to get the final trained model. Finally, the final trained model is evaluated using the testing set. The experimental work is applied to the agile story point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared methods. Of the two proposed algorithms, LASSO regression achieved better predictive performance, with PRED(8%) and PRED(25%) results of 100.0, an MMRE of 0.0491, an MMER of 0.0551, an MdMRE of 0.0593, an MdMER of 0.063, and an MSE of 0.0007. The result implies that the trained LASSO regression model is the most acceptable and offers higher estimation performance than that reported in the literature.
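
A minimal scikit-learn sketch of the pipeline described above (normalisation, an 80/20 split, default-parameter training, then grid search with 5-fold cross-validation). The synthetic regression data and the parameter grids are placeholders for the 21-project story-point dataset and the paper's actual tuning ranges.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# Placeholder data standing in for the agile story-point dataset.
X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=0)
X = MinMaxScaler().fit_transform(X)                         # normalise the whole dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Phase 1: default parameters.
for name, model in [("LASSO", Lasso()), ("ElasticNet", ElasticNet())]:
    model.fit(X_tr, y_tr)
    print(name, "default MSE:", mean_squared_error(y_te, model.predict(X_te)))

# Phase 2: grid search with 5-fold cross-validation.
grids = {
    "LASSO": (Lasso(max_iter=10000), {"alpha": [0.001, 0.01, 0.1, 1.0]}),
    "ElasticNet": (ElasticNet(max_iter=10000),
                   {"alpha": [0.001, 0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]}),
}
for name, (model, grid) in grids.items():
    search = GridSearchCV(model, grid, cv=5, scoring="neg_mean_squared_error")
    search.fit(X_tr, y_tr)
    print(name, "tuned MSE:", mean_squared_error(y_te, search.predict(X_te)),
          "best params:", search.best_params_)
```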

Keywords: agile software development, effort estimation, elastic net regression, LASSO

Procedia PDF Downloads 71
340 A Numerical Model for Simulation of Blood Flow in Vascular Networks

Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia

Abstract:

An accurate study of blood flow is associated with an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing the morphometric data to reconstruct the branching pattern and applying Murray's law at every vessel bifurcation simultaneously leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity along with the diameter and length of every vessel segment is established, and order numbers, according to the diameter-defined Strahler system, are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop, and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for 3 cardiac cycles are presented and compared to the clinical data.
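
For reference, a short sketch of the Murray's-law constraint applied at a single bifurcation when rearranging sub-resolution vessels; the parent radius and flow split below are hypothetical, not values from the reconstructed tree.

```python
def murray_daughters(parent_radius, flow_fraction):
    """Split a parent vessel into two daughters satisfying Murray's law
    (r_parent^3 = r_1^3 + r_2^3), with the cubed radii proportional to the flow split."""
    r1 = parent_radius * flow_fraction ** (1 / 3)
    r2 = parent_radius * (1 - flow_fraction) ** (1 / 3)
    return r1, r2

# Hypothetical bifurcation: a 1.0 mm radius parent splitting its flow 60/40.
r1, r2 = murray_daughters(1.0, 0.6)
print(f"daughter radii: {r1:.3f} mm and {r2:.3f} mm")
print(f"check: r1^3 + r2^3 = {r1**3 + r2**3:.3f} (parent cube = {1.0**3:.3f})")
```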

Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system

Procedia PDF Downloads 272
339 Computational Modeling of Load Limits of Carbon Fibre Composite Laminates Subjected to Low-Velocity Impact Utilizing Convolution-Based Fast Fourier Data Filtering Algorithms

Authors: Farhat Imtiaz, Umar Farooq

Abstract:

In this work, we developed a computational model to predict ply-level failure in impacted composite laminates. Data obtained from physical testing of flat and round nose impacts on 8-, 16-, and 24-ply laminates were considered. Routine inspections of the tested laminates were carried out to approximate the ply-by-ply damage incurred. Plots consisting of load-time, load-deflection, and energy-time histories were drawn to approximate the inflicted damage. Unwanted data logged during impact testing due to restrictions of the testing and logging systems were also filtered. Conventional filters (built-in, statistical, and numerical) reliably predicted load thresholds for relatively thin laminates such as eight- and sixteen-ply panels. However, relatively thick laminates such as twenty-four-ply laminates impacted by the flat nose impactor generated clipped data, which can only be de-noised using oscillatory algorithms. The literature search reveals that modern oscillatory data filtering and extrapolation algorithms have scarcely been utilized. This investigation reports applications of filtering and extrapolation of the clipped data utilising a fast Fourier convolution algorithm to predict load thresholds. Some of the results were related to the impact-induced damage areas identified with ultrasonic C-scans and found to be in acceptable agreement. Based on consistent findings, applying modern data filtering and extrapolation algorithms to data logged by existing machines has efficiently enhanced data interpretation without resorting to extra resources. The algorithms could be useful for impact-induced damage approximations in similar cases.
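
A small SciPy sketch of the FFT-based convolution filtering idea on a synthetic, clipped load-time trace; it illustrates only the de-noising step, not the extrapolation of the clipped peak, and all signal values and the clipping level are illustrative rather than test data.

```python
import numpy as np
from scipy.signal import fftconvolve

# Synthetic load-time history standing in for a logged impact test:
# a smooth pulse plus noise, clipped at the logger's saturation level.
t = np.linspace(0, 10e-3, 2000)                          # 10 ms window
load = 8.0 * np.exp(-((t - 4e-3) / 1.5e-3) ** 2)         # kN-scale pulse (illustrative)
noisy = load + 0.4 * np.random.default_rng(1).standard_normal(t.size)
clipped = np.minimum(noisy, 7.0)                         # emulate clipping at 7 kN

# FFT-based convolution with a normalised moving-average kernel.
kernel = np.ones(51) / 51
smoothed = fftconvolve(clipped, kernel, mode="same")

print("raw peak: %.2f kN, smoothed peak: %.2f kN" % (clipped.max(), smoothed.max()))
```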

Keywords: fibre reinforced laminates, fast Fourier algorithms, mechanical testing, data filtering and extrapolation

Procedia PDF Downloads 135
338 Numerical Solution of Portfolio Selecting Semi-Infinite Problem

Authors: Alina Fedossova, Jose Jorge Sierra Molina

Abstract:

SIP problems are part of non-classical optimization. There are problems in which the number of variables is finite and the number of constraints is infinite. These are semi-infinite programming problems. Most algorithms for semi-infinite programming problems reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, any of the constraints or the objective function is nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of the individual investments in the portfolio. For example, we could make an investment of M euros in N shares for a specified period. Let y_i > 0 be the return at the end of the period on each unit of money invested in stock i (i = 1, ..., N). The goal here is to determine the amount x_i to be invested in stock i, i = 1, ..., N, such that we maximize the end-of-period value yᵀx, where x = (x_1, ..., x_N) and y = (y_1, ..., y_N). For us, the optimal portfolio means the best portfolio in terms of the risk-return ratio, i.e., the portfolio that meets the investor's goals and risk appetite. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain. Thus we have a semi-infinite programming problem. We solve a semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying the multi-start technique in all of the iterations for the search of the relevant constraint parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution and others, is based on the optimality criteria of quasi-optimal functions. As a result, we obtain a mathematical model and the optimal investment portfolio when the yields are not known from the beginning. Finally, we apply this algorithm to a specific case of a Colombian bank.

Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution

Procedia PDF Downloads 309
337 Determination of Crustal Structure and Moho Depth within the Jammu and Kashmir Region, Northwest Himalaya through Receiver Function

Authors: Shiv Jyoti Pandey, Shveta Puri, G. M. Bhat, Neha Raina

Abstract:

The Jammu and Kashmir (J&K) region of Northwest Himalaya has a long history of earthquake activity and falls within Seismic Zones IV and V. To know the crustal structure beneath this region, we utilized the teleseismic receiver function method. This paper presents the results of the analyses of the teleseismic earthquake waves recorded by 10 seismic observatories installed in the vicinity of major thrusts and faults. Teleseismic waves at epicentral distances between 30° and 90° with moment magnitudes greater than or equal to 5.5, which contain a large amount of information about the crust and upper mantle structure directly beneath a receiver, have been used. The receiver function (RF) technique has been widely applied to investigate crustal structures using P-to-S converted (Ps) phases from velocity discontinuities. The arrival times of the Ps, PpPs, and PpSs+PsPs converted and reverberated phases from the Moho can be combined to constrain the mean crustal thickness and Vp/Vs ratio. Over 500 receiver functions from 10 broadband stations located in the Jammu & Kashmir region of Northwest Himalaya were analyzed. With the help of the H-K stacking method, we determined the crustal thickness (H) and average crustal Vp/Vs ratio (K) in this region. We also used the neighbourhood algorithm technique to verify our results. The receiver function results for these stations show that the crustal thickness under Jammu & Kashmir ranges from 45.0 to 53.6 km, with an average value of 50.01 km. The Vp/Vs ratio varies from 1.63 to 1.99, with an average value of 1.784, which corresponds to an average Poisson's ratio of 0.266 with a range from 0.198 to 0.331. High Poisson's ratios under some stations may be related to partial melting in the crust near the uppermost mantle. The crustal structure model developed from this study can be used to refine the velocity model used in precise epicenter location in the region, thereby increasing the knowledge needed to understand current seismicity in the region.
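
A toy sketch of the H-K stacking grid search for a single flat crustal layer: predicted Ps, PpPs, and PpSs+PsPs delay times are computed for each (H, K) pair and the receiver-function amplitudes at those times are combined with the usual weighted stack. The receiver function here is synthetic, and the velocities, weights, and ray parameter are illustrative, not the paper's values.

```python
import numpy as np

vp, p = 6.3, 0.06            # crustal P velocity (km/s) and ray parameter (s/km), illustrative
dt = 0.1
times = np.arange(0, 60, dt)

def delays(H, K):
    """Moho conversion/reverberation delay times for thickness H and Vp/Vs = K."""
    vs = vp / K
    qs = np.sqrt(1 / vs**2 - p**2)
    qp = np.sqrt(1 / vp**2 - p**2)
    return H * (qs - qp), H * (qs + qp), 2 * H * qs      # t_Ps, t_PpPs, t_(PpSs+PsPs)

# Synthetic receiver function with pulses at the delays of a 50 km, Vp/Vs = 1.78 crust.
rf = np.zeros_like(times)
for t_phase, amp in zip(delays(50.0, 1.78), (1.0, 0.5, -0.3)):
    rf += amp * np.exp(-((times - t_phase) / 0.3) ** 2)

w1, w2, w3 = 0.6, 0.3, 0.1    # stacking weights (illustrative)

def stack(H, K):
    t_ps, t_ppps, t_ppss = delays(H, K)
    amp = lambda t: rf[int(round(t / dt))]
    return w1 * amp(t_ps) + w2 * amp(t_ppps) - w3 * amp(t_ppss)

H_grid, K_grid = np.arange(40, 60, 0.2), np.arange(1.6, 2.0, 0.005)
best_H, best_K = max(((H, K) for H in H_grid for K in K_grid), key=lambda hk: stack(*hk))
print(f"recovered H = {best_H:.1f} km, Vp/Vs = {best_K:.3f}")
```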

Keywords: H-K stacking, Poisson’s ratios, receiver function, teleseismic

Procedia PDF Downloads 248
336 Dynamic Control Theory: A Behavioral Modeling Approach to Demand Forecasting amongst Office Workers Engaged in a Competition on Energy Shifting

Authors: Akaash Tawade, Manan Khattar, Lucas Spangher, Costas J. Spanos

Abstract:

Many grids are increasing the share of renewable energy in their generation mix, which is causing the energy generation to become less controllable. Buildings, which consume nearly 33% of all energy, are a key target for demand response: i.e., mechanisms for demand to meet supply. Understanding the behavior of office workers is a start towards developing demand response for one sector of building technology. The literature notes that dynamic computational modeling can be predictive of individual action, especially given that occupant behavior is traditionally abstracted from demand forecasting. Recent work founded on Social Cognitive Theory (SCT) has provided a promising conceptual basis for modeling behavior, personal states, and environment using control theoretic principles. Here, an adapted linear dynamical system of latent states and exogenous inputs is proposed to simulate energy demand amongst office workers engaged in a social energy shifting game. The energy shifting competition is implemented in an office in Singapore that is connected to a minigrid of buildings with a consistent 'price signal.' This signal is translated into a 'points signal' by a reinforcement learning (RL) algorithm to influence participant energy use. The dynamic model functions at the intersection of the points signals, baseline energy consumption trends, and SCT behavioral inputs to simulate future outcomes. This study endeavors to analyze how the dynamic model trains an RL agent and, subsequently, the degree of accuracy to which load deferability can be simulated. The results offer a generalizable behavioral model for energy competitions that provides the framework for further research on transfer learning for RL, and more broadly— transactive control.
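
A small sketch of a linear dynamical system with latent states and an exogenous points-signal input, in the spirit of the adapted model described above: x_{t+1} = A x_t + B u_t, with demand read out linearly from the latent state. All matrices, signals, and the demand read-out are hypothetical.

```python
import numpy as np

# Hypothetical latent-state model: x_{t+1} = A x_t + B u_t, demand y_t = C x_t + d.
A = np.array([[0.85, 0.05],
              [0.00, 0.90]])          # latent-state dynamics (e.g. habit, engagement)
B = np.array([[-0.30], [0.10]])       # response to the exogenous points signal
C = np.array([[1.0, -0.5]])           # read-out of hourly energy demand
d = 2.0                               # baseline demand (kWh, illustrative)

rng = np.random.default_rng(0)
u = rng.uniform(0, 1, size=(48, 1))   # 48 hourly points-signal values

x = np.zeros(2)
demand = []
for u_t in u:
    demand.append((C @ x).item() + d)
    x = A @ x + B @ u_t               # state update driven by the points signal

print("simulated mean demand: %.2f kWh" % np.mean(demand))
```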

Keywords: energy demand forecasting, social cognitive behavioral modeling, social game, transfer learning

Procedia PDF Downloads 107
335 Impact of Drainage Defect on the Railway Track Surface Deflections; A Numerical Investigation

Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman

Abstract:

The railway transportation network in the UK is over 100 years old and is known as one of the oldest mass transit systems in the world. This aged track network requires frequent closure for maintenance. One of the main reasons for closure is inadequate drainage due to leakage in the buried drainage pipes. The leaking water can cause localised subgrade weakness, which subsequently can lead to major ground/substructure failure. Different condition assessment methods are available to assess the railway substructure. However, the existing condition assessment methods are not able to detect any local ground weakness/damage and provide details of the damage (e.g., size and location). To tackle this issue, a hybrid back-analysis technique based on an artificial neural network (ANN) and a genetic algorithm (GA) has been developed to predict the substructure layers' moduli and identify any soil weaknesses. At first, a finite element (FE) model of a railway track section under Falling Weight Deflectometer (FWD) testing was developed and validated against a field trial. Then a drainage pipe and various scenarios of the local defect/soil weakness around the buried pipe with various geometries and physical properties were modelled. The impact of the local soil weakness on the track surface deflection was also studied. The FE simulation results were used to generate a database for ANN training, and then a GA was employed as an optimisation tool to optimise and back-calculate the layers' moduli and the soil weakness moduli (the ANN's input). The hybrid ANN-GA back-analysis technique is a computationally efficient method with no dependency on seed modulus values. The model can estimate the substructure layer moduli and the presence of any localised foundation weakness.

Keywords: finite element (FE) model, drainage defect, falling weight deflectometer (FWD), hybrid ANN-GA

Procedia PDF Downloads 152
334 Using Machine Learning to Classify Different Body Parts and Determine Healthiness

Authors: Zachary Pan

Abstract:

Our general mission is to solve the problem of classifying images into different body part types and deciding if each of them is healthy or not. However, for now, we will determine healthiness for only one-sixth of the body parts, specifically the chest. We will detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach to this problem is to split the problem into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Now, using the test set, we can obtain a realistic accuracy the models will have on images in the real world since these testing images have never been seen by the models before. In order to increase this testing accuracy, we can also apply many complex algorithms to the models, like multiplicative weight update. For the second part of the problem, to determine if the body part is healthy, we can have another dataset consisting of healthy and non-healthy images of the specific body part and once again split that into the test and training sets. We then use another neural network to train on those training set images and use the testing set to figure out its accuracy. We will do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, neural networks with multiplicative weight update, neural networks with the black box algorithm, and the convolutional neural network achieved 96.83 percent accuracy, 97.33 percent accuracy, 97.83 percent accuracy, 96.67 percent accuracy, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines if the images are healthy or not is around 78.37 percent.

Keywords: body part, healthcare, machine learning, neural networks

Procedia PDF Downloads 103
333 Aeromagnetic Data Interpretation and Source Body Evaluation Using Standard Euler Deconvolution Technique in Obudu Area, Southeastern Nigeria

Authors: Chidiebere C. Agoha, Chukwuebuka N. Onwubuariri, Collins U. Amasike, Tochukwu I. Mgbeojedo, Joy O. Njoku, Lawson J. Osaki, Ifeyinwa J. Ofoh, Francis B. Akiang, Dominic N. Anuforo

Abstract:

In order to interpret the airborne magnetic data and evaluate the approximate location, depth, and geometry of the magnetic sources within the Obudu area using the standard Euler deconvolution method, very high-resolution aeromagnetic data over the area were acquired, processed digitally, and analyzed using Oasis Montaj 8.5 software. Data analysis and enhancement techniques, including reduction to the equator, horizontal derivative, first and second vertical derivatives, upward continuation, and regional-residual separation, were carried out for the purpose of detailed data interpretation. Standard Euler deconvolution for structural indices of 0, 1, 2, and 3 was also carried out, and the respective maps were obtained using the Euler deconvolution algorithm. Results show that the total magnetic intensity ranges from -122.9 nT to 147.0 nT, the regional intensity varies between -106.9 nT and 137.0 nT, while the residual intensity ranges between -51.5 nT and 44.9 nT, clearly indicating the masking effect of deep-seated structures over surface and shallow subsurface magnetic materials. Results also indicated that the positive residual anomalies have an NE-SW orientation, which coincides with the trend of major geologic structures in the area. Euler deconvolution for all the considered structural indices yields depths to magnetic sources ranging from the surface to more than 2000 m. Interpretation of the various structural indices revealed the locations and depths of the source bodies and the existence of geologic models, including sills, dykes, pipes, and spherical structures. This area is characterized by intrusive and very shallow basement materials and represents an excellent prospect for solid mineral exploration and development.

Keywords: Euler deconvolution, horizontal derivative, Obudu, structural indices

Procedia PDF Downloads 81
332 A General Form of Characteristics Method Applied on Minimum Length Nozzles Design

Authors: Merouane Salhi, Mohamed Roudane, Abdelkader Kirad

Abstract:

In this work, we present a new form of the method of characteristics, which is a technique for solving partial differential equations. Typically, it applies to first-order equations; the aim of this method is to reduce a partial differential equation to a family of ordinary differential equations along which the solution can be integrated from some initial data. The latter is developed under real gas theory, because when the thermal and caloric imperfections of a gas increase, the specific heats and their ratio no longer remain constant and start to vary with the gas parameters; the gas no longer behaves as a perfect gas, its equation of state changes, and it becomes a real gas. The presented characteristic equations remain valid whatever the area or field of study. Here we have inserted the developed Prandtl-Meyer function into the mathematical system to find a new model when the effect of stagnation pressure is taken into account. In this case, the effects of molecular size and intermolecular attraction forces intervene to correct the equation of state, the thermodynamic parameters, and the value of the Prandtl-Meyer function. With the assumption that Berthelot's equation of state accounts for molecular size and intermolecular force effects, expressions are developed for analysing supersonic flow of a thermally and calorically imperfect gas. The supersonic parameters depend directly on the stagnation parameters of the combustion chamber. The resolution has been carried out by the finite difference method using the predictor-corrector algorithm. As a result, the developed mathematical model is used to design 2D minimum length nozzles under the effect of the stagnation parameters of the fluid flow. A comparison of nozzle shapes and characteristics for air is made between the perfect gas (PG) and high temperature models on the one hand and our results from real gas theory on the other.
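
For the perfect-gas baseline used in the comparison, the Prandtl-Meyer function has a well-known closed form, evaluated in the short sketch below; the real-gas correction described in the abstract is not reproduced here, and the exit Mach numbers are illustrative.

```python
import numpy as np

def prandtl_meyer(mach, gamma=1.4):
    """Perfect-gas Prandtl-Meyer function nu(M), returned in degrees."""
    g = (gamma + 1.0) / (gamma - 1.0)
    nu = (np.sqrt(g) * np.arctan(np.sqrt((mach**2 - 1.0) / g))
          - np.arctan(np.sqrt(mach**2 - 1.0)))
    return np.degrees(nu)

# Turning angle needed to expand from Mach 1 to a hypothetical exit Mach number.
for M_exit in (1.5, 2.0, 3.0):
    print(f"M = {M_exit}: nu = {prandtl_meyer(M_exit):.2f} deg")
```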

Keywords: numerical methods, nozzles design, real gas, stagnation parameters, supersonic expansion, the characteristics method

Procedia PDF Downloads 243
331 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photons Beams in Varian Linear Accelerator

Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain

Abstract:

Objective: The main purpose of this study is to compare the percent depth dose (PDD) and in-plane and cross-plane profiles of the Varian golden beam data to the measured data of 6 and 18 MV photons for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires an extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose rate tables for open and wedged fields into the treatment planning system, enabling it to calculate the MUs and dose distribution. Varian offers a generic set of beam data as reference data; however, it is not recommended for clinical use. In this study, we compared the generic beam data with the measured beam data to evaluate the reliability of the generic beam data for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields for different field sizes and at different depths were measured as per Varian's algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom with a semiflex ion chamber and MEPHYSTO software. The online available Varian golden beam data were compared with the measured data to evaluate the accuracy of the golden beam data for use in the commissioning of the Eclipse treatment planning system. Results: The deviation between the measured and golden beam data was at most 2%. In the PDDs, the deviation increases more at deeper depths than at shallower depths. Similarly, the profiles show the same trend of increasing deviation at large field sizes and increasing depths. Conclusion: The study shows that the percentage deviation between the measured and golden beam data is within the acceptable tolerance and can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.

Keywords: percent depth dose, flatness, symmetry, golden beam data

Procedia PDF Downloads 489
330 Comparison of Direction of Arrival Estimation Method for Drone Based on Phased Microphone Array

Authors: Jiwon Lee, Yeong-Ju Go, Jong-Soo Choi

Abstract:

Drones were first developed for military use and were employed as early as World War I, but they are now used in a wide variety of fields. Several companies actively utilize drone technology to strengthen their services; in agriculture, drones are used for crop monitoring and sowing, and many people fly drones for hobby activities such as photography. However, as the range of drone use expands rapidly, problems caused by drones, such as improper flying, privacy violations and terrorism, are also increasing. As the need for monitoring and tracking of drones grows, research is progressing accordingly. A drone detection system estimates the position of a drone using the physical phenomena that occur when it flies. The drone detection systems under development use many approaches, such as radar, infrared cameras, and acoustic detection. Among these, the acoustic detection system is advantageous because the microphone array is smaller, less expensive, and easier to operate than the other systems. In this paper, the acoustic signal is acquired with a minimal number of microphones while the drone is flying, and the direction of the drone is estimated. The direction of arrival (DOA) can be estimated either from the time difference of arrival (TDOA) or by beamforming. The TDOA technique requires fewer microphones than the beamforming technique, but it is weak in noisy environments and can only estimate the DOA of a single source. The beamforming technique requires more microphones; however, it is robust to noise and can estimate the DOAs of several drones simultaneously. When estimating the DOA from the acoustic signal emitted by a drone, only the direction, not the position, can be obtained. To overcome this limitation, we show how to estimate the position of drones by arranging multiple microphone arrays. The microphone arrays used in the experiments were tetrahedral arrays of four microphones each. We simulated the performance of each DOA algorithm and validated the simulation results through experiments.
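
To illustrate the TDOA principle discussed above, the sketch below estimates a single-source DOA from a two-microphone cross-correlation under a far-field assumption; it is only a conceptual example, not the authors' multi-array implementation, and the signals and parameter values are hypothetical.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def tdoa_doa_two_mics(sig_a, sig_b, fs, mic_distance):
    """Estimate the direction of arrival (degrees from broadside) of a single
    source from the time difference of arrival between two microphones."""
    # Cross-correlate the two channels to find the lag of maximum similarity
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    tau = lag / fs  # time difference of arrival in seconds
    # Far-field geometry: tau = d * sin(theta) / c
    sin_theta = np.clip(SPEED_OF_SOUND * tau / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Hypothetical usage with a simulated delayed copy of a rotor tone
fs = 48000
t = np.arange(0, 0.05, 1.0 / fs)
source = np.sin(2 * np.pi * 1200 * t)  # stand-in for a drone rotor tone
delay_samples = 10
mic_a = source
mic_b = np.concatenate([np.zeros(delay_samples), source[:-delay_samples]])
print(tdoa_doa_two_mics(mic_a, mic_b, fs, mic_distance=0.2))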

Keywords: acoustic sensing, direction of arrival, drone detection, microphone array

Procedia PDF Downloads 160
329 A Computational Study of Very High Turbulent Flow and Heat Transfer Characteristics in Circular Duct with Hemispherical Inline Baffles

Authors: Dipak Sen, Rajdeep Ghosh

Abstract:

This paper presents a computational study of steady-state, three-dimensional, very high turbulent flow and heat transfer characteristics in a constant-surface-temperature circular duct fitted with 90° hemispherical inline baffles. The computations are based on the realizable k-ɛ model with standard wall functions using the finite volume method, and the SIMPLE algorithm has been implemented. The computations are carried out for Reynolds numbers, Re, ranging from 80000 to 120000, a Prandtl number, Pr, of 0.73, and pitch ratios, PR, of 1, 2, 3, 4 and 5 based on the hydraulic diameter of the channel, the hydrodynamic entry length, the thermal entry length and the test section. Ansys Fluent 15.0 software has been used to solve the flow field. The study reveals that the circular pipe with baffles has a higher Nusselt number and friction factor than the smooth circular pipe without baffles. The maximum Nusselt number and friction factor are obtained for PR = 5 and PR = 1, respectively. The Nusselt number increases as the pitch ratio increases over the range of study, whereas the friction factor decreases up to PR = 3, after which it remains almost constant up to PR = 5. The thermal enhancement factor increases with increasing pitch ratio, decreases slightly with increasing Reynolds number over the range of study, and becomes almost constant at higher Reynolds numbers. The computational results reveal that the optimum thermal enhancement factor of the 90° inline hemispherical baffle is about 1.23 at pitch ratio 5 and Reynolds number 120000. They also show that the optimum pitch ratio at which the baffles should be installed in such very high turbulent flows is 5. The results show that pitch ratio and Reynolds number play an important role in both the fluid flow and heat transfer characteristics.
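
The thermal enhancement factor quoted above is commonly defined, at equal pumping power, as the Nusselt number gain divided by the cube root of the friction factor penalty relative to the smooth duct; the sketch below assumes that conventional definition, which the abstract does not state explicitly, and uses hypothetical values.

def thermal_enhancement_factor(nu_baffled, f_baffled, nu_smooth, f_smooth):
    """Conventional equal-pumping-power enhancement factor:
    eta = (Nu / Nu_smooth) / (f / f_smooth) ** (1/3).
    Assumed definition; the abstract does not give the exact formula."""
    return (nu_baffled / nu_smooth) / (f_baffled / f_smooth) ** (1.0 / 3.0)

# Hypothetical values at Re = 120000, PR = 5 (not taken from the paper)
print(thermal_enhancement_factor(nu_baffled=520.0, f_baffled=0.055,
                                 nu_smooth=310.0, f_smooth=0.017))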

Keywords: friction factor, heat transfer, turbulent flow, circular duct, baffle, pitch ratio

Procedia PDF Downloads 372
328 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death

Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar

Abstract:

In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender or socioeconomic boundaries, detecting cardiac irregularities at an early stage, followed by quick and correct treatment, is very important. The electrocardiogram (ECG) is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure the naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time domain, frequency domain and non-linear parameters. This paper presents an HRV analysis of online datasets for normal sinus rhythm (taken as healthy subjects) and sudden cardiac death (SCD subjects) using all three methods, computing values for parameters such as the standard deviation of normal-to-normal intervals (SDNN), the root mean square of successive differences between adjacent RR intervals (RMSSD) and the mean of RR intervals (mean RR) in the time domain; very low frequency (VLF), low frequency (LF), high frequency (HF) and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincaré plot for non-linear analysis. To differentiate the HRV of healthy subjects from that of subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values of all stated parameters for SCD subjects compared with healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is verified, with an accuracy of 95%, that the proposed algorithm can identify the mortality risk of a patient one hour before death. Identifying a patient’s mortality risk at such an early stage may prevent sudden death if timely and correct treatment is given by the doctor.
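
A minimal sketch of the time-domain HRV parameters named above, computed from an RR-interval series with the standard textbook definitions (the RR values shown are hypothetical, and this is not the authors' exact pipeline):

import numpy as np

def time_domain_hrv(rr_intervals_ms):
    """Time-domain HRV parameters from a sequence of RR intervals in milliseconds."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_RR": rr.mean(),                   # mean of RR intervals (ms)
        "SDNN": rr.std(ddof=1),                 # standard deviation of NN intervals (ms)
        "RMSSD": np.sqrt(np.mean(diffs ** 2)),  # root mean square of successive differences (ms)
    }

# Hypothetical short RR series (ms)
print(time_domain_hrv([812, 790, 835, 821, 798, 810, 845, 802]))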

Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death

Procedia PDF Downloads 342
327 Suitability Evaluation of Human Settlements Using a Global Sensitivity Analysis Method: A Case Study of China

Authors: Feifei Wu, Pius Babuna, Xiaohua Yang

Abstract:

The suitability evaluation of human settlements over time and space is essential to track potential challenges to suitable human settlements and to provide references for policy-makers. This study established a theoretical framework of human settlements based on the nature, human, economy, society and residence subsystems. Evaluation indicators were determined with consideration of the coupling effect among subsystems. Based on the extended Fourier amplitude sensitivity test algorithm, a global sensitivity analysis that considered the coupling effect among indicators was used to determine the weights of the indicators. Human settlement suitability was evaluated at both the subsystem and comprehensive system levels in 30 provinces of China between 2000 and 2016. The findings were as follows: (1) Human settlement suitability index (HSSI) values increased significantly in all 30 provinces from 2000 to 2016. Among the five subsystems, the suitability index of the residence subsystem in China exhibited the fastest growth, followed by the society and economy subsystems. (2) HSSI in eastern provinces with a developed economy was higher than that in western provinces with an underdeveloped economy. In contrast, the growth rate of HSSI in eastern provinces was significantly higher than that in western provinces. (3) The inter-provincial difference in HSSI decreased from 2000 to 2016. For subsystems, it decreased for the residence subsystem, whereas it increased for the economy subsystem. (4) The suitability of the natural subsystem has become a limiting factor for the improvement of human settlement suitability, especially in economically developed provinces such as Beijing, Shanghai, and Guangdong. The results can help support decision-making and policy for improving the quality of human settlements in a broad nature, human, economy, society and residence context.
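
As an illustration of the weighting idea described above, the sketch below composes a suitability index from normalized indicators using weights derived from first-order sensitivity indices; the indicator values and sensitivities are hypothetical, and the extended FAST computation itself is not reproduced.

import numpy as np

def composite_suitability_index(indicator_values, sensitivity_indices):
    """Weighted composite index: weights are sensitivity indices normalized to sum to 1.
    Indicator values are assumed to be already normalized to [0, 1]."""
    values = np.asarray(indicator_values, dtype=float)
    s = np.asarray(sensitivity_indices, dtype=float)
    weights = s / s.sum()
    return float(np.dot(weights, values))

# Hypothetical normalized subsystem indicators for one province in one year
values = [0.72, 0.55, 0.80, 0.63, 0.68]          # nature, human, economy, society, residence
sensitivities = [0.30, 0.15, 0.25, 0.10, 0.20]   # hypothetical first-order sensitivity indices
print(composite_suitability_index(values, sensitivities))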

Keywords: human settlements, suitability evaluation, extended Fourier amplitude sensitivity test, human settlement suitability

Procedia PDF Downloads 80