Search results for: Computational Fluid Dynamics
3876 Spatiotemporal Propagation and Pattern of Epileptic Spike Predict Seizure Onset Zone
Authors: Mostafa Mohammadpour, Christoph Kapeller, Christy Li, Josef Scharinger, Christoph Guger
Abstract:
Interictal spikes provide valuable information on electrocorticography (ECoG), which aids in surgical planning for patients who suffer from refractory epilepsy. However, the shape and temporal dynamics of these spikes remain unclear. The purpose of this work was to analyze the shape of interictal spikes and measure their distance to the seizure onset zone (SOZ) for use in epilepsy surgery. Data from thirteen patients on the iEEG portal were retrospectively studied. For analysis, half an hour of ECoG data was used from each patient, with the data truncated before the onset of a seizure. Spikes were first detected and grouped into sequences, then clustered into interictal epileptiform discharge (IED) and non-IED groups using two-step clustering. The distance of spikes in the IED and non-IED groups to the SOZ was quantified and compared using the Wilcoxon rank-sum test. Spikes in the IED group tended to lie in or close to the SOZ, while spikes in the non-IED group lay at a distance from the SOZ or in non-SOZ areas. At the group level, the distributions of sharp wave, positive baseline shift, slow wave, and slow-wave-to-sharp-wave ratio differed significantly between the IED and non-IED groups. The IED cluster lay 10.00 mm from the SOZ, significantly closer than the 17.65 mm of the non-IED cluster. These findings provide insights into the shape and spatiotemporal dynamics of spikes that could inform the network mechanisms underlying refractory epilepsy.
Keywords: spike propagation, spike pattern, clustering, SOZ
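The group comparison above rests on the Wilcoxon rank-sum test. As a rough sketch, with made-up spike-to-SOZ distances (not the study's data) and a plain normal approximation without tie or continuity corrections, the test could be implemented as:

```python
import math

def rank_sum_test(a, b):
    """Wilcoxon rank-sum test, normal approximation, average ranks for ties."""
    pooled = sorted(a + b)
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2.0  # average 1-based rank for ties
        i = j
    w = sum(rank_of[v] for v in a)              # rank sum of the first sample
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0               # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p

# hypothetical spike-to-SOZ distances (mm) for IED and non-IED clusters
ied = [8.2, 9.5, 10.0, 11.3, 12.1]
non_ied = [15.4, 16.9, 17.7, 18.2, 19.5]
z, p = rank_sum_test(ied, non_ied)
```

With the IED distances uniformly smaller, z is negative and p falls below 0.05, mirroring the significance pattern reported above.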
Procedia PDF Downloads 75
3875 ΕSW01: A Methodology for Approaching the Design of Interior Spaces
Authors: Eirini Krasaki
Abstract:
This paper addresses the problem of designing spaces in a constantly changing environment. Space is considered a totality of forces that coexist in the same place. Forces form the identity of space and characterize the entities that coexist within the same totality. Interior space is considered a totality of forces which develop within an envelope. This research focuses on the formation of the tripole space-forces-totality and studies the relation of this tripole to interior space. The historic center of Athens, a city center where the majority of the building mass is unused, has been set as the point of departure for this investigation. The objective of the study is to connect the development of interior spaces to the alterations of the conceptions that form the built environment. The research focuses on Evripidou street, an axis around which both commercial and residential centers expand. Along Evripidou street, three case studies are elaborated: a) In case study 01, Evripidou street is examined as a megastructure in which totalities of interior spaces develop. b) In case study 02, a particular group of entities (polykatoikia) that expands along Evripidou street is investigated. c) In case study 03, a particular group of entities (apartment) that derives from a specific envelope is investigated. Through the studies and comparisons of different scales, a design methodology that addresses the design of interior space in relation to the dynamics of the built environment is developed.
Keywords: methodology, research by design, interior, envelope, dynamics
Procedia PDF Downloads 181
3874 Water Diffusivity in Amorphous Epoxy Resins: An Autonomous Basin Climbing-Based Simulation Method
Authors: Betim Bahtiri, B. Arash, R. Rolfes
Abstract:
Epoxy-based materials are frequently exposed to high-humidity environments in many engineering applications. As a result, their material properties are degraded by water absorption. A full characterization of the material properties under hygrothermal conditions requires time-consuming and costly experimental tests. To gain insights into the physics of diffusion mechanisms, atomistic simulations have been shown to be effective tools. Concerning the diffusion of water in polymers, spatial trajectories of water molecules are obtained from molecular dynamics (MD) simulations, allowing the interpretation of diffusion pathways at the nanoscale in a polymer network. Conventional MD simulations of water diffusion in amorphous polymers lead to discrepancies at low temperatures due to the short timescales of the simulations. In the proposed model, this issue is solved by using a combined scheme of autonomous basin climbing (ABC) with kinetic Monte Carlo and reactive MD simulations to investigate the diffusivity of water molecules in epoxy resins across a wide range of temperatures. It is shown that the proposed simulation framework estimates kinetic properties of water diffusion in epoxy resins that are consistent with experimental observations and provides a predictive tool for investigating the diffusion of small molecules in other amorphous polymers.
Keywords: epoxy resins, water diffusion, autonomous basin climbing, kinetic Monte Carlo, reactive molecular dynamics
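The temperature dependence recovered by such ABC/kMC schemes is typically summarized by an Arrhenius law, D(T) = D0·exp(-Ea/RT). A minimal sketch of extracting an activation energy from diffusivity-temperature pairs by a linear fit of ln D against 1/T (all numbers illustrative, not taken from the paper) is:

```python
import math

# hypothetical water diffusivities in epoxy at several temperatures,
# generated from an assumed Arrhenius law for illustration
temps_K = [300.0, 320.0, 340.0, 360.0]
D0, Ea = 1.0e-6, 40e3          # assumed pre-factor (m^2/s) and activation energy (J/mol)
R = 8.314                      # gas constant (J/mol/K)
diffusivities = [D0 * math.exp(-Ea / (R * T)) for T in temps_K]

# recover Ea from the Arrhenius fit: ln D = ln D0 - (Ea/R) * (1/T)
x = [1.0 / T for T in temps_K]
y = [math.log(D) for D in diffusivities]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
Ea_fit = -slope * R            # recovered activation energy
```

Since the toy data contain no noise, the fit recovers the assumed Ea almost exactly; with simulated diffusivities the same fit yields the effective activation energy of the diffusion process.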
Procedia PDF Downloads 71
3873 Reactivity of Clay Minerals of the Hydrocarbon Reservoir Rocks and the Effect of Zeolites on Operation and Production Costs That the Oil Industry in the World Assumes
Authors: Carlos Alberto Ríos Reyes
Abstract:
Traditionally, clays have been considered one of the main problems in the flow of fluids in hydrocarbon reservoirs. However, the significance of zeolites formed from the reactivity of clays, and their effect not only on the costs of operations carried out by the oil industry worldwide but also on production, is not known. The present work focused on understanding the interaction of clay minerals with the brines and alkaline solutions used in the oil industry. For this, a comparative study was conducted in which the reaction of sedimentary rocks under laboratory conditions was examined. Original and treated rocks were examined by X-ray powder diffraction (XRPD) and scanning electron microscopy (SEM) to determine the changes these rocks underwent upon contact with fluids of variable chemical composition. As a result, zeolite Linde Type A (LTA), sodalite (SOD), and cancrinite (CAN) formed in the experimental work, coinciding with the dissolution of kaolinite and smectite. The results reveal that the oil industry should invest effort in understanding, at the pore scale, the problems that could arise from clay-fluid interaction in hydrocarbon reservoir rocks due to the presence of clays in their porous systems, as well as the formation of zeolites, which are better hydrocarbon absorbents. These issues could be generating losses in world production. We conclude that there is a critical situation that may be occurring in the stimulation of hydrocarbon reservoirs, where real solutions are necessary not only for the formulation of more efficient and effective injection fluids but also to contribute to the improvement of production and avoid considerable losses in operating costs.
Keywords: clay minerals, zeolites, rock-fluid interaction, experimental work, reactivity
Procedia PDF Downloads 91
3872 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits for Big Data and computational biology, and surveys the current state of the art and future generations of HPC computing with Big Data.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
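To make the "all-to-all comparisons" point concrete: for n sequences there are n(n-1)/2 pairs, and an HPC code typically partitions those pairs across workers. A toy sketch, using hypothetical sequences, a trivial positional-identity score standing in for a real alignment, and static round-robin scheduling, might look like:

```python
from itertools import combinations

# toy "sequences"; an all-to-all comparison is O(n^2) pairs
seqs = ["ACGT", "ACGA", "AGGT", "TCGT", "ACCT"]
pairs = list(combinations(range(len(seqs)), 2))   # 10 pairs for n = 5

def split_work(items, workers):
    """Round-robin partition of pairwise tasks across workers (static scheduling)."""
    return [items[w::workers] for w in range(workers)]

def identity(a, b):
    """Fraction of matching positions; stands in for a real alignment score."""
    return sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))

chunks = split_work(pairs, 3)                     # e.g. 3 hypothetical workers
scores = {(i, j): identity(seqs[i], seqs[j])
          for chunk in chunks for (i, j) in chunk}
```

In a real HPC setting each chunk would run on its own process or node; the decomposition itself is what keeps the quadratic workload tractable as n grows.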
Procedia PDF Downloads 366
3871 Legal Allocation of Risks: A Computational Analysis of Force Majeure Clauses
Authors: Farshad Ghodoosi
Abstract:
This article analyzes the effect of supervening events on contracts. Contracts serve an important function: the allocation of risks. In spite of its importance, the case law and the doctrine are messy and inconsistent. This article provides a fresh look at excuse doctrines (i.e., force majeure, impracticability, impossibility, and frustration) with a focus on force majeure clauses. The article makes the following contributions: First, it furnishes a new conceptual and theoretical framework for excuse doctrines. By distilling the decisions, it shows that excuse doctrines rest on the triangle of control, foreseeability, and contract language. Second, it analyzes force majeure clauses used by S&P 500 companies to understand the stickiness and similarity of such clauses and the events they cover. Third, using computational and statistical tools, it analyzes US cases since 1810 in order to assess the weight given to the triangle of control, foreseeability, and contract language. It shows that the control factor plays an important role in force majeure analysis, while contractual interpretation is the least important factor. The article concludes that it is the standard for control (whether the supervening event is beyond the control of the party) that determines the outcome of cases in the force majeure context, and not necessarily the contractual language. This article has important implications for COVID-19-related contractual cases. Contrary to the prevailing narrative that the language of the force majeure clause is determinative, this article shows that the primary focus of the inquiry will be on whether the effects of COVID-19 have been beyond the control of the promisee. Normatively, the article suggests that the trifactor of control, foreseeability, and contractual language is not effective for the allocation of legal risks in times of crisis. It puts forward a novel approach to force majeure clauses whereby the courts should instead focus on the degree to which the parties have relied on (expected) performance, in particular during the time of crisis.
Keywords: contractual risks, force majeure clauses, foreseeability, control, contractual language, computational analysis
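The kind of computational weighting described here, relating control, foreseeability, and contract language to case outcomes, can be sketched as a logistic regression. The toy model below uses simulated case data in which outcomes are driven mostly by the control factor, an assumption made purely to illustrate how such a fit surfaces the dominant factor; it is not the article's actual dataset or code:

```python
import math
import random

random.seed(0)

# hypothetical case features: [bias, control, foreseeability, contract_language]
cases = []
for _ in range(200):
    control, fore, lang = (random.random() < 0.5 for _ in range(3))
    # toy label: excuse granted mostly when the event was beyond the party's control
    excused = control if random.random() < 0.9 else fore
    cases.append(([1.0, float(control), float(fore), float(lang)], float(excused)))

# plain batch gradient descent on the logistic log-loss
w = [0.0] * 4
for _ in range(2000):
    grad = [0.0] * 4
    for x, y in cases:
        p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for k in range(4):
            grad[k] += (p - y) * x[k]
    w = [wi - 0.1 * g / len(cases) for wi, g in zip(w, grad)]
```

Because the simulated outcomes hinge on control, the fitted weight w[1] dominates the foreseeability and language weights, which is the shape of result the article reports for the real corpus.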
Procedia PDF Downloads 155
3870 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency
Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino
Abstract:
In order to reduce numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. Adopting the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method to determine the superstructure's linear response, the proposed MEIM, which is conditionally stable due to the use of the central difference method, avoids the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings. Indeed, in this case, the critical time step could become smaller than the one used to define the earthquake excitation accurately, due to the very high initial stiffness values of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed using the proposed MEIM, are compared to those obtained with a conventional monolithic solution approach, i.e., the implicit, unconditionally stable Newmark constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented numerical application the MEIM does not have stability problems, since the critical time step is larger than that of the ground acceleration record despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when it is adopted to perform the nonlinear dynamic analysis with a smaller time step.
Keywords: base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability
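For reference, the implicit half of such a scheme, Newmark's constant average acceleration method (beta = 1/4, gamma = 1/2), can be sketched for a linear undamped single-degree-of-freedom oscillator in free vibration; the method is unconditionally stable and preserves the vibration amplitude. The parameters below are illustrative, not the paper's base-isolated model:

```python
import math

# undamped SDOF oscillator: m*u'' + k*u = 0, released from u(0) = 1, v(0) = 0
m, k = 1.0, (2 * math.pi) ** 2      # chosen so the natural period T = 1 s
dt, steps = 0.01, 100               # integrate over exactly one period
beta, gamma = 0.25, 0.5             # Newmark constant average acceleration

u, v = 1.0, 0.0
a = -(k / m) * u                    # initial acceleration from equilibrium
for _ in range(steps):
    # effective-stiffness form of Newmark's method for a linear system
    k_eff = k + m / (beta * dt ** 2)
    rhs = m * (u / (beta * dt ** 2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
    u_new = rhs / k_eff
    a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v = v + dt * ((1 - gamma) * a + gamma * a_new)
    u, a = u_new, a_new
```

After one full period the displacement returns to its initial value up to a small period-elongation error, with no amplitude decay, which is why this method is the standard implicit partner in mixed schemes like the MEIM.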
Procedia PDF Downloads 283
3869 Estimation of Service Quality and Its Impact on Market Share Using Business Analytics
Authors: Haritha Saranga
Abstract:
Service quality has become an important driver of competition in manufacturing industries of late, as many products are sold in conjunction with service offerings. With the increase in computational power and data capture capabilities, it has become possible to analyze and estimate various aspects of service quality at a granular level and determine their impact on business performance. In the current study, dealer-level, model-wise warranty data from one of the top two-wheeler manufacturers in India are used to estimate the service quality of individual dealers and its impact on warranty-related costs and sales performance. We collected primary data on warranty costs, number of complaints, monthly sales, type of quality upgrades, etc., from the two-wheeler automaker. In addition, we gathered secondary data on various regions in India, such as petrol and diesel prices and the geographic and climatic conditions of the regions where the dealers are located, to control for customer usage patterns. We analyze these primary and secondary data with the help of a variety of analytics tools, such as the Auto-Regressive Integrated Moving Average (ARIMA), Seasonal ARIMA, and ARIMAX. The study results, after controlling for a variety of factors such as the size, age, and region of the dealership and customer usage pattern, show that service quality does influence sales of the products in a significant manner. A more nuanced analysis reveals the dynamics between product quality and service quality, and how their interaction affects sales performance in the Indian two-wheeler industry context. We also provide various managerial insights using descriptive analytics and build a model that can provide sales projections using a variety of forecasting techniques.
Keywords: service quality, product quality, automobile industry, business analytics, auto-regressive integrated moving average
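As a minimal illustration of the ARIMA-family modeling used here, the autoregressive core, an AR(1) model, i.e. ARIMA(1,0,0), can be fit by ordinary least squares on lagged values. The sales series below is synthetic and purely illustrative, not the study's dealer data:

```python
# synthetic "monthly sales" following y_t = 50 + 0.5*y_{t-1} (noise omitted for clarity)
series = [40.0]
for _ in range(59):
    series.append(50.0 + 0.5 * series[-1])

# fit y_t = c + phi*y_{t-1} by ordinary least squares on lagged pairs
x, y = series[:-1], series[1:]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
phi = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / \
      sum((a - xbar) ** 2 for a in x)
c = ybar - phi * xbar
forecast = c + phi * series[-1]   # one-step-ahead sales projection
```

On this noiseless series the fit recovers phi = 0.5 and c = 50 exactly; production tools such as the ARIMA and ARIMAX models cited above add differencing, moving-average terms, and exogenous regressors on top of this same idea.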
Procedia PDF Downloads 122
3868 Impact of America's Anti-Ballistic Missile System (ABMS) on Power Dynamics of the World
Authors: Fehmeen Anwar, Ujala Liaqat
Abstract:
For over half a century, the U.S. and the Soviet Union have been at daggers drawn with each other. Both leading powers of the world have struggled hard to surpass each other in military and other technological fields. This neck-and-neck competition turned in favour of the U.S. in the early 1990s, when the USSR had to face economic stagnation and, later, the dismemberment of several of its states. The predominance of the U.S. is still evident to date; rather, it continues to grow. With the proposed defence program, the Anti-Ballistic Missile System, the U.S. will have a considerable chance of intercepting any nuclear strike by Russia, re-asserting U.S. dominance in the region and creating a security dilemma for Russia and other states. The question is whether America's recent nuclear deterrence project is merely meant to counter nuclear threats from Iran and North Korea or is purely directed towards Russia, thus ensuring complete military supremacy in the world. Although the U.S. professes to direct its Anti-Ballistic Missile System (ABMS) against the axis of evil (Iran and North Korea), the deployment of this system in East European territory undermines Russia's nuclear strategic capability, as it enables the U.S. to initiate an attack and guard itself from a retaliatory strike, thus disturbing the security equilibrium in Europe. The implications of this program can create a power imbalance, which can lead to the emergence of a fundamentally different paradigm of international politics.
Keywords: Anti-Ballistic Missile System (ABMS), cold-war, axis of evil, power dynamics
Procedia PDF Downloads 297
3867 Closed-Loop Supply Chain: A Study of Bullwhip Effect Using Simulation
Authors: Siddhartha Paul, Debabrata Das
Abstract:
Closed-loop supply chain (CLSC) management focuses on integrating the forward and reverse flows of material as well as information to maximize value creation over the entire life-cycle of a product. The bullwhip effect in supply chain management refers to the phenomenon where a small variation in customers' demand results in a larger variation of orders at the upstream levels of the supply chain. Since the quality and quantity of products returned to the collection centers (as part of the reverse logistics process) are uncertain, the bullwhip effect is inevitable in a CLSC. Therefore, in the present study, first, through an extensive literature survey, we identify all the important factors related to the forward as well as the reverse supply chain which cause the bullwhip effect in a CLSC. Second, we develop a system dynamics model to study the interrelationships among the factors and their effect on the performance of the overall CLSC. Finally, the results of the simulation study suggest that demand forecasting, lead times, information sharing, inventory and work-in-progress adjustment rates, supply shortages, batch ordering, price variations, erratic human behavior, parameter correcting, delivery time delays, the return rate of used products, and manufacturing and remanufacturing capacity constraints are the important factors which have a significant influence on the system's performance, specifically on the bullwhip effect in a CLSC.
Keywords: bullwhip effect, closed-loop supply chain, system dynamics, variance ratio
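The variance-ratio measure of the bullwhip effect named in the keywords can be illustrated with a single-echelon sketch: a retailer using an exponential-smoothing demand forecast and an order-up-to policy amplifies demand variance. All parameters below are assumptions for illustration, not calibrated to the paper's system dynamics model:

```python
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(1)
demand = [100 + random.gauss(0, 5) for _ in range(2000)]  # end-customer demand

alpha, L = 0.3, 2             # forecast smoothing constant, replenishment lead time
f, orders = demand[0], []
for d in demand:
    f_prev = f
    f += alpha * (d - f)      # exponential-smoothing demand forecast
    # order-up-to policy S_t = (L+1)*f_t, so order = demand + base-stock adjustment
    orders.append(d + (L + 1) * (f - f_prev))

bullwhip_ratio = variance(orders) / variance(demand)
```

The ratio comes out well above 1, i.e. order variance exceeds demand variance, which is the bullwhip signature; in a CLSC, uncertain return flows add further amplification on top of this forward-chain mechanism.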
Procedia PDF Downloads 165
3866 System-level Factors, Presidential Coattails and Mass Preferences: Dynamics of Party Nationalization in Contemporary Brazil (1990-2014)
Authors: Kazuma Mizukoshi
Abstract:
Are electoral politics in contemporary Brazil still local in organization and focus? The importance of this question lies in its paradoxical trajectories. First, often coupled with institutional and sociological 'barriers' (e.g., the selection and election of candidates relatively loyal to the local party leadership, the predominance of territorialized electoral campaigns, and the resilience of political clientelism), the regionalization of electoral politics has been a viable and practical solution, especially for pragmatic politicians in some Latin American countries. On the other hand, some leftist parties that once served as minor opposition forces at the time of foundational or initial elections have certainly expanded their vote shares. Some were eventually capable of holding most (if not a majority of) legislative seats since the 1990s. Though not yet rigorously demonstrated, theoretically implicit in the rise of leftist parties in legislative elections is the gradual (if not complete) nationalization of electoral support, meaning the growing equality of a party's vote share across electoral districts and its change over time. This study develops four hypotheses to explain the dynamics of party nationalization in contemporary Brazil: district magnitude, the ethnic and class fractionalization of each district, voting intentions in federal and state executive elections, and finally the left-right stances of electorates. The study demonstrates these hypotheses by working closely with the Brazilian Electoral Study (2002-2014).
Keywords: party nationalization, presidential coattails, Left, Brazil
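Party nationalization in this sense, the "growing equality of a party's vote share across electoral districts," is commonly quantified with a Gini-based party nationalization score, PNS = 1 - Gini of district vote shares. A sketch with hypothetical district data (not Brazilian election results) is:

```python
def gini(xs):
    """Gini coefficient of a list of non-negative values (sorted-rank formula)."""
    xs = sorted(xs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

# hypothetical district-level vote shares for two stylized parties
nationalized = [0.32, 0.30, 0.31, 0.29, 0.33]   # similar support everywhere
regionalized = [0.70, 0.05, 0.02, 0.60, 0.03]   # support concentrated in two districts

pns_nat = 1 - gini(nationalized)   # close to 1: highly nationalized
pns_reg = 1 - gini(regionalized)   # much lower: regionalized support
```

Tracking such a score election by election is one concrete way to operationalize the "gradual nationalization" the abstract hypothesizes.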
Procedia PDF Downloads 140
3865 Orientia Tsutsugamushi an Emerging Etiology of Acute Encephalitis Syndrome in Northern Part of India
Authors: Amita Jain, Shantanu Prakash, Suruchi Shukla
Abstract:
Introduction: Acute encephalitis syndrome (AES) is a complex, multi-etiology syndrome posing a great public health problem in the northern part of India. Japanese encephalitis (JE) virus is an established etiology of AES in this region. Recently, scrub typhus (ST) has been recognized as an emerging etiology of AES in the JE endemic belt. This study was conducted to establish direct evidence of central nervous system invasion by Orientia tsutsugamushi leading to AES. Methodology: A total of 849 cases with a clinical diagnosis of AES were enrolled from six districts (Deoria and its adjoining areas) of the traditional north Indian Japanese encephalitis (JE) belt. Serum and cerebrospinal fluid samples were collected and tested for the major agents causing acute encephalitis. AES cases either positive for anti-ST IgM antibodies or negative for all tested etiologies were investigated for ST-DNA by real-time PCR. Results: Of these 505 cases, 250 patients were laboratory-confirmed for O. tsutsugamushi infection, either by anti-ST IgM antibody positivity (n=206) in the serum sample, by ST-DNA detection by real-time PCR assay in the CSF sample (n=2), or by both (n=42). In total, 29 isolates could be sequenced for the 56 kDa gene. Conclusion: All the strains were found to cluster with Gilliam strains. The majority of the isolates showed 97–99% sequence similarity with Thailand and Cambodian strains. The Gilliam strain of O. tsutsugamushi is emerging as one of the major etiologies leading to AES in the northern part of India.
Keywords: acute encephalitis syndrome, O. tsutsugamushi, Gilliam strain, North India, cerebrospinal fluid
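The 97–99% similarity figures reported above are pairwise percent-identity values. For an ungapped alignment of equal-length sequences, the computation reduces to counting matching positions; the short sequences below are made up for illustration, not real 56 kDa gene data:

```python
def percent_identity(a, b):
    """Pairwise percent identity over an ungapped alignment of equal-length sequences."""
    assert len(a) == len(b)
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# hypothetical gene fragments (illustrative only)
ref   = "ATGGCTAGCTAGGATCCGAT"
query = "ATGGCTAGCTAGGTTCCGAT"   # one substitution relative to ref
pid = percent_identity(ref, query)
```

Real comparisons of the 56 kDa gene would first align the sequences (handling insertions and deletions) before scoring identity, but the per-position counting is the same.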
Procedia PDF Downloads 254
3864 Optimization of Shear Frame Structures Applying Various Forms of Wavelet Transforms
Authors: Seyed Sadegh Naseralavi, Sohrab Nemati, Ehsan Khojastehfar, Sadegh Balaghi
Abstract:
In the present research, various formulations of the wavelet transform are applied to the acceleration time history of an earthquake. The mentioned transforms decompose the strong ground motion into low- and high-frequency parts. Since the high-frequency portion of the strong ground motion has a minor effect on the dynamic response of structures, the structure is excited by the low-frequency part only. Consequently, the seismic response of the structure is predicted in half the computational time compared with conventional time history analysis. To reduce the computational effort needed in the seismic optimization of structures, the seismic optimization of a shear frame structure is conducted by applying various forms of the mentioned transformation through a genetic algorithm.
Keywords: time history analysis, wavelet transform, optimization, earthquake
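The low/high-frequency split described above can be sketched with a single level of the Haar wavelet transform, the simplest member of the wavelet family; the toy record below stands in for a real earthquake accelerogram:

```python
import math

def haar_step(signal):
    """One level of the Haar wavelet transform: approximation (low-frequency)
    and detail (high-frequency) coefficients."""
    assert len(signal) % 2 == 0
    s = math.sqrt(2)
    approx = [(signal[2 * i] + signal[2 * i + 1]) / s for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / s for i in range(len(signal) // 2)]
    return approx, detail

# toy "acceleration record": slow drift plus a fast sample-to-sample oscillation
accel = [0.1 * t + (0.5 if t % 2 else -0.5) for t in range(8)]
approx, detail = haar_step(accel)
```

Reconstructing from the approximation coefficients alone (discarding the detail part) gives the low-frequency excitation that, per the abstract, is sufficient to drive the structural response at roughly half the cost.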
Procedia PDF Downloads 237
3863 Nonlocal Beam Models for Free Vibration Analysis of Double-Walled Carbon Nanotubes with Various End Supports
Authors: Babak Safaei, Ahmad Ghanbari, Arash Rahmani
Abstract:
In the present study, the free vibration characteristics of double-walled carbon nanotubes (DWCNTs) are investigated. The small-scale effects are taken into account using Eringen's nonlocal elasticity theory. The nonlocal elasticity equations are implemented into different classical beam theories, namely the Euler-Bernoulli beam theory (EBT), Timoshenko beam theory (TBT), Reddy beam theory (RBT), and Levinson beam theory (LBT), to analyze the free vibrations of DWCNTs, in which each wall of the nanotube is considered an individual beam with van der Waals interaction forces. The generalized differential quadrature (GDQ) method is utilized to discretize the governing differential equations of each nonlocal beam model along with four commonly used boundary conditions. Then, molecular dynamics (MD) simulations are performed for a series of armchair and zigzag DWCNTs with different aspect ratios and boundary conditions, the results of which are matched with those of the nonlocal beam models to extract the appropriate values of the nonlocal parameter corresponding to each type of chirality, nonlocal beam model, and boundary condition. It is found that the present nonlocal beam models, with their proposed correct values of the nonlocal parameter, have good capability to predict the vibrational behavior of DWCNTs, especially for higher aspect ratios.
Keywords: double-walled carbon nanotubes, nonlocal continuum elasticity, free vibrations, molecular dynamics simulation, generalized differential quadrature method
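For the simply supported EBT case, Eringen's nonlocal theory admits a well-known closed-form frequency showing how the nonlocal parameter e0a softens the response relative to the local beam. The stiffness and mass values below are illustrative placeholders, not fitted CNT properties, and this single-beam formula omits the van der Waals coupling between the two walls:

```python
import math

def nonlocal_ebt_frequency(n, L, EI, rhoA, e0a):
    """Natural frequency (rad/s) of mode n for a simply supported Euler-Bernoulli
    beam with Eringen nonlocal elasticity: w_n = k^2 * sqrt(EI / (rhoA*(1+(e0a*k)^2)))
    with k = n*pi/L. Setting e0a = 0 recovers the classical (local) result."""
    k = n * math.pi / L
    return k ** 2 * math.sqrt(EI / (rhoA * (1.0 + (e0a * k) ** 2)))

# illustrative magnitudes only (nanoscale beam), not calibrated DWCNT values
L, EI, rhoA = 20e-9, 1e-25, 1e-15
local_w = nonlocal_ebt_frequency(1, L, EI, rhoA, 0.0)     # classical EBT
nonloc_w = nonlocal_ebt_frequency(1, L, EI, rhoA, 1e-9)   # with small-scale effect
```

Matching such frequencies against MD results for each chirality and boundary condition is exactly the calibration of e0a that the study performs across its four beam theories.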
Procedia PDF Downloads 300
3862 Numerical Investigation of the Transverse Instability in Radiation Pressure Acceleration
Authors: F. Q. Shao, W. Q. Wang, Y. Yin, T. P. Yu, D. B. Zou, J. M. Ouyang
Abstract:
The Radiation Pressure Acceleration (RPA) mechanism is very promising in laser-driven ion acceleration because of its high laser-ion energy conversion efficiency. Although some experiments have shown the characteristics of RPA, the energy of the ions is quite limited. The ion energy obtained in experiments is only several MeV/u, which is much lower than theoretical predictions. One possible limiting factor is the transverse instability incited in the RPA process. The transverse instability is basically considered to be the Rayleigh-Taylor (RT) instability, a kind of interfacial instability that occurs when a light fluid pushes against a heavy fluid. Multi-dimensional particle-in-cell (PIC) simulations show that the onset of the transverse instability will destroy the acceleration process and broaden the energy spectrum of fast ions during RPA-dominant ion acceleration. Evidence of the RT instability driven by radiation pressure has been observed in a laser-foil interaction experiment in a typical RPA regime, and the dominant scale of the RT instability is close to the laser wavelength. The development of the transverse instability in radiation-pressure-acceleration-dominant laser-foil interaction is numerically examined by two-dimensional particle-in-cell simulations. When a laser interacts with a foil with a modulated surface, the instability is quickly incited and develops. The linear growth and saturation of the transverse instability are observed, and the growth rate is numerically diagnosed. In order to optimize the interaction parameters, a method based on information entropy is put forward to describe the chaotic degree of the transverse instability. With moderate modulation, the transverse instability shows a low chaotic degree and a quasi-monoenergetic proton beam is produced.
Keywords: information entropy, radiation pressure acceleration, Rayleigh-Taylor instability, transverse instability
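An information-entropy measure of the chaotic degree can be sketched as the normalized Shannon entropy of a transverse density profile; the profiles below are toy stand-ins for PIC density slices, and the exact definition used in the paper may differ:

```python
import math

def shannon_entropy(density):
    """Normalized Shannon entropy of a non-negative transverse density profile:
    1.0 for a perfectly uniform profile, lower for a structured/modulated one."""
    total = sum(density)
    probs = [d / total for d in density if d > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(density))

smooth = [1.0] * 16                                      # unperturbed foil slice
rippled = [1.0 + 0.9 * math.cos(2 * math.pi * i / 4)     # surface-modulated slice
           for i in range(16)]
```

Tracking such an entropy over time gives a single scalar diagnostic for how ordered or chaotic the developing transverse structure is, which is the spirit of the optimization criterion described above.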
Procedia PDF Downloads 348
3861 Effects of Screen Time on Children from a Systems Engineering Perspective
Authors: Misagh Faezipour
Abstract:
This paper explores the effects of screen time on children from a systems engineering perspective. We reviewed literature from several related works on the effects of screen time on children to explore all the factors and interrelationships that impact children who are subjected to long screen times. Factors such as a child's age, parent attitudes, the influence of parents' own screen time, the amount of time kids spend with technology, psychosocial and physical health outcomes, reduced mental imagery, problem-solving and adaptive thinking skills, obesity, unhealthy diet, depressive symptoms, health problems, disrupted sleep behavior, decreased physical activity, problematic relationships with mothers, and language, social, and emotional delays could each be either a cause or an effect of screen time. A systems engineering perspective is used to explore all the factors and factor relationships discovered through the literature. A causal model is used to give a graphical representation of these factors and their relationships. Through the causal model, the factors with the highest impact can be identified. Future work would be to develop a system dynamics model to view the dynamic behavior of the relationships and observe the impact of changes to different factors in the model. Different changes to the inputs of the model, such as a healthier diet or the obesity rate, would depict the effect of screen time in the model and portray its effect on children's health and other important factors; the model thereby also works as a decision support tool.
Keywords: children, causal model, screen time, systems engineering, system dynamics
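A causal model of this kind can be represented as a directed graph of factors, where a factor's impact can be approximated by the set of factors it transitively influences. The edges below are an assumed toy subset of the relationships named above, chosen only to illustrate the traversal, not the paper's actual model:

```python
# toy causal model: factor -> factors it directly influences (assumed edges)
edges = {
    "screen_time": ["sleep_disruption", "physical_activity", "unhealthy_diet"],
    "parent_screen_time": ["screen_time"],
    "unhealthy_diet": ["obesity"],
    "physical_activity": ["obesity"],
    "sleep_disruption": ["depressive_symptoms"],
}

def downstream(node, graph):
    """All factors transitively affected by a node (depth-first traversal)."""
    seen, stack = set(), [node]
    while stack:
        cur = stack.pop()
        for nxt in graph.get(cur, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

impact = {n: len(downstream(n, edges)) for n in edges}
```

Ranking factors by this downstream count is one simple way to surface "the factors with the highest impacts" before building the full system dynamics model.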
Procedia PDF Downloads 148
3860 Artificial Reproduction System and Imbalanced Dataset: A Mendelian Classification
Authors: Anita Kushwaha
Abstract:
We propose a new evolutionary computational model called the Artificial Reproduction System, which is based on the complex process of meiotic reproduction occurring between male and female cells of living organisms. The Artificial Reproduction System is an attempt towards a new computational intelligence approach inspired by the theoretical reproduction mechanism and by observed reproduction functions, principles, and mechanisms. A reproductive organism is programmed by genes and can be viewed as an automaton, mapping and reducing so as to create copies of those genes in its offspring. In the Artificial Reproduction System, the binding mechanism between male and female cells is studied, parameters are chosen, a network is constructed, and a feedback system for self-regularization is established. The model then applies Mendel's law of inheritance and allele-allele associations, and can be used to perform data analysis of imbalanced, multivariate, multiclass, and big data. In the experimental study, the Artificial Reproduction System is compared with other state-of-the-art classifiers such as SVM, radial basis function networks, neural networks, and K-nearest neighbor on several benchmark datasets, and the comparison results indicate good performance.
Keywords: bio-inspired computation, nature-inspired computation, natural computing, data mining
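Mendel's law of inheritance, which the model applies, can be illustrated with a Punnett-square computation for a single-gene cross; this is standard textbook genetics used for illustration, not the authors' code:

```python
from itertools import product
from collections import Counter

def punnett(parent1, parent2):
    """Offspring genotype counts for a single-gene cross: each parent contributes
    one allele, per Mendel's law of segregation."""
    offspring = ["".join(sorted(a + b)) for a, b in product(parent1, parent2)]
    return Counter(offspring)

counts = punnett("Aa", "Aa")   # monohybrid cross of two heterozygotes
```

The cross yields the classic 1:2:1 genotype ratio (AA : Aa : aa); allele-level recombination rules of this shape are what a Mendelian classification scheme builds on.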
Procedia PDF Downloads 2783859 General Purpose Graphic Processing Units Based Real Time Video Tracking System
Authors: Mallikarjuna Rao Gundavarapu, Ch. Mallikarjuna Rao, K. Anuradha Bai
Abstract:
Real-time video tracking is a challenging task for computing professionals. The performance of video tracking techniques is greatly affected by the background detection and elimination process. Local regions of the image frame contain vital information about background and foreground. However, pixel-level processing of local regions consumes a considerable amount of computational time and memory space in traditional approaches. In our approach, we have exploited the concurrent computational ability of General Purpose Graphic Processing Units (GPGPU) to address this problem. A Gaussian Mixture Model (GMM) with adaptive weighted kernels is used for detecting the background. The weights of the kernels are influenced by local regions and are updated by inter-frame variations of the corresponding regions. The proposed system has been tested with GPU devices such as the GeForce GTX 280 and Quadro K2000. The results are encouraging, with a maximum speed-up of 10X compared to the sequential approach.Keywords: connected components, embrace threads, local weighted kernel, structuring elements
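The per-pixel GMM background update that the GPU executes in parallel can be sketched for a single pixel as follows. This is a Stauffer-Grimson-style update under our own simplifying assumptions; the paper's local-region adaptive weighting is approximated here by a single learning rate `alpha`, and the specific parameter values are illustrative.

```python
def update_gmm(pixel, modes, alpha=0.05, match_thresh=2.5):
    """One background-model update for one pixel intensity.

    modes: list of [weight, mean, variance] Gaussians (modified in place).
    Returns True if the pixel matched an existing mode (i.e., background).
    """
    matched = False
    for m in modes:
        w, mu, var = m
        if abs(pixel - mu) <= match_thresh * var ** 0.5 and not matched:
            matched = True
            rho = alpha  # simplified ownership probability
            m[1] = (1 - rho) * mu + rho * pixel            # drift the mean
            m[2] = (1 - rho) * var + rho * (pixel - mu) ** 2
            m[0] = (1 - alpha) * w + alpha                 # reward the match
        else:
            m[0] = (1 - alpha) * w                         # decay the rest
    if not matched:
        # foreground pixel: replace the weakest mode with a fresh Gaussian
        weakest = min(range(len(modes)), key=lambda i: modes[i][0])
        modes[weakest] = [alpha, float(pixel), 100.0]
    total = sum(m[0] for m in modes)
    for m in modes:
        m[0] /= total  # renormalize weights
    return matched
```

On a GPU, one thread would run this routine per pixel (or per local region), which is where the reported 10X speed-up over the sequential CPU loop comes from.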
Procedia PDF Downloads 4433858 Interactive Glare Visualization Model for an Architectural Space
Authors: Florina Dutt, Subhajit Das, Matthew Swartz
Abstract:
Lighting design and its impact on indoor comfort conditions are an integral part of good interior design. The impact of lighting in an interior space is manifold, involving many sub-components such as glare, color, tone, luminance, control, energy efficiency, and flexibility. While the other components have been researched and discussed many times, this paper discusses research done to understand the glare component from an artificial lighting source in an indoor space, and presents a parametric model that conveys the real-time glare level in an interior space to the designer/architect. Our end users are architects, for whom it is of utmost importance to know what impression the proposed lighting arrangement and furniture layout will have on indoor comfort quality, especially for those furniture elements (or surfaces) that strongly reflect light around the space. Essentially, the designer needs to know the ramifications of discomfort glare at an early stage of the design cycle, when changes to the proposed design can still be afforded and different solution routes considered for the client. Unfortunately, most existing lighting analysis tools perform rigorous computation and analysis on the back end, making it challenging for the designer to analyze and assess interior glare quickly, and many of them do not focus on the glare aspect of artificial light. That is why, in this paper, we explain a novel approach to approximating interior glare data. In addition, we visualize this data in a color-coded format, expressing the implications of the proposed interior design layout. We focus on making this analysis process computationally fluid and fast, enabling complete user interaction with the capability to vary different ranges of user inputs, adding more degrees of freedom for the user.
We test our proposed parametric model on a case study, a Computer Lab space in our college facility.Keywords: computational geometry, glare impact in interior space, info visualization, parametric lighting analysis
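The color-coded output described above can be approximated by bucketing a source-to-background luminance ratio into comfort bands. The thresholds below loosely follow common luminance-ratio comfort guidance and are our own illustrative assumptions, not the authors' calibration.

```python
def glare_color(source_lum, background_lum):
    """Map a luminance ratio (inputs in cd/m^2) to a color-coded band.

    Thresholds are illustrative; a production tool would calibrate them
    against a glare metric such as UGR or DGP.
    """
    ratio = source_lum / max(background_lum, 1e-9)  # avoid divide-by-zero
    if ratio < 3:
        return "green"    # imperceptible
    if ratio < 10:
        return "yellow"   # noticeable
    if ratio < 40:
        return "orange"   # disturbing
    return "red"          # intolerable
```

Evaluating this per surface patch as the designer drags a luminaire or a reflective furniture surface is cheap enough for the interactive, real-time feedback the paper targets.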
Procedia PDF Downloads 3533857 Development of Technologies for the Treatment of Nutritional Problems in Primary Care
Authors: Marta Fernández Batalla, José María Santamaría García, Maria Lourdes Jiménez Rodríguez, Roberto Barchino Plata, Adriana Cercas Duque, Enrique Monsalvo San Macario
Abstract:
Background: Primary care nursing is taking on more autonomy in clinical decisions. One of the most frequent therapeutic problems to solve relates to maintaining a sufficient supply of food. Nursing diagnoses related to food are addressed by the family and community nurse as the first responsible clinician, and objectives and interventions are set according to each patient. To improve goal setting and the treatment of these care problems, a technological tool was developed to help nurses. Objective: To evaluate the computational tool developed to support clinical decisions in feeding problems. Material and methods: A cross-sectional descriptive study was carried out at the Meco Health Center, Madrid, Spain. The study population consisted of four specialist nurses in primary care. These nurses tested the tool on 30 people with a 'need for nutritional therapy'. Subsequently, the usability of the tool and the satisfaction of the professionals were assessed. Results: The computational tool is designed to be simple and convenient to use. It has three main input fields: age, size, and sex. The tool returns the following information: BMI (Body Mass Index) and the calories consumed by the person. The next step is the caloric calculation depending on activity. It is possible to propose a goal BMI or weight to achieve; from this, the amount of calories to be consumed is proposed. After using the tool, it was determined that the tool calculated the BMI and calories correctly in 100% of clinical cases. Satisfaction with the nutritional assessment was 'satisfactory' or 'very satisfactory', linked to the speed of the calculations. As a point of improvement, the nurses suggested 'stress factor' options linked to weekly physical activity. Conclusion: Based on the results, it is clear that computational decision-support tools are useful in the clinic. Nurses are not only consumers of computational tools but can develop their own tools.
These technological solutions improve the effectiveness of nutrition assessment and intervention. We are currently working on improvements such as the calculation of protein percentages as a function of stress parameters.Keywords: feeding behavior health, nutrition therapy, primary care nursing, technology assessment
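The tool's core arithmetic can be sketched as below. The abstract lists age, size, and sex as the input fields, so a weight input is assumed here since BMI requires it; the Mifflin-St Jeor equation used for the basal caloric estimate is also an assumption, as the abstract does not name the formula the nurses' tool implements.

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) over height (m) squared."""
    return weight_kg / height_m ** 2

def basal_calories(weight_kg, height_cm, age, sex):
    """Resting caloric need via Mifflin-St Jeor (assumed formula)."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + (5 if sex == "male" else -161)

def target_weight(goal_bmi, height_m):
    """Weight (kg) needed to reach a proposed goal BMI."""
    return goal_bmi * height_m ** 2
```

For example, a 70 kg, 1.75 m patient has a BMI of about 22.9, and proposing a goal BMI of 25 corresponds to a target weight of roughly 76.6 kg, from which a daily caloric recommendation can be derived.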
Procedia PDF Downloads 2303856 A Numerical Hybrid Finite Element Model for Lattice Structures Using 3D/Beam Elements
Authors: Ahmadali Tahmasebimoradi, Chetra Mang, Xavier Lorang
Abstract:
Thanks to the additive manufacturing process, lattice structures are replacing traditional structures in the aeronautical and automobile industries. In order to evaluate the mechanical response of lattice structures, one has to resort to numerical techniques. Ansys is a globally well-known and trusted commercial software package that allows us to model lattice structures and analyze their mechanical responses using either solid or beam elements; a script may be used to systematically generate lattice structures of any size. On the one hand, solid elements allow us to correctly model the contact between the substrates (the supports of the lattice structure) and the lattice structure, the local plasticity, and the junctions of the microbeams. However, their computational cost increases rapidly with the size of the lattice structure. On the other hand, although beam elements reduce the computational cost drastically, they do not correctly model the contact between the lattice structure and the substrates, nor the junctions of the microbeams; the notion of local plasticity is no longer valid; and the deformed shape they predict does not correspond to that obtained with 3D solid elements. In this work, motivated by the pros and cons of the 3D and beam models, a numerically hybrid model is presented for lattice structures to reduce the computational cost of the simulations while avoiding the aforementioned drawbacks of beam elements. The approach consists of using solid elements for the junctions and beam elements for the microbeams connecting the corresponding junctions to each other. When the global response of the structure is linear, the results from the hybrid models are in good agreement with those from the 3D models for body-centered cubic lattice structures with z-struts (BCCZ) and without z-struts (BCC).
However, the hybrid models have difficulty converging when the effects of large deformation and local plasticity are considerable in the BCCZ structures. Furthermore, the effect of the junction size on the results of the hybrid models is investigated. For BCCZ lattice structures, the results are not affected by the junction size. This also holds for BCC lattice structures as long as the ratio of the junction size to the diameter of the microbeams is greater than 2. The hybrid model can also take geometric defects into account. As a demonstration, the point clouds of two lattice structures are parametrized in a platform called LATANA (LATtice ANAlysis) developed by IRT-SystemX. In this process, for each microbeam of the lattice structures, an ellipse is fitted to capture the effect of shape variation and roughness. Each ellipse is represented by three parameters: semi-major axis, semi-minor axis, and angle of rotation. Given the parameters of the ellipses, the lattice structures are constructed in SpaceClaim (Ansys) using the geometrical hybrid approach. The results show a negligible discrepancy between the hybrid and 3D models, while the computational cost of the hybrid model is lower than that of the 3D model.Keywords: additive manufacturing, Ansys, geometric defects, hybrid finite element model, lattice structure
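Once an ellipse (semi-major axis a, semi-minor axis b, rotation theta) is fitted to a microbeam cross-section, the section properties a beam element needs can be computed as sketched below. The formulas are the standard results for an elliptical section; feeding them to the beam elements is our illustration of the parametrization step, not necessarily how LATANA passes the data.

```python
import math

def ellipse_section(a, b, theta):
    """Area and second moments of area of an elliptical cross-section.

    a, b: semi-major and semi-minor axes; theta: rotation of the major
    axis relative to the global frame (radians). Sign convention of the
    product of inertia is assumed.
    """
    area = math.pi * a * b
    # principal second moments of area of an ellipse
    I_major = math.pi * a * b ** 3 / 4   # bending about the major axis
    I_minor = math.pi * a ** 3 * b / 4   # bending about the minor axis
    # rotate into the global frame (Mohr's circle for moments of inertia)
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    I_avg, I_dif = (I_major + I_minor) / 2, (I_major - I_minor) / 2
    Ixx = I_avg + I_dif * c
    Iyy = I_avg - I_dif * c
    Ixy = I_dif * s
    return area, Ixx, Iyy, Ixy
```

For a circular cross-section (a = b) the rotation drops out and Ixx = Iyy = pi r^4 / 4, which is the defect-free microbeam the nominal model assumes.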
Procedia PDF Downloads 1153855 Decoding WallStreetBets: The Impact of Daily Disagreements on Trading Volumes
Authors: F. Ghandehari, H. Lu, L. El-Jahel, D. Jayasuriya
Abstract:
Disagreement among investors is a fundamental aspect of financial markets, significantly influencing market dynamics. Measuring this disagreement has traditionally posed challenges, often relying on proxies like analyst forecast dispersion, which are limited by biases and infrequent updates. Recent movements in social media indicate that retail investors actively seek financial advice online and can influence the stock market. The evolution of the investing landscape, particularly the rise of social media as a hub for financial advice, provides an alternative avenue for real-time measurement of investor sentiment and disagreement. Platforms like Reddit offer rich, community-driven discussions that reflect genuine investor opinions. This research explores how social media empowers retail investors and the potential of leveraging textual analysis of social media content to capture daily fluctuations in investor disagreement. This study investigates the relationship between daily investor disagreement and trading volume, focusing on the role of social media platforms in shaping market dynamics, specifically using data from WallStreetBets (WSB) on Reddit. This paper uses data from 2020 to 2023 from WSB and analyses 4,896 firms with enough social media activity in WSB to define stock-day level disagreement measures. Consistent with traditional theories that disagreement induces trading volume, the results show significant evidence supporting this claim through different disagreement measures derived from WSB discussions.Keywords: disagreement, retail investor, social finance, social media
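A stock-day disagreement measure of the kind described above can be sketched as the dispersion of signed sentiment across one day's WSB comments on a ticker. The exact measure used in the paper is not specified here; the bullish = +1 / bearish = -1 comment scoring is an illustrative assumption.

```python
from statistics import pstdev

def disagreement(sentiments):
    """Stock-day disagreement: dispersion of comment sentiment labels.

    sentiments: list of +1 (bullish) / -1 (bearish) labels for one
    ticker on one day. Returns 0.0 for fewer than two comments, since
    a single opinion cannot disagree with itself.
    """
    if len(sentiments) < 2:
        return 0.0
    return pstdev(sentiments)
```

Unanimous days score 0, evenly split days score 1; regressing daily trading volume on this measure (with the usual controls) is then the natural test of the disagreement-induces-volume hypothesis.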
Procedia PDF Downloads 443854 On the Well-Posedness of Darcy–Forchheimer Power Model Equation
Authors: Johnson Audu, Faisal Fairag
Abstract:
In a bounded subset of R^d, d = 2 or 3, we consider the Darcy-Forchheimer power model with exponent 1 < m ≤ 2 for a single-phase strong-inertia fluid flow in a porous medium. Under a necessary compatibility condition and some mild regularity assumptions on the interior and boundary data, we prove the existence, uniqueness, and stability of the solution (u, p) in L^(m+1)(Ω)^d × (W^(1,(m+1)/m)(Ω) ∩ L_0^2(Ω)).Keywords: porous media, power law, strong inertia, nonlinear, monotone type
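The power model referred to above can be written in a common Darcy-Forchheimer form (a sketch; the paper's exact scaling of the coefficients and data terms may differ):

```latex
\begin{aligned}
\frac{\mu}{\rho}\,K^{-1}\mathbf{u}
  \;+\; \frac{\beta}{\rho}\,|\mathbf{u}|^{\,m-1}\mathbf{u}
  \;+\; \nabla p &= \mathbf{f} && \text{in } \Omega,\\
\nabla \cdot \mathbf{u} &= b && \text{in } \Omega,\\
\mathbf{u}\cdot \mathbf{n} &= g && \text{on } \partial\Omega,
\end{aligned}
\qquad 1 < m \le 2,
```

where $\mu$ is the viscosity, $\rho$ the density, $K$ the permeability tensor, and $\beta$ the Forchheimer coefficient. Setting $m = 1$ recovers linear Darcy flow; the nonlinear term $|\mathbf{u}|^{m-1}\mathbf{u}$ carries the strong-inertia effect, and its monotonicity is what the "monotone type" keyword refers to.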
Procedia PDF Downloads 3233853 A Computational Study on Solvent Effects on the Keto-Enol Tautomeric Equilibrium of Dimedone and Acetylacetone 1,3- Dicabonyls
Authors: Imad Eddine Charif, Sidi Mohamed Mekelleche, Didier Villemin
Abstract:
The solvent effects on the keto-enol tautomeric equilibria of acetylacetone and dimedone are theoretically investigated at the correlated Becke-3-parameter-Lee-Yang-Parr (B3LYP) and second-order Møller-Plesset (MP2) computational levels. The present study shows that the most stable keto tautomer of acetylacetone corresponds to the trans-diketo E,Z form, while the most stable enol tautomer corresponds to the closed cis-enol Z,Z form. The keto tautomer of dimedone prefers the trans-diketo E,E form, while the most stable enol tautomer corresponds to the trans-enol form. The calculated Gibbs free energies indicate that, in polar solvents, the keto-enol equilibrium of acetylacetone is shifted toward the keto tautomer, whereas the keto-enol equilibrium of dimedone is shifted towards the enol tautomer. The experimental trends of the change of equilibrium constants with respect to the change of solvent polarity are well reproduced by both B3LYP and MP2 calculations.Keywords: acetylacetone, dimedone, solvent effects, keto-enol equilibrium, theoretical calculations
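The link between the calculated Gibbs free energies and the equilibrium constants compared with experiment is the standard relation K = exp(-ΔG°/RT). A small sketch, with an illustrative ΔG value rather than one taken from the paper:

```python
import math

R = 8.314462618e-3  # gas constant in kJ/(mol*K)

def equilibrium_constant(delta_g_kj_mol, temp_k=298.15):
    """Keto->enol equilibrium constant from the free-energy difference.

    delta_g_kj_mol: G(enol) - G(keto) in kJ/mol at the given temperature.
    Negative values (enol more stable) give K > 1.
    """
    return math.exp(-delta_g_kj_mol / (R * temp_k))
```

A solvent that stabilizes the enol by a few kJ/mol (as polar solvents do for dimedone) pushes K above 1, while stabilization of the keto form (as for acetylacetone) pushes it below 1, reproducing the qualitative shifts reported above.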
Procedia PDF Downloads 4543852 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery
Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas
Abstract:
The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics, which helps in energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications, the model parameters are time-varying, changing with current, temperature, state of charge (SOC), and aging of the battery, and this has a great impact on the performance of the model. Therefore, to increase the equivalent circuit model's performance, the parameter estimation has been carried out in the frequency domain. A battery is a very complex system, associated with various chemical reactions and heat generation, so it is very difficult to select the optimal model structure. In general, increasing the model order improves the model accuracy; however, a higher-order model tends towards over-parameterization and unfavorable prediction capability while the model complexity increases enormously, and in the time domain it becomes difficult to solve the higher-order differential equations. This problem can be resolved by frequency-domain analysis, where the computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. A selective frequency-domain estimation has been carried out: first the frequencies of the input and output are estimated by subspace decomposition, then specific bands are chosen from the most dominating to the least while carrying out least-squares, recursive least-squares, and Kalman-filter-based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen.
For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition
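The recursive least-squares step mentioned above can be sketched for a generic linear-in-parameters regression y_k = phi_k^T theta with a forgetting factor to track time-varying parameters. Building the regressor phi_k for the full second-order ECM (three resistors, two capacitors, SOC-controlled source) is omitted; this shows only the estimator update itself.

```python
def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares step with forgetting factor lam.

    theta: current parameter estimate (list of floats).
    P: covariance matrix (list of lists). phi: regressor. y: measurement.
    Returns the updated (theta, P).
    """
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [p / denom for p in Pphi]                      # Kalman-style gain
    err = y - sum(phi[i] * theta[i] for i in range(n))  # prediction error
    theta = [theta[i] + K[i] * err for i in range(n)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)]
         for i in range(n)]
    return theta, P
```

A forgetting factor just below 1 discounts old data, which is what lets the estimate follow parameters that drift with current, temperature, SOC, and aging; the frequency-selective variant in the paper would feed band-filtered input/output data through the same update.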
Procedia PDF Downloads 1553851 Rumen Metabolites and Microbial Load in Fattening Yankasa Rams Fed Urea and Lime Treated Groundnut (Arachis Hypogeae) Shell in a Complete Diet
Authors: Bello Muhammad Dogon Kade
Abstract:
The study was conducted to determine the effect of a treated groundnut (Arachis hypogaea) shell in a complete diet on rumen metabolites and microbial load in fattening Yankasa rams. The study was conducted at the Teaching and Research Farm (Small Ruminants Unit) of the Animal Science Department, Faculty of Agriculture, Ahmadu Bello University, Zaria. Each kilogram of groundnut shell was treated with 5% urea and 5% lime for treatments 2 (UTGNS) and 3 (LTGNS), respectively. For treatment 4 (ULTGNS), 1 kg of groundnut shell was treated with 2.5% urea and 2.5% lime, while the shell in treatment 1 was not treated (UNTGNS). Sixteen Yankasa rams were used and randomly assigned to the four treatment diets, with four animals per treatment, in a completely randomized design (CRD). The diets were formulated to have 14% crude protein (CP) content. Rumen fluid was collected from each ram at the end of the experiment at 0 and 4 hours post-feeding. The samples were put in 30 ml bottles and acidified with 5 drops of concentrated sulphuric acid (0.1N H₂SO₄) to trap ammonia. The results showed that the mean values of NH₃-N differed significantly (P<0.05) among the treatment groups, with rams on the ULTGNS diet having the highest value (31.96 mg/L). Total volatile fatty acids (TVFAs) were significantly (P<0.05) higher in rams fed the UNTGNS diet, as was total nitrogen. The effect of sampling period revealed that NH₃-N, TVFAs, and total protein (TP) were significantly (P<0.05) higher in rumen fluid collected 4 hours post-feeding across the treatment groups, but rumen fluid pH was significantly (P<0.05) higher at 0 hours post-feeding in all treatment diets. In the treatment-by-sampling-period interaction, animals on the ULTGNS diet had the highest mean NH₃-N values at both 0 and 4 hours post-feeding, significantly (P<0.05) higher than rams on the other treatment diets.
Rams on the UTGNS diet had the highest bacterial load of 4.96×10⁵/ml, significantly (P<0.05) higher than the microbial load of animals fed the UNTGNS, LTGNS, and ULTGNS diets. Protozoa counts were significantly (P<0.05) highest in rams fed the UTGNS diet, followed by the ULTGNS diet. There was no significant difference (P>0.05) in the bacterial counts of the animals at 0 and 4 hours post-feeding, but rumen fungi and protozoa loads at 0 hours were significantly (P<0.05) higher than at 4 hours post-feeding. The use of untreated ground groundnut shells in the diet of fattening Yankasa rams is therefore recommended.Keywords: blood metabolites, microbial load, volatile fatty acid, ammonia, total protein
Procedia PDF Downloads 713850 Evaluating the Total Costs of a Ransomware-Resilient Architecture for Healthcare Systems
Authors: Sreejith Gopinath, Aspen Olmsted
Abstract:
This paper is based on our previous work that proposed a risk-transference-based architecture for healthcare systems to store sensitive data outside the system boundary, rendering the system unattractive to would-be bad actors. This architecture also allows a compromised system to be abandoned and a new system instance spun up in place to ensure business continuity without paying a ransom or engaging with a bad actor. This paper delves into the details of various attacks we simulated against the prototype system. In the paper, we discuss at length the time and computational costs associated with storing and retrieving data in the prototype system, abandoning a compromised system, and setting up a new instance with existing data. Lastly, we simulate some analytical workloads over the data stored in our specialized data storage system and discuss the time and computational costs associated with running analytics over data in a specialized storage system outside the system boundary. In summary, this paper discusses the total costs of data storage, access, and analytics incurred with the proposed architecture.Keywords: cybersecurity, healthcare, ransomware, resilience, risk transference
Procedia PDF Downloads 1383849 Assessment of Tidal Current Energy Potential at LAMU and Mombasa in Kenya
Authors: Lucy Patricia Onundo, Wilfred Njoroge Mwema
Abstract:
The tidal power potential available for electricity generation at the Mombasa and Lamu sites in Kenya is examined. Several African countries in the Western Indian Ocean endure insufficiencies in the power sector, in both generation and distribution. One important step towards increasing energy security and availability is to intensify the use of renewable energy sources. Since access to cost-efficient hydropower is low in Mombasa and Lamu, ocean energy will play an important role. Global-level resource assessments and oceanographic literature and data have been compiled in an analysis comparing the technology-specific requirements of ocean energy technologies (salinity, tide, tidal current, wave, ocean thermal energy conversion, wind, and solar) with the physical resources in Lamu and Mombasa. The potential for tide and tidal current power is more restricted but may be of interest at some locations. The theoretical maximum power produced over a tidal cycle is determined by the product of the forcing tide and the undisturbed volumetric flow-rate. Extracting the maximum power reduces the flow-rate, but a significant portion of the maximum power can be extracted with little change to the tidal dynamics. Two-dimensional finite-element numerical simulations designed and developed for this study agree with the theory. Temporal variations in resource intensity, as well as the differences between small-scale and large-scale applications, are considered.Keywords: energy assessment, marine tidal power, renewable energy, tidal dynamics
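The statement that the maximum power over a tidal cycle scales with the product of the forcing tide and the undisturbed flow-rate matches Garrett-Cummins-type channel theory, P_max ≈ gamma * rho * g * a * Q_max. The coefficient gamma ≈ 0.22 is the value commonly quoted in that literature, and all the numbers below are illustrative assumptions, not site data for Mombasa or Lamu.

```python
RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def max_tidal_power(amplitude_m, peak_flow_m3s, gamma=0.22):
    """Theoretical maximum time-averaged power of a tidal channel (W).

    amplitude_m: amplitude of the forcing tide across the channel.
    peak_flow_m3s: undisturbed peak volumetric flow-rate through it.
    """
    return gamma * RHO * G * amplitude_m * peak_flow_m3s
```

For a hypothetical channel with a 1 m tidal amplitude and a 1000 m^3/s undisturbed peak flow, this gives on the order of 2.2 MW; extracting that much necessarily reduces the flow-rate, as noted above.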
Procedia PDF Downloads 5843848 Energy Metabolism and Mitochondrial Biogenesis in Muscles of Rats Subjected to Cold Water Immersion
Authors: Bosiacki Mateusz, Anna Lubkowska, Dariusz Chlubek, Irena Baranowska-Bosiacka
Abstract:
Exposure to cold temperatures can be considered a stressor that can lead to adaptive responses. The present study hypothesized a possible positive effect of cold water exercise on mitochondrial biogenesis and muscle energy metabolism in aging rats. The purpose of this study was to evaluate the effects of cold water exercise on energy status, purine compounds, and mitochondrial biogenesis in the muscles of aging rats as indicators of the effects of cold water exercise and of their usefulness in monitoring adaptive changes. The study was conducted on 64 aging rats of both sexes, 15 months old at the time of the experiment. The rats (males and females separately) were randomly assigned to the following study groups: a control group of sedentary animals; 5°C groups, animals training by swimming in cold water at 5°C; and 36°C groups, animals training by swimming in water at a thermal comfort temperature. The study was conducted with the approval of the Local Ethical Committee for Animal Experiments. The animals were subjected to swimming training for 9 weeks. During the first week of the study, the duration of the first swimming training was 2 minutes (on the first day), increasing daily by 0.5 minutes up to 4 minutes on the fifth day of the first week. From the second week onward, the swimming training was 4 minutes per day, five days a week. At the end of the study, forty-eight hours after the last swim training, the animals were dissected. In the skeletal muscle tissue of the thighs of the rats, we determined the concentrations of ATP, ADP, AMP, and adenosine (HPLC), PGC-1α protein expression (Western blot), and PGC1A, Mfn1, Mfn2, Opa1, and Drp1 gene expression (qRT-PCR). The study showed that swimming in water at a thermally comfortable temperature improved the energy metabolism of the aging rat muscles by increasing the metabolic rate (increases in ATP, ADP, TAN, and AEC) and enhancing mitochondrial fusion (increased mRNA expression of the regulatory proteins Mfn1 and Mfn2).
Cold water swimming improved muscle energy metabolism in aging rats by increasing the rate of muscle energy metabolism (increases in ATP, ADP, TAN, and AEC concentrations) and enhancing mitochondrial biogenesis and dynamics (increased mRNA expression of the fusion-regulating factors Mfn1, Mfn2, and Opa1, and of the factor regulating mitochondrial fission, Drp1). The concentration of high-energy compounds and the expression of proteins regulating mitochondrial dynamics in muscle may be useful indicators for monitoring the adaptive changes occurring in aging muscles under the influence of exercise in cold water. These represent a short-term adaptation to changing environmental conditions and have a beneficial effect on maintaining the bioenergetic capacity of muscles in the long term. Conclusion: Exercise in cold water can exert positive effects on the energy metabolism, biogenesis, and dynamics of mitochondria in aging rat muscles. Enhancement of mitochondrial dynamics under cold water exercise conditions can improve mitochondrial function and optimize the bioenergetic capacity of mitochondria in aging rat muscles.Keywords: cold water immersion, adaptive responses, muscle energy metabolism, aging
Procedia PDF Downloads 863847 Neck Thinning Dynamics of Janus Droplets under Multiphase Interface Coupling in Cross Junction Microchannels
Authors: Jiahe Ru, Yan Pang, Zhaomiao Liu
Abstract:
The necking processes of Janus droplet generation in cross-junction microchannels are experimentally and theoretically investigated. The two dispersed phases, simultaneously sheared by the continuous phase, are liquid paraffin wax and 100 cSt silicone oil, with an 80% aqueous glycerin solution used as the continuous phase. According to the variation of the minimum neck width and thinning rate, the necking process is divided into two stages: two-dimensional extrusion and three-dimensional extrusion. In the two-dimensional extrusion stage, the tip extension lengths of the two dispersed phases initially evolve with the same trend, and then the length of the liquid paraffin becomes larger than that of the silicone oil. The upper and lower neck interface profiles in the Janus necking process are asymmetrical when the tip extension velocity of the paraffin oil is greater than that of the silicone oil. In the three-dimensional extrusion stage, the neck of the liquid paraffin lags behind that of the silicone oil because of its higher surface tension, and finally the necking fracture positions gradually synchronize. When the Janus droplets pinch off, the interfacial tension becomes the positive driver of neck thinning. The interface coupling of the three phases can cause asymmetric necking of the neck interface, which affects the necking time and, ultimately, the droplet volume. This paper mainly investigates the thinning dynamics of the liquid-liquid interface in confined microchannels. The revealed results could help to enhance the physical understanding of the droplet generation phenomenon.Keywords: neck interface, interface coupling, Janus droplets, multiphase flow
Procedia PDF Downloads 135