Search results for: reliability function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6596

5396 Existence and Stability of Periodic Traveling Waves in a Bistable Excitable System

Authors: M. Osman Gani, M. Ferdows, Toshiyuki Ogawa

Abstract:

In this work, we propose a modified FHN-type reaction-diffusion system for a bistable excitable system, obtained by adding a scaled function derived from a given function. We study the existence and stability of periodic traveling waves (wavetrains) for the FitzHugh-Nagumo (FHN) system and the modified one, and compare the results. The stability results indicate that most solutions in the fast family of periodic traveling waves (PTWs) are stable for the FitzHugh-Nagumo equations: waves with smaller periods are always unstable, whereas the fast family with sufficiently large periods is always stable in the FHN model. We find that the oscillation of pulse widths is absent in the standard FHN model, which motivates our study of PTWs in the proposed FHN-type reaction-diffusion system for bistable excitable media. A good agreement is found between the solutions of the traveling-wave ODEs and the corresponding whole-PDE simulations.

Keywords: bistable system, Eckhaus bifurcation, excitable media, FitzHugh-Nagumo model, periodic traveling waves

Procedia PDF Downloads 174
5395 Evaluation of Public Library Adult Programs: Use of Servqual and Nippa Assessment Standards

Authors: Anna Ching-Yu Wong

Abstract:

This study aims to identify the quality and effectiveness of the adult programs provided by a public library using the ServQUAL method and the National Library Public Programs Assessment guidelines (NIPPA, June 2019). ServQUAL covers five variables: tangibles, reliability, responsiveness, assurance, and empathy. The NIPPA guidelines focus on program characteristics, particularly on outcomes, i.e., the level of satisfaction of program participants. The population reached comprised adults who participated in library adult programs at a small-town public library in Kansas. The study was designed as quantitative evaluative research that analyzed the quality and effectiveness of the library's adult programs through the role of each factor in ServQUAL and the NIPPA library program assessment guidelines. Data were collected from November 2019 to January 2020 using a Likert-scale questionnaire and analyzed in a descriptive quantitative manner. The findings provide information about the quality and effectiveness of the existing programs and can serve as input for developing strategies for future adult programs. Overall, the ServQUAL measurement indicates very good quality, but each variable still has an area needing improvement and emphasis: the tangibles variable in the temperature and space of the meeting room; the reliability variable in the timely delivery of the programs; the responsiveness variable in the presenters' ability to convey trust and confidence to participants; the assurance variable in the knowledge and skills of program presenters; and the empathy variable in the presenters' willingness to provide extra assistance. The program outcomes measured against the NIPPA guidelines were very positive.
Over 96% of participants indicated that the programs were informative and fun, reported learning new knowledge and skills, and would recommend the programs to their friends and families. They believed that, together, the library and participants build stronger and healthier communities.
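The ServQUAL instrument described above scores each dimension as the gap between perception and expectation ratings. A minimal sketch of that computation (the dimension names follow the abstract; the Likert responses below are invented for illustration, not the study's data):

```python
# Hypothetical SERVQUAL gap-score sketch: gap = perception - expectation,
# averaged over respondents for each of the five dimensions.

def servqual_gaps(expectations, perceptions):
    """Return the mean gap (P - E) per dimension."""
    gaps = {}
    for dim in expectations:
        e = expectations[dim]
        p = perceptions[dim]
        gaps[dim] = sum(pi - ei for pi, ei in zip(p, e)) / len(e)
    return gaps

# Invented 5-point Likert responses from three respondents per dimension.
expectations = {
    "tangibles":      [5, 4, 5],
    "reliability":    [5, 5, 4],
    "responsiveness": [4, 4, 5],
    "assurance":      [5, 4, 4],
    "empathy":        [4, 5, 4],
}
perceptions = {
    "tangibles":      [4, 4, 4],
    "reliability":    [4, 5, 4],
    "responsiveness": [4, 4, 4],
    "assurance":      [5, 4, 4],
    "empathy":        [4, 4, 4],
}

gaps = servqual_gaps(expectations, perceptions)
```

A negative gap flags a dimension where perceptions fall short of expectations, which is how the "areas needing improvement" in the abstract would surface.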

Keywords: ServQUAL model, ServQUAL in public libraries, library program assessment, NIPPA library programs assessment

Procedia PDF Downloads 85
5394 Implementing Quality Function Deployment Tool for a Customer Driven New Product Development in a Kuwait SME

Authors: Asma AlQahtani, Jumana AlHadad, Maryam AlQallaf, Shoug AlHasan

Abstract:

New product development (NPD) is the complete process of bringing a new product to the customer by integrating two broad divisions: one involving idea generation, product design, and detail engineering; the other involving market research and marketing analysis. It is common practice for companies to undertake some of these tasks simultaneously (concurrent engineering) and to treat them as an ongoing process (continuous development). The current study explores a framework and methodology for a new product development process that uses the Quality Function Deployment (QFD) tool to bring customer opinion into the product development process. An elaborate customer survey with focus groups in the region was carried out to ensure that customer requirements are integrated into new products as early as the design stage, including recognizing the need for the new product. A QFD matrix (House of Quality) linking customer requirements to product engineering requirements was prepared, and a feasibility study and risk assessment were carried out for a Small and Medium Enterprise (SME) in Kuwait developing the new product. SMEs in Kuwait, particularly in the manufacturing sector, are mainly focused on serving local demand, and a lack of product quality often adversely affects their ability to compete on a regional or global basis. Further, a lack of focus on identifying customer requirements often deters SMEs from envisaging new product development. The current study therefore utilizes the QFD matrix from conceptual design through detail design and, to some extent, extends this link to the design of the manufacturing system. The project resulted in the development of a prototype for a new molded product that ensures consistency between the customer's requirements and the measurable characteristics of the product.
Engineering economics and cost studies were also undertaken to analyse the viability of the new product, and their results were linked to the successful implementation of the initial QFD matrix.
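The House of Quality step described above can be sketched as an importance-weighted scoring of the relationship matrix. This is a generic illustration, not the study's actual matrix; the requirements, technical characteristics, and weights below are hypothetical, using the conventional 9/3/1 relationship scale:

```python
# Hypothetical House of Quality sketch: technical priorities are the
# importance-weighted column sums of the relationship matrix
# (conventional weights: strong = 9, medium = 3, weak = 1).

def technical_priorities(importance, relationships):
    """importance: {customer_req: weight};
    relationships: {customer_req: {tech_characteristic: 9|3|1}}."""
    priorities = {}
    for req, weight in importance.items():
        for tech, rel in relationships.get(req, {}).items():
            priorities[tech] = priorities.get(tech, 0) + weight * rel
    return priorities

# Invented customer requirements (rated 1-5) for a molded product.
importance = {"easy to clean": 5, "durable": 4, "low cost": 3}
relationships = {
    "easy to clean": {"surface finish": 9, "material grade": 3},
    "durable":       {"material grade": 9, "wall thickness": 9},
    "low cost":      {"material grade": 3, "wall thickness": 1},
}

prio = technical_priorities(importance, relationships)
# e.g. material grade: 5*3 + 4*9 + 3*3 = 60
```

The highest-scoring technical characteristics are the ones the design team would prioritize, which is the link between customer requirements and measurable product characteristics that the abstract describes.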

Keywords: Quality Function Deployment, QFD Matrix, new product development, NPD, Kuwait SMEs, prototype development

Procedia PDF Downloads 391
5393 NO2 Exposure Effect on the Occurrence of Pulmonary Dysfunction among Traffic Police in Jakarta

Authors: Bambang Wispriyono, Satria Pratama, Haryoto Kusnoputranto, Faisal Yunus, Meliana Sari

Abstract:

Introduction/objective: The growth of motor vehicle use is increasing the number of pollutants in the air. One substance that causes serious health problems is NO2, whose health impacts include pulmonary function impairment. The purpose of this study was to determine the relationship between NO2 exposure and the incidence of pulmonary function impairment. Methods: We used a cross-sectional study design with 110 traffic police officers divided into two groups: an exposed group (officers working on the highway) and an unexposed group (officers working in the office). Subjects were selected by convenience sampling in each group until the minimum sample size was met. Results: The average NO2 exposure was 18.72 ppb in the exposed group and 4.14 ppb in the unexposed group. For the exposed and unexposed groups, respectively, the FVC (Forced Vital Capacity) values were 88.68 and 90.27, and the FEV1 (Forced Expiratory Volume in One Second) values were 94.9 and 95.16. Variables such as waist circumference, body mass index, visceral fat, and fat were associated with the incidence of pulmonary dysfunction (p < 0.05). Conclusion: Health monitoring is needed to reduce health risks among police officers.

Keywords: NO2, pulmonary dysfunction, traffic police, Jakarta

Procedia PDF Downloads 240
5392 An Improved K-Means Algorithm for Gene Expression Data Clustering

Authors: Billel Kenidra, Mohamed Benmohammed

Abstract:

Clustering, a data mining technique, is a subject of active research and assists in biological pattern recognition and the extraction of new knowledge from raw data. Clustering is the act of partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering, a category that attempts to directly decompose the dataset into a set of disjoint clusters, yielding an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers, and a poor choice may lead to a local optimum far inferior to the global optimum, we propose a strategy for initializing the K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results show that efficiency is significantly improved.
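The abstract does not detail the authors' initialization strategy. For context, one widely used remedy for poor random seeding, k-means++-style seeding (not necessarily the authors' method), can be sketched as:

```python
# k-means++-style seeding sketch: each new center is drawn with
# probability proportional to its squared distance from the nearest
# center already chosen, spreading the initial centers apart.
import random

def kmeanspp_init(points, k, seed=0):
    """Pick k initial centers from `points` (tuples of floats)."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # squared distance of every point to its nearest chosen center
        d2 = [min(sum((a - b) ** 2 for a, b in zip(pt, c)) for c in centers)
              for pt in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for pt, w in zip(points, d2):
            acc += w
            if acc > r:          # weighted sampling by d2
                centers.append(pt)
                break
    return centers

# two well-separated groups of points
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
centers = kmeanspp_init(data, k=2)
```

Because already-chosen centers have zero weight, the seeding never picks duplicates, which is the property that makes the subsequent K-Means iterations less likely to converge to a poor local optimum.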

Keywords: microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initialization

Procedia PDF Downloads 175
5391 Impact of Six-Minute Walk or Rest Break during Extended Gameplay on Executive Function in First Person Shooter Esport Players

Authors: Joanne DiFrancisco-Donoghue, Seth E. Jenny, Peter C. Douris, Sophia Ahmad, Kyle Yuen, Hillary Gan, Kenney Abraham, Amber Sousa

Abstract:

Background: Guidelines for the maintenance of esports players' health, and the cognitive changes that accompany competitive gaming, are understudied. Executive functioning is an important cognitive skill for an esports player, and the relationship between executive functions and physical exercise is well established; however, the effects of prolonged sitting, regardless of physical activity level, have not been established. Prolonged uninterrupted sitting reduces cerebral blood flow, and reduced cerebral blood flow is associated with lower cognitive function and fatigue. This decrease can be offset by frequent, short walking breaks, as little as 2 minutes at the 30-minute mark and 6 minutes after 60 minutes of prolonged sitting, the rationale being the increase in blood flow and its positive metabolic effects. The primary purpose of this study was to evaluate executive function changes following mid-session 6-minute bouts of walking or complete rest, compared to no break, during prolonged gameplay in competitive first-person shooter (FPS) esports players. Methods: The study was conducted virtually due to the COVID-19 pandemic and was approved by the New York Institute of Technology IRB. Twelve competitive FPS players, all holding a gold ranking or higher, signed written consent to participate in this randomized pilot study. Participants played for 2 hours on each of three separate days. Outcome measures of executive function were the Color Stroop and Tower of London tests, administered online each day before and after gaming; all participants completed the tests beforehand for familiarization. On one day, participants took a 6-minute walk break after 60-75 minutes of play, during which the Rating of Perceived Exertion (RPE) was recorded,
then played for another 60-75 minutes and completed the tests again. On another day, participants repeated the same protocol with the 6-minute walk replaced by lying down and resting for 6 minutes. On the last day, participants played continuously for 2 hours with no break and completed the outcome tests pre- and post-play. A Latin square was used to randomize the treatment order. Results: Using descriptive statistics, the largest pre-to-post change in mean reaction time on incorrect congruent trials was seen following the 6-minute walk (662.0 (609.6) ms to 602.8 (539.2) ms), followed by the 6-minute rest condition (681.7 (618.1) ms to 666.3 (607.9) ms), with minimal change in the continuous condition (594.0 (534.1) ms to 589.6 (552.9) ms). Mean solution time was fastest in the resting condition (7774.6 (6302.8) ms), followed by the walk condition (7929.4 (5992.8) ms), with the continuous condition slowest (9337.3 (7228.7) ms). Conclusion: Short walking breaks improve blood flow and reduce the risk of venous thromboembolism during prolonged sitting. This pilot study suggests that a low-intensity 6-minute walk break after 60 minutes of play may also improve executive function in FPS gamers.

Keywords: executive function, FPS, physical activity, prolonged sitting

Procedia PDF Downloads 214
5390 The Optimal Order Policy for the Newsvendor Model under Worker Learning

Authors: Sunantha Teyarachakul

Abstract:

We consider the worker-learning newsvendor model in the lost-sales case for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the newsvendor model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it applies when product demand is stochastic and limited to a single selling season, and when the vendor has only one purchasing opportunity, possibly with long ordering lead times. Our work differs from the classical newsvendor model in that we incorporate the human factor, specifically worker learning, and its influence on unit processing costs into the model, described using the well-known Wright's learning curve. Most assumptions of the classical newsvendor model are maintained, such as constant per-unit leftover and shortage costs, zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity of the classical model, which balances over-stocking and under-stocking costs, is no longer optimal. Specifically, when the cost savings from worker learning are added to the expected total cost, the convexity of the cost function is generally not maintained, which calls for a new way of determining the optimal order policy. In response, we establish a number of properties of the expected cost function and its derivatives, which we then use to formulate the optimal ordering policy.
Examples of such properties are: the optimal order quantity exists and is unique if demand follows a uniform distribution; if demand follows a beta distribution whose parameters satisfy certain conditions, the second derivative of the expected cost function has at most two roots; and there exists a specific lot size satisfying the first-order condition. Our results could be helpful for the analysis of supply chain coordination and of periodic review systems for similar problems.
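As a hedged numeric illustration of why the loss of convexity matters (a sketch under simplified assumptions, not the paper's formulation): with Uniform(0, M) demand, lost sales, and per-unit processing effort decaying along Wright's learning curve T(n) = T1·n^b with b = log2(learning rate), the expected cost can be searched exhaustively over lot sizes rather than solved from a first-order condition alone. All parameter values below are invented for illustration:

```python
# Sketch: expected cost of a lost-sales newsvendor with Uniform(0, M)
# demand and Wright's-learning-curve processing effort, minimized by
# exhaustive search because convexity may fail.
import math

def expected_cost(Q, M, h, s, T1=1.0, learning_rate=0.8, wage=1.0):
    """h: per-unit leftover cost, s: per-unit shortage cost."""
    b = math.log2(learning_rate)                   # Wright's exponent, b < 0
    processing = wage * T1 * sum(n ** b for n in range(1, Q + 1))
    # E[(Q - D)+] and E[(D - Q)+] for D ~ Uniform(0, M):
    leftover = Q * Q / (2 * M) if Q <= M else Q - M / 2
    shortage = (M - Q) ** 2 / (2 * M) if Q <= M else 0.0
    return processing + h * leftover + s * shortage

def best_order(M, h, s):
    """Exhaustive search over integer lot sizes 0..2M."""
    return min(range(0, 2 * M + 1), key=lambda q: expected_cost(q, M, h, s))

Q_star = best_order(M=100, h=1.0, s=4.0)
```

Without learning (wage = 0), the minimizer would sit at the classical critical-ratio quantity s/(s+h)·M; the learning term pulls it away from that value, illustrating why the classical balance is no longer optimal.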

Keywords: inventory management, Newsvendor model, order policy, worker learning

Procedia PDF Downloads 398
5389 Using Unilateral Diplomatic Assurances to Evade Provisional Measures' Orders

Authors: William Thomas Worster

Abstract:

This paper will highlight the failure of international adjudication to prevent a state from evading an order of provisional measures by simply issuing a diplomatic assurance to the court. This practice changes the positions of the litigants as equals before a court, prevents the court from inquiring into the reliability of the political pledge as it would with assurances from a state to an individual, and diminishes the court’s ability to control its own proceedings in the face of concerns over sovereignty. Both the European Court of Human Rights (ECtHR) and International Court of Justice (ICJ) will entertain these kinds of unilateral pledges, but they consider them differently when the declaration is made between states or between a state and an individual, and when made directly to the court. In short, diplomatic assurances issued between states or to individuals are usually considered not to be legally binding and are essentially questions of fact, but unilateral assurances issued directly to an international court are questions of law, and usually legally binding. At the same time, orders for provisional measures are now understood also to be legally binding, yet international courts will sometimes permit a state to substitute an assurance in place of an order for provisional measures. This emerging practice has brought the nature of a state as a sovereign capable of creating legal obligations into the forum of adjudication where the parties should have equality of arms and permitted states to create legal obligations that escape inquiry into the reliability of the outcome. While most recent practice has occurred at the ICJ in state-to-state litigation, there is some practice potentially extending the practice to human rights courts. Especially where the litigants are factually unequal – a state and an individual – this practice is problematic since states could more easily overcome factual failings in their pledges and evade the control of the court. 
Consider, for example, the potential for evading non-refoulement obligations by extending the current diplomatic assurances practice from the state-to-state context to the state-to-court context. We argue that the dual nature of assurances, as both legal and factual instruments, should be treated as addressing distinct questions, each with its own considerations, and that courts need to be more demanding about assurances' precise legal and factual effects.

Keywords: unilateral, diplomacy, assurances, undertakings, provisional measures, interim measures

Procedia PDF Downloads 145
5388 Optimal Design of RC Pier Accompanied with Multi Sliding Friction Damping Mechanism Using Combination of SNOPT and ANN Method

Authors: Angga S. Fajar, Y. Takahashi, J. Kiyono, S. Sawada

Abstract:

The structural concept of an RC pier accompanied by a multi-sliding friction damping mechanism was developed through a numerical analysis approach. In implementation, however, designing such a structural system requires considerable effort because of its high complexity. The design must account for the system's special behaviors, including flexible small deformation, sufficient elastic deformation capacity, sufficient lateral force resistance, and sufficient energy dissipation, and the confinement distribution of the friction devices significantly influences these behaviors. Optimization combined with multi-function regression prediction is expected to provide an easier and simpler design method for this structural system. The confinement distribution of the friction devices is optimized with SNOPT in OpenSees, while some design variables of the structure are predicted using multi-function regression with an ANN. Based on this optimization and prediction, the structural system can be designed easily and simply.

Keywords: RC Pier, multi sliding friction device, optimal design, flexible small deformation

Procedia PDF Downloads 347
5387 Inventory Policy Above Country Level for Cooperating Countries for Vaccines

Authors: Aysun Pınarbaşı, Béla Vizvári

Abstract:

Countries are the units that procured vaccines during the COVID-19 pandemic, and the delivered quantities were huge. Each country must bear an inventory holding cost that follows the variation of stock quantities, which in turn depends on the country's time-dependent vaccination speed. The vaccinated portion of the population can be approximated by the cumulative distribution function of the Cauchy distribution. We provide a model for determining the minimal-cost inventory policy, together with its optimality conditions, and solve it for 20 countries and different numbers of procurements. The results reveal the individual behavior of each country, and we provide an inventory policy for the pandemic period. This paper thus presents a deterministic model with a time-varying demand rate, aiming to give an analytical treatment of holding-cost minimization and to develop inventory policies applicable to a variety of perishable products such as vaccines. The saturation process is introduced, an approximation of the countries' vaccination curves is discussed, and a deterministic model for the inventory policy is developed accordingly.
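The abstract's central approximation, that the vaccinated share of the population follows a Cauchy CDF, lends itself to a small numeric sketch of the holding-cost trade-off. The delivery schedule, population size, and curve parameters below are invented for illustration, not the paper's fitted values:

```python
# Sketch under the paper's stated approximation: the vaccinated share is
#     F(t) = 0.5 + atan((t - t0) / gamma) / pi   (Cauchy CDF),
# stock between deliveries is (delivered so far) - N * F(t), and holding
# cost numerically integrates the positive stock over the horizon.
import math

def vaccinated_share(t, t0, gamma):
    """Cauchy CDF: share of the population vaccinated by time t."""
    return 0.5 + math.atan((t - t0) / gamma) / math.pi

def holding_cost(deliveries, N, t0, gamma, horizon, h=1.0, dt=0.1):
    """deliveries: list of (time, quantity) pairs."""
    cost, t = 0.0, 0.0
    while t < horizon:
        arrived = sum(q for (td, q) in deliveries if td <= t)
        stock = max(arrived - N * vaccinated_share(t, t0, gamma), 0.0)
        cost += h * stock * dt
        t += dt
    return cost

# same total quantity: one up-front delivery vs. two staggered deliveries
one = holding_cost([(0.0, 1000.0)], N=1000, t0=50, gamma=10, horizon=100)
two = holding_cost([(0.0, 500.0), (50.0, 500.0)], N=1000, t0=50, gamma=10,
                   horizon=100)
```

Splitting the same total quantity across more procurements lowers the integrated stock level, which is the trade-off the model optimizes against per-delivery considerations.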

Keywords: COVID-19, vaccination, inventory policy, bounded total demand, inventory holding cost, Cauchy distribution, sigmoid function

Procedia PDF Downloads 61
5386 Extensions of Schwarz Lemma in the Half-Plane

Authors: Nicolae Pascu

Abstract:

Aside from being a fundamental tool in complex analysis, the Schwarz Lemma, finalized in its most complete form at the beginning of the last century, generated an important area of research in various fields of mathematics that continues to advance even today. We present some properties of analytic functions in the half-plane which satisfy the conditions of the classical Schwarz Lemma (Carathéodory functions) and obtain a generalization of the well-known Aleksandrov-Sobolev Lemma for analytic functions in the half-plane (the counterpart of the Schwarz-Pick Lemma for the unit disk). Using this Schwarz-type lemma, we obtain a characterization of the entire class of Carathéodory functions, which might be of independent interest. We prove two monotonicity properties for Carathéodory functions that do not depend upon their normalization at infinity (the hydrodynamic normalization). The method is based on conformal mapping arguments for analytic functions in the half-plane satisfying appropriate conditions, in the spirit of the Schwarz Lemma. Our main results give estimates for the modulus and the argument of the entire class of Carathéodory functions. As applications, we give several extensions of the Julia-Wolff-Carathéodory Lemma in a half-strip and show that our results are sharp.

Keywords: Schwarz lemma, Julia-Wolff-Carathéodory lemma, analytic function, normalization condition, Carathéodory function

Procedia PDF Downloads 193
5385 Optimization of Territorial Spatial Functional Partitioning in Coal Resource-Based Cities Based on Ecosystem Service Clusters: The Case of Gujiao City in Shanxi Province

Authors: Gu Sihao

Abstract:

The coordinated development of "ecology-production-life" in cities has received strong national attention, and the transformation and sustainable development of resource-based cities have become hot research topics. Coal resource-based cities, an important part of China's resource-based cities, are numerous and widely distributed. However, owing to the adjustment of the national energy structure and the gradual exhaustion of urban coal resources, the development vitality of coal resource-based cities is gradually declining. In many studies, the deterioration of the ecological environment in coal resource-based cities, a result of "emphasizing economy and neglecting ecology", has emerged as the main problem restricting their transformation and sustainable development. Since the 18th National Congress of the Communist Party of China (CPC), the central government has been deepening territorial space planning and development: on the premise of optimizing the territorial space development pattern, it has completed the demarcation of ecological protection red lines and carried out ecological zoning and ecosystem evaluation, which form an important basis and scientific guarantee for ecological modernization and the construction of an ecological civilization. Understanding a region's multiple ecosystem services is a precondition for ecosystem management and for studying the relationships among those services: ecosystem service clusters can identify the interactions between multiple ecosystem services, and ecological function zoning based on cluster characteristics enables better management of the social-ecological system. Based on this understanding, this study optimizes the spatial functional zoning of Gujiao, a coal resource-based city, in order to provide a new theoretical basis for its sustainable development.
Building on a detailed analysis of the characteristics and utilization of Gujiao's territorial space, the study uses SOFM neural networks to identify local ecosystem service clusters and delineates ecological function zones according to the scope and function of each cluster, balancing and coordinating the strengths of different ecosystem services across zones; it then establishes a relationship between clusters and land use and adjusts the territorial space functions within each zone. Next, taking the characteristics of a coal resource city and of national spatial function zoning as driving factors of land change, a cellular automata simulation program is used to simulate the city's future development trends under different restoration strategies. The study thereby provides theories and technical methods for the "three-line" demarcation in Gujiao's territorial space planning, optimizes territorial space functions, and puts forward targeted strategies for promoting regional ecosystem services, offering theoretical support for the improvement of human well-being and the sustainable development of resource-based cities.

Keywords: coal resource-based city, territorial spatial planning, ecosystem service cluster, GMOP model, GeoSOS-FLUS model, functional zoning optimization and upgrading

Procedia PDF Downloads 45
5384 Designing Mobile Application to Motivate Young People to Visit Cultural Heritage Sites

Authors: Yuko Hiramatsu, Fumihiro Sato, Atsushi Ito, Hiroyuki Hatano, Mie Sato, Yu Watanabe, Akira Sasaki

Abstract:

This paper presents a mobile phone application developed for sightseeing in Nikko, one of the cultural World Heritage Sites in Japan, using BLE (Bluetooth Low Energy) beacons. Based on our pre-research, we designed the application for young people who walk around the area actively but know little about the tradition and culture of Nikko. One solution would be to construct many explanatory information boards; however, it is difficult to install new guide plates in cultural World Heritage Sites, and the smartphone is a good way to deliver such information to visitors. The application combines the smartphone with beacons set in the area, so that when a tourist passes near a beacon, it displays information about the surroundings, including a map, historical and cultural information about the temples and shrines, nearby local shops, and a bus timetable. It is useful for foreigners, too. In addition, we developed quizzes relating to the culture and tradition of Nikko, presenting information based on the Zeigarnik effect, a psychological effect. According to the results of our trials, tourists evaluated the basic information positively, and young people who used the quiz function were able to learn the historical and cultural points; the application helped young visitors to Nikko understand the cultural elements of the site. The application also has a notification function designed to provide information about the local community, such as shops, local transportation companies, and the information office. We hope it also encourages people living in the area; such cooperation from local people will make the application vivid and inspire young visitors to feel that the cultural heritage site is still alive today. In this way, the application is a gateway for young people to learn about a traditional place and understand the importance of preserving such areas.

Keywords: BLE beacon, smartphone application, Zeigarnik effect, world heritage site, school trip

Procedia PDF Downloads 307
5383 A New Criterion Using Pose and Shape of Objects for Collision Risk Estimation

Authors: DoHyeung Kim, DaeHee Seo, ByungDoo Kim, ByungGil Lee

Abstract:

In many recent studies in the aviation and maritime domains, strong doubts have been raised concerning the reliability of collision risk estimation: it has been shown that using only the position and velocity of objects can lead to imprecise results. In this paper, therefore, a new approach to collision risk estimation using the pose and shape of objects is proposed. Simulation results are presented validating the accuracy of the new criterion when incorporated into a fuzzy-logic-based collision risk algorithm.

Keywords: collision risk, pose, shape, fuzzy logic

Procedia PDF Downloads 506
5382 Bayesian Value at Risk Forecast Using a Realized Conditional Autoregressive Expectile Model with an Application to Cryptocurrency

Authors: Niya Chen, Jennifer Chan

Abstract:

In the financial market, risk management helps to minimize potential loss and maximize profit. There are two ways to assess risk. The first is to calculate risk directly from volatility; the most common risk measurements are Value at Risk (VaR), the Sharpe ratio, and beta. Alternatively, we can look at the quantiles of the return distribution. Popular return models such as GARCH and stochastic volatility (SV) models focus on the return distribution by capturing volatility dynamics, whereas the quantile/expectile approach characterizes the distribution at extreme return values and allows us to forecast VaR directly from returns. The advantage of these non-parametric methods is that they are not bound by the distributional assumptions of parametric methods; the difference between them is that expectiles use a second-order loss function while quantile regression uses a first-order loss function. We consider several quantile functions, different volatility measures, and estimates from several volatility models. To estimate the expectiles, we use the Realized Conditional Autoregressive Expectile (CARE) model with a Bayesian method. We examine whether our proposed models outperform existing models for cryptocurrency, testing mainly on Bitcoin as well as Ethereum.
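The first-order versus second-order loss distinction drawn above can be made concrete with a small sketch: minimizing the asymmetric absolute (pinball) loss over a sample yields an empirical quantile, while minimizing the asymmetric squared loss yields an expectile (at τ = 0.5 these reduce to the median and the mean, respectively). The sample below is illustrative, not return data:

```python
# Quantile vs. expectile: first-order (pinball) loss vs. second-order
# asymmetric squared loss, each minimized by grid search.

def quantile_loss(u, tau):
    """Pinball loss: u * (tau - 1{u < 0})."""
    return (tau - (1.0 if u < 0 else 0.0)) * u

def expectile_loss(u, tau):
    """Asymmetric squared loss: |tau - 1{u < 0}| * u^2."""
    return abs(tau - (1.0 if u < 0 else 0.0)) * u * u

def argmin_on_grid(sample, loss, tau, lo=-5.0, hi=5.0, steps=2001):
    """Grid-search the value m minimizing the total loss of sample - m."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda m: sum(loss(x - m, tau) for x in sample))

sample = [-2.0, -1.0, 0.0, 1.0, 2.0]
q50 = argmin_on_grid(sample, quantile_loss, 0.5)   # empirical median
e50 = argmin_on_grid(sample, expectile_loss, 0.5)  # empirical mean
```

Because the expectile loss is squared, every observation (including extreme returns) influences the estimate, which is one reason expectile-based VaR forecasting is sensitive to tail behavior in a way the pinball loss is not.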

Keywords: expectile, CARE Model, CARR Model, quantile, cryptocurrency, Value at Risk

Procedia PDF Downloads 94
5381 Generalized Correlation Coefficient in Genome-Wide Association Analysis of Cognitive Ability in Twins

Authors: Afsaneh Mohammadnejad, Marianne Nygaard, Jan Baumbach, Shuxia Li, Weilong Li, Jesper Lund, Jacob v. B. Hjelmborg, Lene Christensen, Qihua Tan

Abstract:

Cognitive impairment in the elderly is a key issue affecting the quality of life. Despite a strong genetic background in cognition, only a limited number of single nucleotide polymorphisms (SNPs) have been found. These explain a small proportion of the genetic component of cognitive function, thus leaving a large proportion unaccounted for. We hypothesize that one reason for this missing heritability is the misspecified modeling in data analysis concerning phenotype distribution as well as the relationship between SNP dosage and the phenotype of interest. In an attempt to overcome these issues, we introduced a model-free method based on the generalized correlation coefficient (GCC) in a genome-wide association study (GWAS) of cognitive function in twin samples and compared its performance with two popular linear regression models. The GCC-based GWAS identified two genome-wide significant (P-value < 5e-8) SNPs; rs2904650 near ZDHHC2 on chromosome 8 and rs111256489 near CD6 on chromosome 11. The kinship model also detected two genome-wide significant SNPs, rs112169253 on chromosome 4 and rs17417920 on chromosome 7, whereas no genome-wide significant SNPs were found by the linear mixed model (LME). Compared to the linear models, more meaningful biological pathways like GABA receptor activation, ion channel transport, neuroactive ligand-receptor interaction, and the renin-angiotensin system were found to be enriched by SNPs from GCC. The GCC model outperformed the linear regression models by identifying more genome-wide significant genetic variants and more meaningful biological pathways related to cognitive function. Moreover, GCC-based GWAS was robust in handling genetically related twin samples, which is an important feature in handling genetic confounding in association studies.

Keywords: cognition, generalized correlation coefficient, GWAS, twins

Procedia PDF Downloads 106
5380 Lateralisation of Visual Function in Yellow-Eyed Mullet (Aldrichetta forsteri) and Its Role in Schooling Behaviour

Authors: Karen L. Middlemiss, Denham G. Cook, Peter Jaksons, Alistair Jerrett, William Davison

Abstract:

Lateralisation of cognitive function is a common phenomenon found throughout the animal kingdom. Strong biases in functional behaviours have evolved from asymmetrical brain hemispheres, which differ in structure and/or cognitive function. In fish, lateralisation is involved in visually mediated behaviours such as schooling, predator avoidance, and foraging, and is considered to have a direct impact on species fitness. Currently, there is very little literature on the role of lateralisation in fish schools. The yellow-eyed mullet (Aldrichetta forsteri) is an estuarine and coastal species found commonly throughout temperate regions of Australia and New Zealand. This study sought to quantify visually mediated behaviours in yellow-eyed mullet to identify the significance of lateralisation and the factors which influence functional behaviours in schooling fish. Our approach to study design was to conduct a series of tank-based experiments investigating: a) individual- and population-level lateralisation, b) schooling behaviour, and c) optic lobe anatomy. Yellow-eyed mullet showed individual variation in the direction and strength of lateralisation in juveniles, and trait-specific spatial positioning within the school was evidenced in strongly lateralised fish. In combination with observed differences in schooling behaviour, the possibility of ontogenetic plasticity in both behavioural lateralisation and optic lobe morphology in adults is suggested. These findings highlight the need for research into the genetic and environmental factors (epigenetics) which drive functional behaviours such as schooling, feeding, and aggression. Improved knowledge of collective behaviour could have significant benefits for captive rearing programmes through improved culture techniques and will add to the limited body of knowledge on the complex ecophysiological interactions present in our inshore fisheries.

Keywords: cerebral asymmetry, fisheries, schooling, visual bias

Procedia PDF Downloads 196
5379 On the Internal Structure of the ‘Enigmatic Electrons’

Authors: Natarajan Tirupattur Srinivasan

Abstract:

Quantum mechanics (QM) and (special) relativity (SR) have indeed revolutionized the very thinking of physicists, and the spectacular successes achieved over a century due to these two theories are mind-boggling. However, there is still a strong disquiet among some physicists. While the mathematical structure of these two theories has been established beyond any doubt, their physical interpretations are still being contested by many. Even after a hundred years of their existence, we cannot answer a very simple question: "What is an electron?" Physicists are struggling even now to come to grips with the different interpretations of quantum mechanics with all their ramifications. However, it is indeed strange that the (special) relativity theory of Einstein enjoys a far greater degree of "acceptance", though both theories have their own stocks of weirdness in their results, like time dilation, mass increase with velocity, the collapse of the wave function, quantum jumps, tunnelling, etc. Here, in this paper, it is shown that by postulating an intrinsic internal motion of these enigmatic electrons, one can build a fairly consistent picture of reality, revealing a very simple picture of nature. This is also evidenced by Schrodinger's 'Zitterbewegung' motion, about which so much has been written. This leads to a helical trajectory of electrons when they move in a laboratory frame. It is shown that the helix is a three-dimensional wave having all the characteristics of our familiar 2D wave. Again, the helix, being a geodesic on an imaginary cylinder, supports 'quantization', and its representation is just the complex exponentials, matching the wave function of quantum mechanics. By postulating the instantaneous velocity of the electrons to be always 'c', the velocity of light, the entire theory of relativity comes alive, and we can interpret 'time dilation', 'mass increase with velocity', etc., in a very simple way.
Thus, this model unifies both QM and SR without the need for a counterintuitive postulate of Einstein about the constancy of the velocity of light for all inertial observers. After all, if the motion of an inertial frame cannot affect the velocity of light, the converse that this constant also cannot affect the events in the frame must be true. But entire relativity is about how ‘c’ affects time, length, mass, etc., in different frames.
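The geometric claim that the transverse projection of a helix is a complex exponential can be checked numerically (all parameter values below are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

# A helix of radius R, angular frequency omega, and axial drift v_z;
# its transverse projection x + i*y equals R*exp(i*omega*t), the familiar
# complex-exponential form of a plane-wave phase factor
R, omega, v_z = 1.0, 2.0 * np.pi, 0.5
t = np.linspace(0.0, 2.0, 401)
x = R * np.cos(omega * t)
y = R * np.sin(omega * t)
z = v_z * t                          # uniform drift along the helix axis

transverse = x + 1j * y
expected = R * np.exp(1j * omega * t)
```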

Keywords: quantum reconstruction, special theory of relativity, quantum mechanics, zitterbewegung, complex wave function, helix, geodesic, Schrodinger’s wave equations

Procedia PDF Downloads 50
5378 Ill-Posed Inverse Problems in Molecular Imaging

Authors: Ranadhir Roy

Abstract:

Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large size in three dimensions and by the diffusion equation, which models the physical phenomena within the media. The inverse problems are posed as a nonlinear optimization in which the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions for obtaining stable solutions are established in Tikhonov's regularization method, where the a priori information is introduced via a stabilizing functional, which may be designed to incorporate some relevant information about an inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution, or in the case of optical imaging, the true image. Yet, in clinically based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the constrained optimization technique, which is based on simple bounds on the optical parameter properties of the tissue, can easily be implemented in the PMBF method. Imposing the constraints on the optical properties of the tissue explicitly restricts solution sets and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the given objective function. The accuracy and rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers. To obtain the initial guess of the multipliers, we use a least-squares unconstrained minimization problem.
Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and noncontact experimentally measured data.
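The contrast drawn here can be sketched on a toy ill-conditioned linear system (the real problem is a nonlinear diffusion model, and PMBF itself is not implemented below; simple box constraints merely illustrate the idea of restricting the solution set):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 20
# Toy ill-conditioned forward operator (a stand-in for the diffusion model)
A = np.vander(np.linspace(0, 1, n), n, increasing=True)
x_true = np.clip(rng.normal(0.5, 0.2, n), 0.0, 1.0)   # parameters known to lie in [0, 1]
b = A @ x_true + 1e-6 * rng.normal(size=n)

# Tikhonov regularization: minimize ||Ax - b||^2 + lam * ||x||^2,
# solved via the normal equations (A^T A + lam I) x = A^T b
lam = 1e-8
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Bound-constrained alternative: simple box constraints on the parameters
# (the idea underlying PMBF) restrict the solution set and help restore uniqueness
def obj(x):
    return float(np.sum((A @ x - b) ** 2))

x_box = minimize(obj, np.full(n, 0.5), bounds=[(0.0, 1.0)] * n).x
```

Note the design choice: Tikhonov needs a well-chosen `lam`, while the constrained formulation only needs physically motivated bounds on the parameters.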

Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method

Procedia PDF Downloads 257
5377 Deep Neural Networks for Restoration of Sky Images Affected by Static and Anisotropic Aberrations

Authors: Constanza A. Barriga, Rafael Bernardi, Amokrane Berdja, Christian D. Guzman

Abstract:

Most image restoration methods in astronomy rely upon probabilistic tools that infer the best solution for a deconvolution problem. They achieve good performance when the point spread function (PSF) is spatially invariant in the image plane. However, this latter condition is not always satisfied with real optical systems. PSF angular variations cannot be evaluated directly from the observations, nor corrected at pixel resolution. We have developed a method for the restoration of images affected by static and anisotropic aberrations using deep neural networks that can be directly applied to sky images. The network is trained using simulated sky images corresponding to the T-80 telescope optical system, an 80 cm survey imager at Cerro Tololo (Chile), which are synthesized using a Zernike polynomial representation of the optical system. Once trained, the network can be used directly on sky images, outputting a corrected version of the image, which has a constant and known PSF across its field of view. The method was tested with the T-80 telescope, achieving better results than PSF deconvolution techniques. We present the method and results on this telescope.
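The training-data synthesis step can be illustrated with a minimal NumPy sketch (all values below are invented for illustration; the paper uses Zernike-based PSFs of the T-80 optics, not a Gaussian): a clean point-source field convolved with an anisotropic PSF yields the (observed, clean) pairs a network would learn to invert.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "sky": a few point sources on a 64x64 grid
img = np.zeros((64, 64))
for _ in range(5):
    y, x = rng.integers(4, 60, size=2)
    img[y, x] = rng.uniform(1.0, 5.0)

# Toy anisotropic PSF: an elongated Gaussian standing in for a static aberration
yy, xx = np.mgrid[-8:9, -8:9]
psf = np.exp(-(xx**2 / (2 * 3.0**2) + yy**2 / (2 * 1.2**2)))
psf /= psf.sum()                     # normalize so total flux is preserved

# Simulated observation = clean image convolved with the PSF (circular, via FFT);
# (observed, clean) pairs like this would form the network's training set
kernel = np.zeros_like(img)
kernel[:psf.shape[0], :psf.shape[1]] = psf
observed = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))
```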

Keywords: aberrations, deep neural networks, image restoration, variable point spread function, wide field images

Procedia PDF Downloads 121
5376 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation

Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro

Abstract:

This study aimed to evaluate the implications of block size and testing order on the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of the treatment mean (or effect) estimates. The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were assigned to the whole experiment with 112 testers and all their grades, as well as to their partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. To record responses, we used a nine-point hedonic scale; a mixed linear model analysis was assumed with random tester and treatment effects and with a fixed testing order effect. Analysis of a cumulative random-effects probit link model was very similar, with essentially no difference in conclusions, and for simplicity, we present the results using the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating the acceptance. However, providing a large number of samples can help to improve sample discrimination.

Keywords: acceptance, block size, mixed linear model, testing order

Procedia PDF Downloads 307
5375 Pathologies in the Left Atrium Reproduced Using a Low-Order Synergistic Numerical Model of the Cardiovascular System

Authors: Nicholas Pearce, Eun-jin Kim

Abstract:

Pathologies of the cardiovascular (CV) system remain a serious and deadly health problem for human society. Computational modelling provides a relatively accessible tool for diagnosis, treatment, and research into CV disorders. However, numerical models of the CV system have largely focused on the function of the ventricles, frequently overlooking the behaviour of the atria. Furthermore, in the study of the pressure-volume relationship of the heart, which is key to diagnosing cardiovascular pathologies, previous works often invoke the popular yet questionable time-varying elastance (TVE) method, which imposes the pressure-volume relationship instead of calculating it consistently. Despite the convenience of the TVE method, there have been various indications of its limitations and of the need to check its validity in different scenarios. A model of the combined left ventricle (LV) and left atrium (LA) is presented, which consistently considers various feedback mechanisms in the heart without having to use the TVE method. Specifically, a synergistic model of the left ventricle is extended and modified to include the function of the LA. The synergy of the original model is preserved by modelling the electro-mechanical and chemical functions of the micro-scale myofiber for the LA and integrating it with the micro-scale and macro-organ-scale heart dynamics of the left ventricle and CV circulation. The atrioventricular node function is included and forms the conduction pathway for electrical signals between the atria and ventricle. The model reproduces the essential features of LA behaviour, such as the two-phase pressure-volume relationship and the classic figure-of-eight pressure-volume loops. Using this model, disorders in the internal cardiac electrical signalling are investigated by recreating the mechano-electric feedback (MEF), which is impossible where the time-varying elastance method is used.
The effects of AV node block and slow conduction are then investigated in the presence of an atrial arrhythmia. It is found that electrical disorders and arrhythmia in the LA degrade the CV system by reducing the cardiac output, power, and heart rate.

Keywords: cardiovascular system, left atrium, numerical model, MEF

Procedia PDF Downloads 98
5374 Production Optimization under Geological Uncertainty Using Distance-Based Clustering

Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe

Abstract:

It is important to figure out reservoir properties for better production management. Due to limited information, there are geological uncertainties in very heterogeneous or channelized reservoirs. One of the solutions is to generate multiple equiprobable realizations using geostatistical methods. However, some models have wrong properties, which need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme based on distance-based clustering for reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data. We calculate the Hausdorff distance to classify the models based on their similarity. The Hausdorff distance is useful for shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to describe the models in a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, which has production rates similar to the true values. From this process, we can select good reservoir models near the best model with high confidence. We make 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models group depending on their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. We use one of the useful global search algorithms, particle swarm optimization (PSO), for our production optimization. PSO is good at finding the global optimum of an objective function, but it takes too much time due to its use of many particles and iterations.
In addition, if we use multiple reservoir models, the simulation time for PSO will soar. By using the proposed method, we can select good and reliable models that already match production data. Considering the geological uncertainty of the reservoir, we can get well-optimized production controls for maximum net present value. The proposed method offers a novel solution for selecting good cases among the various possibilities. The model selection scheme can be applied not only to production optimization but also to history matching or other ensemble-based methods for efficient simulations.
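The pipeline described (pairwise Hausdorff distances, MDS embedding, K-means, one representative per cluster) can be sketched with SciPy on toy point-set "realizations" (the data and parameters are illustrative, not actual reservoir models):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(3)
# Toy "realizations": 2-D point sets standing in for channel facies maps,
# drawn from two distinct spatial patterns
models = [rng.normal(loc=(i % 2) * 4.0, size=(50, 2)) for i in range(10)]

# Symmetric Hausdorff distance between every pair of models
n = len(models)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = max(directed_hausdorff(models[i], models[j])[0],
                directed_hausdorff(models[j], models[i])[0])
        D[i, j] = D[j, i] = d

# Classical MDS: double-center the squared distance matrix and embed in 2-D
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, v = np.linalg.eigh(B)
coords = v[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))

# K-means on the embedding groups similar realizations; one representative
# per cluster would then enter the (expensive) PSO optimization loop
centroids, labels = kmeans2(coords, 2, minit='++', seed=0)
```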

Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization

Procedia PDF Downloads 127
5373 A Multivariate 4/2 Stochastic Covariance Model: Properties and Applications to Portfolio Decisions

Authors: Yuyang Cheng, Marcos Escobar-Anel

Abstract:

This paper introduces a multivariate 4/2 stochastic covariance process generalizing the one-dimensional counterparts presented in Grasselli (2017). Our construction permits stochastic correlation not only among stocks but also among volatilities, also known as co-volatility movements, both driven by more convenient 4/2 stochastic structures. The parametrization is flexible enough to separate these types of correlation, permitting their individual study. Conditions for proper changes of measure and closed-form characteristic functions under risk-neutral and historical measures are provided, allowing for applications of the model to risk management and derivative pricing. We apply the model to an expected utility theory problem in incomplete markets. Our analysis leads to closed-form solutions for the optimal allocation and value function. Conditions are provided for well-defined solutions together with a verification theorem. Our numerical analysis highlights and separates the impact of key statistics on equity portfolio decisions, in particular, volatility, correlation, and co-volatility movements, with the latter being the least important in an incomplete market.
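A one-asset discretized sketch of a 4/2-type volatility can be written in a few lines (all parameter values are invented, and the paper's multivariate structure with stochastic correlation and co-volatility is not reproduced here): the instantaneous volatility combines a Heston-like sqrt(V) term and a 3/2-like 1/sqrt(V) term driven by a CIR variance factor.

```python
import numpy as np

rng = np.random.default_rng(4)
# CIR variance factor: dV = kappa*(theta - V) dt + sigma_v*sqrt(V) dW_v
kappa, theta, sigma_v = 2.0, 0.04, 0.2
a, b = 0.9, 0.004            # 4/2 weights: vol_t = a*sqrt(V_t) + b/sqrt(V_t)
dt, n_steps = 1.0 / 252, 252
V = np.empty(n_steps + 1)
S = np.empty(n_steps + 1)
V[0], S[0] = theta, 100.0

for t in range(n_steps):
    dW_v = np.sqrt(dt) * rng.normal()
    dW_s = np.sqrt(dt) * rng.normal()   # taken independent here for simplicity
    # Euler step for V, floored to keep the square roots well defined
    V[t + 1] = max(V[t] + kappa * (theta - V[t]) * dt
                   + sigma_v * np.sqrt(V[t]) * dW_v, 1e-10)
    vol = a * np.sqrt(V[t]) + b / np.sqrt(V[t])     # 4/2-type volatility
    S[t + 1] = S[t] * np.exp(-0.5 * vol**2 * dt + vol * dW_s)
```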

Keywords: stochastic covariance process, 4/2 stochastic volatility model, stochastic co-volatility movements, characteristic function, expected utility theory, verification theorem

Procedia PDF Downloads 135
5372 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264

Authors: V. Ziegler, F. Schneider, M. Pesch

Abstract:

With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as permanent areal observations at near-reference quality. The model EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions, pm10, pm2.5, pm1, inhalable, thoracic, and respirable, for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size, and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL-traceable. A high-end data logger enables data acquisition and wireless communication via LTE and WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored directly in the device. The rinsing-air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed 100% in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration.
This highly reliable instrument is an indispensable tool for many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.

Keywords: aerosol research, aerial observation, fence-line monitoring, wildfire detection

Procedia PDF Downloads 134
5371 Laser-Dicing Modeling: Implementation of a High Accuracy Tool for Laser-Grooving and Cutting Application

Authors: Jeff Moussodji, Dominique Drouin

Abstract:

The highly complex technology requirements of today's integrated circuits (ICs) lead to the increased use of several material types, such as metal structures and brittle, porous low-k materials, which are used in both front-end-of-line (FEOL) and back-end-of-line (BEOL) processes for wafer manufacturing. In order to singulate chips from the wafer, a critical laser-grooving process, prior to blade dicing, is used to remove these layers of materials from the dicing street. The combination of laser grooving and blade dicing reduces the potential risk of induced mechanical defects, such as micro-cracks and chipping, on the wafer top surface where the circuitry is located. It seems, therefore, essential to have a fundamental understanding of the physics involved in laser dicing in order to maximize control of these critical processes and reduce their undesirable effects on process efficiency, quality, and reliability. In this paper, the study was based on the convergence of two approaches, numerical and experimental, which allowed us to investigate the interaction of a nanosecond pulsed laser with BEOL wafer materials. To evaluate this interaction, several laser-grooved samples were compared with finite element modeling, in which three different aspects were considered: phase change, thermo-mechanical behavior, and optically sensitive parameters. The mathematical model makes it possible to predict the groove profile (depth, width, etc.) of a single pulse or multiple pulses on BEOL wafer material. Moreover, the heat-affected zone and thermo-mechanical stress can also be predicted as a function of the laser operating parameters (power, frequency, spot size, defocus, speed, etc.). After model validation and calibration, a satisfactory correlation between experimental and modeling results has been observed in terms of groove depth, width, and heat-affected zone.
The study proposed in this work is a first step toward implementing a quick assessment tool for the design and debugging of multiple laser-grooving conditions with limited experiments on hardware in industrial applications. More correlations and validation tests are in progress and will be included in the full paper.

Keywords: laser-dicing, nano-second pulsed laser, wafer multi-stack, multiphysics modeling

Procedia PDF Downloads 190
5370 Functional Beverage to Boosting Immune System in Elderly

Authors: Adineh Tajmousavilangerudi, Ali Zein Alabiden Tlais, Raffaella Di Cagno

Abstract:

The SARS-CoV-2 pandemic has exposed our vulnerability to new illnesses and novel viruses that attack our immune systems, particularly in the elderly. The vaccine is being gradually introduced all over the world, but new strains of the virus and COVID-19 will emerge and continue to cause illness. Aging is associated with significant changes in intestinal physiology, which increases the production of inflammatory products, alters the gut microbiota, and consequently establishes an inadequate immune response to minimize symptoms and disease development. In this context, older people who followed a Mediterranean-style diet, rich in polyphenols and dietary fiber, performed better physically and mentally (1,2). This demonstrates the importance of the human gut microbiome in transforming complex dietary macromolecules into the most biologically available and active nutrients, which in turn help to regulate metabolism and both intestinal and systemic immune function (3,4). The role of lactic acid fermentation is also prominent, as a powerful tool for improving the nutritional quality of the human diet by releasing nutrients and boosting the content of complex bioactive compounds and vitamins. The PhD project aims to design fermented and functional foods/beverages capable of modulating human immune function via the gut microbiome.

Keywords: functional beverage, fermented beverage, gut microbiota functionality, immune system

Procedia PDF Downloads 93
5369 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-step scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data in order to obtain an accurate estimate of the topography. The main features of this method are, on one hand, the ability to handle different complex geometries with no need to rearrange the original model into an explicit form and, on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry.
Numerical results are presented for a dam-break flow problem over non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and location of observations. The obtained results demonstrate high reliability and accuracy of the proposed techniques.
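The second-stage update can be illustrated with a scalar Ensemble Kalman Filter sketch (the forward map, parameter values, and noise level are toy assumptions, not the shallow-water solver): an ensemble of bed-elevation guesses is corrected toward a noisy free-surface observation.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy problem: estimate a scalar bed elevation from one noisy free-surface
# measurement; the forward map surface = H_total - bed stands in for the solver
H_total, bed_true = 2.0, 0.35
obs_noise = 0.01
y_obs = (H_total - bed_true) + obs_noise * rng.normal()

# Prior ensemble of bed-elevation guesses
ens = rng.uniform(0.0, 1.0, size=200)

# EnKF update with perturbed observations:
# K = cov(x, h(x)) / (var(h(x)) + R)
h = H_total - ens                                   # predicted observations
K = np.cov(ens, h)[0, 1] / (np.var(h, ddof=1) + obs_noise**2)
perturbed = y_obs + obs_noise * rng.normal(size=ens.size)
ens_post = ens + K * (perturbed - h)
```

The posterior ensemble collapses around the true bed elevation with a spread set by the observation noise, which is the stabilizing behavior the abstract attributes to the filter.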

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 115
5368 Frequency of Consonant Production Errors in Children with Speech Sound Disorder: A Retrospective-Descriptive Study

Authors: Amulya P. Rao, Prathima S., Sreedevi N.

Abstract:

Speech sound disorders (SSD) are a major concern in the younger population of India, having the highest prevalence rate among the speech disorders. Children with SSD, if not identified and rehabilitated early, are at risk of academic difficulties. This necessitates early identification using screening tools that assess the frequently misarticulated speech sounds. The literature on frequently misarticulated speech sounds is ample in English and other Western languages, targeting individuals with various communication disorders. Articulation is language-specific, and there are limited studies reporting the same in Kannada, a Dravidian language. Hence, the present study aimed to identify the frequently misarticulated consonants in Kannada and also to examine the error type. A retrospective, descriptive study was carried out using secondary data analysis of 41 participants (34 phonetic type and 7 phonemic type) with SSD in the age range 3 to 12 years. All the consonants of Kannada were analyzed by considering three words for each speech sound from the Kannada Diagnostic Photo Articulation Test (KDPAT). A picture-naming task was carried out, and responses were audio-recorded. The recorded data were transcribed using IPA 2018 broad transcription. A criterion of 2/3 or 3/3 error productions was set to consider a speech sound to be in error. The number of error productions was calculated for each consonant in each participant. Then, the percentage of participants meeting the criterion was documented for each consonant to identify the frequently misarticulated speech sounds. Overall results indicated that the velars /k/ (48.78%) and /g/ (43.90%) were frequently misarticulated, followed by the voiced retroflex /ɖ/ (36.58%) and the trill /r/ (36.58%). The lateral retroflex /ɭ/ was misarticulated by 31.70% of the children with SSD. Dentals (/t/, /n/), bilabials (/p/, /b/, /m/) and the labiodental /v/ were produced correctly by all the participants.
The highly misarticulated velars /k/ and /g/ were frequently substituted by dentals /t/ and /d/ respectively or omitted. Participants with SSD-phonemic type had multiple substitutions for one speech sound whereas, SSD-phonetic type had consistent single sound substitutions. Intra- and inter-judge reliability for 10% of the data using Cronbach’s Alpha revealed good reliability (0.8 ≤ α < 0.9). Analyzing a larger sample by replicating such studies will validate the present study results.
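The 2/3-or-3/3 scoring criterion can be expressed in a few lines of Python (the children, error counts, and consonant set below are hypothetical, not the study data):

```python
# A consonant counts as an error for a child when it is wrong in at least
# 2 of the 3 test words; the percentage of children meeting the criterion
# identifies the frequently misarticulated sounds
productions = {                      # child -> consonant -> errors out of 3
    "child_1": {"k": 3, "g": 2, "t": 0},
    "child_2": {"k": 2, "g": 0, "t": 0},
    "child_3": {"k": 0, "g": 2, "t": 1},
}

consonants = ["k", "g", "t"]
percent_error = {}
for c in consonants:
    n_err = sum(1 for child in productions.values() if child[c] >= 2)
    percent_error[c] = 100.0 * n_err / len(productions)
# here /k/ and /g/ are flagged in 2 of the 3 children, /t/ in none
```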

Keywords: consonant, frequently misarticulated, Kannada, SSD

Procedia PDF Downloads 107
5367 The Pore–Scale Darcy–Brinkman–Stokes Model for the Description of Advection–Diffusion–Precipitation Using Level Set Method

Authors: Jiahui You, Kyung Jae Lee

Abstract:

Hydraulic fracturing fluid (HFF) is widely used in shale reservoir production. HFF contains diverse chemical additives, which result in the dissolution and precipitation of minerals through multiple chemical reactions. In this study, a new pore-scale Darcy–Brinkman–Stokes (DBS) model coupled with the Level Set Method (LSM) is developed to address the microscopic phenomena occurring during the iron–HFF interaction, by numerically describing mass transport, chemical reactions, and pore structure evolution. The new model is developed based on OpenFOAM, an open-source platform for computational fluid dynamics. Here, the DBS momentum equation is used to solve for velocity by accounting for the fluid-solid mass transfer; an advection-diffusion equation is used to compute the distribution of injected HFF and iron. The reaction-induced pore evolution is captured by applying the LSM, where the solid-liquid interface is updated by solving the level set distance function and reinitializing it to a signed distance function. A smoothed Heaviside function then gives a smooth solid-liquid interface over a narrow band of fixed thickness. The stated equations are discretized by the finite volume method, while the reinitialization equation is discretized by the central difference method. A Gauss linear upwind scheme is used to solve the level set distance function, and the Pressure-Implicit with Splitting of Operators (PISO) method is used to solve the momentum equation. The numerical result is compared with the 1-D analytical solution of the fluid-solid interface for reaction-diffusion problems. Sensitivity analysis is conducted with various Damkohler numbers (DaII) and Peclet numbers (Pe). We categorize the Fe (III) precipitation into three patterns as a function of DaII and Pe: symmetrical smoothed growth, unsymmetrical growth, and dendritic growth.
Pe and DaII significantly affect the location of precipitation, which is critical in determining the injection parameters of hydraulic fracturing. When DaII < 1, precipitation occurs uniformly on the solid surface in both the upstream and downstream directions. When DaII > 1, precipitation occurs mainly on the solid surface in the upstream direction. When Pe > 1, Fe (II) is transported deep into the pores and precipitates inside them. When Pe < 1, the precipitation of Fe (III) occurs mainly on the solid surface in the upstream direction, and it is easily precipitated inside the small pore structures. The porosity-permeability relationship is subsequently presented. This pore-scale model allows high confidence in the description of Fe (II) dissolution, transport, and Fe (III) precipitation. The model shows fast convergence and requires a low computational load. The results can provide reliable guidance for injecting HFF in shale reservoirs so as to avoid clogging and wellbore pollution. Understanding Fe (III) precipitation and Fe (II) release and transport behaviors gives rise to a highly efficient hydraulic fracturing project.
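The smoothed Heaviside used to blend solid and fluid over a narrow band is a standard level-set construction and can be sketched as follows (the band half-width and interface location are illustrative choices, not values from the paper):

```python
import numpy as np

def smoothed_heaviside(phi, eps=1.5):
    """Smoothed Heaviside of a signed-distance function phi over a narrow
    band of half-width eps: 0 on one side, 1 on the other, smooth between."""
    H = np.where(phi > eps, 1.0, 0.0)
    band = np.abs(phi) <= eps
    smooth = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(band, smooth, H)

x = np.linspace(-5.0, 5.0, 201)
phi = x - 1.0                       # signed distance to an interface at x = 1
H = smoothed_heaviside(phi)         # blends solid and fluid across the band
```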

Keywords: reactive transport, shale, kerogen, precipitation

Procedia PDF Downloads 151