Search results for: logistic model tree

10131 Tatak Noy-Pi: The Branding Evolution of Tesoro's Philippine Handicrafts- A Philippines Creative and Cultural Industry

Authors: Regine R. Villanueva

Abstract:

The study looks into how a cultural industry such as Tesoro’s Philippine Handicrafts underwent the brand revitalization process throughout its 70 years of existence in the Philippine market. The study uses a historical approach to analyze the changes in product development and promotional strategies. Its brand identity was likewise determined from its internal processes and archival data such as history, mission and vision, customer relations, products, and promotions. The product life cycle model and the brand identity planning model were used as the theoretical framework for the study. The life cycle was used in historically tracing the company’s developments and changes in terms of its branding, more specifically the products, promotions, and identity. Interviews were conducted with informants who included the CEO and the heads of each department in the business. The researcher also utilized textual analysis to gain an in-depth understanding of Tesoro’s brand identity portrayal through its advertisements. The results showed how the company has undergone a progressive and innovative transition in its life cycle. With changing markets and increased competition, the brand started active promotions and engaged in product development. In terms of identity, they are branded as pioneers of the handicraft industry in the Philippines. They started their brand revitalization to instill this identity in their consumers through advertising communication and by identifying their segmented markets.

Keywords: cultural industry, handicrafts, case study, philippines

Procedia PDF Downloads 622
10130 Geomorphology of Leyte, Philippines: Seismic Response and Remote Sensing Analysis and Its Implication to Landslide Hazard Assessment

Authors: Arturo S. Daag, Ira Karrel D. L. San Jose, Mike Gabriel G. Pedrosa, Ken Adrian C. Villarias, Rayfred P. Ingeniero, Cyrah Gale H. Rocamora, Margarita P. Dizon, Roland Joseph B. De Leon, Teresito C. Bacolcol

Abstract:

The province of Leyte consists of various geomorphological landforms: a) landforms of tectonic origin, which transect a large part of the volcanic centers in the upper Ormoc area; b) landforms of volcanic origin, with several inactive volcanic centers located in upper Ormoc transected by the Philippine Fault; c) landforms of volcano-denudational and denudational slopes, which dominate the areas where most of the earthquake-induced landslides occurred; and d) colluvium and alluvial deposits, which dominate the foot slopes of Ormoc and the Jaro-Pastrana plain. Earthquake ground acceleration and the geotechnical properties of the various landforms are crucial for landslide studies. To generate the landslide critical acceleration model of the sliding block, various data were considered: geotechnical data (i.e., soil and rock strength parameters), slope, topographic wetness index (TWI), landslide inventory, soil maps, and geologic maps for the calculation of the factor of safety. Horizontal-to-vertical spectral ratio (HVSR) surveys, refraction microtremor (ReMi), and three-component microtremor (3CMT) measurements were conducted to measure site period and surface wave velocity as well as to create a soil thickness model. A critical acceleration model of the various geomorphological units was built using remote sensing, field geotechnical, geophysical, and geospatial data collected from the areas affected by the 06 July 2017 M6.5 Leyte earthquake. Spatial analyses of the earthquake-induced landslides from 06 July 2017 were then performed to assess the relationship between the calculated critical acceleration and the peak ground acceleration. The observed trends proved helpful in establishing the role of critical acceleration as a determining factor in the distribution of co-seismic landslides.

Keywords: earthquake-induced landslide, remote sensing, geomorphology, seismic response

Procedia PDF Downloads 128
10129 Model of Community Management for Sustainable Utilization

Authors: Luedech Girdwichai, Withaya Mekhum

Abstract:

This research intended to develop a model of community management for sustainable utilization by investigating two groups of population: the family heads and the community management team. The former group consisted of family heads from 511 families in 12 areas, who completed the questionnaires, of which 479 sets were returned. The latter group consisted of the community management team of the 12 areas, with one representative from each area giving an interview. The questionnaire for the family heads consisted of two main parts: general information such as occupation, in the form of a checklist, and data on self-reliant community development based on the 4P framework, i.e., People (human resource) development, Place (area) development, Product (economic and income source) development, and Plan (community plan) development, in the form of rating scales. Data in the first part were analyzed for frequency and percentage, while those in the second part were analyzed for arithmetic mean and SD. Data from the second group of population, the community management team, were derived from a focus group to find factors influencing successful management, together with in-depth interviews analyzed by descriptive statistics. The results showed that the 479 family heads rated the implementation of the community plan for self-reliant community activities based on the Sufficiency Economy Philosophy and the 4P framework at an average of 3.28, a moderate level. Considering the details, the highest-rated aspect was area development, with a mean of 3.71 (high level), followed by human resource development, with a mean of 3.44 (moderate level), then economic and source of income development, with a mean of 3.09 (moderate level). The last aspect was community plan development, with a mean of 2.89. The results from the small group discussion revealed some factors and guidelines for successful community management as follows: 1) on People (human resource) development, there was a project to support and develop community leaders; 2) on Place (area) development, there was development of conservation tourism areas; 3) on Product (economic and source of income) development, the community leaders promoted the setting up of occupational groups, saving groups, and product processing groups; 4) on Plan (community plan) development, there was prioritization through public hearings.

Keywords: model of community management, sustainable utilization, family heads, community management team

Procedia PDF Downloads 336
10128 Earthquake Relocations and Constraints on the Lateral Velocity Variations along the Gulf of Suez, Using the Modified Joint Hypocenter Method Determination

Authors: Abu Bakr Ahmed Shater

Abstract:

Hypocenters of 250 earthquakes recorded by more than 5 stations of the Egyptian seismic network around the Gulf of Suez were relocated, and the P-wave station corrections were estimated using the modified joint hypocenter determination method. Five stations, TR1, SHR, GRB, ZAF, and ZET, have negative P-wave travel time corrections, with values of -0.235, -0.366, -0.288, -0.366, and -0.058, respectively; it is reasonable to assume that the underground model beneath these stations is characterized by a high-velocity structure. The other stations, TR2, RDS, SUZ, HRG, and ZNM, have positive corrections, with values of 0.024, 0.187, 0.314, 0.645, and 0.145, respectively; it is reasonable to assume that the underground model beneath these stations is characterized by a low-velocity structure. The hypocentral locations determined by the modified joint hypocenter determination are more precise than those determined by the routine location program. This method simultaneously solves for the earthquake locations and the station corrections. The station corrections reflect not only the different crustal conditions in the vicinity of the stations but also the difference between the actual and modeled seismic velocities along each of the earthquake-station ray paths. The station corrections obtained correlate with the major surface geological features in the study area. As a result of the relocation, a low-velocity area appears on the northeastern and southwestern sides of the Gulf of Suez, while the southeastern and northwestern parts constitute a high-velocity area.

Keywords: gulf of Suez, seismicity, relocation of hypocenter, joint hypocenter determination

Procedia PDF Downloads 357
10127 Engineering the Topological Insulator Structures for Terahertz Detectors

Authors: M. Marchewka

Abstract:

The article is devoted to the possible optical transitions in a double quantum well system based on HgTe/HgCd(Mn)Te heterostructures. Such structures can find applications as detectors and sources of radiation in the terahertz range. A double quantum well (DQW) system consists of two QWs separated by a barrier that is transparent for electrons. Such systems look promising because of the additional degrees of freedom they offer. In the case of the topological insulator realized in an approximately 6.4 nm wide HgTe QW, or in strained 3D HgTe films, topologically protected surface states appear at the interfaces/surfaces. Electrons in those edge states move along the interfaces/surfaces without backscattering due to time-reversal symmetry. The combination of these topological properties, which have already been verified experimentally, with the very well known properties of DQWs can be very interesting from the applications point of view, especially in the THz range. Importantly, at the present stage the technology makes it possible to create high-quality structures of this type, and intensive experimental and theoretical studies of their properties are already underway. The idea presented in this paper is based on the eight-band k·p (KP) model, including additional terms related to structural inversion asymmetry, interface inversion asymmetry, the influence of the magnetic content, and the uniaxial strain, to describe the full picture of a possible real structure. All of these terms, together with an external electric field, can be sources of symmetry breaking in the investigated materials. Using the eight-band KP model, we investigated the electronic band structure with and without a magnetic field from the point of view of application as a THz detector in a small magnetic field (below 2 T). We believe that such structures are a route to tunable topological insulators and multilayer topological insulators. Using the one-dimensional electrons in the topologically protected interface states as fast and collision-free charge and signal carriers, the detection of the optical signal should be fast, which is very important in high-resolution detection of signals in the THz range. The proposed engineering of the investigated structures is now one of the important steps on the way to obtaining structures with the predicted properties.

Keywords: topological insulator, THz spectroscopy, KP model, II-VI compounds

Procedia PDF Downloads 116
10126 Designing a Cricket Team Selection Method Using Super-Efficient DEA and Semi Variance Approach

Authors: Arnab Adhikari, Adrija Majumdar, Gaurav Gupta, Arnab Bisi

Abstract:

Team formation plays an instrumental role in sports like cricket. Existing literature reveals that most works on player selection focus only on the players’ efficiency and ignore consistency. This motivates us to design an improved player selection method based on both a player’s efficiency and consistency. To measure the players’ efficiency, we employ a modified data envelopment analysis (DEA) technique, namely the ‘super-efficient DEA model’. We design a modified consistency index based on a semi-variance approach. Here, we introduce a new parameter called the ‘fitness index’ in the consistency computation to assess a player’s fitness level. Finally, we devise a single performance score from both the efficiency score and the consistency score with the help of a linear programming model. To test the robustness of our method, we perform a rigorous numerical analysis to determine the all-time best One Day International (ODI) Cricket XI. Next, we conduct extensive comparative studies of the efficiency scores, consistency scores, and selected teams between the existing methods and the proposed method and explain the rationale behind the improvement.
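As a rough illustration of the idea described above, the sketch below computes a semi-variance-based consistency index and blends it with an efficiency score into a single performance number. The player data, efficiency values, and the simple convex-combination weights are invented placeholders; the paper's actual aggregation uses a linear programming model and a fitness index that are not reproduced here.

```python
import numpy as np

# Illustrative per-innings batting scores for three hypothetical players.
runs = {
    "player_A": np.array([55, 60, 48, 72, 65]),
    "player_B": np.array([10, 120, 5, 95, 20]),
    "player_C": np.array([40, 42, 38, 45, 41]),
}

# Assumed efficiency scores, e.g. as obtained from a super-efficient DEA model.
efficiency = {"player_A": 1.08, "player_B": 1.21, "player_C": 0.94}

def consistency(scores: np.ndarray) -> float:
    """Semi-variance-based consistency: only below-mean innings are penalized."""
    mean = scores.mean()
    downside = scores[scores < mean] - mean
    semi_var = np.sum(downside ** 2) / len(scores)
    # Map to (0, 1]: higher value means more consistent (lower downside risk).
    return 1.0 / (1.0 + np.sqrt(semi_var) / mean)

# Composite performance score: a simple convex combination stands in for the
# paper's linear-programming aggregation (weights are assumptions).
w_eff, w_cons = 0.6, 0.4
for name, scores in runs.items():
    score = w_eff * efficiency[name] + w_cons * consistency(scores)
    print(f"{name}: efficiency={efficiency[name]:.2f}, "
          f"consistency={consistency(scores):.2f}, composite={score:.2f}")
```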

Keywords: decision support systems, sports, super-efficient data envelopment analysis, semi variance approach

Procedia PDF Downloads 397
10125 The Participation of Experts in the Criminal Policy on Drugs: The Proposal of a Cannabis Regulation Model in Spain by the Cannabis Policy Studies Group

Authors: Antonio Martín-Pardo

Abstract:

Regarding the context in which this paper is inserted, it is noteworthy that the current criminal policy model in which we find ourselves immersed, termed by part of the doctrine the citizen security model, is characterized by a marked tendency towards the discrediting of expert knowledge. This type of technical knowledge has been displaced by common sense and the daily experience of the people at the time of legislative drafting, as well as by excessive attention to the short-term political effects of the law. Despite this adverse criminal-political scene, we still find valuable efforts on the side of experts to bring some rationality to legislative development. This is the case of the proposal for a new cannabis regulation model in Spain carried out by the Cannabis Policy Studies Group (hereinafter referred to as ‘GEPCA’). The GEPCA is a multidisciplinary group composed of authors with different orientations, trajectories, and interests, but with a common minimum objective: the conviction that the current situation regarding cannabis is unsustainable and that a rational legislative solution must be given to the growing social pressure for the regulation of its consumption and production. This paper details the main lines through which this technical proposal is developed, with the purpose of its dissemination and discussion at the Congress. The basic methodology of the proposal is inductive-expository. First, we offer a brief but solid contextualization of the situation of cannabis in Spain. This contextualization touches on issues such as the national regulatory situation and its relationship with the international context; the criminal, judicial, and penitentiary impact of the supply and consumption of cannabis; and the therapeutic use of the substance, among others. Second, we get down to business properly by detailing the three main cannabis access channels that are proposed, namely: the regulated market, associations of cannabis users, and personal self-cultivation. For each of these options, especially the first two, special attention is paid both to the production and processing of the substance and to the necessary administrative control of the activity. Finally, in a third block, some notes are given on a series of subjects that surround the access options just mentioned and that give fullness and coherence to the proposal. Among those related issues are the consumption and possession of the substance; advertising and promotion of cannabis; consumption in areas of special risk (e.g., work or driving); the tax regime; and the need to articulate evaluation instruments for the entire process. The main conclusion drawn from the analysis of the proposal is the unsustainability of the current repressive system, clearly unsuccessful, and the need to develop new access routes to cannabis that guarantee both public health and the rights of people who have freely chosen to consume it.

Keywords: cannabis regulation proposal, cannabis policies studies group, criminal policy, expertise participation

Procedia PDF Downloads 119
10124 Optimization and Coordination of Organic Product Supply Chains under Competition: An Analytical Modeling Perspective

Authors: Mohammadreza Nematollahi, Bahareh Mosadegh Sedghy, Alireza Tajbakhsh

Abstract:

The last two decades have witnessed substantial attention to organic and sustainable agricultural supply chains. Motivated by real-world practices, this paper aims to address two main challenges observed in organic product supply chains: decentralized decision-making process between farmers and their retailers, and competition between organic products and their conventional counterparts. To this aim, an agricultural supply chain consisting of two farmers, a conventional farmer and an organic farmer who offers an organic version of the same product, is considered. Both farmers distribute their products through a single retailer, where there exists competition between the organic and the conventional product. The retailer, as the market leader, sets the wholesale price, and afterward, the farmers set their production quantity decisions. This paper first models the demand functions of the conventional and organic products by incorporating the effect of asymmetric brand equity, which captures the fact that consumers usually pay a premium for organic due to positive perceptions regarding their health and environmental benefits. Then, profit functions with consideration of some characteristics of organic farming, including crop yield gap and organic cost factor, are modeled. Our research also considers both economies and diseconomies of scale in farming production as well as the effects of organic subsidy paid by the government to support organic farming. This paper explores the investigated supply chain in three scenarios: decentralized, centralized, and coordinated decision-making structures. In the decentralized scenario, the conventional and organic farmers and the retailer maximize their own profits individually. In this case, the interaction between the farmers is modeled under the Bertrand competition, while analyzing the interaction between the retailer and farmers under the Stackelberg game structure. In the centralized model, the optimal production strategies are obtained from the entire supply chain perspective. Analytical models are developed to derive closed-form optimal solutions. Moreover, analytical sensitivity analyses are conducted to explore the effects of main parameters like the crop yield gap, organic cost factor, organic subsidy, and percent price premium of the organic product on the farmers’ and retailer’s optimal strategies. Afterward, a coordination scenario is proposed to convince the three supply chain members to shift from the decentralized to centralized decision-making structure. The results indicate that the proposed coordination scenario provides a win-win-win situation for all three members compared to the decentralized model. Moreover, our paper demonstrates that the coordinated model respectively increases and decreases the production and price of organic produce, which in turn motivates the consumption of organic products in the market. Moreover, the proposed coordination model helps the organic farmer better handle the challenges of organic farming, including the additional cost and crop yield gap. Last but not least, our results highlight the active role of the organic subsidy paid by the government as a means of promoting sustainable organic product supply chains. Our paper shows that although the amount of organic subsidy plays a significant role in the production and sales price of organic products, the allocation method of subsidy between the organic farmer and retailer is not of that importance.

Keywords: analytical game-theoretic model, product competition, supply chain coordination, sustainable organic supply chain

Procedia PDF Downloads 109
10123 Dynamics Pattern of Land Use and Land Cover Change and Its Driving Factors Based on a Cellular Automata Markov Model: A Case Study at Ibb Governorate, Yemen

Authors: Abdulkarem Qasem Dammag, Basema Qasim Dammag, Jian Dai

Abstract:

Change in land use and land cover (LU/LC) has a profound impact on an area's natural, economic, and ecological development, and the search for drivers of land cover change is one of the fundamental issues of LU/LC change research. The study aimed to assess the spatio-temporal dynamics of LU/LC in the past and to predict the future using Landsat images by exploring the characteristics of different LU/LC types. Spatio-temporal patterns of LU/LC change in Ibb Governorate, Yemen, were analyzed based on RS and GIS for 1990, 2005, and 2020. A socioeconomic survey and key informant interviews were used to assess potential drivers of LU/LC change. The results showed that from 1990 to 2020, the total area of vegetation land decreased by 5.3%, while the areas of barren land, grassland, built-up area, and waterbody increased by 2.7%, 1.6%, 1.04%, and 0.06%, respectively. Based on the socioeconomic surveys and key informant interviews, natural factors had a significant and long-term impact on land change, whereas site construction and socioeconomic factors were the main driving forces affecting land change on a short time scale. The analysis results were linked to a CA-Markov land use simulation and forecasting model for the years 2035 and 2050. The simulation results revealed the trend of dynamic changes in land use for the period 2020 to 2050, in which the total area of barren land decreases by 7.0% and grassland by 0.2%, while vegetation land, built-up area, and waterbody increase by 4.6%, 2.6%, and 0.1%, respectively. Overall, these findings describe LU/LC's past and future trends and identify its drivers, which can play an important role in sustainable land use planning and management by balancing and coordinating urban growth and land use, and can also serve as a reference at the regional level. In addition, the results provide scientific guidance to government departments and local decision-makers in future land-use planning through dynamic monitoring of LU/LC change.
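To make the Markov component of a CA-Markov projection concrete, the sketch below propagates land-cover class shares with a transition probability matrix. The class shares and the matrix are made-up placeholders, and the cellular-automaton step that spatially allocates the projected areas is omitted.

```python
import numpy as np

# Land-cover classes (order matters for the transition matrix).
classes = ["vegetation", "barren", "grassland", "built-up", "water"]

# Hypothetical shares of total area at the base year (fractions summing to 1).
shares_2020 = np.array([0.45, 0.30, 0.14, 0.10, 0.01])

# Hypothetical transition probability matrix P[i, j]: probability that a cell
# in class i at time t belongs to class j one 15-year step later.
P = np.array([
    [0.88, 0.04, 0.03, 0.05, 0.00],
    [0.10, 0.82, 0.05, 0.03, 0.00],
    [0.06, 0.05, 0.85, 0.04, 0.00],
    [0.00, 0.01, 0.01, 0.98, 0.00],
    [0.01, 0.01, 0.01, 0.01, 0.96],
])

# Markov projection: shares(t + k steps) = shares(t) @ P^k
shares_2035 = shares_2020 @ P
shares_2050 = shares_2020 @ np.linalg.matrix_power(P, 2)

for name, s35, s50 in zip(classes, shares_2035, shares_2050):
    print(f"{name:>10}: 2035 = {s35:.3f}, 2050 = {s50:.3f}")
```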

Keywords: LU/LC change, CA-Markov model, driving forces, change detection, LU/LC change simulation

Procedia PDF Downloads 63
10122 Supercomputer Simulation of Magnetic Multilayers Films

Authors: Vitalii Yu. Kapitan, Aleksandr V. Perzhu, Konstantin V. Nefedev

Abstract:

The necessity of studying magnetic multilayer structures is explained by the prospects of their practical application as a technological base for creating new storage media. Magnetic multilayer films have many unique features that contribute to increasing the density of information recording and the speed of storage devices. Multilayer structures are structures of alternating magnetic and nonmagnetic layers. Within the framework of the classical Heisenberg model, lattice spin systems with direct short- and long-range exchange interactions were investigated by Monte Carlo methods. The thermodynamic characteristics of multilayer structures, such as the temperature behavior of magnetization, energy, and heat capacity, were investigated, as were the processes of magnetization reversal of multilayer structures in external magnetic fields. The developed software is based on the new, promising programming language Rust, an experimental language developed by Mozilla and positioned as an alternative to C and C++. For the Monte Carlo simulation, the Metropolis algorithm and its parallel implementation using MPI, as well as the Wang-Landau algorithm, were used. We plan to study magnetic multilayer films with asymmetric Dzyaloshinskii–Moriya (DM) interaction, interface effects, and skyrmion textures. This work was supported by state task # 3.7383.2017/8.9 of the Ministry of Education and Science of Russia.
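For orientation, the sketch below shows a serial, single-lattice Metropolis update for a classical Heisenberg model with nearest-neighbour exchange. It is purely illustrative (lattice size, temperature, and step count are assumptions) and does not reproduce the authors' Rust/MPI or Wang-Landau implementations.

```python
import numpy as np

rng = np.random.default_rng(0)
L, J, T = 8, 1.0, 1.0          # lattice size, exchange constant, temperature (assumed)
steps = 20000                   # Metropolis single-spin updates (illustrative only)

def random_spins(n):
    """Unit vectors uniformly distributed on the sphere."""
    v = rng.normal(size=(n, n, 3))
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

spins = random_spins(L)

def local_field(s, i, j):
    """Sum of nearest-neighbour spins (periodic boundaries)."""
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j]
            + s[i, (j + 1) % L] + s[i, (j - 1) % L])

for _ in range(steps):
    i, j = rng.integers(L, size=2)
    new = rng.normal(size=3)
    new /= np.linalg.norm(new)
    # Energy change for rotating the spin at (i, j) to the trial orientation,
    # with E = -J * sum over bonds of s_i . s_j.
    dE = -J * np.dot(new - spins[i, j], local_field(spins, i, j))
    if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance rule
        spins[i, j] = new

m = np.linalg.norm(spins.mean(axis=(0, 1)))
print(f"magnetization per spin at T={T}: {m:.3f}")
```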

Keywords: The Monte Carlo methods, Heisenberg model, multilayer structures, magnetic skyrmion

Procedia PDF Downloads 164
10121 Recognizing Customer Preferences Using Review Documents: A Hybrid Text and Data Mining Approach

Authors: Oshin Anand, Atanu Rakshit

Abstract:

The vast growth of e-commerce ventures makes this area a prominent research stream. Besides several quantified parameters, the textual content of reviews is a storehouse of information that can educate companies and help them earn profit. This study is an attempt in that direction. The article categorizes data based on a computed metric that quantifies the influencing capacity of reviews, rendering two categories of high and low influential reviews. Further, each of these documents is studied to derive several product feature categories. Each of these categories, along with the computed metric, is converted to linguistic identifiers and used in an association mining model. The article makes a novel attempt to combine feature extraction with the quantified metric to categorize review text and finally provide frequent patterns that depict customer preferences. Frequent mentions in highly influential reviews depict customer likes or preferred features in the product, whereas prominent patterns in low influencing reviews highlight what is not important for customers. This is achieved using a hybrid approach of text mining for feature and term extraction, sentiment analysis, a multicriteria decision-making technique, and an association mining model.
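As a small illustration of the association-mining step, the sketch below counts the support of single items and item pairs over "transactions" built from linguistic identifiers. The identifiers and the support threshold are invented placeholders; the upstream feature extraction, sentiment analysis, and multicriteria scoring are not reproduced.

```python
from itertools import combinations
from collections import Counter

# Each "transaction" is the set of linguistic identifiers derived from one
# highly influential review (hypothetical example data).
reviews = [
    {"battery_positive", "screen_positive", "price_negative"},
    {"battery_positive", "screen_positive"},
    {"battery_positive", "price_negative"},
    {"screen_positive", "camera_positive"},
    {"battery_positive", "screen_positive", "camera_positive"},
]

min_support = 0.4  # fraction of reviews an itemset must appear in (assumed)

# Count single items and item pairs: a tiny, brute-force Apriori-style pass.
counts = Counter()
for r in reviews:
    for item in r:
        counts[frozenset([item])] += 1
    for pair in combinations(sorted(r), 2):
        counts[frozenset(pair)] += 1

n = len(reviews)
frequent = {items: c / n for items, c in counts.items() if c / n >= min_support}
for items, support in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(f"{set(items)}: support = {support:.2f}")
```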

Keywords: association mining, customer preference, frequent pattern, online reviews, text mining

Procedia PDF Downloads 387
10120 Implementation of a Lattice Boltzmann Method for Pulsatile Flow with Moment Based Boundary Condition

Authors: Zainab A. Bu Sinnah, David I. Graham

Abstract:

The Lattice Boltzmann Method has been developed and used to simulate both steady and unsteady fluid flow problems such as turbulent flows, multiphase flow and flows in the vascular system. As an example, the study of blood flow and its properties can give a greater understanding of atherosclerosis and the flow parameters which influence this phenomenon. The blood flow in the vascular system is driven by a pulsating pressure gradient which is produced by the heart. As a very simple model of this, we simulate plane channel flow under periodic forcing. This pulsatile flow is essentially the standard Poiseuille flow except that the flow is driven by the periodic forcing term. Moment boundary conditions, where various moments of the particle distribution function are specified, are applied at solid walls. We used a second-order single relaxation time model and investigated grid convergence using two distinct approaches. In the first approach, we fixed both Reynolds and Womersley numbers and varied relaxation time with grid size. In the second approach, we fixed the Womersley number and relaxation time. The expected second-order convergence was obtained for the second approach. For the first approach, however, the numerical method converged, but not necessarily to the appropriate analytical result. An explanation is given for these observations.
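For readers unfamiliar with the method, the sketch below is a minimal single-relaxation-time D2Q9 lattice Boltzmann solver for forced channel flow. It deliberately uses standard full-way bounce-back walls and a simple equilibrium-velocity-shift forcing, not the moment-based boundary conditions studied in the paper, and all parameters are illustrative.

```python
import numpy as np

# D2Q9 lattice velocities, weights, and opposite directions
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

nx, ny = 32, 33                       # periodic in x, solid walls at y = 0 and y = ny-1
tau, omega = 0.8, 1.0 / 0.8           # relaxation time (assumed value)
F0, period = 1e-5, 2000               # amplitude and period of the pulsatile forcing (assumed)

solid = np.zeros((nx, ny), dtype=bool)
solid[:, 0] = solid[:, -1] = True

def feq(rho, u):
    """Second-order equilibrium distribution."""
    cu = np.einsum('xya,qa->xyq', u, c)
    usq = np.sum(u * u, axis=-1)[..., None]
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))
u = np.zeros((nx, ny, 2))
f = feq(rho, u)

for step in range(10000):
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qa->xya', f, c) / rho[..., None]
    # Pulsatile body force applied by shifting the equilibrium velocity.
    Fx = F0 * (1.0 + np.sin(2*np.pi*step/period))
    ueq = u.copy()
    ueq[..., 0] += tau * Fx / rho
    fpost = f + omega * (feq(rho, ueq) - f)       # BGK collision
    fpost[solid] = f[solid][:, opp]               # full-way bounce-back at the walls
    for q in range(9):                            # streaming with periodic wrap-around
        f[..., q] = np.roll(np.roll(fpost[..., q], c[q, 0], axis=0), c[q, 1], axis=1)

print("centerline velocity:", u[:, ny // 2, 0].mean())
```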

Keywords: Lattice Boltzmann method, single relaxation time, pulsatile flow, moment based boundary condition

Procedia PDF Downloads 230
10119 Numerical Investigation of the Boundary Conditions at Liquid-Liquid Interfaces in the Presence of Surfactants

Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji

Abstract:

Liquid-liquid interfacial flow is an important process that has applications across many spheres. One such application is residual oil mobilization, where crude oil and low salinity water are emulsified due to lowered interfacial tension under conditions of low shear rates. The amphiphilic components (asphaltenes and resins) in crude oil are considered to assemble at the interface between the two immiscible liquids. To justify emulsification, drag, and snap-off suppression as the main effects of low salinity water, mobilization of residual oil is visualized as thickening and slip of the wetting phase at the brine/crude oil interface, which results in the squeezing and drag of the non-wetting phase to the pressure sinks. Meanwhile, defining the boundary conditions for such a system can be very challenging since the interfacial dynamics depend not only on interfacial tension but also on the flow rate. Hence, understanding the flow boundary condition at the brine/crude oil interface is an important step towards defining the influence of low salinity water composition on residual oil mobilization. This work presents a numerical evaluation of three slip boundary conditions that may apply at liquid-liquid interfaces. A mathematical model was developed to describe the evolution of a viscoelastic interfacial thin liquid film. The base model is developed by asymptotic expansion of the full Navier-Stokes equations for fluid motion due to gradients of surface tension. This model was upscaled to describe the dynamics of the film surface deformation. Subsequently, Jeffrey’s model was integrated into the formulation to account for viscoelastic stress within a long wave approximation of the Navier-Stokes equations. To study the fluid response to a prescribed disturbance, a linear stability analysis (LSA) was performed. The dispersion relation and the corresponding characteristic equation for the growth rate were obtained. Three boundary conditions (slip, 1; locking, -1; and no-slip, 0) were examined using the resulting characteristic equation. Also, the dynamics of the evolved interfacial thin liquid film were numerically evaluated by considering the influence of the boundary conditions. The linear stability analysis shows that the boundary conditions of such systems are greatly impacted by the presence of amphiphilic molecules when three different values of interfacial tension were tested. The results for the slip and locking conditions are consistent with the fundamental solution representation of the diffusion equation, where there is film decay. The interfacial films at both boundary conditions respond to exposure time in a similar manner, with increasing growth rate resulting in the formation of more droplets with time. In contrast, the no-slip boundary condition yielded unbounded growth and was not affected by interfacial tension.

Keywords: boundary conditions, liquid-liquid interfaces, low salinity water, residual oil mobilization

Procedia PDF Downloads 126
10118 Concentration and Stability of Fatty Acids and Ammonium in the Samples from Mesophilic Anaerobic Digestion

Authors: Mari Jaakkola, Jasmiina Haverinen, Tiina Tolonen, Vesa Virtanen

Abstract:

Process monitoring of a biogas plant gives valuable information about the function of the process and helps to maintain stable operation. The costs of basic monitoring are often much lower than the costs associated with re-establishing a biologically destabilised plant. Reactor acidification through reactor overload is one of the most common reasons for process deterioration in anaerobic digesters. This occurs because of a build-up of volatile fatty acids (VFAs) produced by acidogenic and acetogenic bacteria. VFAs cause pH values to decrease and result in toxic conditions in the reactor. Ammonia ensures an adequate supply of nitrogen as a nutrient for anaerobic biomass and increases the system's buffer capacity, counteracting the acidification caused by VFA production. However, an elevated ammonia concentration is detrimental to the process due to its toxic effect. VFAs are considered the most reliable analytes for process monitoring. To obtain accurate results, sample storage and transportation need to be carefully controlled. This may be a challenge for off-line laboratory analyses, especially when the plant is located far away from the laboratory. The aim of this study was to investigate the correlation between fatty acids, ammonium, and bacteria in anaerobic digestion samples obtained from an industrial biogas plant. The stability of the analytes was studied by comparing the results of on-site analyses performed at the plant to the results of samples stored at room temperature and at -18°C (up to 30 days) after sampling. Samples were collected at a biogas plant consisting of three separate mesophilic AD reactors (4000 m³ each), where the main feedstock was swine slurry together with a complex mixture of agricultural plant and animal wastes. Individual VFAs, ammonium, and nutrients (K, Ca, Mg) were studied by capillary electrophoresis (CE). Longer-chain fatty acids (oleic, hexadecanoic, and stearic acids) and bacterial profiles were studied by GC-MSD (gas chromatography-mass selective detection) and 16S rDNA analysis, respectively. On-site monitoring of the analytes was performed by CE. The main VFA in all samples was acetic acid. However, in one reactor sample, elevated levels of several individual VFAs and long-chain fatty acids were detected, and the bacterial profile of this sample differed from the profiles of the other samples. Acetic acid decomposed quickly when the sample was stored at room temperature. All analytes were stable when stored in a freezer, and ammonium was stable even at room temperature for the whole testing period. CE was utilized successfully in the on-site analysis of individual VFAs and NH₄ at the biogas production site. Samples should be analysed on the sampling day if stored at room temperature, or frozen for longer storage. Fermentation reject can be stored (and transported) at ambient temperature for at least one month without loss of NH₄, which gives flexibility to the logistics when the reject is used as a fertilizer.

Keywords: anaerobic digestion, capillary electrophoresis, ammonium, bacteria

Procedia PDF Downloads 168
10117 Three-Dimensional CFD Modeling of Flow Field and Scouring around Bridge Piers

Authors: P. Deepak Kumar, P. R. Maiti

Abstract:

In recent years, sediment scour near bridge piers and abutments has become a serious problem causing nationwide concern because it has resulted in more bridge failures than any other cause. Scour is the formation of a scour hole around a structure mounted on and embedded in an erodible channel bed due to the erosion of soil by flowing water. The formation of the scour hole around the structure depends upon the shape and size of the pier, the depth of flow, the angle of attack of the flow, and the sediment characteristics. The flow characteristics around these structures change due to the man-made obstruction in the natural flow path, which changes the kinetic energy of the flow around them. Excessive scour affects the stability of the foundation of the structure through the removal of bed material. Accurate estimation of the scour depth around a bridge pier is very difficult, and the foundations of bridge piers have to be set deeper to provide the anchorage length required for the stability of the foundation. In this study, computational model simulations using a 3D computational fluid dynamics (CFD) model were conducted to examine the mechanism of scour around a cylindrical pier. Subsequently, the flow characteristics around these structures are presented for different flow conditions. The mechanism of the scouring phenomenon, the formation of vortices, and their consequent effects are discussed for a straight channel. Effort was made towards estimating the scour depth around bridge piers under different flow conditions.

Keywords: bridge pier, computational fluid dynamics, multigrid, pier shape, scour

Procedia PDF Downloads 295
10116 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transformation of the distribution function in the Fourier domain instead and inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills the niche in literature, to the best of our knowledge, of accurate numerical methods for risk allocation but may also serve as a much faster alternative to the Monte Carlo simulation method for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to the MC simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the "curse of dimension" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even to other risk types than credit risk.
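To give a feel for the factor-copula setting the method targets, the sketch below computes the loss distribution of a homogeneous portfolio under a one-factor Gaussian copula by direct numerical integration over the common factor, then reads off Value-at-Risk and Expected Shortfall. This is not the COS-based Fourier inversion of the paper, only a brute-force reference calculation, and all parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm, binom

# Homogeneous portfolio under a one-factor Gaussian copula (assumed numbers).
n_obligors, pd, rho, lgd = 100, 0.02, 0.2, 0.6
alpha = 0.99                                    # confidence level for VaR / ES

# Integrate the conditional (binomial) loss distribution over the common factor Y.
y, wts = np.polynomial.hermite_e.hermegauss(64)  # Gauss-Hermite nodes for N(0, 1)
wts = wts / np.sqrt(2 * np.pi)                   # normalize weights to sum to 1
p_cond = norm.cdf((norm.ppf(pd) - np.sqrt(rho) * y) / np.sqrt(1 - rho))

k = np.arange(n_obligors + 1)                    # number of defaults
pmf = np.zeros_like(k, dtype=float)
for pc, wt in zip(p_cond, wts):
    pmf += wt * binom.pmf(k, n_obligors, pc)
pmf /= pmf.sum()                                 # guard against quadrature round-off

loss = lgd * k / n_obligors                      # loss as a fraction of notional
cdf = np.cumsum(pmf)
var = loss[np.searchsorted(cdf, alpha)]
tail = loss >= var
es = np.sum(loss[tail] * pmf[tail]) / pmf[tail].sum()
print(f"VaR({alpha:.0%}) = {var:.4f}, ES = {es:.4f}")
```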

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 165
10115 Hyperelastic Constitutive Modelling of the Male Pelvic System to Understand the Prostate Motion, Deformation and Neoplasms Location with the Influence of MRI-TRUS Fusion Biopsy

Authors: Muhammad Qasim, Dolors Puigjaner, Josep Maria López, Joan Herrero, Carme Olivé, Gerard Fortuny

Abstract:

Computational modeling of the human pelvis using the finite element (FE) method has become extremely important to understand the mechanics of prostate motion and deformation when transrectal ultrasound (TRUS) guided biopsy is performed. The number of reliable and validated hyperelastic constitutive FE models of the male pelvic region is limited, and existing models do not precisely describe the anatomical behavior of the pelvic organs, mainly of the prostate and the location of its neoplasms. The motion and deformation of the prostate during TRUS-guided biopsy make it difficult to know the location of potential lesions in advance. When using this procedure, practitioners can only provide rough estimates of the lesion locations. Consequently, multiple biopsy samples are required to target one single lesion. In this study, a whole-pelvis model (comprising the rectum, bladder, pelvic muscles, prostate transitional zone (TZ), and peripheral zone (PZ)) is used for the simulation results. An isotropic hyperelastic approach (the Signorini model) was used for all the soft tissues except the vesical muscles. The vesical muscles are assumed to have linear elastic behavior due to the lack of experimental data to determine the constants involved in hyperelastic models. The tissue and organ geometry for the 3D meshes is taken from the existing literature. The biomechanical parameters were then obtained under different testing techniques described in the literature. The acquired parametric values for uniaxial stress/strain data are used in the Signorini model to examine the anatomical behavior of the pelvis model. Five mesh nodes representing small prostate lesions are selected prior to biopsy, and each lesion's final position is tracked when a TRUS probe force of 30 N is applied at the inner rectum wall. The Code_Aster open-source software is used for the numerical simulations. Moreover, the overall effects of pelvic organ deformation were demonstrated when TRUS-guided biopsy is induced. The deformation of the prostate and the displacement of the neoplasms showed that the material properties assigned to the organs altered the resulting lesion migration parametrically. As a result, the distance traveled by these lesions ranged between 3.77 and 9.42 mm. The lesion displacement and organ deformation are compared and analyzed with our previous study, in which we used linear elastic properties for all pelvic organs. Furthermore, a visual comparison of axial and sagittal slices taken from Magnetic Resonance Imaging (MRI) and TRUS images is also made with our preliminary study.

Keywords: code-aster, magnetic resonance imaging, neoplasms, transrectal ultrasound, TRUS-guided biopsy

Procedia PDF Downloads 85
10114 Application of ANN for Estimation of Power Demand of Villages in Sulaymaniyah Governorate

Authors: A. Majeed, P. Ali

Abstract:

Before designing an electrical system, load estimation is necessary for unit sizing and demand-generation balancing. The system could be a stand-alone system for a village, a grid-connected system, or renewable energy integrated into the grid, which is especially relevant as there are non-electrified villages in developing countries. In the classical model, the energy demand was found by multiplying the household appliances by their ratings and durations of operation; in this paper, by contrast, information that already exists for electrified villages is used to predict the demand, since villages have largely similar lifestyles. This paper describes a method to predict the average energy consumed every two months by each consumer living in a village using an Artificial Neural Network (ANN). The input data were collected with a regional survey of samples of consumers representing typical types of living, household appliances, and energy consumption, and the output data were collected from the administration office of Piramagrun for each corresponding consumer. The results of this study show that the average demand of different consumers from four villages in different months throughout the year is approximately 12 kWh/day. The model estimates the average daily demand for every consumer with a mean absolute percentage error of 11.8%. The MathWorks software package MATLAB version 7.6.0, which contains the Neural Network Toolbox, was used.
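As a rough stand-in for the MATLAB Neural Network Toolbox model described above, the sketch below trains a small scikit-learn MLP regressor on survey-style features and reports the mean absolute percentage error. The feature names, synthetic data, and network size are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)

# Placeholder survey features per consumer: household size, appliance count,
# dwelling area (m2), income bracket; target: average energy use per two months (kWh).
n = 300
X = np.column_stack([
    rng.integers(2, 10, n),          # household size
    rng.integers(1, 15, n),          # appliance count
    rng.uniform(40, 250, n),         # dwelling area
    rng.integers(1, 5, n),           # income bracket
])
y = 60 + 25 * X[:, 0] + 12 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 40, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                   random_state=0))
model.fit(X_tr, y_tr)
mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
print(f"MAPE on held-out consumers: {100 * mape:.1f}%")
```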

Keywords: artificial neural network, load estimation, regional survey, rural electrification

Procedia PDF Downloads 122
10113 The Verification Study of Computational Fluid Dynamics Model of the Aircraft Piston Engine

Authors: Lukasz Grabowski, Konrad Pietrykowski, Michal Bialy

Abstract:

This paper presents the results of research to verify the combustion model of the Asz62-IR aircraft piston engine. This engine was modernized, and a new type of ignition system was developed for it. Due to the high cost of experiments on a nine-cylinder, 1,000 hp aircraft engine, a simulation technique should be applied; therefore, computational fluid dynamics simulation of the combustion process is a reasonable solution. Accordingly, tests for varied ignition advance angles were carried out, and the optimal value to be tested on a real engine was specified. The CFD model was created with the AVL Fire software. The engine in this research has two spark plugs for each cylinder, and the ignition advance angles had to be set separately for each spark plug. The results of the simulation were verified by comparing the pressure in the cylinder: the indicated pressure traces of the engine mounted on a test stand were compared. The real pressure trace was measured with an optical sensor mounted in a specially drilled hole between the valves, the OPTRAND pressure sensor, which was designed especially for engine combustion research. The indicated pressure was measured in cylinder no. 3. The engine was running at take-off power and was loaded by a propeller at a special test bench. The verification of the CFD simulation results was based on the results of the test bench studies. The simulated pressure trace obtained is within the measurement error of the optical sensor; this error is 1% and reflects the hysteresis and nonlinearity of the sensor. The real indicated pressure measured in the cylinder and the pressure taken from the simulation were compared, and it can be claimed that the verification of the CFD simulations based on the pressure is a success. The next step was to investigate the impact of changing the ignition advance timing of spark plugs 1 and 2 on the combustion process. Offsetting the ignition timing between spark plugs 1 and 2 results in longer and more uneven combustion of the mixture. The optimal point in terms of indicated power occurs when ignition is simultaneous for both spark plugs, but sufficiently separated ignition timings ensure that ignition will occur at all engine speeds and loads. This should be confirmed by a bench experiment on the engine. However, this simulation research enabled us to determine the optimal ignition advance angle to be implemented in the ignition control system. This knowledge allows us to set up the ignition point with two spark plugs to achieve as much power as possible.

Keywords: CFD model, combustion, engine, simulation

Procedia PDF Downloads 359
10112 Hard Disk Failure Predictions in Supercomputing System Based on CNN-LSTM and Oversampling Technique

Authors: Yingkun Huang, Li Guo, Zekang Lan, Kai Tian

Abstract:

Hard disk drive (HDD) failures in an exascale supercomputing system may lead to service interruptions, invalidate previous calculations, and cause permanent data loss. Therefore, initiating corrective actions before hard drive failures materialize is critical to the continued operation of jobs. In this paper, a highly accurate analysis model based on CNN-LSTM and an oversampling technique is proposed, which can correctly predict the necessity of a disk replacement even ten days in advance. Generally, learning-based methods perform poorly on training datasets with long-tail distributions, and fault prediction is a classic instance of this because of the scarcity of failure data. To overcome this problem, a new oversampling technique was employed to augment the data, and then an improved CNN-LSTM with a shortcut was built to learn more effective features. The shortcut transmits the results of the previous layer of the CNN and is used as the input of the LSTM model after weighted fusion with the output of the next layer. Finally, a detailed empirical comparison of 6 prediction methods is presented and discussed on a public dataset for evaluation. The experiments indicate that the proposed method predicts disk failure with 0.91 Precision, 0.91 Recall, 0.91 F-measure, and 0.90 MCC for a 10-day prediction horizon. Thus, the proposed algorithm is an efficient algorithm for predicting HDD failure in supercomputing.
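A compact Keras sketch of the kind of architecture described above is given below: a Conv1D stack whose earlier output is fused, with an assumed weight, with the next layer's output before feeding an LSTM, and SMOTE oversampling of the minority (failing-disk) class. Layer sizes, the fusion weight, the horizon, and the random data are all assumptions, not the authors' exact network or dataset.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_features = 10, 12             # e.g., 10 days of SMART attributes per sample

# Placeholder imbalanced dataset: label 1 = disk fails within the prediction horizon.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, timesteps, n_features))
y = (rng.random(2000) < 0.03).astype(int)

# SMOTE works on 2-D arrays, so flatten the time dimension before resampling.
X_flat, y_res = SMOTE(random_state=0).fit_resample(X.reshape(len(X), -1), y)
X_res = X_flat.reshape(-1, timesteps, n_features)

inputs = keras.Input(shape=(timesteps, n_features))
conv1 = layers.Conv1D(32, 3, padding="same", activation="relu")(inputs)
conv2 = layers.Conv1D(32, 3, padding="same", activation="relu")(conv1)
# Shortcut: weighted fusion of the previous layer's output with the next layer's.
alpha = 0.5
fused = layers.Add()([layers.Rescaling(alpha)(conv1),
                      layers.Rescaling(1 - alpha)(conv2)])
x = layers.LSTM(64)(fused)
outputs = layers.Dense(1, activation="sigmoid")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.Precision(), keras.metrics.Recall()])
model.fit(X_res, y_res, epochs=3, batch_size=64, validation_split=0.1)
```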

Keywords: HDD replacement, failure, CNN-LSTM, oversampling, prediction

Procedia PDF Downloads 78
10111 Inverted Diameter-Limit Thinning: A Promising Alternative for Mixed Populus tremuloides Stands Management

Authors: Ablo Paul Igor Hounzandji, Benoit Lafleur, Annie DesRochers

Abstract:

Introduction: Populus tremuloides [Michx] regenerates rapidly and abundantly by root suckering after harvest, creating stands with interconnected stems. Pre-commercial thinning can be used to concentrate growth on fewer stems to reach merchantability faster than un-thinned stands. However, conventional thinning methods are typically designed to reach even spacing between residual stems (1,100 stem ha⁻¹, evenly distributed), which can lead to treated stands consisting of weaker/smaller stems compared to the original stands. Considering the nature of P. tremuloides's regeneration, with large underground biomass of interconnected roots, aiming to keep the most vigorous and largest stems, regardless of their spatial distribution, inverted diameter-limit thinning could be more beneficial to post-thinning stand productivity because it would reduce the imbalance between roots and leaf area caused by thinning. Aims: This study aimed to compare stand and stem productivity of P. tremuloides stands thinned with a conventional thinning treatment (CT; 1,100 stem ha⁻¹, evenly distributed), two levels of inverted diameter-limit thinning (DL1 and DL2, keeping the largest 1100 or 2200 stems ha⁻¹, respectively, regardless of their spatial distribution) and a control unthinned treatment. Because DL treatments can create substantial or frequent gaps in the thinned stands, we also aimed to evaluate the potential of this treatment to recreate mixed conifer-broadleaf stands by fill-planting Picea glauca seedlings. Methods: Three replicate 21 year-old sucker-regenerated aspen stands were thinned in 2010 according to four treatments: CT, DL1, DL2, and un-thinned control. Picea glauca seedlings were underplanted in gaps created by the DL1 and DL2 treatments. Stand productivity per hectare, stem quality (diameter and height, volume stem⁻¹) and survival and height growth of fill-planted P. glauca seedlings were measured 8 year post-treatments. Results: Productivity, volume, diameter, and height were better in the treated stands (CT, DL1, and DL2) than in the un-thinned control. Productivity of CT and DL1 stands was similar 4.8 m³ ha⁻¹ year⁻¹. At the tree level, diameter and height of the trees in the DL1 treatment were 5% greater than those in the CT treatment. The average volume of trees in the DL1 treatment was 11% higher than the CT treatment. Survival after 8 years of fill planted P. glauca seedlings was 2% greater in the DL1 than in the DL2 treatment. DL1 treatment also produced taller seedlings (+20 cm). Discussion: Results showed that DL treatments were effective in producing post-thinned stands with larger stems without affecting stand productivity. In addition, we showed that these treatments were suitable to introduce slower growing conifer seedlings such as Picea glauca in order to re-create or maintain mixed stands despite the aggressive nature of P. tremuloides sucker regeneration.

Keywords: Aspen, inverted diameter-limit, mixed forest, populus tremuloides, silviculture, thinning

Procedia PDF Downloads 140
10110 Interval Bilevel Linear Fractional Programming

Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi

Abstract:

The Bilevel Programming (BP) model has been presented for a decision making process that consists of two decision makers in a hierarchical structure. In fact, BP is a model for a static two person game (the leader player in the upper level and the follower player in the lower level) wherein each player tries to optimize his/her personal objective function under dependent constraints; this game is sequential and non-cooperative. The decision making variables are divided between the two players and one’s choice affects the other’s benefit and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower) where the constraint region of the upper level problem is implicitly determined by the lower level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e. they may be interval. In this paper we develop an algorithm for solving interval bilevel linear fractional programming problems. That is to say, bilevel problems in which both objective functions are linear fractional, the coefficients are interval and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems have been derived and then, using the extended Charnes and Cooper transformation, each fractional problem can be reduced to a linear problem. Then we can find the best and the worst optimal values of the leader objective function by two algorithms.
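For reference, a sketch of the Charnes and Cooper transformation mentioned above, in generic notation rather than the authors' exact formulation: a linear fractional program is reduced to an ordinary linear program by a change of variables.

```latex
% Linear fractional program (generic notation):
\max_{x}\ \frac{c^{\top}x+\alpha}{d^{\top}x+\beta}
\quad\text{s.t.}\quad Ax \le b,\ x \ge 0,
\qquad d^{\top}x+\beta > 0 .

% Charnes-Cooper change of variables: t = 1/(d^{\top}x+\beta),\ y = t\,x :
\max_{y,\,t}\ c^{\top}y+\alpha t
\quad\text{s.t.}\quad Ay - bt \le 0,\quad d^{\top}y+\beta t = 1,\quad y \ge 0,\ t \ge 0 .

% The fractional optimum is recovered as x^{*} = y^{*}/t^{*}.
```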

Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients

Procedia PDF Downloads 444
10109 Role of Information and Communication Technology in Pharmaceutical Innovation: Case of Firms in Developing Countries

Authors: Ilham Benali, Nasser Hajji, Nawfel Acha

Abstract:

The pharmaceutical sector is facing various constraints related to research and development (R&D) costs, patent expiry, demand pressure, regulatory requirements, and the development of generics, which drive leading firms in the sector to undergo technological change and to shift to the biotechnological paradigm. Based on a large literature review, we present a background of the innovation trajectory in the pharmaceutical industry and the reasons behind this technological transformation. Then we investigate the role that Information and Communication Technology (ICT) is playing in this revolution. In order to situate pharmaceutical firms in developing countries within this trajectory and to examine the degree of their involvement in the innovation process, we did not find any previous empirical work or sources of gathered data that would allow us to analyze this phenomenon. Therefore, for the case of Morocco, we built this analysis from scratch by gathering relevant data from the last five years from different sources. As a result, only about 4% of all innovative drugs that gained access to the local market in the mentioned period are made locally, which substantiates that the industrial model of the pharmaceutical sector in developing countries is based on the 'license model'. Finally, we present an alternative, based on ICT use and big data tools, that can allow developing countries to shift from the status of simple consumers to active actors in the innovation process.

Keywords: biotechnologies, developing countries, innovation, information and communication technology, pharmaceutical firms

Procedia PDF Downloads 150
10108 The Effect of Artificial Intelligence on Communication and Information Systems

Authors: Sameh Ibrahim Ghali Hanna

Abstract:

Information systems (IS) are fairly crucial in the operation of private and public establishments in developing and developed countries. Developing countries are saddled with many project failures during the implementation of information systems. However, successful information systems are greatly needed in developing nations in order to enhance their economies. This paper is extremely important in view of the high failure rate of information systems in developing countries, which needs to be reduced to acceptable minimum levels by means of recommended interventions. The paper centers on a review of IS development in developing countries, gives evidence of IS successes and failures in developing countries, and posits a model to address the IS failures. The proposed model can then be utilized by developing countries to reduce their IS project implementation failure rate. A comparison is drawn between IS development in developing countries and developed countries. The paper provides valuable information to assist in reducing IS failure and in developing IS models and theories on IS development for developing countries.

Keywords: research information systems (RIS), research information, heterogeneous sources, data quality, data cleansing, science system, standardization artificial intelligence, AI, enterprise information system, EIS, integration developing countries, information systems, IS development, information systems failure, information systems success, information systems success model

Procedia PDF Downloads 19
10107 Urban Logistics Dynamics: A User-Centric Approach to Traffic Modelling and Kinetic Parameter Analysis

Authors: Emilienne Lardy, Eric Ballot, Mariam Lafkihi

Abstract:

Efficient urban logistics requires a comprehensive understanding of traffic dynamics, particularly as it pertains to the kinetic parameters influencing energy consumption and trip duration estimations. While real-time traffic information is increasingly accessible, current high-precision forecasting services embedded in route planning often function as opaque 'black boxes' for users. These services, typically relying on AI-processed counting data, fall short in accommodating the open design parameters essential for management studies, notably within supply chain management. This work revisits the modelling of traffic conditions in the context of city logistics, emphasizing its significance from the user's point of view, with two focuses. First, the focus is not on the vehicle flow but on the vehicles themselves and the impact of traffic conditions on their driving behaviour. This means broadening the range of studied indicators beyond vehicle speed to describe extensively the kinetic and dynamic aspects of driving behaviour. To achieve this, we leverage the Art.Kinema parameters, which are designed to characterize driving cycles. Second, this study examines how the driving context (i.e., factors exogenous to the traffic flow) determines this driving behaviour. Specifically, we explore how accurately the kinetic behaviour of a vehicle can be predicted based on a limited set of exogenous factors, such as time, day, road type, orientation, slope, and weather conditions. To answer this question, statistical analysis was conducted on real-world driving data, which includes high-frequency measurements of vehicle speed. A factor analysis and a generalized linear model were established to link the kinetic parameters with independent categorical contextual variables. The results include an assessment of the goodness of fit and the robustness of the models, as well as an overview of the models' outputs.
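As a small illustration of the second step, the sketch below regresses one kinematic indicator on categorical context variables with a Gaussian GLM in statsmodels. The variable names, data, and choice of indicator are placeholders; the study's own indicator set (Art.Kinema) and data are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Placeholder trip-level dataset: one kinematic indicator plus contextual factors.
df = pd.DataFrame({
    "mean_pos_accel": rng.gamma(2.0, 0.3, n),            # m/s^2, illustrative values
    "road_type": rng.choice(["urban", "arterial", "highway"], n),
    "time_of_day": rng.choice(["peak", "off_peak", "night"], n),
    "weather": rng.choice(["dry", "rain"], n),
    "slope": rng.choice(["flat", "uphill", "downhill"], n),
})

# Gaussian GLM (identity link): kinematic indicator ~ categorical context variables.
model = smf.glm(
    "mean_pos_accel ~ C(road_type) + C(time_of_day) + C(weather) + C(slope)",
    data=df,
).fit()
print(model.summary())
print("deviance-based fit measure:", 1 - model.deviance / model.null_deviance)
```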

Keywords: factor analysis, generalised linear model, real world driving data, traffic congestion, urban logistics, vehicle kinematics

Procedia PDF Downloads 63
10106 Adsorption Performance of Hydroxyapatite Powder in the Removal of Dyes in Wastewater

Authors: Aderonke A. Okoya, Oluwaseun A. Somoye, Omotayo S. Amuda, Ifeanyi E. Ofoezie

Abstract:

This study assessed the efficiency of Hydroxyapatite Powder (HAP) in the removal of dyes in wastewater in comparison with Commercial Activated Carbon (CAC), with a view to developing a cost-effective method that could be more environment-friendly. The HAP and CAC were used as adsorbents, while indigo dye was used as the adsorbate. The batch adsorption experiments were carried out by varying the initial concentration of the indigo dye, the contact time and the adsorbent dosage. Adsorption efficiency was characterized by adsorption isotherms using the Langmuir, Freundlich and D-R isotherm models. Physicochemical parameters of a textile industry wastewater were determined before and after treatment with the adsorbents. The results from the batch experiments showed that, at an initial adsorbate concentration of 125 mg/L in simulated wastewater, 0.9276 ± 0.004618 mg/g and 3.121 ± 0.006928 mg/g of indigo were adsorbed per unit time (qt) by HAP and CAC, respectively. The ratio of HAP to CAC required for the removal of indigo dye in simulated wastewater was 2:1. The adsorption data for the simulated wastewater fitted the Freundlich model well; the adsorption intensity (1/n) was 1.399 and 0.564 for HAP and CAC, respectively. This revealed that HAP formed weaker bonds with the dye than the electrostatic interactions present in CAC. The values of some physicochemical parameters (acidity, COD, Cr, Cd) of the textile wastewater decreased when it was treated with HAP. The study concluded that HAP, an environment-friendly adsorbent, could be effectively used to remove dye from textile industrial wastewater, with the added advantage of being regenerable.
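As an illustration of the isotherm-fitting step, the sketch below fits the linearised Freundlich model, log qe = log Kf + (1/n) log Ce, to hypothetical equilibrium data; the numbers are placeholders, not the measurements reported above.

    # Sketch: fit the linearised Freundlich isotherm log(qe) = log(Kf) + (1/n)*log(Ce)
    # to placeholder equilibrium data (Ce in mg/L, qe in mg/g).
    import numpy as np

    Ce = np.array([10.0, 25.0, 50.0, 75.0, 125.0])   # hypothetical equilibrium concentrations
    qe = np.array([0.35, 0.62, 1.10, 1.55, 2.40])    # hypothetical adsorbed amounts

    slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
    one_over_n = slope        # adsorption intensity, 1/n
    Kf = 10 ** intercept      # Freundlich capacity constant

    print(f"1/n = {one_over_n:.3f}, Kf = {Kf:.3f}")

A fitted 1/n above unity, as reported for HAP, is consistent with the interpretation above that HAP binds the dye more weakly than CAC, whose 1/n is below unity.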

Keywords: adsorption isotherm, commercial activated carbon, hydroxyapatite powder, indigo dye, textile wastewater

Procedia PDF Downloads 240
10105 Temperature-Stable High-Speed Vertical-Cavity Surface-Emitting Lasers with Strong Carrier Confinement

Authors: Yun Sun, Meng Xun, Jingtao Zhou, Ming Li, Qiang Kan, Zhi Jin, Xinyu Liu, Dexin Wu

Abstract:

Higher-speed short-wavelength vertical-cavity surface-emitting lasers (VCSELs) operating at high temperature are required for future optical interconnects. In this work, high-speed 850 nm VCSELs are designed, fabricated and characterized. The temperature-dependent static and dynamic performance of the devices is investigated using current-power-voltage and small-signal modulation measurements. Temperature-stable high-speed properties are obtained by employing highly strained multiple quantum wells and a short cavity length of half a wavelength. The temperature-dependent photon lifetimes and carrier radiative times are determined from the damping factor and resonance frequency obtained by fitting the intrinsic optical response with the two-pole transfer function. In addition, an analytical theoretical model including the strain effect is developed based on the model-solid theory. The calculation results indicate that the better high-temperature performance of the VCSELs can be attributed to the strong confinement of holes in the quantum wells, leading to an enhancement of the carrier transit time.
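As a sketch of the parameter-extraction step mentioned above, the code below fits a two-pole small-signal response, |H(f)|^2 = A·fr^4 / [(fr^2 − f^2)^2 + (f·γ/2π)^2] · 1/[1 + (f/fp)^2], to synthetic frequency-response data in order to recover the resonance frequency fr, damping factor γ and parasitic pole fp; all numerical values are illustrative, not measurements from this work.

    # Sketch: fit the two-pole modulation response (in dB) to placeholder S21 data
    # to extract resonance frequency, damping factor and parasitic cut-off.
    import numpy as np
    from scipy.optimize import curve_fit

    def two_pole_db(f, A, fr, gamma, fp):
        h2 = A * fr**4 / ((fr**2 - f**2)**2 + (f * gamma / (2 * np.pi))**2)
        h2 /= 1.0 + (f / fp)**2
        return 10.0 * np.log10(h2)

    rng = np.random.default_rng(0)
    f = np.linspace(0.1, 25.0, 200)                    # GHz
    measured = two_pole_db(f, 1.0, 12.0, 30.0, 20.0)   # synthetic "measurement"
    measured += rng.normal(0.0, 0.2, f.size)           # added measurement noise

    popt, _ = curve_fit(two_pole_db, f, measured, p0=[1.0, 10.0, 20.0, 15.0])
    A, fr, gamma, fp = popt
    print(f"fr = {fr:.2f} GHz, damping = {gamma:.1f} ns^-1, parasitic pole = {fp:.1f} GHz")

The damping factor and resonance frequency extracted in this way are what allow the photon lifetime and carrier times to be estimated, for example through the K-factor relation γ = K·fr^2 + γ0.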

Keywords: vertical cavity surface emitting lasers, high speed modulation, optical interconnects, semiconductor lasers

Procedia PDF Downloads 125
10104 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI

Authors: Rutej R. Mehta, Michael A. Chappell

Abstract:

Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique, in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects; the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which leads to the introduction of errors in CBF. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT – and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5s-1.3s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error with fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25s even when dispersion effects were ignored. Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common models for dispersion with dispersion levels similar to those that have been observed in literature.
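For reference, the white-paper quantification under discussion uses the single-compartment formula CBF = 6000·λ·ΔM·exp(PLD/T1b) / [2·α·T1b·M0·(1 − exp(−τ/T1b))] in ml/100g/min. The sketch below evaluates it for illustrative pCASL parameters; the numeric values are placeholders, not data from this study.

    # Sketch: white-paper pCASL CBF quantification (single compartment, no dispersion).
    # All parameter values are illustrative placeholders.
    import numpy as np

    def wp_cbf(delta_m, m0, pld, tau, t1b=1.65, alpha=0.85, lam=0.9):
        """CBF in ml/100g/min.
        delta_m: control - label difference signal; m0: proton-density signal;
        pld: post-labelling delay (s); tau: label duration (s); t1b: T1 of blood (s);
        alpha: labelling efficiency; lam: blood-brain partition coefficient (ml/g)."""
        num = 6000.0 * lam * delta_m * np.exp(pld / t1b)
        den = 2.0 * alpha * t1b * m0 * (1.0 - np.exp(-tau / t1b))
        return num / den

    print(wp_cbf(delta_m=0.007, m0=1.0, pld=1.8, tau=1.8))   # roughly 60 ml/100g/min

Because the exp(PLD/T1b) term assumes an undispersed, plug-shaped bolus that has fully arrived in the tissue, any dispersion or long ATT breaks this assumption, which is the source of the errors quantified above.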

Keywords: arterial spin labelling, dispersion, MRI, perfusion

Procedia PDF Downloads 368
10103 Effect of the Drawbar Force on the Dynamic Characteristics of a Spindle-Tool Holder System

Authors: Jui-Pui Hung, Yu-Sheng Lai, Tzuo-Liang Luo, Kung-Da Wu, Yun-Ji Zhan

Abstract:

This study investigated the influence of the tool holder interface stiffness on the dynamic characteristics of a spindle-tool system. The interface stiffness is produced by the drawbar force on the tool holder, which in turn affects the spindle dynamics. In order to assess the influence of interface stiffness on the vibration characteristics of the spindle unit, we first created a three-dimensional finite element model of a high-speed spindle system integrated with a tool holder. The key point in creating the FEM model is the modelling of the rolling interface within the angular contact bearings and of the tool holder interface. The former can be simulated by introducing a series of spring elements between the inner and outer rings. The contact stiffness was calculated according to Hertz contact theory and the preload applied on the bearings. The interface stiffness of the tool holder was identified through experimental measurement and finite element modal analysis. Current results show that the dynamic stiffness was greatly influenced by the tool holder system. In addition, variations of modal damping, static stiffness and dynamic stiffness of the spindle-tool system were largely determined by the interface stiffness of the tool holder, which was in turn dependent on the drawbar force applied to the tool holder. Overall, this study demonstrates that identification of the interface characteristics of the spindle-tool holder is of great importance for refining the spindle tooling system to achieve optimum machining performance.
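As a minimal illustration of the bearing-interface modelling step described above, the sketch below estimates the tangent stiffness of a single preloaded ball contact from the Hertzian point-contact load-deflection law F = K·δ^(3/2); the contact constant and preload are hypothetical values, not those identified in the study.

    # Sketch: tangent stiffness of one preloaded Hertzian ball contact,
    # from F = K * delta**1.5 (K and the preload are illustrative placeholders).
    K = 4.0e10          # contact constant, N/m^1.5 (set by ball/raceway geometry and materials)
    F_preload = 200.0   # share of the bearing preload carried by one ball, N

    delta = (F_preload / K) ** (2.0 / 3.0)   # contact deflection, m
    k_contact = 1.5 * K * delta ** 0.5       # tangent stiffness dF/d(delta), N/m

    print(f"deflection = {delta * 1e6:.2f} um, contact stiffness = {k_contact / 1e6:.1f} N/um")

One such stiffness per ball, resolved into radial and axial components by the contact angle, would typically populate the spring elements between the inner and outer rings, while the tool holder interface stiffness is tuned to match the measured modal data.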

Keywords: dynamic stiffness, spindle-tool holder, interface stiffness, drawbar force

Procedia PDF Downloads 396
10102 Assessment of Incomplete Childhood Immunization Determinants in Ethiopia: A Nationwide Multilevel Study

Authors: Mastewal Endeshaw Getnet

Abstract:

Immunization is one of the most cost-effective and extensively adopted public health strategies for preventing child disability and mortality. The Expanded Program on Immunization (EPI) was launched in 1974 with the goal of providing life-saving vaccines to all children, building on the success of the global smallpox eradication program. According to a World Health Organization report, all countries should have achieved 90% vaccination coverage by 2020, yet many developing countries have still not reached that goal. Ethiopia is one of Africa's developing countries. The Ethiopian Ministry of Health (MoH) launched the EPI program in 1980, with the goal of achieving 90% coverage among children under the age of one year by 1990. Among children aged 12-23 months, complete immunization coverage was 47% based on the 2019 Ethiopian Demographic and Health Survey (EDHS) report. The coverage varies across administrative regions, ranging from 21% in the Afar region to 89% in the Amhara region. Identifying risk factors for incomplete immunization among children is therefore a key challenge, particularly in Ethiopia, which has large geographical diversity and a projected population of 119.96 million in 2022. Despite being a critical and challenging issue, it remains open and has not yet been fully investigated. A few previous studies have recently assessed the determinants of incomplete childhood immunization; however, the majority were cross-sectional surveys that assessed only EPI coverage. Motivated by this gap, this study focuses on investigating the determinants associated with incomplete immunization among Ethiopian children in order to help raise full immunization coverage. Moreover, we consider both individual and service performance-related factors when investigating the determinants of incomplete childhood immunization. Consequently, we adopt an ecological model, which combines individual and environmental factors and provides a multilevel framework for exploring the determinants related to health behaviors. Data from the 2021 Ethiopian Demographic and Health Survey will be used to achieve the objective of this study. The findings will be useful to the Ethiopian government and other public health institutions in improving childhood immunization coverage based on the identified risk determinants.
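The multilevel ecological framework described above is commonly operationalised as a random-intercept (mixed-effects) logistic regression, with children nested in enumeration clusters or regions. The sketch below shows one such specification with hypothetical variable names; it is not the authors' final model nor the actual EDHS variable coding.

    # Sketch: random-intercept logistic regression of incomplete immunization on
    # child- and service-level covariates, with a random intercept per cluster.
    # All file and variable names are illustrative placeholders.
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    df = pd.read_csv("edhs_children_12_23m.csv")   # hypothetical extract: one row per child

    model = BinomialBayesMixedGLM.from_formula(
        "incomplete ~ maternal_education + wealth_index + anc_visits + distance_problem",
        {"cluster": "0 + C(cluster_id)"},          # random intercept per enumeration cluster
        data=df,
    )
    result = model.fit_vb()                        # variational Bayes estimation
    print(result.summary())

DHS survey weights and the stratified two-stage sampling design would still need to be accounted for before any substantive interpretation of the coefficients.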

Keywords: incomplete immunization, children, ethiopia, ecological model

Procedia PDF Downloads 38