Search results for: classical frenectomy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 988

148 Comparative Analysis of the Impact of Urbanization on Land Surface Temperature in the United Arab Emirates

Authors: A. O. Abulibdeh

Abstract:

The aim of this study is to investigate and compare changes in Land Surface Temperature (LST) as a function of urbanization, particularly land use/land cover change, in three cities in the UAE, namely Abu Dhabi, Dubai, and Al Ain. The assessment will take place at the macro- and micro-levels. At the macro-level, a comparative assessment will be made between the three cities; at the micro-level, the study will compare the effects of different land use/land cover types on the LST. This will provide clear, quantitative, city-specific information on the relationship between urbanization and local intra-urban spatial LST variation in the three cities. The main objectives of this study are 1) to investigate the development of LST at the macro- and micro-levels between and within the three cities over a two-decade period, and 2) to examine the impact of different types of land use/land cover on the spatial distribution of LST. Because these three cities face a harsh arid climate, it is hypothesized that (1) urbanization affects, and is connected to, the spatial changes in LST; (2) different land use/land cover types have different impacts on the LST; and (3) changes in the spatial configuration of land use and vegetation concentration over time control the urban microclimate at the city scale and the macroclimate at the country scale. The study will be carried out over a 20-year period (1996-2016) and throughout the whole year, comparing two distinct periods with different thermal characteristics: the cool/cold period from November to March and the warm/hot period from April to October. The best-practice research method for this topic is to use remote sensing data to target different aspects of impacts on natural and anthropogenic systems. The project will follow classical remote sensing and mapping techniques to investigate the impact of urbanization, mainly changes in land use/land cover, on LST. The investigation will be performed in two stages. In stage one, remote sensing data will be used to investigate the impact of urbanization on LST at the macroclimate level, where the LST and Urban Heat Island (UHI) will be compared across the three cities using data from the past two decades. Stage two will investigate the impact at the microclimate scale by examining the LST and UHI for particular land use/land cover types. In both stages, LST and urban land cover maps will be generated over the study area. The outcome of this study should represent an important contribution to recent urban climate studies, particularly in the UAE. Based on the aim and objectives of this study, the expected outcomes are as follows: i) to determine the increase or decrease of LST as a result of urbanization in these three cities, and ii) to determine the effect of different land uses/land covers on increasing or decreasing the LST.
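For readers who want to see the LST mapping step in code, a minimal sketch follows, assuming a classical single-channel retrieval with Landsat 8 band-10 calibration constants; the constants, array values, and function names are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

# Assumed Landsat 8 Band 10 calibration constants (illustrative metadata values).
ML, AL = 3.342e-4, 0.1          # radiance rescaling gain / offset
K1, K2 = 774.8853, 1321.0789    # thermal conversion constants [K]
WAVELENGTH = 10.895e-6          # band-10 effective wavelength [m]
RHO = 1.438e-2                  # h*c/k_B [m K]

def land_surface_temperature(dn, emissivity):
    """Classical single-channel LST retrieval from a thermal-band DN array."""
    radiance = ML * dn + AL                       # DN -> TOA spectral radiance
    bt = K2 / np.log(K1 / radiance + 1.0)         # radiance -> brightness temperature [K]
    lst = bt / (1.0 + (WAVELENGTH * bt / RHO) * np.log(emissivity))
    return lst - 273.15                           # Kelvin -> Celsius

# Toy usage: a 2x2 thermal scene with vegetated (e=0.99) and built-up (e=0.92) pixels.
dn = np.array([[21500.0, 23800.0], [24500.0, 26000.0]])
eps = np.array([[0.99, 0.92], [0.92, 0.92]])
print(land_surface_temperature(dn, eps))
```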

Keywords: land use/land cover, global warming, land surface temperature, remote sensing

Procedia PDF Downloads 248
147 Analyzing the Perception of Social Networking Sites as a Learning Tool among University Students: Case Study of a Business School in India

Authors: Bhaskar Basu

Abstract:

Universities and higher education institutes are finding it increasingly difficult to engage students fruitfully through traditional pedagogic tools. Web 2.0 technologies comprising social networking sites (SNSs) offer a platform for students to collaborate and share information, thereby enhancing their learning experience. Despite the potential and reach of SNSs, their use has been limited in academic settings promoting higher education. The purpose of this paper is to assess the perception of social networking sites among business school students in India and analyze their role in enhancing the quality of student experiences in a business school, leading to the proposal of an agenda for future research. In this study, more than 300 students of a reputed business school were involved in a survey of their preferences for different social networking sites and their perceptions of and attitudes towards these sites. A questionnaire with three major sections was designed, validated and distributed among a sample of students, the research method being descriptive in nature. Crucial questions were addressed to the students concerning time commitment, reasons for usage, nature of interaction on these sites, and the propensity to share information leading to direct and indirect modes of learning. The survey was further supplemented with focus group discussions to analyze the findings. The paper notes the resistance to the adoption of new technology by a section of business school faculty, who are staunch supporters of classical "face-to-face" instruction. In conclusion, social networking sites like Facebook and LinkedIn provide new avenues for students to express themselves and to interact with one another. Universities could take advantage of the new ways in which students are communicating with one another. Although interactive educational options such as Moodle exist, social networking sites are rarely used for academic purposes. Using this medium opens new ways of academically-oriented interaction, where faculty could discover more about students' interests, and students, in turn, might express and develop hitherto unknown intellectual facets of their lives. This study also highlights the enormous potential of mobile phones as a tool for "blended learning" in business schools going forward.

Keywords: business school, India, learning, social media, social networking, university

Procedia PDF Downloads 264
146 Soil Quality Response to Long-Term Intensive Resources Management and Soil Texture

Authors: Dalia Feiziene, Virginijus Feiza, Agne Putramentaite, Jonas Volungevicius, Kristina Amaleviciute, Sarunas Antanaitis

Abstract:

Investigations on soil conservation are among the most important topics in modern agronomy. Soil management practices have a great influence on soil physico-chemical quality and GHG emission. Research objective: to reveal the sensitivity and vitality of soils with different texture to long-term anthropogenisation on a Cambisol in Central Lithuania and to compare them with non-anthropogenised soil resources. Methods: Two long-term field experiments (loam on loam; sandy loam on loam) with different management intensity were evaluated. Disturbed and undisturbed soil samples were collected from the 5-10, 15-20 and 30-35 cm depths. Soil available P and K contents were determined by ammonium lactate extraction, total N by the dry combustion method, SOC content by the Tyurin titrimetric (classical) method, and texture by the pipette method. In undisturbed core samples, soil pore volume distribution and plant available water (PAW) content were determined. A closed chamber method was applied to quantify soil respiration (SR). Results: Long-term resources management changed soil quality. In soil with loam texture, within the 0-10, 10-20 and 30-35 cm soil layers, significantly higher PAW, SOC and mesoporosity (MsP) were found under no-tillage (NT) than under conventional tillage (CT). However, total porosity (TP) under NT was significantly higher only in the 0-10 cm layer. MsP acted as the dominant factor for N, P and K accumulation in the corresponding layers. P content in all soil layers was higher under NT than under CT. N and K contents were significantly higher than under CT only in the 0-10 cm layer. In soil with sandy loam texture, a significant increase in SOC, PAW, MsP, N, P and K under NT occurred only in the 0-10 cm layer. TP under NT was significantly lower in all layers. PAW acted as a strong dominant factor for N, P and K accumulation: the higher the PAW, the higher the NPK contents. Unlike CT, NT did not secure chemical quality within the deeper layers. Long-term application of mineral fertilisers significantly increased SOC and soil NPK contents, primarily in the top-soil. Enlarged fertilization led to significantly higher leaching of nutrients to deeper soil layers (CT) and increased hazards of top-soil pollution. Straw returning significantly increased SOC and NPK accumulation in the top-soil. The SR on sandy loam was significantly higher than on loam. Under dry weather conditions, on loam SR was higher under NT than under CT; on sandy loam, SR was higher under CT than under NT. NPK fertilizers promoted significantly higher SR in both dry and wet years, but suppressed SR on sandy loam during a usual year. Non-anthropogenised soil had a similar SOC and NPK distribution within the 0-35 cm layer, depending on the genesis of the soil profile horizons.

Keywords: fertilizers, long-term experiments, soil texture, soil tillage, straw

Procedia PDF Downloads 299
145 CSoS-STRE: A Combat System-of-System Space-Time Resilience Enhancement Framework

Authors: Jiuyao Jiang, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge

Abstract:

Modern warfare has transitioned from the paradigm of isolated combat forces to system-to-system confrontations due to advancements in combat technologies and application concepts. A combat system-of-systems (CSoS) is a combat network composed of independently operating entities that interact with one another to provide overall operational capabilities. Enhancing the resilience of CSoS is garnering increasing attention due to its significant practical value in optimizing network architectures, improving network security and refining operational planning. Accordingly, a unified framework called CSoS space-time resilience enhancement (CSoS-STRE) has been proposed, which enhances the resilience of CSoS by incorporating spatial features. Firstly, a multilayer spatial combat network model has been constructed, which incorporates an information layer depicting the interrelations among combat entities based on the OODA loop, along with a spatial layer that considers the spatial characteristics of equipment entities, thereby accurately reflecting the actual combat process. Secondly, building upon the combat network model, a spatiotemporal resilience optimization model is proposed, which reformulates the resilience optimization problem as a classical linear optimization model with spatial features. Furthermore, the model is extended from scenarios without obstacles to those with obstacles, thereby further emphasizing the importance of spatial characteristics. Thirdly, a resilience-oriented recovery optimization method based on an improved non-dominated sorting genetic algorithm II (R-INSGA) is proposed to determine the optimal recovery sequence for the damaged entities. This method not only considers spatial features but also provides the optimal travel path for multiple recovery teams. Finally, the feasibility, effectiveness, and superiority of the CSoS-STRE are demonstrated through a case study. Simultaneously, under deliberate attack conditions based on degree centrality and maximum operational loop performance, the proposed CSoS-STRE method is compared with six baseline recovery strategies, which are based on performance, time, degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. The comparison demonstrates that CSoS-STRE achieves faster convergence and superior performance.
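The recovery-sequencing step rests on Pareto ranking of candidate solutions. Below is a minimal sketch of the fast non-dominated sorting at the core of NSGA-II-type methods such as the R-INSGA named above; the two toy objectives (total travel time, residual capability loss) are illustrative assumptions, not the paper's model.

```python
def fast_nondominated_sort(objs):
    """Rank solutions into Pareto fronts; objs[i] is a tuple of objectives to minimize."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # indices that solution i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if all(a <= b for a, b in zip(objs[i], objs[j])) and objs[i] != objs[j]:
                dominated_by[i].append(j)   # i weakly dominates j with one strict
            elif all(a >= b for a, b in zip(objs[i], objs[j])) and objs[i] != objs[j]:
                dom_count[i] += 1           # j dominates i
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Toy objectives for four recovery sequences: (total travel time, residual capability loss).
print(fast_nondominated_sort([(3, 9), (4, 7), (5, 5), (6, 6)]))
# -> [[0, 1, 2], [3]]: the first three are mutually non-dominated; the last is dominated.
```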

Keywords: space-time resilience enhancement, resilience optimization model, combat system-of-systems, recovery optimization method, no-obstacles and obstacles

Procedia PDF Downloads 15
144 Fracture Toughness Characterizations of Single Edge Notch (SENB) Testing Using DIC System

Authors: Amr Mohamadien, Ali Imanpour, Sylvester Agbo, Nader Yoosef-Ghodsi, Samer Adeeb

Abstract:

The fracture toughness resistance curve (e.g., the J-R curve and the crack tip opening displacement (CTOD) or δ-R curve) is important in facilitating strain-based design and integrity assessment of oil and gas pipelines. This paper aims to present laboratory experimental data to characterize the fracture behavior of pipeline steel. The influential parameters associated with the fracture of API 5L X52 pipeline steel, including different initial crack sizes, were experimentally investigated using single edge notch bend (SENB) specimens. A total of 9 small-scale specimens with different crack length to specimen depth ratios were prepared and tested in single edge notch bending (SENB). ASTM E1820 and BS7448 provide testing procedures to construct the fracture resistance curve (Load-CTOD, CTOD-R, or J-R) from test results. However, these procedures are limited by standard specimen dimensions, displacement gauges, and calibration curves. To overcome these limitations, this paper presents the use of small-scale specimens and a 3D digital image correlation (DIC) system to extract the parameters required for fracture toughness estimation. Fracture resistance curve parameters in terms of crack mouth opening displacement (CMOD), crack tip opening displacement (CTOD), and crack growth length (∆a) were extracted from the test results by utilizing the DIC system, and an improved regression-fitted resistance function (CTOD vs. crack growth, or J-integral vs. crack growth) that accounts for a variety of initial crack sizes was constructed and presented. The obtained results were compared to the available results of classical physical measurement techniques, and acceptable agreement was observed. Moreover, a case study was implemented to estimate the maximum strain value that initiates stable crack growth. This might be of interest for developing more accurate strain-based damage models. The results of laboratory testing in this study offer a valuable database to develop and validate damage models that are able to predict crack propagation of pipeline steel, accounting for the influential parameters associated with fracture toughness.
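As a concrete illustration of the resistance-curve fitting step, here is a minimal sketch fitting the ASTM E1820 power-law form J = C1·(Δa)^C2 by log-log least squares; the data points are invented placeholders standing in for DIC-derived measurements, not the study's data.

```python
import numpy as np

# Placeholder (delta_a [mm], J [kJ/m^2]) pairs standing in for DIC-derived test data.
da = np.array([0.2, 0.5, 1.0, 1.5, 2.0])
J = np.array([180.0, 310.0, 470.0, 590.0, 690.0])

# ASTM E1820 resistance-curve form J = C1 * da**C2, linearized as ln J = ln C1 + C2 ln da.
C2, lnC1 = np.polyfit(np.log(da), np.log(J), 1)
C1 = np.exp(lnC1)
print(f"J-R fit: J = {C1:.1f} * da^{C2:.3f}")

# Predicted J at a candidate crack extension, e.g. when intersecting offset lines.
print(f"J(1.2 mm) = {C1 * 1.2**C2:.0f} kJ/m^2")
```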

Keywords: fracture toughness, crack propagation in pipeline steels, CTOD-R, strain-based damage model

Procedia PDF Downloads 63
143 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress-tests on interest rate, equity, property, credit and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature, simplifying the standard formula into a quadratic function. But to our knowledge, it is the first time that the standard formula of the market SCR is used in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm, to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement, compared to a classical Markowitz approach based on the historical volatility. A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures including the market SCR and its sub-modules is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It was shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of having a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
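To make the aggregation step concrete, a minimal sketch of the standard-formula combination of sub-module SCRs follows. The sub-module values and the correlation matrix below are illustrative placeholders: the regulatory matrix is scenario-dependent (e.g. the interest-rate correlation parameter switches between the up and down shocks), so this is not the actual Solvency II calibration.

```python
import numpy as np

# Illustrative sub-module SCRs: interest, equity, property, spread, currency, concentration.
scr = np.array([40.0, 120.0, 25.0, 60.0, 30.0, 10.0])

# Illustrative correlation matrix in the spirit of the Solvency II standard formula.
C = np.array([
    [1.00, 0.50, 0.50, 0.50, 0.25, 0.00],
    [0.50, 1.00, 0.75, 0.75, 0.25, 0.00],
    [0.50, 0.75, 1.00, 0.50, 0.25, 0.00],
    [0.50, 0.75, 0.50, 1.00, 0.25, 0.00],
    [0.25, 0.25, 0.25, 0.25, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])

# Standard-formula aggregation: SCR_mkt = sqrt(scr^T C scr). The square root of a
# quadratic form is non-differentiable at the origin, and once each scr_i is itself a
# non-linear function of portfolio weights the criterion becomes non-convex -
# hence the bundle / BFGS-SQP solver combination described above.
scr_mkt = float(np.sqrt(scr @ C @ scr))
print(f"Aggregated market SCR: {scr_mkt:.1f}")
```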

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 117
142 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization

Authors: Younis Elhaddad, Alfonso Ortega

Abstract:

Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total amount of oil produced in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology. Many of them apply well-known numerical methods; some have taken into account the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic searching engine into which they can introduce their knowledge in a format close to the one used in their domain, and get solutions comprehensible in the same terms as well. Our earlier proposals introduced into the genetic engine highly expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use, due to the computational resources involved in the formal models. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) for which applying genetic approaches seems promising. After analyzing the state of the art of this topic, we chose a previous work from the literature that tackles the problem by means of numerical methods. This contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We have designed a classical, simple genetic algorithm just to try to reproduce their results and to understand the problem in depth. We could easily incorporate the well model and the well data used by the authors, and translate their mathematical model, to be numerically optimized, into a proper fitness function. We have analyzed the 100 well curves they use in their experiment, and similar results were observed; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well as reported by them. We have identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could also be interesting to automatically propose other mathematical models to fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
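As a hedged illustration of the kind of classical, simple genetic algorithm described above, here is a minimal sketch allocating a fixed total of lift gas among wells; the concave performance curves, parameters, and constants are toy assumptions, not the well data from the cited work.

```python
import random

random.seed(1)
N_WELLS, TOTAL_GAS = 5, 10.0  # toy field: 5 wells, 10 units of lift gas available

def oil_rate(q, a, b):
    """Toy concave gas-lift performance curve: production rises, then flattens and declines."""
    return a * q - b * q * q

PARAMS = [(3.0, 0.20), (2.5, 0.15), (2.0, 0.10), (3.5, 0.30), (1.5, 0.05)]

def fitness(alloc):
    return sum(oil_rate(q, a, b) for q, (a, b) in zip(alloc, PARAMS))

def normalize(alloc):
    s = sum(alloc)
    return [TOTAL_GAS * q / s for q in alloc]  # repair to satisfy the total-gas constraint

pop = [normalize([random.random() for _ in range(N_WELLS)]) for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                          # truncation selection
    children = []
    while len(children) < 20:
        p1, p2 = random.sample(parents, 2)
        child = [(x + y) / 2 for x, y in zip(p1, p2)]         # arithmetic crossover
        i = random.randrange(N_WELLS)
        child[i] = max(0.0, child[i] + random.gauss(0, 0.5))  # Gaussian mutation
        children.append(normalize(child))
    pop = parents + children

best = max(pop, key=fitness)
print([round(q, 2) for q in best], round(fitness(best), 2))
```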

Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production

Procedia PDF Downloads 162
141 Analytical and Numerical Studies on the Behavior of a Freezing Soil Layer

Authors: X. Li, Y. Liu, H. Wong, B. Pardoen, A. Fabbri, F. McGregor, E. Liu

Abstract:

The target of this paper is to investigate how saturated poroelastic soils subjected to freezing temperatures behave and how different boundary conditions intervene and affect the thermo-hydro-mechanical (THM) responses, based on a particular but classical configuration: the finite homogeneous soil layer studied by Terzaghi. The essential relations governing the constitutive behavior of a freezing soil are first recalled: ice crystal-liquid water thermodynamic equilibrium, hydromechanical constitutive equations, momentum balance, water mass balance, and the thermal diffusion equation, in the general, non-linear case where material parameters are state-dependent. The system of equations is first linearized, assuming all material parameters to be constants, in particular the permeability to liquid water, which should depend on the ice content. Two analytical solutions, obtained via the classic Laplace transform, are then developed, accounting for two different sets of boundary conditions. Afterward, the general non-linear equations with state-dependent parameters are solved using the commercial finite-element code COMSOL to obtain numerical results. The validity of this numerical modeling is partially verified using the analytical solution in the limiting case of state-independent parameters. Comparison between the results given by the linearized analytical solutions and the non-linear numerical model reveals that the above-mentioned linear computation will always underestimate the liquid pore pressure and displacement, whatever the hydraulic boundary conditions. In the non-linear model, the faster growth of ice crystals, accompanied by the subsequent reduction of permeability of the freezing soil layer, leads to a longer depressurization of the liquid water and slower settlement in the case where the ground surface is swiftly covered by a thin layer of ice, as well as a higher overall liquid pressure and greater swelling in the case of an impermeable ground surface. Nonetheless, the analytical solutions based on linearized equations give a correct order-of-magnitude estimate, especially for moderate temperature variations, and remain a useful tool for preliminary design checks.
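To make the linearized step concrete, the generic shape of the transformed problem is sketched below with placeholder symbols (a consolidation-type coefficient c_v and layer thickness h); this is a generic form, not the paper's exact notation or boundary conditions.

```latex
% Linearized diffusion-type equation for the liquid pore pressure p(z,t) on 0 < z < h:
\frac{\partial p}{\partial t} = c_v\,\frac{\partial^2 p}{\partial z^2}
% Laplace transform in time, \bar{p}(z,s) = \int_0^\infty p(z,t)\,e^{-st}\,dt, with p(z,0) = p_0:
s\,\bar{p}(z,s) - p_0 = c_v\,\frac{d^2 \bar{p}}{dz^2}
% i.e. an ordinary differential equation in z, solved for each set of hydraulic boundary
% conditions (e.g. drained surface \bar{p}(0,s) = 0, impermeable base d\bar{p}/dz|_{z=h} = 0)
% and then inverted back to the time domain.
```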

Keywords: chemical potential, cryosuction, Laplace transform, multiphysics coupling, phase transformation, thermodynamic equilibrium

Procedia PDF Downloads 80
140 Revisiting Historical Illustrations in the Age of Digital Anatomy Education

Authors: Julia Wimmers-Klick

Abstract:

In the contemporary study of anatomy, medical students utilize a diverse array of resources, including lab handouts, lectures, and, increasingly, digital media such as interactive anatomy apps and digital images. Notably, a significant shift has occurred, with fewer students possessing traditional anatomy atlases or books, reflecting a broader trend towards digital approaches like Virtual Reality, Augmented Reality, and web-based programs. This paper seeks to explore the evolution of anatomy education by contrasting current digital tools with historical resources, such as classical anatomical illustrations and atlases, to assess their relevance and potential benefits in modern medical education. Through a comprehensive literature review, the development of anatomical illustrations is traced from the textual descriptions of Galen to the detailed and artistic representations of Da Vinci, Vesalius, and later anatomists. The examination includes how the printing press facilitated the dissemination of anatomical knowledge, transforming covert dissections into public spectacles and formalized teaching practices. Historical illustrations, often influenced by societal, religious, and aesthetic contexts, not only served educational purposes but also reflected the prevailing medical knowledge and ethical standards of their times. Critical questions are raised about the place of historical illustrations in today's anatomy curriculum. Specifically, their potential to teach critical thinking, highlight the history of medicine, and offer unique insights into past societal conditions are explored. These resources are viewed in their context, including the lack of diversity and the presence of ethical concerns, such as the use of illustrations from unethical sources like Pernkopf’s atlas. In conclusion, while digital tools offer innovative ways to visualize and interact with anatomical structures, historical illustrations provide irreplaceable value in understanding the evolution of medical knowledge and practice. The study advocates for a balanced approach that integrates traditional and modern resources to enrich medical education, promote critical thinking, and provide a comprehensive understanding of anatomy. Future research should investigate the optimal combination of these resources to meet the evolving needs of medical learners and the implications of the digital shift in anatomy education.

Keywords: human anatomy, historical illustrations, historical context, medical education

Procedia PDF Downloads 21
139 Study of Bis(Trifluoromethylsulfonyl)Imide Based Ionic Liquids by Gas Chromatography

Authors: F. Mutelet, L. Cesari

Abstract:

Development of safer and environmentally friendly processes and products is needed to achieve sustainable production and consumption patterns, and ionic liquids, which are of great interest to the chemical and related industries because of their attractive solvent properties, should be considered. Ionic liquids are comprised of an asymmetric, bulky organic cation and a weakly coordinating organic or inorganic anion. The large number of possible combinations makes it possible to 'fine tune' the solvent properties for a specific purpose. The physical and chemical properties of ionic liquids are influenced not only by the nature of the cation and of the cation substituents but also by the polarity and the size of the anion. These features give ionic liquids numerous applications in organic synthesis, separation processes, and electrochemistry. Separation processes require a good knowledge of the behavior of organic compounds in ionic liquids. Gas chromatography is a useful tool to estimate the interactions between organic compounds and ionic liquids: retention data may be used to determine infinite-dilution thermodynamic properties of volatile organic compounds in ionic liquids. Among others, the activity coefficient at infinite dilution is a direct measure of the solute-ionic liquid interaction. In this work, infinite-dilution thermodynamic properties of volatile organic compounds in specific bis(trifluoromethylsulfonyl)imide-based ionic liquids, measured by gas chromatography, are presented. It was found that apolar compounds are not miscible in this family of ionic liquids. As expected, the solubility of organic compounds is related to their polarity and hydrogen-bonding capability. Through activity coefficient data, the performance of these ionic liquids was evaluated for different separation processes (benzene/heptane, thiophene/heptane and pyridine/heptane). The results indicate that ionic liquids may be used for the extraction of polar compounds (aromatics, alcohols, pyridine, thiophene, tetrahydrofuran) from aliphatic media. For example, 1-benzylpyridinium bis(trifluoromethylsulfonyl)imide and 1-cyclohexylmethyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide are more efficient for the extraction of aromatics or pyridine from aliphatics than classical solvents. Ionic liquids with long alkyl chains present high capacity values, but their selectivity values are low. In conclusion, we have demonstrated that specific bis(trifluoromethylsulfonyl)imide-based ILs containing a polar chain grafted on the cation (for example benzyl or cyclohexyl) perform considerably better in separation processes.
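For context, the classical gas-chromatographic relation behind such measurements is recalled below in its simplest, ideal-vapor form with generic symbols; the virial corrections for vapor-phase non-ideality usually applied in practice are omitted here.

```latex
% Activity coefficient of solute 1 at infinite dilution in ionic liquid 3 (ideal-vapor form):
\ln \gamma_{1,3}^{\infty} \;=\; \ln\!\left(\frac{n_3\,R\,T}{V_N\;p_1^{0}}\right)
% n_3: moles of ionic liquid in the column;  V_N: net retention volume of the solute;
% p_1^0: saturated vapor pressure of the solute at the column temperature T.
% Selectivity and capacity for a separation i/j then follow as
S_{ij}^{\infty} = \frac{\gamma_{i}^{\infty}}{\gamma_{j}^{\infty}}, \qquad
k_{j}^{\infty} = \frac{1}{\gamma_{j}^{\infty}}
```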

Keywords: interaction organic solvent-ionic liquid, gas chromatography, solvation model, COSMO-RS

Procedia PDF Downloads 109
138 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives

Authors: Chen Guo, Heng Tang, Ben Niu

Abstract:

Clustering splits objects into different groups based on similarity, so that objects have higher similarity within the same group and lower similarity across different groups. Clustering can thus be treated as an optimization problem that maximizes intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, datasets often have complex characteristics: sparsity, overlap, high dimensionality, etc. When facing such datasets, simultaneously optimizing two or more objectives can yield better clustering results than optimizing a single objective. However, apart from objective-weighting methods, traditional clustering approaches have difficulty solving multi-objective data clustering problems. For this reason, evolutionary multi-objective optimization algorithms have been investigated to optimize multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. Specifically, first, to reduce the high computational complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework and then extended to a multi-objective form. Second, two learning strategies based on the two learning archives are proposed to guide the bacterial swarm to move in a better direction. On the one hand, the global best is selected from the global learning archive according to a convergence index and a diversity index. On the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. Based on these learning strategies, a chemotaxis operation is designed. Third, an elite learning strategy is designed to inject fresh momentum into the objects in the two learning archives: when the objects in these archives do not change for two consecutive iterations, randomly re-initializing one dimension of the objects prevents the proposed algorithm from falling into local optima. Fourth, to validate the performance of the proposed algorithm, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on evaluation indexes of datasets. To further verify the effectiveness and feasibility of the designed strategies in DC-MPBFOLA, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors on all evaluation indexes and clustering partitions. These results also indicate that the designed strategies positively influence the performance improvement of the original BFO.
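A minimal sketch of the two-archive guidance idea follows. The concrete index definitions, the blending weights, and the identification of decision space with objective space are illustrative simplifications invented for this toy, not the definitions used in DC-MPBFOLA.

```python
import random

random.seed(0)
# Toy bi-objective setting: each archived solution is a tuple of two objective values
# (both minimized); in this toy, positions live in the same 2-D space as the objectives.
global_archive = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.2)]
personal_archive = [(0.6, 0.7), (0.4, 0.9)]

def convergence(o):
    return sum(o)  # illustrative convergence index: smaller = closer to the ideal point

def diversity(o, archive):
    # illustrative diversity index: distance to the nearest other archive member
    return min(abs(o[0] - q[0]) + abs(o[1] - q[1]) for q in archive if q != o)

def select_gbest(archive):
    """Global best: favor convergence, break ties toward sparsely populated regions."""
    return min(archive, key=lambda o: convergence(o) - 0.5 * diversity(o, archive))

def select_pbest(archive, w=(0.5, 0.5)):
    """Personal best: smallest sum of weighted objectives."""
    return min(archive, key=lambda o: w[0] * o[0] + w[1] * o[1])

def chemotaxis(pos, gbest, pbest, step=0.1):
    """Move a bacterium toward a blend of both guides, plus a small random tumble."""
    return tuple(x + step * (0.5 * (g - x) + 0.5 * (p - x)) + random.uniform(-0.02, 0.02)
                 for x, g, p in zip(pos, gbest, pbest))

g, p = select_gbest(global_archive), select_pbest(personal_archive)
print(g, p, chemotaxis((0.8, 0.8), g, p))
```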

Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives

Procedia PDF Downloads 139
137 The Foundation Binary-Signals Mechanics and Actual-Information Model of Universe

Authors: Elsadig Naseraddeen Ahmed Mohamed

Abstract:

In contrast to the uncertainty and complementarity principles, it will be shown in the present paper that the probability of the simultaneous occupation of any definite values of coordinates by any definite values of momentum and energy, at any definite instant of time, can be described by a binary definite function. This function is equivalent to the difference between the numbers of occupation and evacuation epochs up to that time, and also to the number of exchanges between those occupation and evacuation epochs up to that time, modulo two. These binary definite quantities can be defined at every point of the real time line, so they form a binary signal that represents a complete mechanical description of physical reality. The times of these exchanges mark the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated, using the fact that the events of the universe actually extend along the positive and negative real time line in a single direction of extension as the number of exchanges increases. Hence there exists a non-invertible transformation matrix, defined as the product of an invertible rotation matrix and a non-invertible scaling matrix, which change the direction and the magnitude of the exchange-event vector, respectively. These non-invertible transformations will be called actual transformations, in contrast to information transformations, by which we can navigate the universe's events, transformed by actual transformations, backward and forward along the real time line. The information transformations will be derived as elements of a group, each associated with its corresponding actual transformation. The actual and information model of the universe will then be derived by assuming the existence of a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy, after which the universe begins its expansion in spacetime. This assumption makes superfluous the need for Laplace's demon, who at one moment could measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict all future and past events of the universe. We only need to establish analog-to-digital converters that sense the binary signals determining the boundaries of the occupation and evacuation epochs of the definite values of coordinates, relative to their origin, by the definite values of momentum and energy, as present events of the universe; from these, its past and future events can be predicted approximately, with high precision.

Keywords: binary-signal mechanics, actual-information model of the universe, actual-transformation, information-transformation, uncertainty principle, Laplace's demon

Procedia PDF Downloads 175
136 Relocating Migration for Higher Education: Analytical Account of Students' Perspective

Authors: Sumit Kumar

Abstract:

The present study aims to identify factors responsible for the internal migration of students other than the push and pull factors associated with the source and destination regions, respectively, as classified in classical geography. In that classification, the agency of the individual and of the family he or she belongs to is not recognized, although agency later became the centre of the argument for describing and analyzing migration in the New Economic theory of migration and the New Economics of labour migration, respectively. Against this backdrop, the present study aims to understand the agency of the individual and of family members regarding one's migration for higher education, and therefore draws upon the New Economic theory of migration and the New Economics of labour migration to identify the agency of individual or family in the context of migration. Further, migration for higher education comprises not only the decision to migrate but also the decisions of where to migrate (location) and which university, which college and which course to pursue. In order to understand the role of various individuals at the various stages of student migration, the present study draws on the social networking approach to migration, which identifies the individuals who facilitate the process of migration by reducing its negative externalities through sharing information and various other sorts of help with the migrant. Furthermore, this study also aims to rank the individuals who have helped migrants at various stages of migration for higher education in taking a decision, along with the factors responsible for their migration, on the basis of the migrants' perception. In order to fulfil the above-mentioned objectives, qualitative data (perceptions of respondents) were quantified employing frequency distribution analysis. Qualitative data were collected at two levels, with a questionnaire survey as the tool for data collection on both occasions: twenty-five students who had migrated to another state for the purpose of higher education were approached for a pre-questionnaire survey consisting of open-ended questions, while one hundred students belonging to the same clientele were approached for a questionnaire survey consisting of close-ended questions. This study has identified social pressure, peer group pressure and parental pressure - variables not constituting push and pull factors - as very important for students' migration; they were even ranked higher by the respondents than push factors. Further, respondents ranked themselves first, followed by their parents, when it comes to taking the various decisions attached to the process of migration. It can therefore be said, without sounding cynical, that factors other than push and pull factors facilitate the process of migration for higher education, not only in the decision to migrate but also at the other levels intrinsic to the process.

Keywords: agency, migration for higher education, perception, push and pull factors

Procedia PDF Downloads 242
135 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method

Authors: Emmanuel Ophel Gilbert, Williams Speret

Abstract:

The control of the flow behaviour of viscous fluids and of heat transfer occurring within a heated mini-channel is considered. The heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, one-half ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace and Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations are solved with the aid of a numerical simulation technique. This numerical simulation technique is further validated against accessible practical values, in comparison with the anticipated local thermal resistances. The roughness of the mini-channel, one of its physical limitations, was also predicted in this study; it affects the frictional factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this had a pro rata effect on the minor and major frictional losses, mostly at very small Reynolds numbers, circa 60-80. At these low Reynolds numbers, there is a decrease in viscosity and in the minor frictional losses as the temperature of the viscous liquids increases. Three equations and models were identified which support the numerical simulation via interpolation and integration of the variables extended to the walls of the mini-channel, yielding a reliable basis for engineering and technology calculations for turbulence-impacting jets in the near future. In reasoning toward a true governing equation that could support this control of the fluid flow, the Navier-Stokes equations were found to be tangential to this finding, though other physical factors related to these equations must still be checked to avoid uncertain turbulence of the fluid flow. This paradox is resolved within the framework of continuum mechanics, using the classical slip condition and an iteration scheme via the numerical simulation method that takes into account certain terms in the full Navier-Stokes equations, at the cost of dropping certain assumptions in the approximation. Concrete questions raised in the main body of the work are examined further in the appendices.
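For reference, the incompressible Navier-Stokes and thermal energy equations that the abstract appeals to are, in standard notation (a generic statement, not the authors' particular closure or slip model):

```latex
\nabla \cdot \mathbf{u} = 0, \qquad
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u}, \qquad
\rho c_p\left(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\right) = k\,\nabla^{2} T
% u: velocity, p: pressure, T: temperature, rho: density, mu: dynamic viscosity,
% c_p: specific heat, k: thermal conductivity.
```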

Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids

Procedia PDF Downloads 176
134 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, which is the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. Protein regression is the problem to solve in the first dataset, while variety classification is the problem to solve in the second. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and these results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
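A minimal sketch, assuming PyTorch, of a 3D-convolutional regressor of the general kind compared in the study follows; the layer sizes, the 224-band input shape, and the class name are illustrative choices, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Spectral3DCNN(nn.Module):
    """Toy 3D CNN mapping a (bands, H, W) hyperspectral crop to one value (e.g. protein %)."""
    def __init__(self, n_bands=224):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),  # spectro-spatial filter
            nn.ReLU(),
            nn.MaxPool3d((4, 2, 2)),          # downsample spectral axis faster than spatial
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # collapse remaining spectral/spatial axes
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):                      # x: (batch, 1, bands, H, W)
        z = self.features(x).flatten(1)
        return self.head(z)

model = Spectral3DCNN()
crop = torch.randn(2, 1, 224, 32, 32)  # two NIR-HSI crops, 224 bands, 32x32 pixels
print(model(crop).shape)               # -> torch.Size([2, 1])
```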

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 99
133 Analyzing Water Waves in Underground Pumped Storage Reservoirs: A Combined 3D Numerical and Experimental Approach

Authors: Elena Pummer, Holger Schuettrumpf

Abstract:

Underground pumped storage plants, an outstanding alternative to classical pumped storage plants, do not yet exist. They are needed to ensure the required balance between the production and demand of energy. As short- to medium-term storage, pumped storage plants have long been used economically, but their expansion is locally limited; the reasons are, in particular, the required topography and extensive human land use. Through the use of underground reservoirs instead of surface lakes, expansion options could be increased. While fulfilling the same functions, several hydrodynamic processes shape the specific design of the underground reservoirs and must be implemented in the planning process of such systems. A combined 3D numerical and experimental approach yields previously unknown results about the occurring wave types and their behavior as a function of different design and operating criteria. For the 3D numerical simulations, OpenFOAM was used and combined with an experimental approach in the laboratory of the Institute of Hydraulic Engineering and Water Resources Management at RWTH Aachen University, Germany. Using the finite-volume method and an explicit time discretization, a RANS simulation (k-ε) was run. Convergence analyses for different time discretizations, different meshes, etc., and clear comparisons between both approaches lead to the result that the numerical and experimental models can be combined and used as a hybrid model. Undular bores, partly with secondary waves, and breaking bores occurred in the underground reservoir. Different water levels and discharges change the global effects, defined as the time-dependent average of the water level, as well as the local processes, defined as the single, local hydrodynamic processes (water waves). Design criteria such as branches, directional changes, changes in cross-section or bottom slope, and changes in roughness have a great effect on the local processes; the global effects remain unaffected. Design calculations for underground pumped storage plants were developed on the basis of existing formulae and the results of the hybrid approach. Using the design calculations, reservoir heights as well as oscillation periods can be determined, informing the construction and operation possibilities of the plants. Consequently, future plants can be hydraulically optimized by applying the design calculations to the local boundary conditions.
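For orientation, the classical shallow-water bore relations often used in such design checks are recalled below; this is a generic hydraulic-jump result, not the authors' derived design formulae. A bore advancing into still water of depth h₁ and raising it to h₂ travels at

```latex
c = \sqrt{g\,h_2\,\frac{h_1 + h_2}{2\,h_1}}, \qquad
\mathrm{Fr}_1 = \frac{c}{\sqrt{g\,h_1}}
```

with undular bores observed at small bore Froude numbers (up to roughly 1.3-1.7 in the literature) and breaking bores above.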

Keywords: energy storage, experimental approach, hybrid approach, undular and breaking Bores, 3D numerical approach

Procedia PDF Downloads 213
132 The Link between Anthropometry and Fat-Based Obesity Indices in Pediatric Morbid Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Anthropometric measurements are essential for obesity studies. Waist circumference (WC) is the most frequently used measure, and along with hip circumference (HC), it appears in most equations derived for the evaluation of obese individuals. Morbid obesity is the most severe clinical form of obesity, and such individuals may also exhibit clinical findings leading to metabolic syndrome (MetS). It then becomes necessary to discriminate morbid obese children with (MOMetS+) and without (MOMetS-) MetS. Almost all obesity indices can differentiate obese (OB) children from children with normal body mass index (N-BMI); however, not all of them are capable of distinguishing MOMetS+ from MOMetS- children. A recently introduced anthropometric obesity index, waist circumference + hip circumference/2 ((WC+HC)/2), was confirmed to differentiate OB children from those with N-BMI; however, it has not been tested whether it will find clinical usage for the differential diagnosis of MOMetS+ and MOMetS- children. This study was designed to find out the availability of (WC+HC)/2 for this purpose and to compare its possible superiority over some other anthropometric or fat-based obesity indices. Forty-five MOMetS+ and forty-five MOMetS- children were included in the study. Participants submitted informed consent forms. The study protocol was approved by the Non-interventional Ethics Committee of Tekirdag Namik Kemal University. Anthropometric measurements were performed. Body mass index (BMI), waist-to-hip circumference ratio (W/H), (WC+HC)/2, trunk-to-leg fat ratio (TLFR), trunk-to-appendicular fat ratio (TAFR), trunk fat+leg fat/2 ((trunk+leg fat)/2), diagnostic obesity notation model assessment index-2 (D2I) and fat mass index (FMI) were calculated for both groups. Study data were analyzed statistically, and p<0.05 was accepted as the level of statistical significance. Statistically higher BMI, WC, (WC+HC)/2, and (trunk+leg fat)/2 values were found in MOMetS+ children than in MOMetS- children. No statistically significant difference was detected for W/H, TLFR, TAFR, D2I, or FMI between the two groups. The lack of difference between the groups in terms of FMI and D2I pointed out the fact that the recently developed fat-based index, (trunk+leg fat)/2, gives much more valuable information for the evaluation of MOMetS+ and MOMetS- children. Upon evaluation of the correlations, (WC+HC)/2 was strongly correlated with D2I and FMI in both the MOMetS+ and MOMetS- groups. Neither D2I nor FMI was correlated with W/H. Strong correlations were calculated between (WC+HC)/2 and (trunk+leg fat)/2 in both the MOMetS- (r=0.961; p<0.001) and MOMetS+ (r=0.936; p<0.001) groups. Partial correlations between (WC+HC)/2 and (trunk+leg fat)/2 after controlling for the effect of basal metabolic rate were r=0.726; p<0.001 in the MOMetS- group and r=0.932; p<0.001 in the MOMetS+ group; the correlation in the latter group was higher than in the first. In conclusion, the recently developed anthropometric obesity index (WC+HC)/2 and the fat-based obesity index (trunk+leg fat)/2 proved superior to the previously introduced classical obesity indices such as W/H, D2I and FMI in the differential diagnosis of MOMetS+ and MOMetS- children.
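A minimal sketch of the simple anthropometric and fat-based indices compared above follows; D2I is omitted because its formula is not given here, the helper names are illustrative, and the example numbers are placeholders rather than study data.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight normalized by height squared."""
    return weight_kg / height_m ** 2

def wc_hc_half(wc_cm, hc_cm):
    """(WC + HC) / 2 - the anthropometric index evaluated in the study."""
    return (wc_cm + hc_cm) / 2

def fat_mass_index(fat_mass_kg, height_m):
    """FMI: total fat mass normalized by height squared."""
    return fat_mass_kg / height_m ** 2

def trunk_leg_fat_half(trunk_fat_kg, leg_fat_kg):
    """(trunk fat + leg fat) / 2 - the fat-based index evaluated in the study."""
    return (trunk_fat_kg + leg_fat_kg) / 2

# Toy example child (placeholder values):
print(bmi(70, 1.45), wc_hc_half(96, 102),
      fat_mass_index(28, 1.45), trunk_leg_fat_half(14, 9))
```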

Keywords: children, hip circumference, metabolic syndrome, morbid obesity, waist circumference

Procedia PDF Downloads 289
131 CO₂ Conversion by Low-Temperature Fischer-Tropsch

Authors: Pauline Bredy, Yves Schuurman, David Farrusseng

Abstract:

To fulfill climate objectives, the production of synthetic e-fuels using CO₂ as a raw material appears to be part of the solution. In particular, the Power-to-Liquid (PtL) concept combines CO₂ with hydrogen supplied by water electrolysis powered by renewable sources, and is currently gaining interest as it allows the production of sustainable, fossil-free liquid fuels. The proposed process discussed here is an upgrading of the well-known Fischer-Tropsch synthesis: two cascade reactions in one pot, first the conversion of CO₂ into CO via the reverse water-gas shift (RWGS) reaction, followed by Fischer-Tropsch synthesis (FTS). Instead of using an Fe-based catalyst, which can carry out both reactions, we have chosen the strategy of decoupling the two functions (RWGS and FT) on two different catalysts within the same reactor. The FTS shall shift the equilibrium of the RWGS reaction (which alone would be limited to 15-20% conversion at 250°C) by converting the CO into hydrocarbons. This strategy enables optimization of the catalyst pair and thus a lowering of the reaction temperature, thanks to the equilibrium shift, so as to gain selectivity in the liquid fraction. The challenge lies in maximizing the activity of the RWGS catalyst, but also in making the FT catalyst highly selective. Methane production is the main concern, as the energy barrier of CH₄ formation is generally lower than that of the RWGS reaction, so the goal is to minimize methane selectivity. Here we report the study of different combinations of copper-based RWGS catalysts with different cobalt-based FTS catalysts. We investigated their behaviour under mild process conditions by means of high-throughput experimentation. Our results show that at 250°C and 20 bar, the cobalt catalysts mainly act as methanation catalysts. Indeed, CH₄ selectivity never drops below 80% despite the addition of various promoters (Nb, K, Pt, Cu) on the catalyst and its coupling with active RWGS catalysts. However, we show that the activity of the RWGS catalyst has an impact and can lead to selectivities toward longer hydrocarbon chains (C₂⁺) of about 10%. We studied the influence of the reduction temperature on the activity and selectivity of the tandem catalyst system; similar selectivity and conversion were obtained at reduction temperatures between 250-400°C. This raises the question of the active phase of the cobalt catalysts, which is currently being investigated by magnetic measurements and DRIFTS. Better results are expected when coupling the RWGS catalyst with a more selective FT catalyst. This was achieved using a cobalt/iron FTS catalyst, with which the CH₄ selectivity dropped to 62% at 265°C, 20 bar, and a GHSV of 2500 ml/h/gcat. We propose that the conditions used for the cobalt catalysts could have caused this methanation, because these catalysts are known to perform best around 210°C in classical FTS, whereas iron catalysts are more flexible and are also known to have RWGS activity.
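For reference, the competing reaction channels discussed above are summarized below with standard textbook stoichiometries and approximate standard reaction enthalpies; these are literature values, not quantities measured in this work.

```latex
\mathrm{CO_2 + H_2 \rightleftharpoons CO + H_2O}, \qquad
  \Delta H^{\circ}_{298} \approx +41\ \mathrm{kJ\,mol^{-1}} \quad \text{(RWGS, endothermic, equilibrium-limited)}
\mathrm{n\,CO + (2n{+}1)\,H_2 \rightarrow C_nH_{2n+2} + n\,H_2O} \quad \text{(Fischer-Tropsch chain growth)}
\mathrm{CO + 3\,H_2 \rightarrow CH_4 + H_2O}, \qquad
  \Delta H^{\circ}_{298} \approx -206\ \mathrm{kJ\,mol^{-1}} \quad \text{(methanation, to be suppressed)}
```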

Keywords: cobalt-copper catalytic systems, CO₂-hydrogenation, Fischer-Tropsch synthesis, hydrocarbons, low-temperature process

Procedia PDF Downloads 57
130 Revolutions and Cyclic Patterns in Chinese Town Planning: The Case-Study of Shenzhen

Authors: Domenica Bona

Abstract:

Colin Chant and David Goodman argue that historians of Chinese pre-industrial cities tend to underestimate revolutions and overestimate cyclic patterns: periods of peace and prosperity in the early part of each dynasty, followed by peasants' rebellions and upheavals. Boyd described these cyclic patterns as part of the background of Chinese town planning and architecture. Thus old ideals of city planning - square plan, southward orientation and a palace along the central axis - are revived again and again in the ascendant phases of several dynastic cycles (e.g. Chang'an, Kaifeng, and Beijing). Along this line of thought, my paper questions the relationship between the “magic square rule” and modern Chinese urban planning. As a matter of fact, the classical theme of “cosmic Taoist urbanism” is still a reference for planning cities and new urban developments whenever there is the intention to express nationalist ideals and “cultural straightforwardness.” Besides, some case studies can be related to “modern dynasties”: the first Republic under the Kuo Min Tang, the red People's Republic and the post-Maoist open country of Deng Xiao Ping. Considering the project for the new capital of Nanjing in the Thirties, Beijing's Tiananmen area in the Fifties, and Shenzhen's Futian CBD in the late 20th century, I argue that cyclic patterns are still in place, though with deformations related to westernization, private interests and a lack of spirituality. How far are new Chinese cities westernized - or do they simply seem to be? Symbolism, invisible frameworks, repeating features and behavioural patterns make urban China only “superficially” western. This can be well noticed in cities previously occupied by foreigners, like Hong Kong, or in newly founded ones, like Shenzhen, where both Asians and non-Asians can feel the shift from New-York-like landscapes to something else. Current planning in the main metropolitan areas shows a blurred relationship between public policies and private investments: two levels of decisions and actions, one addressing the larger scale and infrastructures, the other concerning the micro scale and the development of single plots. While zoning is instrumental in this process, master plans are often laid out over a very poor cartography, so much so that any relation between the formal characters of new cities and the centuries-old structure of the related territory gets lost.

Keywords: China, contemporary cities, cultural heritage, shenzhen, urban planning

Procedia PDF Downloads 361
129 Dynamic Wetting and Solidification

Authors: Yulii D. Shikhmurzaev

Abstract:

The modelling of non-isothermal free-surface flows coupled with the solidification process has become the topic of intensive research with the advent of additive manufacturing, where complex 3-dimensional structures are produced by successive deposition and solidification of microscopic droplets of different materials. The issue is that both the spreading of liquids over solids and the propagation of the solidification front into the fluid and along the solid substrate pose fundamental difficulties for their mathematical modelling. The first of these processes, known as ‘dynamic wetting’, leads to the well-known ‘moving contact-line problem’ where, as shown recently both experimentally and theoretically, the contact angle formed by the free surface with the solid substrate is not a function of the contact-line speed but is rather a functional of the flow field. The modelling of the propagating solidification front requires a generalization of the classical Stefan problem able to describe the onset of the process and the non-equilibrium regime of solidification. Furthermore, given that dynamic wetting and solidification occur concurrently and interactively, they should be described within the same conceptual framework. The present work addresses this formidable problem and presents a mathematical model capable of describing the key element of additive manufacturing in a self-consistent and singularity-free way. The model is illustrated by simple examples highlighting its main features. The main idea of the work is that both dynamic wetting and solidification, as well as some other fluid flows, are particular cases in a general class of flows where interfaces form and/or disappear. This conceptual framework allows one to derive a mathematical model from first principles using the methods of irreversible thermodynamics. Crucially, the interfaces are not considered as zero-mass entities introduced via a Gibbsian ‘dividing surface’ but as the 2-dimensional surface phases produced by the continuum limit in which the thickness of what is physically an interfacial layer vanishes, with its properties characterized by ‘surface’ parameters (surface tension, surface density, etc.). This approach allows for mass exchange between the surface and bulk phases, which is the essence of interface formation. As shown numerically, the onset of solidification is preceded by a pure interface-formation stage, whilst the Stefan regime is the final stage, where the temperature at the solidification front asymptotically approaches the solidification temperature. The developed model can also be applied to flows with substrate melting as well as complex flows where both types of phase transition take place.
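For reference, the classical Stefan condition that the present model generalizes reads, in its standard one-dimensional form with generic symbols (not the paper's notation): at the front x = s(t),

```latex
\rho L\,\frac{ds}{dt} = k_s\left.\frac{\partial T_s}{\partial x}\right|_{x=s(t)^{-}}
                      - k_l\left.\frac{\partial T_l}{\partial x}\right|_{x=s(t)^{+}},
\qquad T_s\big(s(t),t\big) = T_l\big(s(t),t\big) = T_m
% rho: density, L: latent heat, k_s / k_l: solid / liquid conductivities,
% T_m: equilibrium solidification temperature.
```

As the abstract indicates, the generalized model relaxes, in particular, the local-equilibrium condition T = T_m at the front, which is recovered only asymptotically in the final, Stefan regime.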

Keywords: dynamic wetting, interface formation, phase transition, solidification

Procedia PDF Downloads 65
128 A Hebbian Neural Network Model of the Stroop Effect

Authors: Vadim Kulikov

Abstract:

The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment revealing various mechanisms behind semantic, attentional, behavioral and perceptual processing. The Stroop task is known to exhibit asymmetry. Reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic interference and how much to response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed, based on the philosophical idea developed by the author that the mind operates as a collection of different information-processing modalities, such as sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is 'framework theory', where 'framework' attempts to generalize the concepts of modality, perspective and 'point of view'. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case there are four: visual color processing, text reading, speech production and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero. The network is trained using Hebbian learning to establish connections (corresponding to 'coherence' in framework theory) between these different modalities. The amount of data fed into the network is supposed to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors to spoken color names. After training, the network performs the Stroop task. Reaction times (RTs) are measured in a canonical way, as these are continuous-time recurrent neural networks (CTRNNs). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has several advantages: it predicts more data, and the architecture is simpler and biologically more plausible.
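
As a toy illustration of the practice asymmetry described above (a minimal sketch, not the authors' CTRNN implementation; the block sizes, learning rate, and practice ratio are invented for the example), a plain Hebbian association between two 'framework' blocks and a shared speech block might look like this in Python:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4                       # four colors and four color words (hypothetical)
    eye = np.eye(n)

    def hebbian(W, pre, post, lr=0.05):
        # Hebb rule: co-active pre- and post-synaptic units strengthen their link
        return W + lr * np.outer(post, pre)

    W_text = np.zeros((n, n))   # text-reading block -> speech-production block
    W_color = np.zeros((n, n))  # color-vision block -> speech-production block

    # Unequal practice: reading words aloud is assumed ten times more trained
    for _ in range(1000):
        i = rng.integers(n)
        W_text = hebbian(W_text, eye[i], eye[i])
    for _ in range(100):
        i = rng.integers(n)
        W_color = hebbian(W_color, eye[i], eye[i])

    # Incongruent trial: the word 'red' (unit 0) printed in blue ink (unit 1)
    drive = W_text @ eye[0] + W_color @ eye[1]
    print(drive.round(1))  # the word pathway dominates, so ink-color naming is slowed

In the full model, reaction times emerge from the continuous-time network dynamics rather than from a single feedforward pass as in this sketch.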

Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop

Procedia PDF Downloads 264
127 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications related to the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis combining several scientific paradigms. Modern linguistics has developed productive and efficient methods for the investigation of cognitive and communicative phenomena, of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the scholar's position is crucial, as it reflects his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a well-defined scientific method for researching particular types of language phenomena: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it makes little sense to apply cognitive and communicative research methods in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 253
126 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and of the secondary distant metastases in human breast cancer. The research aim is to improve the prediction accuracy for breast cancer progression using an original mathematical model, referred to as CoMPaS, and the corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS that reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the CoMPaS scope of application; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, described by deterministic nonlinear and linear equations. CoMPaS corresponds to the TNM classification and allows calculation of the different growth periods of the primary tumor and the secondary distant metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for the secondary distant metastases; 3) the 'visible period' for the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival, depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others rely on additional statistical data. The CoMPaS model and predictive software: a) fit clinical trial data; b) detect the different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period when the secondary distant metastases appear; d) have higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoMPaS: the number of doublings for the 'non-visible' and 'visible' growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for the same periods. CoMPaS enables, for the first time, prediction of the 'whole natural history' of the primary tumor and the secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4), relying only on the primary tumor size. Summarizing: a) CoMPaS correctly describes primary tumor growth at the IA, IIA, IIB, and IIIB (T1-4N0M0) stages without metastases in lymph nodes (N0); b) it facilitates understanding of the period of appearance and inception of the secondary distant metastases.
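
As a back-of-the-envelope illustration of the exponential bookkeeping that underlies such models (the single-cell volume and the doubling time below are illustrative assumptions, not CoMPaS parameters):

    import math

    def volume_from_diameter(d_mm):
        # Spherical tumor volume (mm^3) from its diameter (mm)
        return math.pi * d_mm ** 3 / 6

    def doublings(v0, v):
        # Number of volume doublings needed to grow from v0 to v
        return math.log2(v / v0)

    v_cell = 1e-6                              # assumed volume of a single cell, mm^3
    v_detectable = volume_from_diameter(10.0)  # a 10 mm primary tumor
    n = doublings(v_cell, v_detectable)        # about 29 doublings
    dt = 100                                   # assumed constant doubling time, days
    print(f"{n:.0f} doublings -> 'non-visible period' of roughly {n * dt:.0f} days")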

Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival

Procedia PDF Downloads 341
125 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases ('digenic traits') are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare hundreds of thousands of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from a different variant that may be located at a very different genomic position. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, 'X → Y', with confidence P(Y = 2|X) significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are well suited to detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and have built a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some properties, but they are also very different in other respects. The main difference seems to be that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
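
Because the digenic search space is just pairs of variants, the pattern enumeration and the permutation test can be sketched directly; the following Python stand-in brute-forces the pairs rather than wrapping fpgrowth itself, and the data are random, with only the dimensions (143 cases, 153 controls, 82 variants) mirroring the cited study:

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)

    def digenic_scan(G, y, min_count=5):
        # Enumerate digenic genotype patterns and count carriers among cases/controls.
        # G: (n_subjects, n_variants) genotypes coded 0/1/2; y: 2 = case, 1 = control.
        counts = {}
        for a, b in itertools.combinations(range(G.shape[1]), 2):
            for ga, gb in itertools.product((0, 1, 2), repeat=2):
                mask = (G[:, a] == ga) & (G[:, b] == gb)
                if mask.sum() >= min_count:
                    counts[(a, ga, b, gb)] = ((y[mask] == 2).sum(), (y[mask] == 1).sum())
        return counts

    def permutation_p(G, y, pattern, n_perm=1000):
        # Permutation p-value for the rule confidence P(Y = 2 | X)
        a, ga, b, gb = pattern
        mask = (G[:, a] == ga) & (G[:, b] == gb)
        observed = (y[mask] == 2).mean()
        null = [(rng.permutation(y)[mask] == 2).mean() for _ in range(n_perm)]
        return (np.sum(np.asarray(null) >= observed) + 1) / (n_perm + 1)

    G = rng.integers(0, 3, size=(296, 82))   # random genotypes (toy data)
    y = np.array([2] * 143 + [1] * 153)
    patterns = digenic_scan(G, y)
    best = max(patterns, key=lambda k: patterns[k][0] / sum(patterns[k]))
    print(best, patterns[best], permutation_p(G, y, best))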

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 121
124 Li2S Nanoparticles Impact on the First Charge of Li-ion/Sulfur Batteries: An Operando XAS/XES Coupled With XRD Analysis

Authors: Alice Robba, Renaud Bouchet, Celine Barchasz, Jean-Francois Colin, Erik Elkaim, Kristina Kvashnina, Gavin Vaughan, Matjaz Kavcic, Fannie Alloin

Abstract:

With their high theoretical energy density (~2600 Wh.kg-1), lithium/sulfur (Li/S) batteries are highly promising, but these systems are still poorly understood due to the complex mechanisms/equilibria involved. Replacing S8 by Li2S as the active material allows the use of safer negative electrodes, such as silicon, instead of lithium metal. S8 and Li2S have different conductivity and solubility properties, resulting in a profoundly changed activation process during the first cycle. In particular, during the first charge, a high polarization and a lack of reproducibility between tests are observed. Differences observed between raw Li2S material (micron-sized) and Li2S electrochemically produced in a battery (nano-sized) may indicate that the electrochemical process depends on the particle size. The major focus of the presented work is therefore to deepen the understanding of the Li2S material charge mechanism and, more precisely, to characterize the effect of the initial Li2S particle size both on the mechanism and on the electrode preparation process. To do so, Li2S nanoparticles were synthesized by two routes, a liquid-phase synthesis and dissolution in ethanol, allowing Li2S nanoparticle/carbon composites to be made. Preliminary chemical and electrochemical tests show that starting with Li2S nanoparticles can effectively suppress the high initial polarization but also influences the electrode slurry preparation. Indeed, it has been shown that the classical formulation process (a slurry of polyvinylidene fluoride (PVDF) polymer dissolved in N-methyl-2-pyrrolidone) cannot be used with Li2S nanoparticles. This reveals a completely different behavior of the Li2S material toward polymers and organic solvents at the nanometric scale. The coupling of two operando characterizations, X-ray diffraction (XRD) and X-ray absorption and emission spectroscopy (XAS/XES), was then carried out in order to interpret the poorly understood first charge. This study shows that the initial particle size of the active material has a great impact on the working mechanism and particularly on the different equilibria involved during the first charge of Li2S-based Li-ion batteries. These results explain the electrochemical differences, and particularly the polarization differences, observed during the first charge between micrometric and nanometric Li2S-based electrodes. Finally, this work could lead to better active material design and thus to more efficient Li2S-based batteries.
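
The ~2600 Wh.kg-1 figure can be recovered from the standard theoretical-capacity calculation; in this sketch, the mass basis is Li2S, and the ~2.2 V average discharge voltage is an assumed round value rather than a figure from the abstract:

    F = 96485.0      # Faraday constant, C/mol
    M_LI2S = 45.95   # molar mass of Li2S, g/mol

    def capacity_mAh_per_g(n_electrons, molar_mass):
        # Theoretical gravimetric capacity: n*F coulombs per mole, converted to mAh/g
        return n_electrons * F / (3.6 * molar_mass)

    c = capacity_mAh_per_g(2, M_LI2S)  # S(-II) <-> S(0) exchanges 2 e- per Li2S
    e = c * 2.2                        # Wh/kg = (mAh/g) x (average voltage, V)
    print(f"{c:.0f} mAh/g -> ~{e:.0f} Wh/kg")  # ~1166 mAh/g, ~2600 Wh/kg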

Keywords: Li-ion/Sulfur batteries, Li2S nanoparticles effect, Operando characterizations, working mechanism

Procedia PDF Downloads 266
123 The Role of Rapid Maxillary Expansion in Managing Obstructive Sleep Apnea in Children: A Literature Review

Authors: Suleman Maliha, Suleman Sidra

Abstract:

Obstructive sleep apnea (OSA) is a sleep disorder that can result in behavioral and psychomotor impairments in children. The classical treatment modalities for OSA have been continuous positive airway pressure and adenotonsillectomy. However, orthodontic intervention through rapid maxillary expansion (RME) has also been commonly used to manage skeletal transverse maxillary discrepancies. Aim and objectives: The aim of this study is to determine the efficacy of rapid maxillary expansion in paediatric patients with obstructive sleep apnea by assessing pre- and post-treatment mean apnea-hypopnea index (AHI) and oxygen saturation. Methodology: Literature was identified through a rigorous search of the Embase, PubMed, and CINAHL databases. Articles published from 2012 onwards were selected. The inclusion criteria consisted of patients aged 18 years and under, with no systemic disease, adenotonsillar surgery, or hypertrophy, undergoing RME with AHI measurements before and after treatment. In total, six suitable papers were identified. Results: Three studies assessed patients pre- and post-RME at 12 months. The first consisted of 15 patients with an average age of 7.5 years; following treatment, RME yielded both higher oxygen saturation (+5.3%) and an improved AHI (-4.2 events). The second assessed 11 patients aged 5-8 years and also noted improvements, with the mean AHI falling from 6.1 to 2.4 and oxygen saturation rising from 93.1% to 96.8%. The third reviewed 14 patients aged 6-9 years and similarly found an AHI reduction from 5.7 to 4.4 and an oxygen saturation increase from 89.8% to 95.5%. All changes noted in these studies were statistically significant. A long-term study reviewed 23 patients aged 6-12 years annually for 12 years post-RME and found that the mean AHI fell from 12.2 to 0.4, with oxygen saturation improving from 78.9% to 95.1%. Another study assessed 19 patients aged 9-12 years at two months into RME and four months post-treatment; improvements were noted at both stages, with an overall reduction of the mean AHI from 16.3 to 0.8 and an overall increase in oxygen saturation from 77.9% to 95.4%. The final study assessed 26 children aged 7-11 years on completion of individual treatment and found an AHI reduction from 6.9 to 5.3; the oxygen saturation remained unchanged at 96.0%, which was not clinically significant. Conclusion: Overall, the current evidence suggests that RME is a promising treatment option for paediatric patients with OSA. It can provide efficient and conservative treatment; however, early diagnosis is crucial. As various factors can contribute to OSA, it is important that each case is treated on its individual merits. Going forward, there is a need for more randomized controlled trials with larger cohorts. Research into the long-term effects of RME and potential relapse among cases would also be useful.

Keywords: orthodontics, sleep apnea, maxillary expansion, review

Procedia PDF Downloads 82
122 Dynamic Characterization of Shallow Aquifer Groundwater: A Lab-Scale Approach

Authors: Anthony Credoz, Nathalie Nief, Remy Hedacq, Salvador Jordana, Laurent Cazes

Abstract:

Groundwater monitoring is classically performed through a network of piezometers at industrial sites. Groundwater flow parameters, such as direction, sense and velocity, are deduced from indirect measurements between two or more piezometers. Groundwater sampling is generally done on the whole column of water inside each borehole to provide concentration values for each piezometer location. These flow and concentration values give a global 'static' image of the potential evolution of a contaminant plume in the shallow aquifer, with huge uncertainties in time and space scales and in mass discharge dynamics. The TOTAL R&D Subsurface Environmental team is challenging this classical approach with an innovative, dynamic way of characterizing shallow aquifer groundwater. The current study aims at optimizing the tools and methodologies for (i) direct, multilevel measurement of groundwater velocities in each piezometer and (ii) calculation of the potential flux of dissolved contaminants in the shallow aquifer. Lab-scale experiments have been designed to test commercial and R&D tools in a controlled sandbox. Multiphysics modeling was performed, taking into account the Darcy equation in the porous medium and the Navier-Stokes equation in the borehole. The first step of the current study focused on groundwater flow at the porous medium/piezometer interface. Huge uncertainties between direct flow rate measurements in the borehole and the Darcy flow rate in the porous medium were characterized during experiments and modeling. The structure and location of the tools in the borehole also impacted the results and the uncertainties of the velocity measurement. In parallel, a direct-push tool was tested and gave more accurate results. The second step of the study focused on the mass flux of dissolved contaminants in groundwater. Several active and passive commercial and R&D tools have been tested in the sandbox, and reactive transport modeling has been performed to validate the experiments at the lab scale. Some tools will be selected and deployed in field trials to better assess the mass discharge of dissolved contaminants at an industrial site. The long-term subsurface environmental strategy is targeting in-situ, real-time, remote and cost-effective monitoring of groundwater.
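
For the mass-flux side of the study, the underlying bookkeeping combines Darcy's law with a concentration and a control-plane area; a minimal sketch with illustrative parameter values (not those of the sandbox experiments):

    def darcy_flux(K, dh, dl):
        # Darcy (specific) flux q = -K * dh/dl, in m/s, for hydraulic conductivity K
        return -K * dh / dl

    def mass_discharge(q, conc, area):
        # Mass discharge (kg/s) of dissolved contaminant through a control plane
        return abs(q) * conc * area

    q = darcy_flux(K=1e-4, dh=-0.5, dl=100.0)     # sandy aquifer, 0.5 m head loss over 100 m
    md = mass_discharge(q, conc=2e-3, area=50.0)  # 2 mg/L = 2e-3 kg/m^3 across 50 m^2
    print(f"q = {q:.1e} m/s, mass discharge = {md * 86400 * 1e3:.2f} g/day")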

Keywords: dynamic characterization, groundwater flow, lab-scale, mass flux

Procedia PDF Downloads 166
121 Photovoltaic-Driven Thermochemical Storage for Cooling Applications to Be Integrated in Polynesian Microgrids: Concept and Efficiency Study

Authors: Franco Ferrucci, Driss Stitou, Pascal Ortega, Franck Lucas

Abstract:

The energy situation in tropical insular regions, such as the French Polynesian islands, presents a number of challenges: high dependence on imported fuel, high transport costs from the mainland, and weak electricity grids. At the same time, these regions have a variety of renewable energy resources, which favor the deployment of smart microgrids and energy storage technologies. With regard to electrical energy demand, the year-round high temperatures in these regions imply that a large proportion of consumption goes to cooling buildings, even during the evening hours. In this context, this paper presents an air conditioning system driven by photovoltaic (PV) electricity that combines a refrigeration system with a thermochemical storage process. Thermochemical processes can store energy in the form of chemical potential with virtually no losses, and this energy can be used to produce cooling during the evening hours without running a compressor (thus no electricity is required). Such storage processes rely on thermochemical reactors in which a reversible chemical reaction between a solid compound and a gas takes place. The solid/gas pair used in this study is BaCl2 reacting with ammonia (NH3), which also serves as the refrigerant in the refrigeration circuit. In the proposed system, the PV-driven electric compressor is used during the daytime either to run the refrigeration circuit when a cooling demand occurs or, when no cooling is needed, to decompose the ammonia-charged salt and remove the gas from the thermochemical reactor. During the evening, when no solar electricity is available, the system changes configuration: the reactor reabsorbs the ammonia gas from the evaporator and produces the cooling effect. Compared with classical PV-driven air conditioning units equipped with electrochemical batteries (e.g. Pb, Li-ion), the proposed system offers a storage technology with a much longer charge/discharge cycle life and no self-discharge. It also allows continuous operation of the electric compressor during the daytime, avoiding the problems associated with on-off cycling. This work focuses on the system concept and on the efficiency of its main components. It also compares thermochemical storage with electrochemical storage and with other forms of thermal storage, such as latent heat (ice) and sensible heat (chilled water). The preliminary results suggest that the system is a promising alternative for simultaneously fulfilling cooling and energy storage needs in tropical insular regions.
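
For reference, the solid/gas equilibrium of such a sorption pair is commonly described by a Clausius-Clapeyron-type relation; a minimal statement, assuming the usual octaammine stoichiometry for the BaCl2/NH3 pair (the reaction enthalpy ΔH_r and entropy ΔS_r are not given in the abstract):

    \mathrm{BaCl_2\cdot 8NH_3\,(s)} \;\rightleftharpoons\; \mathrm{BaCl_2\,(s)} + 8\,\mathrm{NH_3\,(g)}, \qquad \ln\frac{p_{eq}}{p^{0}} \;=\; -\frac{\Delta H_r}{R\,T} + \frac{\Delta S_r}{R}.

For a given ammonia pressure, the equilibrium line fixes the temperature threshold separating the daytime decomposition (charging) mode from the evening absorption (cooling) mode.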

Keywords: microgrid, solar air-conditioning, solid/gas sorption, thermochemical storage, tropical and insular regions

Procedia PDF Downloads 241
120 Non-Perturbative Vacuum Polarization Effects in One- and Two-Dimensional Supercritical Dirac-Coulomb System

Authors: Andrey Davydov, Konstantin Sveshnikov, Yulia Voronina

Abstract:

There is now considerable interest in non-perturbative QED effects caused by the diving of discrete levels into the negative continuum in supercritical static or adiabatically slowly varying Coulomb fields created by localized extended sources with Z > Z_cr. Such effects have attracted a considerable amount of theoretical and experimental activity, since in 3+1 QED for Z > Z_cr,1 ≈ 170 a non-perturbative reconstruction of the vacuum state is predicted, which should be accompanied by a number of nontrivial effects, including vacuum positron emission. Essentially similar effects should also be expected in both 2+1D (planar graphene-based heterostructures) and 1+1D (the one-dimensional 'hydrogen ion'). This report is devoted to the study of such essentially non-perturbative vacuum effects for supercritical Dirac-Coulomb systems in 1+1D and 2+1D, with the main attention paid to the vacuum polarization energy. Although most works consider the vacuum charge density as the main polarization observable, the vacuum energy turns out to be no less informative and in many respects complementary to the vacuum density. Moreover, the main non-perturbative effects, which appear in vacuum polarization for supercritical fields due to levels diving into the lower continuum, show up even more clearly in the behavior of the vacuum energy, demonstrating explicitly their possible role in the supercritical region. In both 1+1D and 2+1D, we first explore the renormalized vacuum density in the supercritical region using the Wichmann-Kroll method. Thereafter, taking into account the results for the vacuum density, we formulate the renormalization procedure for the vacuum energy. To evaluate the latter explicitly, an original technique, based on a special combination of analytical methods, computer algebra tools and numerical calculations, is applied. It is shown that, for a wide range of the external source parameters (the charge Z and size R), the renormalized vacuum energy in the supercritical region can deviate significantly from the perturbative quadratic growth, turning into pronouncedly decreasing behavior with jumps of -2mc^2, which occur each time the next discrete level dives into the negative continuum. In the considered range of variation of Z and R, the vacuum energy behaves like ~ -Z^2/R in 1+1D and ~ -Z^3/R in 2+1D, reaching deeply negative values. Such behavior supports the assumption of the transmutation of the neutral vacuum into a charged one and, thereby, of spontaneous positron emission accompanying the emergence of each next vacuum shell, as required by total charge conservation. Finally, we note that the methods developed for the vacuum energy evaluation in 2+1D could, with minimal modifications, be carried over to the three-dimensional case, where the vacuum energy is expected to scale as ~ -Z^4/R and so could become competitive with the classical electrostatic energy of the Coulomb source.
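
Collected in one place, the reported scalings of the renormalized vacuum energy read (the 3+1D case is the authors' expectation rather than a computed result):

    E_{\mathrm{vac}} \sim -\frac{Z^{2}}{R}\ \ (1{+}1\,\mathrm{D}), \qquad E_{\mathrm{vac}} \sim -\frac{Z^{3}}{R}\ \ (2{+}1\,\mathrm{D}), \qquad E_{\mathrm{vac}} \sim -\frac{Z^{4}}{R}\ \ (3{+}1\,\mathrm{D,\ expected}),

with a jump \Delta E_{\mathrm{vac}} = -2\,mc^{2} each time a discrete level dives into the negative continuum.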

Keywords: non-perturbative QED-effects, one- and two-dimensional Dirac-Coulomb systems, supercritical fields, vacuum polarization

Procedia PDF Downloads 202
119 Biosensor for Determination of Immunoglobulin A, E, G and M

Authors: Umut Kokbas, Mustafa Nisari

Abstract:

Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by activated B cells that differentiate into plasma cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response. Knowledge of immunoglobulin structure and classes is also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together; in this way, they can provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This affects the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation, and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics; however, patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections and can also be a sign that the body is overreacting to allergens. Also, since the immune response can vary with different antigens, measuring specific antibody levels aids in the interpretation of the immune response after immunization or vaccination. Immune deficiencies usually present in childhood. In immunology and allergy clinics, a method that is fast, reliable, and, especially for childhood hypogammaglobulinemia, more convenient and uncomplicated for sampling from children would be more useful than the classical methods for the diagnosis and follow-up of these diseases. The antibodies were attached to the electrode surface via a poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peaks obtained in the electrochemical study were used for evaluation. According to the data obtained, immunoglobulin determination can be made with a biosensor. In further studies, it will be useful to develop a medical diagnostic kit through biomedical engineering and to increase its sensitivity.

Keywords: biosensor, immunosensor, immunoglobulin, infection

Procedia PDF Downloads 104