Search results for: classical logic
210 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database
Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani
Abstract:
The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern margin of Eurasia, resulting in a considerably active seismic region. The Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project (BSHAP) (2007-2011, 2012-2015), supported by NATO, enabled the preparation of new seismic hazard maps of the Western Balkans, but when inspecting the seismic hazard models later produced by these countries on a national scale, significant differences in design PGA values are observed at the borders, for instance, northern Albania-Montenegro, southern Albania-Greece, etc. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), which is generally the component with the highest impact on seismic hazard assessment. At the time of the project, only a modest database was available, namely 672 three-component records, whereas nowadays this strong motion database has grown considerably, up to 20,939 records with Mw ranging from 3.7 to 7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong motion database showed the lack of recordings in the moderate-to-large magnitude and short distance ranges; therefore, there is a need to re-evaluate the GMPEs in light of the recently updated database and the new generation of GMMs. In some cases, it was observed that some events were more extensively documented in one database than in the other, like the 1979 Montenegro earthquake, with a considerably larger number of records in the BSHAP Analogue SM database when compared to ESM23. Therefore, the strong motion flat-file provided by the Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project was merged with the ESM23 database for the polygon studied in this project. After performing the preliminary residual analysis, the candidate GMPEs were identified. This process was done using the GMPE performance metrics available within the SMT in the OpenQuake Platform. The likelihood model and Euclidean Distance-Based Ranking (EDR) were used. Finally, a GMPE logic tree was selected for this study, and following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum.
Keywords: residual analysis, GMPE, Western Balkan, strong motion, OpenQuake
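To make the weighting step concrete, here is a minimal sketch of the average-sample-log-likelihood approach of Scherbaum and co-workers: each GMPE's LLH score is computed from the density it assigns to the observed, normalized residuals, and logic-tree weights are taken proportional to 2^(-LLH). The GMPE names and scores below are hypothetical; the study itself used the metrics implemented in the SMT of the OpenQuake Platform.

```python
import numpy as np
from scipy.stats import norm

def average_sample_llh(normalized_residuals):
    """LLH = -(1/N) * sum(log2 g(x_i)), where g(x_i) is the density
    the GMPE assigns to each observed record (standard normal for
    well-normalized total residuals)."""
    pdf = norm.pdf(normalized_residuals)
    return -np.mean(np.log2(pdf))      # lower LLH -> better fit

def logic_tree_weights(llh_by_model):
    """Data-driven weights proportional to 2**(-LLH_k)."""
    raw = 2.0 ** (-np.asarray(list(llh_by_model.values())))
    return dict(zip(llh_by_model, raw / raw.sum()))

# Hypothetical LLH scores for three candidate GMPEs:
print(logic_tree_weights({"GMPE_A": 1.4, "GMPE_B": 1.6, "GMPE_C": 2.1}))
```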
Procedia PDF Downloads 88
209 Economic Policy to Promote Small and Medium-Sized Enterprises in Georgia in the Post-Pandemic Period
Authors: Gulnaz Erkomaishvili
Abstract:
Introduction: The paper assesses the impact of the COVID-19 pandemic on the activities of small and medium-sized enterprises in Georgia, identifies their problems, and analyzes the state's economic policy measures. During the pandemic, entrepreneurs named the imposition of restrictions, access to financial resources, shortage of qualified personnel, high tax rates, unhealthy competition in the market, etc., as the main challenges. The Georgian government has had to take special measures to mitigate the crisis impact caused by the pandemic; for example, in 2020, it mobilized more than 1.6 billion GEL for various measures to support entrepreneurs. Based on the research, a development strategy for small and medium-sized entrepreneurship is presented; corresponding conclusions are drawn, and recommendations are developed. Objectives: The objects of research are small and medium-sized enterprises and the economic policy decisions aimed at their promotion. Methodology: This paper uses general and specific methods, in particular, analysis, synthesis, induction, deduction, scientific abstraction, comparative and statistical methods, as well as experts' evaluation. In-depth interviews with experts were conducted to determine quantitative and qualitative indicators; publications of the National Statistics Office of Georgia are used to determine the regularity between analytical and statistical estimations. Also, theoretical and applied research of international organizations and scientist-economists is used. Contributions: The COVID-19 pandemic has had a significant impact on small and medium-sized enterprises, for which the lockdown was a major challenge. Total sales volume decreased. At the same time, the innovative capabilities of enterprises and the volume of sales in remote channels have increased. As for the assessment of state support measures by small and medium-sized entrepreneurs, despite the existence of support programs, a large number of entrepreneurs still do not evaluate the measures taken by the state positively. Entrepreneurs who assessed the state's activity negatively or largely negatively named the following desirable state measures that would improve their activities: tax incentives/exemption from certain taxes at the initial stage; periodic trainings in digital technologies and marketing courses to improve the qualification of employees; logic and adequacy of criteria when awarding grants and funding; facilitating the finding of investors; less bureaucracy, etc.
Keywords: small and medium enterprises, small and medium entrepreneurship, economic policy for small and medium entrepreneurship development, government regulations in Georgia, COVID-19 pandemic
Procedia PDF Downloads 155
208 CSoS-STRE: A Combat System-of-System Space-Time Resilience Enhancement Framework
Authors: Jiuyao Jiang, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge
Abstract:
Modern warfare has transitioned from the paradigm of isolated combat forces to system-to-system confrontations due to advancements in combat technologies and application concepts. A combat system-of-systems (CSoS) is a combat network composed of independently operating entities that interact with one another to provide overall operational capabilities. Enhancing the resilience of CSoS is garnering increasing attention due to its significant practical value in optimizing network architectures, improving network security, and refining operational planning. Accordingly, a unified framework called CSoS space-time resilience enhancement (CSoS-STRE) has been proposed, which enhances the resilience of CSoS by incorporating spatial features. Firstly, a multilayer spatial combat network model has been constructed, which incorporates an information layer depicting the interrelations among combat entities based on the OODA loop, along with a spatial layer that considers the spatial characteristics of equipment entities, thereby accurately reflecting the actual combat process. Secondly, building upon the combat network model, a spatiotemporal resilience optimization model is proposed, which reformulates the resilience optimization problem as a classical linear optimization model with spatial features. Furthermore, the model is extended from scenarios without obstacles to those with obstacles, thereby further emphasizing the importance of spatial characteristics. Thirdly, a resilience-oriented recovery optimization method based on an improved non-dominated sorting genetic algorithm II (R-INSGA) is proposed to determine the optimal recovery sequence for the damaged entities. This method not only considers spatial features but also provides the optimal travel path for multiple recovery teams. Finally, the feasibility, effectiveness, and superiority of CSoS-STRE are demonstrated through a case study. Simultaneously, under deliberate attack conditions based on degree centrality and maximum operational loop performance, the proposed CSoS-STRE method is compared with six baseline recovery strategies, based on performance, time, degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. The comparison demonstrates that CSoS-STRE achieves faster convergence and superior performance.
Keywords: space-time resilience enhancement, resilience optimization model, combat system-of-systems, recovery optimization method, no-obstacles and obstacles
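The abstract does not spell out how a recovery sequence is scored, so the sketch below uses one common resilience measure, the normalized area under the network performance curve over the recovery horizon, as a hedged stand-in; the trajectory values are hypothetical, and the actual CSoS-STRE model adds spatial features and multi-objective search on top of any such score.

```python
import numpy as np

def resilience(performance, t):
    """Area under the normalized performance curve: 1.0 means the
    network never lost capability, lower values mean slower recovery."""
    p0 = performance[0]                        # pre-attack baseline
    return np.trapz(performance, t) / (p0 * (t[-1] - t[0]))

# Hypothetical trajectory: attack at t=1, then a staged recovery.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
p = np.array([1.0, 0.4, 0.55, 0.75, 0.9, 1.0])
print(f"resilience = {resilience(p, t):.3f}")
```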
Procedia PDF Downloads 15
207 Fracture Toughness Characterizations of Single Edge Notch (SENB) Testing Using DIC System
Authors: Amr Mohamadien, Ali Imanpour, Sylvester Agbo, Nader Yoosef-Ghodsi, Samer Adeeb
Abstract:
The fracture toughness resistance curve (e.g., J-R curve and crack tip opening displacement (CTOD) or δ-R curve) is important in facilitating strain-based design and integrity assessment of oil and gas pipelines. This paper aims to present laboratory experimental data to characterize the fracture behavior of pipeline steel. The influential parameters associated with the fracture of API 5L X52 pipeline steel, including different initial crack sizes, were experimentally investigated for single edge notch bend (SENB) specimens. A total of 9 small-scale specimens with different crack-length-to-specimen-depth ratios were prepared and tested in single edge notch bending (SENB). ASTM E1820 and BS7448 provide testing procedures to construct the fracture resistance curve (Load-CTOD, CTOD-R, or J-R) from test results. However, these procedures are limited by standard specimen dimensions, displacement gauges, and calibration curves. To overcome these limitations, this paper presents the use of small-scale specimens and a 3D digital image correlation (DIC) system to extract the parameters required for fracture toughness estimation. Fracture resistance curve parameters in terms of crack mouth opening displacement (CMOD), crack tip opening displacement (CTOD), and crack growth length (∆a) were extracted from test results by utilizing the DIC system, and an improved regression-fitted resistance function (CTOD vs. crack growth, or J-integral vs. crack growth) that depends on a variety of initial crack sizes was constructed and presented. The obtained results were compared to the available results of the classical physical measurement techniques, and acceptable agreement was observed. Moreover, a case study was implemented to estimate the maximum strain value that initiates stable crack growth. This might be of interest for developing more accurate strain-based damage models. The results of laboratory testing in this study offer a valuable database to develop and validate damage models that are able to predict crack propagation of pipeline steel, accounting for the influential parameters associated with fracture toughness.
Keywords: fracture toughness, crack propagation in pipeline steels, CTOD-R, strain-based damage model
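As a hedged illustration of the resistance-curve fitting step, the sketch below fits the ASTM E1820-style power-law form J = C1·(Δa)^C2 by linear regression in log-log space; the (Δa, J) pairs are hypothetical, standing in for the DIC-derived measurements.

```python
import numpy as np

# Hypothetical (crack growth, J-integral) pairs from DIC-derived data.
da = np.array([0.12, 0.25, 0.41, 0.60, 0.85, 1.10])       # mm
J = np.array([160.0, 240.0, 310.0, 370.0, 430.0, 480.0])  # kJ/m^2

# Power-law resistance curve J = C1 * da**C2, fitted in log-log space.
C2, lnC1 = np.polyfit(np.log(da), np.log(J), 1)
C1 = np.exp(lnC1)
print(f"J-R fit: J = {C1:.0f} * da^{C2:.3f}")
```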
Procedia PDF Downloads 63
206 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbon from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including the reservoir geological data, reservoir geophysical data, well completion data, and production data for thousands of wells is first established to discover the valuable insights and knowledge related to the development of tight oil reserves. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance the oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance the knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operation, which leads to better designs, higher oil recovery, and economic return of future wells in the unconventional oil reserves.
Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves
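A minimal sketch of the clustering and variance-emphasis steps named above (standardize, compress with PCA, group wells with K-means), using scikit-learn; the feature matrix here is random stand-in data rather than the actual well database.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Stand-in features per well, e.g. [lateral length, frac stages,
# proppant mass, 12-month cumulative oil]; real values would come
# from the geological/completion/production database.
X = np.random.default_rng(0).normal(size=(500, 4))

Z = StandardScaler().fit_transform(X)                 # standardize
pcs = PCA(n_components=2).fit_transform(Z)            # emphasize variation
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)
print(np.bincount(labels))                            # wells per cluster
```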
Procedia PDF Downloads 283
205 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation
Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim
Abstract:
In this article, a portfolio optimization problem is performed in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress-tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counter-parties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate for an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature, simplifying the standard formula into a quadratic function. But to our knowledge, it is the first time that the standard formula of the market SCR is used in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm, to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement, compared to a classical Markowitz approach based on the historical volatility. A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected since the market SCR strongly penalizes this type of financial instrument. It was shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement
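For readers unfamiliar with the standard formula, the sketch below shows the square-root aggregation of sub-module charges, SCR = sqrt(s' C s); the sub-module values and the correlation matrix here are illustrative placeholders, not the regulatory Solvency II parameters.

```python
import numpy as np

def aggregate_scr(scr_submodules, corr):
    """Standard-formula aggregation: SCR = sqrt(s' C s), with C the
    correlation matrix between sub-modules (fixed by regulation)."""
    s = np.asarray(scr_submodules)
    return float(np.sqrt(s @ np.asarray(corr) @ s))

# Illustrative charges (interest rate, equity, property) and an
# illustrative correlation matrix -- not the regulatory values.
scr = [30.0, 80.0, 25.0]
corr = [[1.00, 0.50, 0.50],
        [0.50, 1.00, 0.75],
        [0.50, 0.75, 1.00]]
print(f"aggregated market SCR = {aggregate_scr(scr, corr):.1f}")
```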
Procedia PDF Downloads 117
204 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization
Authors: Younis Elhaddad, Alfonso Ortega
Abstract:
Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total amount of oil produced in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology. Many of them apply well-known numerical methods. Some of them have taken into account the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic searching engine into which they can introduce their knowledge in a format close to the one used in their domain, and get solutions comprehensible in the same terms, as well. These proposals introduce into the genetic engine the most expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use due to the computational resources involved in the formal models. The first step to evaluate the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches could seem promising. After analyzing the state of the art of this topic, we have decided to choose a previous work from the literature that faces the problem by means of numerical methods. This contribution includes enough details to be reproduced and complete data to be carefully analyzed. We have designed a classical, simple genetic algorithm just to try to get the same results and to understand the problem in depth. We could easily incorporate the well mathematical model and the well data used by the authors, and easily translate their mathematical model, to be numerically optimized, into a proper fitness function. We have analyzed the 100 curves they use in their experiment, and similar results were observed; in addition, our system has automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well. We have identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could be interesting to automatically propose other mathematical models to fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production
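To make the "classical, simple genetic algorithm" step concrete, here is a toy sketch: per-well gas-lift performance curves (hypothetical concave forms, standing in for the paper's 100 measured curves) define a fitness function, and a plain GA with truncation selection and Gaussian mutation searches for the allocation that maximizes the total field rate under a fixed gas budget.

```python
import numpy as np

rng = np.random.default_rng(1)
N_WELLS, GAS_BUDGET = 10, 50.0

# Hypothetical concave well responses q_i(g) = a_i*g / (b_i + g),
# standing in for the measured gas-lift performance curves.
a = rng.uniform(50, 120, N_WELLS)
b = rng.uniform(5, 20, N_WELLS)

def fitness(alloc):
    """Total field oil rate for one gas allocation vector."""
    return np.sum(a * alloc / (b + alloc))

def repair(alloc):
    """Rescale rows so each allocation spends exactly the gas budget."""
    return alloc * GAS_BUDGET / alloc.sum(axis=-1, keepdims=True)

pop = repair(rng.uniform(0.1, 1.0, (60, N_WELLS)))
for _ in range(200):                                  # plain generational GA
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-30:]]           # truncation selection
    kids = parents[rng.integers(0, 30, 60)].copy()
    kids += rng.normal(0.0, 0.5, kids.shape)          # Gaussian mutation
    pop = repair(np.abs(kids) + 1e-6)
best = max(pop, key=fitness)
print(f"best total rate: {fitness(best):.1f}")
```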
Procedia PDF Downloads 162
203 The Incoherence of the Philosophers as a Defense of Philosophy against Theology
Authors: Edward R. Moad
Abstract:
Al-Ghazali's Tahāfut al-Falāsifa is widely construed as an attack on philosophy in favor of theological fideism. Consequently, he has been blamed for the 'death of philosophy' in the Muslim world. 'Falsafa', however, is not philosophy itself, but rather a range of philosophical doctrines mainly influenced by or inherited from Greek thought. In these terms, this work represents a defense of philosophy against what we could call 'falsafic' fideism. In the introduction, Ghazali describes his target audience as, not the falasifa, but a group of pretenders engaged in taqlid to a misconceived understanding of falsafa, including the belief that they were capable of demonstrative certainty in the field of metaphysics. He promises to use falsafa standards of logic (with which he independently agrees) to show that the falasifa failed to demonstratively prove many of their positions. Whether or not he succeeds in that, the exercise of subjecting alleged proofs to critical scrutiny is quintessentially philosophical, while uncritical adherence to a doctrine, in the name of its being 'philosophical', is decidedly unphilosophical. If we are to blame the intellectual decline of the Muslim world on someone's 'bad' way of thinking, rather than on more material historical circumstances (which is already a mistake), then blame more appropriately rests with modernist Muslim thinkers who, under the influence of orientalism (and like Ghazali's philosophical pretenders), mistook taqlid to the falasifa for philosophy itself. The discussion of the Tahāfut takes place in the context of an epistemic (and related social) hierarchy envisioned by the falasifa, corresponding to the faculties of the senses, the 'estimative imagination' (wahm), and the pure intellect, along with the respective forms of discourse – rhetoric, dialectic, and demonstration – appropriate to each category of that order. Al-Farabi, in his Book of Letters, describes a relation between dialectic and demonstration on the one hand, and theology and philosophy on the other. The latter two are distinguished by method rather than subject matter. Theology is that which proceeds dialectically, while philosophy is (or aims to be?) demonstrative. Yet, Al-Farabi tells us, dialectic precedes philosophy like 'nourishment for the tree precedes its fruit.' That is, dialectic is part of the process by which we interrogate common and imaginative notions in the pursuit of clearly understood first principles that we can then deploy in demonstrative argument. Philosophy is, therefore, something we aspire to through, and from a discursive condition of, dialectic. This stands in apparent contrast to the understanding of Ibn Sina, for whom one arrives at the knowledge of first principles through contact with the Active Intellect. It also stands in contrast to that of Ibn Rushd, who seems to think our knowledge of first principles can only come through reading Aristotle. In conclusion, based on Al-Farabi's framework, Ghazali's Tahāfut is truly an exercise in philosophy, and an effort to keep the door open for true philosophy in the Muslim mind, against the threat of a kind of developing theology going by the name of falsafa.
Keywords: philosophy, incoherence, theology, Tahafut
Procedia PDF Downloads 161
202 Analytical and Numerical Studies on the Behavior of a Freezing Soil Layer
Authors: X. Li, Y. Liu, H. Wong, B. Pardoen, A. Fabbri, F. McGregor, E. Liu
Abstract:
The target of this paper is to investigate how saturated poroelastic soils subject to freezing temperatures behave and how different boundary conditions can intervene and affect the thermo-hydro-mechanical (THM) responses, based on a particular but classical configuration of a finite homogeneous soil layer studied by Terzaghi. The essential relations governing the constitutive behavior of a freezing soil are first recalled: ice crystal - liquid water thermodynamic equilibrium, hydromechanical constitutive equations, momentum balance, water mass balance, and the thermal diffusion equation, in the general non-linear case where material parameters are state-dependent. The system of equations is first linearized, assuming all material parameters to be constants, particularly the permeability to liquid water, which should depend on the ice content. Two analytical solutions, solved by the classic Laplace transform, are then developed, accounting for two different sets of boundary conditions. Afterward, the general non-linear equations with state-dependent parameters are solved using the commercial finite element code COMSOL to obtain numerical results. The validity of this numerical modeling is partially verified using the analytical solution in the limiting case of state-independent parameters. Comparison between the results given by the linearized analytical solutions and the non-linear numerical model reveals that the above-mentioned linear computation will always underestimate the liquid pore pressure and displacement, whatever the hydraulic boundary conditions are. In the non-linear model, the faster growth of ice crystals, accompanied by the subsequent reduction of permeability of the freezing soil layer, leads to a longer duration for the depressurization of liquid water and slower settlement in the case where the ground surface is swiftly covered by a thin layer of ice, as well as a larger global liquid pressure and greater swelling in the case of an impermeable ground surface. Nonetheless, the analytical solutions based on linearized equations give a correct order-of-magnitude estimate, especially at moderate temperature variations, and remain a useful tool for preliminary design checks.
Keywords: chemical potential, cryosuction, Laplace transform, multiphysics coupling, phase transformation, thermodynamic equilibrium
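The thermal diffusion equation is one member of the coupled system; purely as a hedged illustration of the linearized, constant-parameter case, the sketch below integrates 1D heat diffusion through a suddenly frozen surface with an explicit finite-difference scheme. The parameter values are illustrative, and the full THM problem couples this equation with the mass and momentum balances.

```python
import numpy as np

# Linearized thermal diffusion dT/dt = alpha * d2T/dz2 in a soil
# layer whose surface is suddenly cooled below freezing.
L, n, alpha = 1.0, 51, 1e-6        # layer depth (m), nodes, diffusivity (m^2/s)
dz = L / (n - 1)
dt = 0.4 * dz**2 / alpha           # satisfies the explicit stability limit
T = np.full(n, 5.0)                # initial ground temperature (deg C)
T[0] = -10.0                       # frozen-surface boundary condition

for _ in range(2000):
    T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                  # zero-flux condition at the base

print(f"approximate frost depth: {dz * np.argmax(T > 0):.2f} m")
```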
Procedia PDF Downloads 80
201 Revisiting Historical Illustrations in the Age of Digital Anatomy Education
Authors: Julia Wimmers-Klick
Abstract:
In the contemporary study of anatomy, medical students utilize a diverse array of resources, including lab handouts, lectures, and, increasingly, digital media such as interactive anatomy apps and digital images. Notably, a significant shift has occurred, with fewer students possessing traditional anatomy atlases or books, reflecting a broader trend towards digital approaches like Virtual Reality, Augmented Reality, and web-based programs. This paper seeks to explore the evolution of anatomy education by contrasting current digital tools with historical resources, such as classical anatomical illustrations and atlases, to assess their relevance and potential benefits in modern medical education. Through a comprehensive literature review, the development of anatomical illustrations is traced from the textual descriptions of Galen to the detailed and artistic representations of Da Vinci, Vesalius, and later anatomists. The examination includes how the printing press facilitated the dissemination of anatomical knowledge, transforming covert dissections into public spectacles and formalized teaching practices. Historical illustrations, often influenced by societal, religious, and aesthetic contexts, not only served educational purposes but also reflected the prevailing medical knowledge and ethical standards of their times. Critical questions are raised about the place of historical illustrations in today's anatomy curriculum. Specifically, their potential to teach critical thinking, highlight the history of medicine, and offer unique insights into past societal conditions is explored. These resources are viewed in their context, including the lack of diversity and the presence of ethical concerns, such as the use of illustrations from unethical sources like Pernkopf's atlas. In conclusion, while digital tools offer innovative ways to visualize and interact with anatomical structures, historical illustrations provide irreplaceable value in understanding the evolution of medical knowledge and practice. The study advocates for a balanced approach that integrates traditional and modern resources to enrich medical education, promote critical thinking, and provide a comprehensive understanding of anatomy. Future research should investigate the optimal combination of these resources to meet the evolving needs of medical learners and the implications of the digital shift in anatomy education.
Keywords: human anatomy, historical illustrations, historical context, medical education
Procedia PDF Downloads 21
200 Study of Bis(Trifluoromethylsulfonyl)Imide Based Ionic Liquids by Gas Chromatography
Authors: F. Mutelet, L. Cesari
Abstract:
Development of safer and environmentally friendly processes and products is needed to achieve sustainable production and consumption patterns, and ionic liquids, which are of great interest to the chemical and related industries because of their attractive properties as solvents, should be considered. Ionic liquids are comprised of an asymmetric, bulky organic cation and a weakly coordinating organic or inorganic anion. The large number of possible combinations makes it possible to 'fine-tune' the solvent properties for a specific purpose. Physical and chemical properties of ionic liquids are influenced not only by the nature of the cation and the nature of the cation substituents but also by the polarity and the size of the anion. These features give ionic liquids numerous applications in organic synthesis, separation processes, and electrochemistry. Separation processes require a good knowledge of the behavior of organic compounds with ionic liquids. Gas chromatography is a useful tool to estimate the interactions between organic compounds and ionic liquids. Indeed, retention data may be used to determine infinite dilution thermodynamic properties of volatile organic compounds in ionic liquids. Among others, the activity coefficient at infinite dilution is a direct measure of solute-ionic liquid interaction. In this work, infinite dilution thermodynamic properties of volatile organic compounds in specific bis(trifluoromethylsulfonyl)imide-based ionic liquids, measured by gas chromatography, are presented. It was found that apolar compounds are not miscible with this family of ionic liquids. As expected, the solubility of organic compounds is related to their polarity and hydrogen-bonding ability. Through activity coefficient data, the performance of these ionic liquids was evaluated for different separation processes (benzene/heptane, thiophene/heptane, and pyridine/heptane). Results indicate that ionic liquids may be used for the extraction of polar compounds (aromatics, alcohols, pyridine, thiophene, tetrahydrofuran) from aliphatic media. For example, 1-benzylpyridinium bis(trifluoromethylsulfonyl)imide and 1-cyclohexylmethyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide are more efficient for the extraction of aromatics or pyridine from aliphatics than classical solvents. Ionic liquids with long alkyl chains present high capacity values, but their selectivity values are low. In conclusion, we have demonstrated that, for specific bis(trifluoromethylsulfonyl)imide-based ILs, a polar chain grafted onto the cation (for example, benzyl or cyclohexyl) considerably increases their performance in separation processes.
Keywords: interaction organic solvent-ionic liquid, gas chromatography, solvation model, COSMO-RS
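The screening numbers behind such comparisons are the infinite-dilution selectivity and capacity derived from the measured activity coefficients; a minimal sketch for the benzene/heptane pair, with hypothetical activity coefficient values standing in for the chromatographic measurements.

```python
# Infinite-dilution selectivity S12 = g1/g2 and capacity k2 = 1/g2
# from measured activity coefficients; the gamma values below are
# hypothetical placeholders for the chromatographic measurements.
gamma_heptane = 25.0   # activity coefficient of heptane in the IL
gamma_benzene = 1.8    # activity coefficient of benzene in the IL

selectivity = gamma_heptane / gamma_benzene
capacity = 1.0 / gamma_benzene
print(f"S(heptane/benzene) = {selectivity:.1f}, k(benzene) = {capacity:.2f}")
```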
Procedia PDF Downloads 109
199 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives
Authors: Chen Guo, Heng Tang, Ben Niu
Abstract:
Clustering splits objects into different groups based on similarity, making objects have higher similarity within the same group and lower similarity across different groups. Thus, clustering can be treated as an optimization problem to maximize the intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, the datasets often have some complex characteristics: sparsity, overlap, high dimensionality, etc. When facing these datasets, simultaneously optimizing two or more objectives can obtain better clustering results than optimizing one objective. However, except for objective weighting methods, traditional clustering approaches have difficulty in solving multi-objective data clustering problems. Due to this, evolutionary multi-objective optimization algorithms are investigated by researchers to optimize multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. Specifically, first, to reduce the high computational complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework and is then extended into a multi-objective form. Second, two learning strategies are proposed based on the two learning archives to guide the bacterial swarm to move in a better direction. On the one hand, the global best is selected from the global learning archive according to the convergence index and diversity index. On the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. According to the aforementioned learning strategies, a chemotaxis operation is designed. Third, an elite learning strategy is designed to provide fresh power to the objects in the two learning archives. When the objects in these two archives do not change for two consecutive iterations, randomly initializing one dimension of the objects can prevent the proposed algorithm from falling into local optima. Fourth, to validate the performance of the proposed algorithm, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on evaluation indexes of datasets. To further verify the effectiveness and feasibility of the designed strategies in DC-MPBFOLA, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors regarding all evaluation indexes and clustering partitions. These results also indicate that the designed strategies positively influence the performance improvement of the original BFO.
Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives
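To make the two competing objectives concrete, here is a minimal sketch that scores a candidate partition on intra-cluster compactness (to be minimized) and inter-cluster separation (to be maximized); the data and the exact objective definitions are illustrative choices, not necessarily those used in DC-MPBFOLA.

```python
import numpy as np

def clustering_objectives(X, labels, centers):
    """Return (compactness, separation): mean distance to the assigned
    center, and the minimum distance between any two centers."""
    compactness = np.mean(np.linalg.norm(X - centers[labels], axis=1))
    k = len(centers)
    separation = min(np.linalg.norm(centers[i] - centers[j])
                     for i in range(k) for j in range(i + 1, k))
    return compactness, separation

# Illustrative 2-D points split into two clusters:
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
labels = np.array([0, 0, 1, 1])
centers = np.array([X[labels == c].mean(axis=0) for c in (0, 1)])
print(clustering_objectives(X, labels, centers))
```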
Procedia PDF Downloads 139
198 Risk and Emotion: Measuring the Effect of Emotion and Other Visceral Factors on Decision Making under Risk
Authors: Michael Mihalicz, Aziz Guergachi
Abstract:
Background: The science of modelling choice preferences has evolved over centuries into an interdisciplinary field contributing to several branches of microeconomics and mathematical psychology. Early theories in decision science rested on the logic of rationality, but as the field and related disciplines matured, descriptive theories emerged capable of explaining systematic violations of rationality through the cognitive mechanisms underlying the thought processes that guide human behaviour. Cognitive limitations are not, however, solely responsible for systematic deviations from rationality, and many researchers are now exploring the effect of visceral factors as the more dominant drivers. The current study builds on the existing literature by exploring sleep deprivation, thermal comfort, stress, hunger, fear, anger, and sadness as moderators of three distinct elements that define individual risk preference under Cumulative Prospect Theory. Methodology: This study is designed to compare the risk preferences of participants experiencing an elevated affective or visceral state to those in a neutral state, using nonparametric elicitation methods across three domains. Two experiments will be conducted simultaneously using different methodologies. The first will sample visceral states and risk preferences randomly over a two-week period by prompting participants to complete an online survey remotely. In each round of questions, participants will be asked to self-assess their current state using visual analogue scales before answering a series of lottery-style elicitation questions. The second experiment will be conducted in a laboratory setting using psychological primes to induce a desired state. In this experiment, emotional states will be recorded using emotion analytics and used as a basis for comparison between the two methods. Significance: The expected results include a series of measurable and systematic effects on the subjective interpretations of gamble attributes and evidence supporting the proposition that a portion of the variability in human choice preferences unaccounted for by cognitive limitations can be explained by interacting visceral states. Significant results will promote awareness about the subconscious effect that emotions and other drive states have on the way people process and interpret information, and can guide more effective decision making by informing decision-makers of the sources and consequences of irrational behaviour.
Keywords: decision making, emotions, prospect theory, visceral factors
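For reference, the elements of risk preference under Cumulative Prospect Theory are typically the value-function curvature, loss aversion, and probability weighting; a minimal sketch using the Tversky-Kahneman functional forms with their published 1992 median parameter estimates, applied to a simple one-outcome gamble.

```python
# Tversky-Kahneman CPT building blocks with the 1992 median estimates.
ALPHA, BETA, LAM, GAMMA = 0.88, 0.88, 2.25, 0.61

def value(x):
    """S-shaped value function: concave for gains, convex and steeper
    (loss-averse) for losses."""
    return x**ALPHA if x >= 0 else -LAM * (-x)**BETA

def weight(p, gamma=GAMMA):
    """Inverse-S probability weighting: overweights small probabilities."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

# CPT valuation of a single-gain prospect: win 100 with p = 0.1.
print(f"CPT value = {weight(0.1) * value(100.0):.2f}")
```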
Procedia PDF Downloads 149
197 The Foundation Binary-Signals Mechanics and Actual-Information Model of Universe
Authors: Elsadig Naseraddeen Ahmed Mohamed
Abstract:
In contrast to the uncertainty and complementarity principles, it will be shown in the present paper that the probability of the simultaneous occupation of any definite values of coordinates by any definite values of momentum and energy at any definite instant of time can be described by a binary definite function. This function is equivalent to the difference between their numbers of occupation and evacuation epochs up to that time, and also to the number of exchanges between those occupation and evacuation epochs up to that time, modulo two. These binary definite quantities can be defined at every point on the real line of time, so they form a binary signal representing a complete mechanical description of physical reality. The times of these exchanges represent the boundaries of the occupation and evacuation epochs, from which these binary signals can be calculated, using the fact that the universe's events actually extend along the positive and negative real line of time in one direction of extension as the number of exchanges increases. Thus, there exists a non-invertible transformation matrix, defined as the matrix product of an invertible rotation matrix and a non-invertible scaling matrix, which change the direction and the magnitude of the exchange event vector, respectively. These non-invertible transformations will be called actual transformations, in contrast to information transformations, by which we can navigate the universe's events, transformed by actual transformations, backward and forward along the real line of time; these information transformations will be derived as elements of a group that can be associated with their corresponding actual transformations. The actual and information model of the universe will be derived by assuming the existence of a time instant zero, before and at which there is no coordinate occupied by any definite values of momentum and energy, and after which the universe begins expanding in spacetime. This assumption makes superfluous the need for Laplace's demon, who at one moment could measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict all of its future and past events. We only need to establish analog-to-digital converters to sense the binary signals that determine the boundaries of the occupation and evacuation epochs of the definite values of coordinates, relative to their origin, by the definite values of momentum and energy, as present events of the universe, from which its past and future events can be predicted approximately with high precision.
Keywords: binary-signal mechanics, actual-information model of the universe, actual-transformation, information-transformation, uncertainty principle, Laplace's demon
Procedia PDF Downloads 175
196 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil
Authors: Yasser R. Tawfic, Mohamed A. Eid
Abstract:
Foundation differential settlement and the resulting tilting of supported structures is an occasionally encountered engineering problem. It may be caused by overloading, changes in ground soil properties, or unsupported nearby excavations. Engineering thinking points directly toward the logical solution for such a problem: uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet-grouted soil-crete columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economic, fast, and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to induce a balancing settlement on the less-settled side, which must be done carefully and at a proper rate. This principle has been applied in the stabilization of the Leaning Tower of Pisa by soil extraction from the ground surface. In this research, the authors attempt to introduce a new solution from a different point of view: the micro-tunneling technique is presented here as an intentional cause of ground deformation. In general, micro-tunneling is expected to induce limited ground deformations. Thus, the researchers propose to apply the technique to form small, unsupported holes in the ground to produce the target deformations. This shall be done in four phases:
• Application of one or more micro-tunnels, depending on the existing differential settlement value, under the raised side of the tilted structure.
• For each individual tunnel, the lining shall be pulled out from both sides (from the jacking and receiving shafts) at a slow rate.
• If required, according to calculations and site records, an additional surface load can be applied on the raised foundation side.
• Finally, strengthening soil grouting shall be applied for stabilization after adjustment.
A finite-element-based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and the induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. Comparing the results also shows the importance of position selection and the gradual effect of tunnel groups. Thus, a new engineering solution is presented to one of the challenges of structural and geotechnical engineering.
Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures
Procedia PDF Downloads 208
195 Relocating Migration for Higher Education: Analytical Account of Students' Perspective
Authors: Sumit Kumar
Abstract:
The present study aims to identify the factors responsible for the internal migration of students other than the push and pull factors associated with the source region and destination region, respectively, as classified in classical geography. In this classification of factors responsible for the migration of students, the agency of the individual and of the family he/she belongs to has not been recognized, although it later became the centre of the argument for describing and analyzing migration in the New Economic theory of migration and the New Economics of labour migration, respectively. Against this backdrop, the present study aims to understand the agency of the individual and the family members regarding one's migration for higher education. It therefore draws upon the New Economic theory of migration and the New Economics of labour migration to identify the agency of the individual or the family in the context of migration. Further, migration for higher education consists not only of the decision to migrate but also of where to migrate (location), and which university, which college, and which course to pursue. In order to understand the role of various individuals at various stages of student migration, the present study draws on the social networking approach to migration, which identifies the individuals who facilitate the process of migration by reducing its negative externalities through sharing information and providing various other sorts of help to the migrant. Furthermore, this study also aims to rank, on the basis of the migrants' perception, the individuals who have helped them take decisions at various stages of migration for higher education, along with the factors responsible for their migration. In order to fulfill the above-mentioned objectives, quantification of qualitative data (perceptions of respondents) has been done through frequency distribution analysis. Qualitative data were collected at two levels, but a questionnaire survey was the tool for data collection on both occasions. Twenty-five students who had migrated to another state for the purpose of higher education were approached for a pre-questionnaire survey consisting of open-ended questions, while one hundred students belonging to the same clientele were approached for a questionnaire survey consisting of close-ended questions. This study has identified social pressure, peer-group pressure, and parental pressure, variables not constituting push and pull factors, as very important for students' migration. They have even been ranked higher by the respondents than the push factors. Further, the self (the migrants themselves) has been ranked first, followed by the parents, when it comes to taking the various decisions attached to the process of migration. Therefore, it can be said, without sounding cynical, that there are factors other than push and pull factors that facilitate the process of migration for higher education, not only in the decision to migrate but also at the other levels intrinsic to the process.
Keywords: agency, migration for higher education, perception, push and pull factors
Procedia PDF Downloads 244
194 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method
Authors: Emmanuel Ophel Gilbert, Williams Speret
Abstract:
The control of the fluid flow behaviour of viscous fluids and of heat transfer occurrences within a heated mini-channel is considered. Heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, one-half ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace and Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations are solved with the aid of a numerical simulation technique. Further, this numerical simulation technique is validated by comparing the accessible practical values with the predicted local thermal resistances. The roughness of this mini-channel, which is one of the physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this caused a pro rata effect on the minor and major frictional losses, mostly at very small Reynolds numbers, circa 60-80. At these ascertained lower Reynolds numbers, the viscosity decreases and the frictional losses become minute as the temperature of the viscous liquids is increased. It is inferred that three equations and models are identified which support the numerical simulation via interpolation and integration of the variables extended to the walls of the mini-channel, yielding the utmost reliability for engineering and technology calculations for turbulence-impacting jets in the near future. In the search for a true equation that could support this control of the fluid flow, the Navier-Stokes equations were found to be tangential to this finding. However, other physical factors with respect to these Navier-Stokes equations need to be checked to avoid uncertain turbulence of the fluid flow. This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme via the numerical simulation method that takes into account certain terms in the full Navier-Stokes equations; this, however, resulted in dropping certain assumptions from the approximation. Concrete questions raised in the main body of the work are examined further in the appendices.
Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids
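As a quick sanity check on the laminar regime the abstract cites (Reynolds numbers circa 60-80), the sketch below computes the Reynolds number and the fully developed laminar Darcy friction factor f = 64/Re; the property values are illustrative, roughly in the range of a warm engine oil.

```python
# Laminar mini-channel check: Reynolds number and Darcy friction factor.
rho = 860.0     # density, kg/m^3 (illustrative, engine-oil-like)
mu = 0.05       # dynamic viscosity, Pa*s (illustrative)
v = 2.0         # mean velocity, m/s
d = 2.0e-3      # hydraulic diameter, m

re = rho * v * d / mu
f = 64.0 / re if re < 2300.0 else float("nan")   # laminar branch only
print(f"Re = {re:.0f}, Darcy friction factor = {f:.2f}")
```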
Procedia PDF Downloads 176
193 Gender Policies and Political Culture: An Examination of the Canadian Context
Authors: Chantal Maille
Abstract:
This paper is about gender-based analysis plus (GBA+), an intersectional gender policy used in Canada to assess the impact of policies and programs on men and women from different origins. It looks at Canada's political culture to explain the nature of its gender policies. GBA+ is defined as an analysis method that makes it possible to assess the eventual effects of policies, programs, services, and other initiatives on women and men of different backgrounds because it takes account of gender and other identity factors. The 'plus' in the name serves to emphasize that GBA+ goes beyond gender to include an examination of a wide range of other related identity factors, such as age, education, language, geography, culture, and income. The point of departure for GBA+ is that women and men are not homogeneous populations and gender is never the only factor in defining a person's identity; rather, it interacts with factors such as ethnic origin, age, disability, where the person lives, and other aspects of individual and social identity. GBA+ takes account of these factors and thus challenges notions of similarity or homogeneity within populations of women and men. Comparative analysis based on sex and gender may serve as a gateway to studying a given question, but women, men, girls, and boys do not form homogeneous populations. In the 1990s, intersectionality emerged as a new feminist framework. The popularity of the notion of intersectionality corresponds to a time when, in hindsight, the damage done to minoritized groups by state disengagement policies, in concert with the global intensification of neoliberalism, and vice versa, can be measured. Although GBA+ constitutes a form of intersectionalization of GBA, it must be understood that the two frameworks do not spring from a similar logic. Intersectionality first emerged as a dynamic analysis of differences between women that was oriented toward change and social justice, whereas GBA is a technique developed by state feminists in the context of analyzing governmental policies, aiming to promote equality between men and women. It can nevertheless be assumed that there might be interest in such a policy and program analysis grid that is decentred from gender and offers enough flexibility to take account of a group of inequalities. In terms of methodology, the research is supported by a qualitative analysis of governmental documents about GBA+ in Canada. The research findings identify links between Canadian gender policies and the country's political culture. In Canada, diversity has been taken into account as an element at the basis of gendered analysis of public policies since 1995. The GBA+ adopted by the government of Canada conveys an openness to intersectionality and a sensitivity to multiculturalism. The Canadian Multiculturalism Act, adopted in 1988, proposes to recognize the fact that multiculturalism is a fundamental characteristic of the Canadian identity and heritage and constitutes an invaluable resource for the future of the country. In conclusion, Canada's distinct political culture can be associated with the specific nature of its gender policies.
Keywords: Canada, gender-based analysis, gender policies, political culture
Procedia PDF Downloads 222
192 The Return of the Witches: A Class That Motivates the Analysis of Gender Bias in Engineering
Authors: Veronica Botero, Karen Ortiz
Abstract:
The Faculty of Mines of the National University of Colombia, Medellín Campus, is a faculty with 136 years of history; it represents one of the most important study centers in the country in the field of engineering and scientific research, as well as a reference at the global, national, and Latin American levels in this matter. Despite being a faculty with so many years of history, one that has trained a large number of graduates under the traditional mechanistic and androcentric paradigm, which reproduces the logic of the traditional scientific method and a differentiated and rigid subject-object divide in research, among other binarisms, it has also been the place where professors and students have become aware of the need to transform this paradigm in engineering and to focus on the sustainability of diversity and the well-being of the natural and social systems that inhabit the territories, and it has opened possibilities for the implementation of classes that address feminist pedagogical theories and practices. The class The Return of the Witches is an initiative that constitutes an important training exercise, providing students with the study of feminisms, the importance of closing gender gaps, and critical readings of the traditional paradigm of engineering. The objective of this article is to present a systematization of the experience of designing, implementing, and developing this elective class: describing the tensions that arose when a subject of this style was created and proposed in the Department of Geosciences and Environment of the Faculty of Mines in 2022; the reactions of the groups of students who have taken it and their perceptions and opinions about ecofeminism as a proposal for critical analysis and practice in relation to the environment; and, above all, how their readings of the world have changed after having studied this subject for a semester. The pedagogical journey and the feminist methodologies that have been designed and adjusted over two years of work will be explained based on the shared situated knowledge of the students and of the two teachers who teach the course, who pose challenges to the dominant ideology in engineering: one of them is trained in human sciences and feminist studies, and the other, although trained in civil engineering and geosciences, is a woman of diverse sexual orientation and the first professor to have assumed the position of dean in the 135 years of history of the Faculty. The transformations in the life experience of the students are revealing, since they affirm that the training process is forceful and powerful in outlining a much more qualified and critical professional profile that contributes to the transformation of gender gaps in the country. This class is therefore a challenge in a Faculty of Engineering that still presents a dominant ideology on gender that has not been questioned or challenged before.
Keywords: feminisms, gender equality, gender bias, engineering for life Manifiesto
Procedia PDF Downloads 70
191 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, which is the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. In the first dataset, protein regression analysis is the problem to solve, while variety classification analysis is the problem to solve in the second dataset. Deep convolutional neural networks (CNNs) have the potential to utilize the spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
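Of the preprocessing techniques listed, standard normal variate is simple enough to show in full: each spectrum is centered and scaled individually, a common chemometric correction for multiplicative scatter effects. The pixel cube below is random stand-in data rather than an actual NIR-HSI measurement.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: per-spectrum centering and scaling."""
    mean = spectra.mean(axis=-1, keepdims=True)
    std = spectra.std(axis=-1, keepdims=True)
    return (spectra - mean) / std

# Stand-in NIR-HSI pixels: (number of pixels, wavelength bands).
cube = np.random.default_rng(0).uniform(0.1, 1.0, size=(1024, 224))
corrected = snv(cube)
print(corrected.std(axis=-1)[:3])   # each spectrum now has unit variance
```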
Procedia PDF Downloads 99
190 Analyzing Water Waves in Underground Pumped Storage Reservoirs: A Combined 3D Numerical and Experimental Approach
Authors: Elena Pummer, Holger Schuettrumpf
Abstract:
To date, underground pumped storage plants, an outstanding alternative to classical pumped storage plants, do not exist. They are needed to ensure the required balance between the production and demand of energy. As short- to medium-term storage, pumped storage plants have long been operated economically, but their expansion is locally limited, in particular by the required topography and extensive human land use. Using underground reservoirs instead of surface lakes could increase the expansion options. While fulfilling the same functions, underground reservoirs give rise to several hydrodynamic processes that shape their specific design and must be implemented in the planning process of such systems. A combined 3D numerical and experimental approach yields previously unknown results about the occurring wave types and their behavior in dependence on different design and operating criteria. For the 3D numerical simulations, OpenFOAM was used and combined with an experimental approach in the laboratory of the Institute of Hydraulic Engineering and Water Resources Management at RWTH Aachen University, Germany. Using the finite-volume method and an explicit time discretization, a RANS simulation (k-ε) was run. Convergence analyses for different time discretizations, different meshes, etc., and clear comparisons between the two approaches lead to the result that the numerical and experimental models can be combined and used as a hybrid model. Undular bores, partly with secondary waves, and breaking bores occurred in the underground reservoir. Different water levels and discharges change the global effects, defined as the time-dependent average of the water level, as well as the local processes, defined as the single, local hydrodynamic processes (water waves). Design criteria such as branches, directional changes, changes in cross-section or bottom slope, as well as changes in roughness, have a great effect on the local processes, while the global effects remain unaffected. Design calculations for underground pumped storage plants were developed on the basis of existing formulae and the results of the hybrid approach. Using the design calculations, reservoir heights as well as oscillation periods can be determined, leading to knowledge of the construction and operation possibilities of the plants. Consequently, future plants can be hydraulically optimized by applying the design calculations to the local boundary conditions. Keywords: energy storage, experimental approach, hybrid approach, undular and breaking bores, 3D numerical approach
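For readers unfamiliar with the bore classification used here, the sketch below applies the classical open-channel criterion: a positive surge is typically undular when its inflow Froude number is below roughly 1.4-1.8 and breaking above. The threshold value and the example numbers are assumptions for illustration, not results from the hybrid model.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def bore_froude(celerity, upstream_velocity, upstream_depth):
    """Froude number of a positive surge relative to the inflow, Fr1."""
    return (celerity + upstream_velocity) / math.sqrt(G * upstream_depth)

def classify_bore(fr1, threshold=1.6):
    """Classical criterion: undular below roughly 1.4-1.8, breaking above."""
    return "undular" if fr1 < threshold else "breaking"

def conjugate_depth_ratio(fr1):
    """Belanger-type relation for the depth ratio d2/d1 across the surge."""
    return 0.5 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)

# Hypothetical surge: celerity 2.5 m/s into 0.8 m of water moving at 0.4 m/s
fr1 = bore_froude(celerity=2.5, upstream_velocity=0.4, upstream_depth=0.8)
print(round(fr1, 2), classify_bore(fr1), round(conjugate_depth_ratio(fr1), 2))
```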
Procedia PDF Downloads 213
189 The Link between Anthropometry and Fat-Based Obesity Indices in Pediatric Morbid Obesity
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Anthropometric measurements are essential for obesity studies. Waist circumference (WC) is the most frequently used measure, and along with hip circumference (HC), it appears in most equations derived for the evaluation of obese individuals. Morbid obesity is the most severe clinical form of obesity, and such individuals may also exhibit clinical findings leading to metabolic syndrome (MetS). It therefore becomes necessary to discriminate between morbid obese children with (MOMetS+) and without (MOMetS-) MetS. Almost all obesity indices can differentiate obese (OB) children from children with normal body mass index (N-BMI); however, not all of them are capable of making the MOMetS+/MOMetS- distinction. A recently introduced anthropometric obesity index, (waist circumference + hip circumference)/2 ((WC+HC)/2), was confirmed to differentiate OB children from those with N-BMI; however, it has not been tested for clinical usage in the differential diagnosis of MOMetS+ and MOMetS-. This study was designed to find out the suitability of (WC+HC)/2 for this purpose and to compare its possible preponderance over some other anthropometric or fat-based obesity indices. Forty-five MOMetS+ and forty-five MOMetS- children were included in the study. Participants submitted informed consent forms. The study protocol was approved by the Non-interventional Ethics Committee of Tekirdag Namik Kemal University. Anthropometric measurements were performed. Body mass index (BMI), waist-to-hip circumference (W/H), (WC+HC)/2, trunk-to-leg fat ratio (TLFR), trunk-to-appendicular fat ratio (TAFR), (trunk fat + leg fat)/2 ((trunk+leg fat)/2), diagnostic obesity notation model assessment index-2 (D2I), and fat mass index (FMI) were calculated for both groups. Study data were analyzed statistically, and p<0.05 was accepted as the level of statistical significance. Statistically higher BMI, WC, (WC+HC)/2, and (trunk+leg fat)/2 values were found in MOMetS+ children than in MOMetS- children. No statistically significant difference was detected for W/H, TLFR, TAFR, D2I, and FMI between the two groups. The lack of difference between the groups in terms of FMI and D2I pointed out that the recently developed fat-based index, (trunk+leg fat)/2, gives much more valuable information in the evaluation of MOMetS+ and MOMetS- children. Upon evaluation of the correlations, (WC+HC)/2 was strongly correlated with D2I and FMI in both the MOMetS+ and MOMetS- groups. Neither D2I nor FMI was correlated with W/H. Strong correlations were calculated between (WC+HC)/2 and (trunk+leg fat)/2 in both the MOMetS- (r=0.961; p<0.001) and MOMetS+ (r=0.936; p<0.001) groups. Partial correlations between (WC+HC)/2 and (trunk+leg fat)/2 after controlling for the effect of basal metabolic rate were r=0.726; p<0.001 in the MOMetS- group and r=0.932; p<0.001 in the MOMetS+ group; the correlation in the latter group was higher than in the first. In conclusion, the recently developed anthropometric obesity index (WC+HC)/2 and the fat-based obesity index (trunk+leg fat)/2 showed preponderance over previously introduced classical obesity indices such as W/H, D2I, and FMI in the differential diagnosis of MOMetS+ and MOMetS- children. Keywords: children, hip circumference, metabolic syndrome, morbid obesity, waist circumference
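Most of the indices compared above follow directly from their definitions; a minimal sketch is given below. The units and variable names are assumptions, and D2I is omitted since its formula is not given in the abstract.

```python
def obesity_indices(wc, hc, weight_kg, height_m,
                    trunk_fat, leg_fat, arm_fat, total_fat):
    """Indices discussed above. Assumed units: circumferences in cm,
    fat masses in kg, height in m; D2I omitted (formula not given here)."""
    return {
        "BMI": weight_kg / height_m ** 2,
        "W/H": wc / hc,                           # waist-to-hip ratio
        "(WC+HC)/2": (wc + hc) / 2,
        "TLFR": trunk_fat / leg_fat,              # trunk-to-leg fat ratio
        "TAFR": trunk_fat / (arm_fat + leg_fat),  # trunk-to-appendicular fat
        "(trunk+leg fat)/2": (trunk_fat + leg_fat) / 2,
        "FMI": total_fat / height_m ** 2,         # fat mass index
    }

# Hypothetical measurements for a single child, for illustration only
print(obesity_indices(wc=96, hc=104, weight_kg=78, height_m=1.45,
                      trunk_fat=14.2, leg_fat=9.8, arm_fat=4.1,
                      total_fat=30.5))
```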
Procedia PDF Downloads 289
188 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine
Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland
Abstract:
The force-velocity profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty in obtaining force data at high velocities, may bring into question the accuracy of the F-V slope along with predictions pertaining to the maximum force that the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as of V₀, has been shown to be relatively poor in comparison to F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of a novel instrumented leg press machine which enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, was evaluated. Sixteen male strength-trained individuals (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg. Visits two to four saw the participants carry out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to the dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring F- and V-related variables across a range of loads, including velocities closer to V₀, when compared to some of the findings within the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for SFV and F₀, respectively, with reliability for V₀ being good using a linear model but poor using a 2nd-order polynomial model. As such, a polynomial regression model may be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, due to only a 5% difference between F₀ and the obtained IsoMax values, with a linear model being best suited to predict V₀. Keywords: force-velocity, leg-press, power-velocity, profiling, reliability
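To make the profiling computation concrete, the sketch below fits the classical linear force-velocity model F(v) = F₀ + SFV·v and derives F₀, V₀ = -F₀/SFV, and maximal power Pmax = F₀V₀/4. The load-velocity data are hypothetical, and the study's polynomial variant would simply swap in a 2nd-order fit.

```python
import numpy as np

def fv_profile(velocities, forces):
    """Fit F(v) = F0 + Sfv * v (Sfv < 0 for a descending F-V relationship)
    and derive F0, V0 and Pmax from the linear model."""
    sfv, f0 = np.polyfit(velocities, forces, 1)  # slope, intercept
    v0 = -f0 / sfv
    pmax = f0 * v0 / 4.0
    return {"F0_N": f0, "V0_m_per_s": v0, "SFV": sfv, "Pmax_W": pmax}

# Hypothetical mean force (N) / velocity (m/s) pairs across ~10-90% 1RM loads
v = np.array([0.25, 0.55, 0.85, 1.20, 1.55])
f = np.array([2400.0, 2050.0, 1700.0, 1300.0, 900.0])
print(fv_profile(v, f))
```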
Procedia PDF Downloads 58
187 The Audiovisual Media as a Metacritical Ludicity Gesture in the Musical-Performatic and Scenic Works of Caetano Veloso and David Bowie
Authors: Paulo Da Silva Quadros
Abstract:
This work aims to point out comparative parameters between the artistic production of two exponents of the contemporary popular culture scene: Caetano Veloso (Brazil) and David Bowie (England). Both Caetano Veloso and David Bowie were pioneers in establishing an aesthetic game between various artistic expressions at the service of the music-visual scene, that is, conceptual interconnections between several forms of aesthetic process, such as fine arts, theatre, cinema, poetry, and literature. There are also correlations in their expressive attitudes toward art, especially regarding the dialogue between the fields of art and politics (concern for human rights, human dignity, racial issues, tolerance, gender issues, and sexuality, among others); the constant tension and cunning game between the market, free expression, and critical sense; and the sophisticated, playful mechanisms of metalanguage and aesthetic metacritique. In fact, the two almost came to collaborate in the 1970s, when Caetano was in exile in England and both had the same music producer, who noticed similar aesthetic qualities in their work and tried to bring them together - an affinity later glimpsed by some music critics. Among the most influential issues in Caetano's and Bowie's game of artistic-aesthetic expression are, for example, the ideas advocated by the sensation of strangeness (Albert Camus), art as transcendence (Friedrich Nietzsche), and the deconstruction, reconstruction, and auratic reconfiguration of artistic signs (Walter Benjamin and Andy Warhol). To deepen the theoretical issues, the following authors will be used as supporting interpretative references: Hans-Georg Gadamer, Immanuel Kant, Friedrich Schiller, and Johan Huizinga. In addition to the aesthetic meanings of the Ars Ludens character of the two artists, the following supporting references will also be added: the question of technique (Martin Heidegger), the logic of sense (Gilles Deleuze), art as an event and the sense of the gesture of art (Maria Teresa Cruz), the society of the spectacle (Guy Debord), Verarbeitung and Durcharbeitung (Sigmund Freud), and the poetics of interpretation and the sign of relation (Cremilda Medina). The purpose of these interpretative references is to understand, from a cultural reading perspective (cultural semiology), some significant elements in the dynamics of the aesthetic and media interconnections of both artists, which made them some of the most influential interlocutors in contemporary musical-aesthetic thought, as a playful, vivid experience of life and art. Keywords: Caetano Veloso, David Bowie, music aesthetics, symbolic playfulness, cultural reading
Procedia PDF Downloads 166
186 The Socio-Emotional Vulnerability of Professional Rugby Union Athletes
Authors: Hannah Kuhar
Abstract:
This paper delves into the attitudes of professional and semi-professional rugby union athletes in regard to socio-emotional vulnerability, or the willingness to express the full spectrum of human emotion in a social context. Like all humans, athletes of all sports regularly experience feelings of shame, powerlessness, and loneliness, and often feel unable to express such feelings due to factors including a lack of situational support, the absence of adequate expressive language, and a lack of resources. To this author's knowledge, however, no previous research has considered the particular demographic of professional rugby union athletes, despite the sport's immense popularity and economic contribution to global communities. Hence, this paper aims to extend previous research by exploring the experiences of professional rugby union athletes and their unwillingness and inability to express socio-emotional vulnerability. By offering a better understanding of vulnerability in rugby and sport, this paper contributes to the growing field of mental health and wellbeing research, particularly to the emerging themes of resilience and belonging. Based on qualitative fieldwork conducted over a period of seven months across France and Australia, via semi-structured interviews and observation, this work uses the field theory framework of Pierre Bourdieu to construct a multidisciplinary analysis. Approaching issues of gender, sexuality, physicality, education, and family, this paper shows that socio-emotional vulnerability is experienced by all players, regardless of their background, in a variety of ways. Common themes and responses are drawn out to show the universality of rugby's pitfalls, which previous work has treated only within specific demographics, in isolation from their broader contexts. As the author is themselves a semi-professional athlete, unique 'insider' access facilitates a deeper and more comprehensive understanding of first-hand athlete experiences, often unexplored within the academic arena. The primary contention of this paper is that by celebrating socio-emotional vulnerability, there emerges an opportunity to improve on-field team outcomes. Ultimately, players play better when they feel supported by their teammates, and this logic extends to the outcomes of the team when socio-emotional team initiatives are widely embraced. The creation of such a culture requires deliberate and purposeful effort, with high player ownership and buy-in. Further study in this field may assist teams to better understand the elements which contribute to strong team culture and strong results on the pitch. Keywords: rugby, vulnerability, athletes, France, Bourdieu
Procedia PDF Downloads 137
185 CO₂ Conversion by Low-Temperature Fischer-Tropsch
Authors: Pauline Bredy, Yves Schuurman, David Farrusseng
Abstract:
To fulfill climate objectives, the production of synthetic e-fuels using CO₂ as a raw material appears to be part of the solution. In particular, the Power-to-Liquid (PtL) concept combines CO₂ with hydrogen supplied by water electrolysis powered by renewable sources, and is currently gaining interest as it allows the production of sustainable, fossil-free liquid fuels. The process discussed here is an upgrade of the well-known Fischer-Tropsch synthesis. The concept involves two cascade reactions in one pot: first, the conversion of CO₂ into CO via the reverse water gas shift (RWGS) reaction, followed by the Fischer-Tropsch synthesis (FTS). Instead of using an Fe-based catalyst, which can carry out both reactions, we have chosen the strategy of decoupling the two functions (RWGS and FT) on two different catalysts within the same reactor. The FTS should shift the equilibrium of the RWGS reaction (which alone would be limited to 15-20% conversion at 250°C) by converting the CO into hydrocarbons. This strategy should enable optimization of the catalyst pair and thus a lower reaction temperature, thanks to the equilibrium shift, in order to gain selectivity in the liquid fraction. The challenge lies in maximizing the activity of the RWGS catalyst, but also in making the FT catalyst highly selective. Methane production is the main concern, as the energy barrier of CH₄ formation is generally lower than that of the RWGS reaction, so the goal is to minimize methane selectivity. Here we report the study of different combinations of copper-based RWGS catalysts with different cobalt-based FTS catalysts. We investigated their behavior under mild process conditions by the use of high-throughput experimentation. Our results show that at 250°C and 20 bar, cobalt catalysts mainly act as methanation catalysts. Indeed, CH₄ selectivity never drops below 80% despite the addition of various promoters (Nb, K, Pt, Cu) to the catalyst and its coupling with active RWGS catalysts. However, we show that the activity of the RWGS catalyst has an impact and can lead to selectivities toward longer hydrocarbon chains (C₂⁺) of about 10%. We studied the influence of the reduction temperature on the activity and selectivity of the tandem catalyst system. Similar selectivity and conversion were obtained at reduction temperatures between 250-400°C. This raises the question of the active phase of the cobalt catalysts, which is currently being investigated by magnetic measurements and DRIFTS. Better results are expected from coupling with a more selective FT catalyst; this was achieved using a cobalt/iron FTS catalyst. The CH₄ selectivity dropped to 62% at 265°C, 20 bar, and a GHSV of 2500 ml/h/gcat. We propose that the conditions used for the cobalt catalysts could have driven this methanation, because these catalysts are known to perform best around 210°C in classical FTS, whereas iron catalysts are more flexible but are also known to have RWGS activity. Keywords: cobalt-copper catalytic systems, CO₂-hydrogenation, Fischer-Tropsch synthesis, hydrocarbons, low-temperature process
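To see why RWGS alone is equilibrium-limited at low temperature, a back-of-the-envelope sketch using a van 't Hoff estimate is given below; the ΔH and ΔS values are rough literature figures assumed for illustration, not the authors' data.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rwgs_equilibrium_conversion(T, h2_to_co2=1.0, dH=41_200.0, dS=42.0):
    """Equilibrium CO2 conversion for CO2 + H2 <=> CO + H2O.
    K = exp(-dG/RT) with dG ~ dH - T*dS (rough assumed values, J/mol and
    J/(mol*K)). Since moles are conserved, K = x^2 / ((1 - x)(r - x)) for
    feed ratio r = H2/CO2 and conversion x, i.e. the quadratic
    (1 - K)x^2 + K(1 + r)x - K*r = 0."""
    K = math.exp(-(dH - T * dS) / (R * T))
    r = h2_to_co2
    a, b, c = 1.0 - K, K * (1.0 + r), -K * r
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

print(rwgs_equilibrium_conversion(523.15))       # ~10% at 250 C, equimolar feed
print(rwgs_equilibrium_conversion(523.15, 3.0))  # ~17% with excess H2
```

With these rough constants the estimate lands in the same range as the 15-20% conversion quoted above once an H₂ excess is assumed, which is the limitation the in-situ CO removal by FTS is meant to lift.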
Procedia PDF Downloads 58
184 Revolutions and Cyclic Patterns in Chinese Town Planning: The Case-Study of Shenzhen
Authors: Domenica Bona
Abstract:
Colin Chant and David Goodman argue that historians of Chinese pre-industrial cities tend to underestimate revolutions and overestimate cyclic patterns: periods of peace and prosperity in the early part of each dynasty, followed by peasants' rebellions and upheavals. Boyd described these cyclic patterns as part of the background of Chinese town planning and architecture. Thus, old ideals of city planning - square plan, southward orientation, and a palace along the central axis - are revived again and again in the ascendant phases of several dynastic cycles (e.g., Chang'an, Kaifeng, and Beijing). Along this line of thought, my paper questions the relationship between the 'magic square rule' and modern Chinese urban planning. As a matter of fact, the classical theme of 'cosmic Taoist urbanism' is still a reference for planning cities and new urban developments whenever there is the intention to express nationalist ideals and 'cultural straightforwardness.' Besides, some case studies can be related to 'modern dynasties': the first Republic under the Kuo Min Tang, the red People's Republic, and the post-Maoist open country of Deng Xiao Ping. Considering the project for the new capital of Nanjing in the Thirties, Beijing's Tianan Men area in the Fifties, and Shenzhen's Futian CBD in the late 20th century, I argue that cyclic patterns are still in place, though with deformations related to westernization, private interests, and a lack of spirituality. How far are new Chinese cities westernized - or do they simply seem to be? Symbolism, invisible frameworks, repeating features, and behavioural patterns make urban China just 'superficially' western. This can be well noticed in cities previously occupied by foreigners, like Hong Kong, or in newly founded ones, like Shenzhen, where both Asian and non-Asian people can feel the gender-shift from New-York-like landscapes to something else. Current planning in the main metropolitan areas shows a blurred relationship between public policies and private investments: two levels of decision and action, one addressing the larger scale and infrastructures, the other concerning the micro scale and the development of single plots. While zoning is instrumental in this process, master plans are often laid out over very poor cartography, so much so that any relation between the formal characters of new cities and the centuries-old structure of the related territory gets lost. Keywords: China, contemporary cities, cultural heritage, shenzhen, urban planning
Procedia PDF Downloads 361
183 Dynamic Wetting and Solidification
Authors: Yulii D. Shikhmurzaev
Abstract:
The modelling of non-isothermal free-surface flows coupled with the solidification process has become a topic of intensive research with the advent of additive manufacturing, where complex 3-dimensional structures are produced by successive deposition and solidification of microscopic droplets of different materials. The issue is that both the spreading of liquids over solids and the propagation of the solidification front into the fluid and along the solid substrate pose fundamental difficulties for mathematical modelling. The first of these processes, known as 'dynamic wetting', leads to the well-known 'moving contact-line problem' where, as shown recently both experimentally and theoretically, the contact angle formed by the free surface with the solid substrate is not a function of the contact-line speed but rather a functional of the flow field. The modelling of the propagating solidification front requires a generalization of the classical Stefan problem that can describe the onset of the process and the non-equilibrium regime of solidification. Furthermore, given that dynamic wetting and solidification occur concurrently and interactively, they should be described within the same conceptual framework. The present work addresses this formidable problem and presents a mathematical model capable of describing this key element of additive manufacturing in a self-consistent and singularity-free way. The model is illustrated by simple examples highlighting its main features. The main idea of the work is that both dynamic wetting and solidification, as well as some other fluid flows, are particular cases of a general class of flows in which interfaces form and/or disappear. This conceptual framework allows one to derive a mathematical model from first principles using the methods of irreversible thermodynamics. Crucially, the interfaces are not considered as zero-mass entities introduced via a Gibbsian 'dividing surface' but as 2-dimensional surface phases produced in the continuum limit in which the thickness of what is physically an interfacial layer vanishes, with its properties characterized by 'surface' parameters (surface tension, surface density, etc.). This approach allows for mass exchange between the surface and bulk phases, which is the essence of interface formation. As shown numerically, the onset of solidification is preceded by a pure interface-formation stage, whilst the Stefan regime is the final stage, where the temperature at the solidification front asymptotically approaches the solidification temperature. The developed model can also be applied to flows with substrate melting as well as complex flows where both types of phase transition take place. Keywords: dynamic wetting, interface formation, phase transition, solidification
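For orientation, the classical Stefan condition that such a generalized model must recover in the equilibrium limit is written below in standard notation (assumed here, not the paper's own formulation): the latent heat released by the advancing front balances the jump in conductive heat flux across it, with the front held at the melting temperature.

```latex
% Classical Stefan condition at the solidification front \Gamma:
% \rho - density, L - latent heat of fusion, v_n - normal front speed,
% k_s, k_l - conductivities of solid and liquid, T_m - melting temperature.
\rho L \, v_n
  = k_s \left.\frac{\partial T_s}{\partial n}\right|_{\Gamma}
  - k_l \left.\frac{\partial T_l}{\partial n}\right|_{\Gamma},
\qquad
T_s = T_l = T_m \quad \text{on } \Gamma .
```

The non-equilibrium generalization described in the abstract relaxes the last equality: the front temperature approaches T_m only asymptotically, in the final Stefan regime.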
Procedia PDF Downloads 65
182 Photoswitchable and Polar-Dependent Fluorescence of Diarylethenes
Authors: Sofia Lazareva, Artem Smolentsev
Abstract:
Fluorescent photochromic materials attract strong interest due to their possible applications in organic photonics, such as optical logic systems, optical memory, and visualizing sensors, as well as in the characterization of polymers and biological systems. In photochromic fluorescence switching systems, the emission of the fluorophore is modulated between 'on' and 'off' via the photoisomerization of photochromic moieties, resulting in effective resonance energy transfer (FRET). In the current work, we have studied both the photochromic and fluorescent properties of several diarylethenes. It was found that the coloured forms of these compounds are not fluorescent because of efficient intramolecular energy transfer. The spectral and photochromic parameters of the investigated substances have been measured in five solvents of different polarity. The quantum yields of the photochromic transformation A↔B, ΦA→B and ΦB→A, as well as the extinction coefficients of the B isomers, were determined by the kinetic method. It was found that the photocyclization quantum yield of all compounds decreases with increasing solvent polarity. In addition, the solvent polarity was revealed to affect the fluorescence significantly. Increasing the solvent dielectric constant was found to result in a strong shift of the emission band position from 450 nm (n-hexane) to 550 nm (DMSO and ethanol) for all three compounds. Moreover, the emission, intense in polar solvents, becomes weak and hardly detectable in n-hexane. The only exception to the described dependence is the abnormally low fluorescence quantum yield in ethanol, presumably caused by the loss of the electron-donating properties of the nitrogen atom due to protonation. The effect of protonation was also confirmed by the addition of concentrated HCl to the solution, resulting in the complete disappearance of the fluorescent band. Excited-state dynamics were investigated by ultrafast optical spectroscopy methods. Kinetic curves of excited-state absorption and fluorescence decays were measured, and the lifetimes of transient states were calculated from the measured data. The mechanism of the ring-opening reaction was found to be polarity dependent. Comparative analysis of the kinetics measured in acetonitrile and hexane reveals differences in the relaxation dynamics after the laser pulse. Most importantly, two decay processes are present in acetonitrile, whereas only one is present in hexane. This fact supports the assumption, made on the basis of preliminary steady-state experiments, that stabilization of a TICT state occurs in polar solvents. Thus, the results achieved support the hypothesis of a two-channel mechanism of energy relaxation in the compounds studied. Keywords: diarylethenes, fluorescence switching, FRET, photochromism, TICT state
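The distinction between one and two decay channels is typically drawn by comparing mono- and biexponential fits of the transient traces. The sketch below illustrates this comparison on synthetic data; the lifetimes, amplitudes, and noise level are invented for illustration and do not reproduce the paper's fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a, tau):
    return a * np.exp(-t / tau)

def bi_exp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Hypothetical transient decay trace (time in ps) with two channels
t = np.linspace(0.0, 500.0, 400)
rng = np.random.default_rng(0)
signal = bi_exp(t, 0.7, 15.0, 0.3, 180.0) + 0.01 * rng.normal(size=t.size)

p_mono, _ = curve_fit(mono_exp, t, signal, p0=(1.0, 50.0))
p_bi, _ = curve_fit(bi_exp, t, signal, p0=(0.5, 10.0, 0.5, 100.0))

# A clearly lower residual sum of squares for the biexponential model
# points to two relaxation channels (the polar-solvent case above)
rss = lambda model, p: np.sum((signal - model(t, *p)) ** 2)
print(rss(mono_exp, p_mono), rss(bi_exp, p_bi))
```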
Procedia PDF Downloads 678
181 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock
Authors: Vahid Bairami Rad
Abstract:
Making agricultural fields intelligent allows the temperature, humidity, and other variables affecting the growth of agricultural products to be monitored and controlled online, from a mobile phone or computer. Smart fields and gardens are one of the best ways to optimize agricultural equipment, with a direct effect on the growth of plants and agricultural products. Smart farms, built on the Internet of Things and artificial intelligence, are the topic we discuss here. Agriculture is becoming smarter every day. From large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results, and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data, and modern farmers have more tools to collect intelligent data than ever before. Data on soil chemistry allows informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers allow irrigation processes to be optimized while reducing the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on: almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things (IoT) is at the center of this great transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors; embedded in battery-powered IoT devices, they can operate for years with access to low-power, cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now securely manage existing devices at scale. IoT cloud services likewise provide a set of application enablement services that developers can easily use to build application business logic. These developments have created powerful new Internet of Things applications that can be used in various industries, such as agriculture and building smart farms. But the question is: what makes today's farms truly smart farms? Let us put the question another way: when will the technologies associated with smart farms reach the point where the intelligence they provide exceeds that of experienced, professional farmers? Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, arduino Uno
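As a toy example of the closed-loop logic described above, here is a minimal soil-moisture irrigation controller in Python; the threshold, polling cadence, and simulated sensor are assumptions, and on real hardware the two stand-in functions would wrap an ADC read and a relay driver.

```python
import random
import time

MOISTURE_THRESHOLD = 30.0  # percent volumetric water content (assumed setpoint)

def read_soil_moisture():
    # Stand-in for a real driver, e.g. an ADC reading from a capacitive probe
    return random.uniform(10.0, 60.0)

def set_valve(open_valve):
    # Stand-in for a relay/solenoid driver on the irrigation line
    print("valve", "OPEN" if open_valve else "CLOSED")

def irrigation_step():
    """One pass of a simple closed-loop controller: irrigate only when dry."""
    moisture = read_soil_moisture()
    set_valve(moisture < MOISTURE_THRESHOLD)
    return moisture

for _ in range(3):  # in a deployment this would run on a timer or scheduler
    irrigation_step()
    time.sleep(0.1)
```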
Procedia PDF Downloads 56