Search results for: reversible logic
66 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database
Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani
Abstract:
The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern margin of Eurasia, resulting in a considerably active seismic region. The NATO-funded Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project (BSHAP) (2007-2011, 2012-2015) supported the preparation of new seismic hazard maps of the Western Balkans, but when inspecting the seismic hazard models later produced by these countries at a national scale, significant differences in design PGA values are observed at the borders, for instance North Albania-Montenegro, South Albania-Greece, etc. Since the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), which is generally the component with the highest impact on seismic hazard assessment. At the time of the project, only a modest database was available, namely 672 three-component records, whereas nowadays this strong motion database has grown considerably, up to 20,939 records with Mw ranging from 3.7 to 7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong motion database showed a lack of recordings in the moderate-to-large magnitude and short distance ranges; therefore, there is a need to re-evaluate the GMPEs in light of the recently updated database and the new generations of GMMs. In some cases, it was observed that some events were more extensively documented in one database than the other, like the 1979 Montenegro earthquake, with a considerably larger number of records in the BSHAP Analogue SM database than in ESM23. Therefore, the strong motion flat-file provided by the Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project was merged with the ESM23 database for the polygon studied in this project. After performing the preliminary residual analysis, the candidate GMPEs were identified. This process was done using the GMPE performance metrics available within the SMT in the OpenQuake Platform: the Likelihood model and Euclidean Distance-Based Ranking (EDR) were used. Finally, a GMPE logic tree was selected for this study and, following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum.
Keywords: residual analysis, GMPE, Western Balkan, strong motion, OpenQuake
Procedia PDF Downloads 88
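As a rough illustration of the weighting step, the average sample log-likelihood (LLH) approach of Scherbaum and colleagues scores each candidate GMPE by how well its normalised residuals match the standard normal distribution the model assumes, then converts the scores into logic-tree weights (lower LLH, larger weight). The Python sketch below is not the study's code; the candidate names and residual samples are invented for illustration.

```python
# Minimal sketch of average-sample-log-likelihood (LLH) weighting
# applied to hypothetical normalised GMPE residuals.
import numpy as np
from scipy.stats import norm

def llh(residuals):
    """Average negative log2-likelihood of normalised residuals
    under the standard normal the GMPE assumes."""
    return -np.mean(np.log2(norm.pdf(residuals)))

rng = np.random.default_rng(0)
candidates = {
    "GMPE_A": rng.normal(0.0, 1.0, 500),   # well-calibrated
    "GMPE_B": rng.normal(0.3, 1.2, 500),   # biased, over-dispersed
    "GMPE_C": rng.normal(0.0, 0.8, 500),   # under-dispersed
}

llh_values = {name: llh(res) for name, res in candidates.items()}
# Lower LLH -> better fit -> higher weight in the logic tree.
raw = {name: 2.0 ** (-v) for name, v in llh_values.items()}
total = sum(raw.values())
weights = {name: r / total for name, r in raw.items()}
print(llh_values, weights)
```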
65 Economic Policy to Promote Small and Medium-Sized Enterprises in Georgia in the Post-Pandemic Period
Authors: Gulnaz Erkomaishvili
Abstract:
Introduction: The paper assesses the impact of the COVID-19 pandemic on the activities of small and medium-sized enterprises in Georgia, identifies their problems, and analyzes the state's economic policy measures. During the pandemic, entrepreneurs named the imposition of restrictions, access to financial resources, shortage of qualified personnel, high tax rates, unhealthy competition in the market, etc. as the main challenges. The Georgian government has had to take special measures to mitigate the crisis impact caused by the pandemic: for example, in 2020 it mobilized more than 1.6 billion GEL for various measures to support entrepreneurs. Based on the research, a small and medium-sized entrepreneurship development strategy is presented, corresponding conclusions are drawn, and recommendations are developed. Objectives: The object of research is small and medium-sized enterprises and the economic-political decisions aimed at their promotion. Methodology: This paper uses general and specific methods, in particular analysis, synthesis, induction, deduction, scientific abstraction, comparative and statistical methods, as well as experts' evaluation. In-depth interviews with experts were conducted to determine quantitative and qualitative indicators; publications of the National Statistics Office of Georgia are used to determine the regularity between analytical and statistical estimations. Theoretical and applied research by international organizations and scientist-economists is also used. Contributions: The COVID-19 pandemic has had a significant impact on small and medium-sized enterprises, for whom lockdown was a major challenge. Total sales volume decreased. At the same time, the innovative capabilities of enterprises and the volume of sales through remote channels have increased. As for the assessment of state support measures by small and medium-sized entrepreneurs, despite the existence of support programs, a large number of entrepreneurs still do not evaluate the measures taken by the state positively. Among the desirable state measures that would improve their activities, entrepreneurs who assessed the state's activity negatively or largely negatively named: tax incentives/exemption from certain taxes at the initial stage; periodic trainings and courses in digital technologies and marketing to improve the qualification of employees; logic and adequacy of criteria when awarding grants and funding; facilitating the finding of investors; less bureaucracy, etc.
Keywords: small and medium enterprises, small and medium entrepreneurship, economic policy for small and medium entrepreneurship development, government regulations in Georgia, COVID-19 pandemic
Procedia PDF Downloads 155
64 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including the reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover the valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance the oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance the knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operation, which leads to better designs, higher oil recovery and greater economic return of future wells in the unconventional oil reserves.
Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves
Procedia PDF Downloads 283
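To make the clustering and dimensionality-reduction steps concrete, here is a minimal Python sketch on synthetic well data; the feature names, value ranges and cluster count are assumptions for illustration, not the study's database.

```python
# Minimal sketch of the K-means and PCA steps described above for
# well completion/production data (synthetic stand-in dataset).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical features per well: lateral length, frac stages,
# proppant mass, porosity, 12-month cumulative oil.
X = rng.normal(size=(1000, 5)) * [500, 10, 1000, 0.02, 5e4] + \
    [2000, 30, 4000, 0.08, 1e5]

X_std = StandardScaler().fit_transform(X)

# Partition wells into groups with similar completion/reservoir character.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)

# Project onto principal components to expose the dominant variation.
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)
print(pca.explained_variance_ratio_, np.bincount(clusters))
```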
63 The Incoherence of the Philosophers as a Defense of Philosophy against Theology
Authors: Edward R. Moad
Abstract:
Al-Ghazali's Tahāfut al-Falāsifa is widely construed as an attack on philosophy in favor of theological fideism. Consequently, he has been blamed for the 'death of philosophy' in the Muslim world. 'Falsafa', however, is not philosophy itself, but rather a range of philosophical doctrines mainly influenced by or inherited from Greek thought. In these terms, this work represents a defense of philosophy against what we could call 'falsafic' fideism. In the introduction, Ghazali describes his target audience as not the falasifa, but a group of pretenders engaged in taqlid to a misconceived understanding of falsafa, including the belief that they were capable of demonstrative certainty in the field of metaphysics. He promises to use falsafa standards of logic (with which he independently agrees) to show that the falasifa failed to demonstratively prove many of their positions. Whether or not he succeeds in that, the exercise of subjecting alleged proofs to critical scrutiny is quintessentially philosophical, while uncritical adherence to a doctrine, in the name of its being 'philosophical', is decidedly unphilosophical. If we are to blame the intellectual decline of the Muslim world on someone's 'bad' way of thinking, rather than on more material historical circumstances (which is already a mistake), then blame more appropriately rests with modernist Muslim thinkers who, under the influence of orientalism (and like Ghazali's philosophical pretenders), mistook taqlid to the falasifa for philosophy itself. The discussion of the Tahāfut takes place in the context of an epistemic (and related social) hierarchy envisioned by the falasifa, corresponding to the faculties of sense, the 'estimative imagination' (wahm), and the pure intellect, along with the respective forms of discourse (rhetoric, dialectic, and demonstration) appropriate to each category of that order. Al-Farabi in his Book of Letters describes a relation between dialectic and demonstration on the one hand, and theology and philosophy on the other. The latter two are distinguished by method rather than subject matter: theology is that which proceeds dialectically, while philosophy is (or aims to be?) demonstrative. Yet, Al-Farabi tells us, dialectic precedes philosophy like 'nourishment for the tree precedes its fruit.' That is, dialectic is part of the process by which we interrogate common and imaginative notions in the pursuit of clearly understood first principles that we can then deploy in demonstrative argument. Philosophy is, therefore, something we aspire to through, and from a discursive condition of, dialectic. This stands in apparent contrast to the understanding of Ibn Sina, for whom one arrives at the knowledge of first principles through contact with the Active Intellect. It also stands in contrast to that of Ibn Rushd, who seems to think our knowledge of first principles can only come through reading Aristotle. In conclusion, based on Al-Farabi's framework, Ghazali's Tahafut is truly an exercise in philosophy, and an effort to keep the door open for true philosophy in the Muslim mind, against the threat of a kind of developing theology going by the name of falsafa.
Keywords: philosophy, incoherence, theology, Tahafut
Procedia PDF Downloads 161
62 Risk and Emotion: Measuring the Effect of Emotion and Other Visceral Factors on Decision Making under Risk
Authors: Michael Mihalicz, Aziz Guergachi
Abstract:
Background: The science of modelling choice preferences has evolved over centuries into an interdisciplinary field contributing to several branches of microeconomics and mathematical psychology. Early theories in decision science rested on the logic of rationality, but as the field and related disciplines matured, descriptive theories emerged capable of explaining systematic violations of rationality through the cognitive mechanisms underlying the thought processes that guide human behaviour. Cognitive limitations are not, however, solely responsible for systematic deviations from rationality, and many researchers are now exploring the effect of visceral factors as the more dominant drivers. The current study builds on the existing literature by exploring sleep deprivation, thermal comfort, stress, hunger, fear, anger and sadness as moderators of three distinct elements that define individual risk preference under Cumulative Prospect Theory. Methodology: This study is designed to compare the risk preferences of participants experiencing an elevated affective or visceral state to those in a neutral state, using nonparametric elicitation methods across three domains. Two experiments will be conducted simultaneously using different methodologies. The first will sample visceral states and risk preferences randomly over a two-week period by prompting participants to complete an online survey remotely. In each round of questions, participants will be asked to self-assess their current state using Visual Analogue Scales before answering a series of lottery-style elicitation questions. The second experiment will be conducted in a laboratory setting using psychological primes to induce a desired state. In this experiment, emotional states will be recorded using emotion analytics and used as a basis for comparison between the two methods. Significance: The expected results include a series of measurable and systematic effects on the subjective interpretation of gamble attributes, and evidence supporting the proposition that a portion of the variability in human choice preferences unaccounted for by cognitive limitations can be explained by interacting visceral states. Significant results will promote awareness of the subconscious effect that emotions and other drive states have on the way people process and interpret information, and can guide more effective decision making by informing decision-makers of the sources and consequences of irrational behaviour.
Keywords: decision making, emotions, prospect theory, visceral factors
Procedia PDF Downloads 149
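For readers unfamiliar with Cumulative Prospect Theory, the three elements that define a risk profile are the curvature of the value function, loss aversion, and the shape of the probability weighting function. The Python sketch below uses the standard Tversky-Kahneman (1992) functional forms with their published median parameter estimates; the gamble itself and the use of a single weighting function for both gains and losses are simplifications for illustration.

```python
# Minimal sketch of the standard CPT building blocks the abstract's
# "three distinct elements" refer to: value curvature, loss aversion
# (lambda) and probability weighting. Parameters are the published
# median estimates, used here for illustration only.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and
    steeper (loss aversion) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p,
    underweights moderate-to-large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Subjective value of a simple gamble: 50% chance to win 100, else lose 50.
cpt_value = weight(0.5) * value(100) + weight(0.5) * value(-50)
print(round(cpt_value, 2))  # negative despite a positive expected value
```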
61 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil
Authors: Yasser R. Tawfic, Mohamed A. Eid
Abstract:
Differential settlement of foundations and the resulting tilt of the supported structure is an occasionally encountered engineering problem. It may be caused by overloading, changes in ground soil properties or unsupported nearby excavations. Engineering thinking points directly toward the logical solution of uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet grouted soil-crete columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economic, fast and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to induce a balancing settlement on the less-settled side, carried out carefully at a controlled rate. This principle was applied in the stabilization of the Leaning Tower of Pisa, with soil extraction from the ground surface. In this research, the authors introduce a new solution from a different point of view: micro-tunneling is presented as an intentional cause of ground deformation. In general, micro-tunneling is expected to induce only limited ground deformations; the researchers therefore propose applying the technique to form small unsupported holes in the ground to produce the target deformations. This shall be done in four phases:
• Application of one or more micro-tunnels, depending on the existing differential settlement value, under the raised side of the tilted structure.
• For each individual tunnel, the lining shall be pulled out from both sides (from the jacking and receiving shafts) at a slow rate.
• If required, according to calculations and site records, an additional surface load can be applied on the raised foundation side.
• Finally, strengthening soil grouting shall be applied for stabilization after adjustment.
A finite element-based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and the induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. Comparison of the results also shows the importance of position selection and the gradual effect of tunnel groups. Thus, a new engineering solution is presented to one of the challenges of structural and geotechnical engineering.
Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures
Procedia PDF Downloads 208
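The study's own settlement estimates come from the finite element model; as a rough order-of-magnitude check, tunneling-induced surface settlement is classically described by a Gaussian trough (Peck; O'Reilly and New). The Python sketch below evaluates that trough for an invented micro-tunnel geometry; every parameter value is an assumption for illustration, not the paper's.

```python
# Back-of-envelope sketch (not the authors' FE model) of the classical
# Gaussian settlement trough for a small tunnel; values are illustrative.
import numpy as np

D  = 1.0    # tunnel diameter (m), hypothetical micro-tunnel
z0 = 6.0    # axis depth (m)
K  = 0.5    # trough width parameter typical of cohesive soil
VL = 0.02   # volume loss ratio (2%), deliberately high here since the
            # unsupported hole is *meant* to deform the ground

i_w   = K * z0                           # trough width (m)
Vs    = VL * np.pi * D**2 / 4            # settlement volume per metre (m^3/m)
S_max = Vs / (i_w * np.sqrt(2 * np.pi))  # settlement above the axis (m)

x = np.linspace(-15, 15, 7)              # transverse offset from axis (m)
S = S_max * np.exp(-x**2 / (2 * i_w**2))
print(f"S_max = {S_max * 1000:.1f} mm")
print(np.round(S * 1000, 2))             # settlement profile in mm
```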
60 Gender Policies and Political Culture: An Examination of the Canadian Context
Authors: Chantal Maille
Abstract:
This paper is about gender-based analysis plus (GBA+), an intersectional gender policy used in Canada to assess the impact of policies and programs on men and women of different origins. It looks at Canada's political culture to explain the nature of its gender policies. GBA+ is defined as an analysis method that makes it possible to assess the eventual effects of policies, programs, services, and other initiatives on women and men of different backgrounds because it takes account of gender and other identity factors. The 'plus' in the name serves to emphasize that GBA+ goes beyond gender to include an examination of a wide range of other related identity factors, such as age, education, language, geography, culture, and income. The point of departure for GBA+ is that women and men are not homogeneous populations and gender is never the only factor in defining a person's identity; rather, it interacts with factors such as ethnic origin, age, disabilities, where the person lives, and other aspects of individual and social identity. GBA+ takes account of these factors and thus challenges notions of similarity or homogeneity within populations of women and men. Comparative analysis based on sex and gender may serve as a gateway to studying a given question, but women, men, girls, and boys do not form homogeneous populations. In the 1990s, intersectionality emerged as a new feminist framework. The popularity of the notion of intersectionality corresponds to a time when, in hindsight, the damage done to minoritized groups by state disengagement policies, in concert with the global intensification of neoliberalism and vice versa, could be measured. Although GBA+ constitutes a form of intersectionalization of GBA, it must be understood that the two frameworks do not spring from a similar logic. Intersectionality first emerged as a dynamic analysis of differences between women, oriented toward change and social justice, whereas GBA is a technique developed by state feminists in the context of analyzing governmental policies, aiming to promote equality between men and women. It can nevertheless be assumed that there might be interest in such a policy and program analysis grid that is decentred from gender and offers enough flexibility to take account of a set of inequalities. In terms of methodology, the research is supported by a qualitative analysis of governmental documents about GBA+ in Canada. The research findings identify links between Canadian gender policies and the country's political culture. In Canada, diversity has been taken into account as an element at the basis of gendered analysis of public policies since 1995. The GBA+ adopted by the government of Canada conveys an openness to intersectionality and a sensitivity to multiculturalism. The Canadian Multiculturalism Act, adopted in 1988, recognizes that multiculturalism is a fundamental characteristic of Canadian identity and heritage and constitutes an invaluable resource for the future of the country. In conclusion, Canada's distinct political culture can be associated with the specific nature of its gender policies.
Keywords: Canada, gender-based analysis, gender policies, political culture
Procedia PDF Downloads 222
59 The Return of the Witches: A Class That Motivates the Analysis of Gender Bias in Engineering
Authors: Veronica Botero, Karen Ortiz
Abstract:
The Faculty of Mines of the National University of Colombia, Medellín Campus, has 136 years of history and represents one of the most important centers of study in the country in the field of engineering and scientific research, as well as a reference at the global, national, and Latin American level in this matter. Despite being a faculty with so many years of history, one that has trained a large number of graduates under the traditional mechanistic and androcentric paradigm, which reproduces the logic of the traditional scientific method and the sharp differentiation between subject and object of research, among other binarisms, it has also been the place where professors and students have become aware of the need to transform this paradigm in engineering, to focus on the sustainability of diversity and the well-being of the natural and social systems that inhabit the territories, and to open possibilities for the implementation of classes that address feminist pedagogical theories and practices. The class 'The Return of the Witches' is an initiative that constitutes an important training exercise, providing students with the study of feminisms, the importance of closing gender gaps, and critical readings of the traditional paradigm of engineering. The objective of this article is to present a systematization of the experience of designing, implementing and developing this elective class, describing the tensions that arose when a subject of this style was created and proposed in the Department of Geosciences and Environment of the Faculty of Mines in 2022; the reactions of the groups of students who have taken it and their perceptions and opinions about ecofeminism as a proposal for critical analysis and practice in relation to the environment; and, above all, how their readings of the world have changed after having studied this subject for a semester. The pedagogical journey and the feminist methodologies that have been designed and adjusted over two years of work are explained based on the sharing of situated knowledge by the students and the two teachers who teach the course, who pose challenges to the dominant ideology in engineering, since one of them is trained in human sciences and feminist studies and the other, although trained in civil engineering and geosciences, is a woman of diverse sexual orientation and the first professor to have assumed the position of dean in the 135 years of history of the Faculty. The transformations in the life experience of the students are revealing, since they affirm that the training process is forceful and powerful in outlining a much more qualified and critical professional profile that contributes to the transformation of gender gaps in the country. This class is therefore a challenge in a Faculty of Engineering that still presents a dominant gender ideology that has not been questioned or challenged before.
Keywords: feminisms, gender equality, gender bias, engineering for life Manifiesto
Procedia PDF Downloads 70
58 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine
Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland
Abstract:
The force-velocity profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty of obtaining force data at high velocities, may bring into question the accuracy of the F-V slope along with predictions pertaining to the maximum force that the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as of V₀, has been shown to be relatively poor in comparison to F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of a novel instrumented leg press machine which enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, was evaluated. Sixteen strength-trained males (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg. Visits two to four saw the participants carry out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to the dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring force- and velocity-related variables across a range of loads, including velocities closer to V₀, when compared to some of the findings within the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for the F-V slope (SFV) and F₀ respectively, with reliability for V₀ being good using a linear model but poor using a 2nd-order polynomial model. As such, a polynomial regression model may be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, due to only a 5% difference between F₀ and the obtained IsoMax values, with a linear model being best suited to predict V₀.
Keywords: force-velocity, leg-press, power-velocity, profiling, reliability
Procedia PDF Downloads 58
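To illustrate the linear F-V fit the study evaluates, the Python sketch below regresses force on velocity across loads and derives F₀, V₀, the slope and maximal power; the velocity-force pairs are invented stand-ins, not the study's measurements.

```python
# Minimal sketch of the linear force-velocity profiling step:
# F(v) = F0 + S_fv * v, with V0 = -F0 / S_fv and Pmax = F0 * V0 / 4.
import numpy as np

# Hypothetical (velocity m/s, force N) pairs from ~10% to 90% 1RM trials.
v = np.array([2.10, 1.55, 1.10, 0.65, 0.30])
F = np.array([ 620, 1050, 1460, 1890, 2210])

S_fv, F0 = np.polyfit(v, F, 1)   # slope is negative for a valid profile
V0   = -F0 / S_fv                # velocity intercept (zero load)
Pmax = F0 * V0 / 4.0             # apex of the parabolic power-velocity curve

print(f"F0 = {F0:.0f} N, V0 = {V0:.2f} m/s, "
      f"S_FV = {S_fv:.0f} N·s/m, Pmax = {Pmax:.0f} W")
```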
57 The Audiovisual Media as a Metacritical Ludicity Gesture in the Musical-Performatic and Scenic Works of Caetano Veloso and David Bowie
Authors: Paulo Da Silva Quadros
Abstract:
This work aims to establish comparative parameters between the artistic production of two exponents of the contemporary popular culture scene: Caetano Veloso (Brazil) and David Bowie (England). Both Caetano Veloso and David Bowie were pioneers in establishing an aesthetic game between various artistic expressions at the service of the music-visual scene, that is, conceptual interconnections between several forms of aesthetic process, such as fine arts, theatre, cinema, poetry, and literature. There are also correlations in their expressive attitudes toward art, especially regarding the dialogue between the fields of art and politics (concern for human rights, human dignity, racial issues, tolerance, gender issues and sexuality, among others); the constant tension and cunning game between the market, free expression and critical sense; and the sophisticated, playful mechanisms of metalanguage and aesthetic metacritique. In fact, the two almost came to cooperate in the 1970s, when Caetano was in exile in England and both had the same music producer, who noticed similar aesthetic qualities in their work and tried to bring them closer, an affinity later glimpsed by some music critics. Among the most influential issues in Caetano's and Bowie's game of artistic-aesthetic expression are, for example, the ideas advocated by the sensation of strangeness (Albert Camus), art as transcendence (Friedrich Nietzsche), and the deconstruction and reconstruction of the auratic configuration of artistic signs (Walter Benjamin and Andy Warhol). To deepen the theoretical issues, the following authors are used as supporting interpretative references: Hans-Georg Gadamer, Immanuel Kant, Friedrich Schiller, and Johan Huizinga. In addition to the aesthetic meanings of the Ars Ludens characteristics of the two artists, the following supporting references are added: the question of technique (Martin Heidegger), the logic of sense (Gilles Deleuze), art as an event and the sense of the gesture of art (Maria Teresa Cruz), the society of the spectacle (Guy Debord), Verarbeitung and Durcharbeitung (Sigmund Freud), and the poetics of interpretation and the sign of relation (Cremilda Medina). The purpose of these interpretative references is to seek to understand, from a cultural reading perspective (cultural semiology), some significant elements in the dynamics of the aesthetic and media interconnections of both artists, which made them some of the most influential interlocutors in contemporary thought on music aesthetics, as a playful, vivid experience of life and art.
Keywords: Caetano Veloso, David Bowie, music aesthetics, symbolic playfulness, cultural reading
Procedia PDF Downloads 168
56 The Socio-Emotional Vulnerability of Professional Rugby Union Athletes
Authors: Hannah Kuhar
Abstract:
This paper delves into the attitudes of professional and semi-professional rugby union athletes regarding socio-emotional vulnerability, or the willingness to express the full spectrum of human emotion in a social context. Like all humans, athletes of all sports regularly experience feelings of shame, powerlessness, and loneliness, and often feel unable to express such feelings due to factors including lack of situational support, absence of adequate expressive language, and lack of resources. To this author's knowledge, however, no previous research has considered the particular demographic of professional rugby union athletes, despite the sport's immense popularity and economic contribution to global communities. Hence, this paper aims to extend previous research by exploring the experiences of professional rugby union athletes and their unwillingness and inability to express socio-emotional vulnerability. By building a better understanding of vulnerability in rugby and sports, this paper contributes to the growing field of mental health and wellbeing research, particularly to the emerging themes of resilience and belonging. Based on qualitative fieldwork conducted over a period of seven months across France and Australia, via semi-structured interviews and observation, this work uses the field theory framework of Pierre Bourdieu to construct a multidisciplinary analysis. Approaching issues of gender, sexuality, physicality, education, and family, this paper shows that socio-emotional vulnerability is experienced by all players, regardless of background, in a variety of ways. Common themes and responses are drawn out to show the universality of rugby's pitfalls, which previous work has limited to specific demographics in isolation from their broader contexts. With the author themselves a semi-professional athlete, this unique 'insider' access facilitates a deeper and more comprehensive understanding of first-hand athlete experiences, often unexplored within the academic arena. The primary contention of this paper is that by celebrating socio-emotional vulnerability, there is an opportunity to improve on-field team outcomes. Ultimately, players play better when they feel supported by their teammates, and this logic extends to the outcome of the team when socio-emotional team initiatives are widely embraced. The creation of such a culture requires deliberate and purposeful effort, where player ownership and buy-in are high. Further study in this field may assist teams in better understanding the elements which contribute to strong team culture and to strong results on the pitch.
Keywords: rugby, vulnerability, athletes, France, Bourdieu
Procedia PDF Downloads 138
55 Photoswitchable and Polar-Dependent Fluorescence of Diarylethenes
Authors: Sofia Lazareva, Artem Smolentsev
Abstract:
Fluorescent photochromic materials attract strong interest due to their possible applications in organic photonics, such as optical logic systems, optical memory and visualizing sensors, as well as in the characterization of polymers and biological systems. In photochromic fluorescence switching systems, the emission of the fluorophore is modulated between 'on' and 'off' via the photoisomerization of the photochromic moieties, resulting in effective resonance energy transfer (FRET). In the current work, we have studied both the photochromic and the fluorescent properties of several diarylethenes. It was found that the coloured forms of these compounds are not fluorescent because of efficient intramolecular energy transfer. Spectral and photochromic parameters of the investigated substances were measured in five solvents of different polarity. Quantum yields of the photochromic transformation A↔B, ΦA→B and ΦB→A, as well as the extinction coefficients of the B isomer, were determined by a kinetic method. The photocyclization quantum yield of all compounds was found to decrease with increasing solvent polarity. In addition, the solvent polarity was found to affect the fluorescence significantly: increasing the solvent dielectric constant resulted in a strong shift of the emission band position from 450 nm (n-hexane) to 550 nm (DMSO and ethanol) for all three compounds. Moreover, the emission, intense in polar solvents, becomes weak and hardly detectable in n-hexane. The only exception to this dependence is the abnormally low fluorescence quantum yield in ethanol, presumably caused by the loss of the electron-donating properties of the nitrogen atom due to protonation. The effect of protonation was also confirmed by the addition of concentrated HCl to the solution, resulting in the complete disappearance of the fluorescence band. Excited-state dynamics were investigated by ultrafast optical spectroscopy methods. Kinetic curves of excited-state absorption and fluorescence decay were measured, and the lifetimes of the transient states were calculated from the measured data. The mechanism of the ring-opening reaction was found to be polarity dependent. Comparative analysis of kinetics measured in acetonitrile and hexane reveals differences in the relaxation dynamics after the laser pulse. The most important fact is the presence of two decay processes in acetonitrile, whereas only one is present in hexane. This fact supports the assumption, made on the basis of preliminary steady-state experiments, that stabilization of a TICT state occurs in polar solvents. Thus, the results achieved support the hypothesis of a two-channel mechanism of energy relaxation in the studied compounds.
Keywords: diarylethenes, fluorescence switching, FRET, photochromism, TICT state
Procedia PDF Downloads 679
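The mono- versus bi-exponential distinction drawn above is usually established by fitting competing decay models to the transient kinetics and comparing residuals. Below is a minimal Python sketch of that comparison on invented data; the time base, amplitudes and lifetimes are assumptions, not the measured values.

```python
# Sketch of the model comparison behind "two decay processes in
# acetonitrile vs one in hexane": fit mono- and bi-exponential models
# to a transient decay and compare the sum of squared residuals.
import numpy as np
from scipy.optimize import curve_fit

def mono(t, a, tau):
    return a * np.exp(-t / tau)

def bi(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 50, 200)                      # delay axis, ps (hypothetical)
rng = np.random.default_rng(1)
signal = bi(t, 0.7, 2.0, 0.3, 15.0) + rng.normal(0, 0.01, t.size)

p_mono, _ = curve_fit(mono, t, signal, p0=[1.0, 5.0])
p_bi, _   = curve_fit(bi, t, signal, p0=[0.5, 1.0, 0.5, 10.0])

sse = lambda model, p: np.sum((signal - model(t, *p)) ** 2)
print("mono SSE:", sse(mono, p_mono), " bi SSE:", sse(bi, p_bi))
```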
54 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock
Authors: Vahid Bairami Rad
Abstract:
Making agricultural fields intelligent allows the temperature, humidity, and other variables affecting the growth of agricultural products to be controlled online, from a mobile phone or computer. Making fields and gardens smart is one of the best ways to optimize agricultural equipment and has a direct effect on the growth of plants and agricultural products and on farms. Smart farms, built on the Internet of Things and artificial intelligence, are the topic discussed here. Agriculture is becoming smarter every day. From large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data, and modern farmers have more tools to collect intelligent data than in previous years. Data on soil chemistry allows people to make informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers have allowed irrigation processes to be optimized while reducing the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on: almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things (IoT) is at the center of this great transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors for people's needs. These sensors are embedded in battery-powered IoT devices that can operate for years, with access to low-power, cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now securely manage existing devices at scale. IoT cloud services likewise provide a set of application enablement services that developers can use easily, allowing them to focus on building their application's business logic. These developments have created powerful new applications in the field of the Internet of Things, and these applications can be used in various industries, such as agriculture and the building of smart farms. But the question is, what makes today's farms truly smart farms? Let us put this question another way: when will the technologies associated with smart farms reach the point where the range of intelligence they provide can exceed the intelligence of experienced and professional farmers?
Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, Arduino Uno
Procedia PDF Downloads 56
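As a concrete illustration of the sensor-to-actuator loop described above, here is a minimal control sketch in Python. The threshold value, function names and readings are invented; on real hardware the stubbed functions would wrap an ADC read and a relay output.

```python
# Minimal sketch of sensor-driven irrigation logic: read a soil-moisture
# value, compare against a crop threshold, and switch a pump.
import time

MOISTURE_THRESHOLD = 35.0   # percent volumetric water content (illustrative)

def read_soil_moisture() -> float:
    """Stub for an ADC read from a capacitive soil-moisture probe."""
    return 28.4  # hypothetical reading

def set_pump(on: bool) -> None:
    """Stub for driving a relay that switches the irrigation pump."""
    print("pump", "ON" if on else "OFF")

def control_step() -> None:
    moisture = read_soil_moisture()
    # Hysteresis could be added here to avoid rapid pump cycling.
    set_pump(moisture < MOISTURE_THRESHOLD)

if __name__ == "__main__":
    for _ in range(3):          # in deployment: loop indefinitely
        control_step()
        time.sleep(1)           # sample interval; minutes in practice
```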
53 Possibilities and Limits for the Development of Care in Primary Health Care in Brazil
Authors: Ivonete Teresinha Schulter Buss Heidemann, Michelle Kuntz Durand, Aline Megumi Arakawa-Belaunde, Sandra Mara Corrêa, Leandro Martins Costa Do Araujo, Kamila Soares Maciel
Abstract:
Primary Health Care is defined as the level of a system of services that enables responses to health needs. This level of care provides services and actions attending to the person across the life cycle and in their health conditions or diseases. Primary Health Care refers to a conception of the care model and of the organization of the health system that, in Brazil, seeks to reorganize the principles of the Unified Health System. This system is based on the principle of health as a citizen's right and a duty of the State. In line with the precepts of the Unified Health System, primary health care has family health as the priority strategy for its organization, structured in the logic of new sectoral practices associating clinical work and health promotion. Thus, this study seeks to understand the possibilities and limits of the care developed by professionals working in Primary Health Care. It was conducted through a qualitative, participant-action approach based on Paulo Freire's Research Itinerary, which corresponds to three moments: Thematic Investigation; Encoding and Decoding; and Critical Unveiling. The themes were investigated in a health unit through a culture circle developed with 20 professionals from a municipality in southern Brazil in the first half of 2021. As possibilities, the participants revealed the involvement, bonding and strengthening of the interpersonal relationships of the professionals who work in the context of primary care. Promoting welcoming in primary care has favoured care and teamwork, as well as improved access. They also highlighted that care planning, the use of technologies in the communication process and the orientation of the population enhance problem-solving capacity and the organization of services. As limits, the lack of professional recognition and the scarcity of material and human resources were revealed, conditions that generate tensions in health care. The reduction in the number of professionals and low salaries are pointed out as elements that undermine the health team's motivation for the work. The participants revealed that, due to COVID-19, the flow of care prioritized the pandemic situation, which affected health care in primary care, and prevention and health promotion actions were cancelled. The study demonstrated that empowerment and professional involvement are fundamental to promoting comprehensive and problem-solving care. However, limits are observed when the teams exercise their activities, related to the lack of human and material resources, and the expansion of public health policies is urgent.
Keywords: health promotion, primary health care, health professionals, welcoming
Procedia PDF Downloads 98
52 An Exploration of Policy-related Documents on District Heating and Cooling in Flanders: A Slow and Bottom-up Process
Authors: Isaura Bonneux
Abstract:
District heating and cooling (DHC) is increasingly recognized as a viable path towards sustainable heating and cooling. While some countries, like Sweden and Denmark, have a longstanding tradition of DHC, Belgium is lagging behind. The northern part of Belgium, Flanders, had a total of only 95 heating networks in July 2023. Nevertheless, it is increasingly exploring its possibilities to enhance the scope of DHC. DHC is a complex energy system, requiring extensive collaboration between various stakeholders at various levels. It is therefore of interest to look closer at policy-related documents at the Flemish (regional) level, as these policies set the scene for DHC development in the Flemish region. This kind of analysis has not been undertaken so far. This paper asks the following research question: "Who talks about DHC, and in which way and context is DHC discussed in Flemish policy-related documents?" To answer this question, the Overton policy database was used to search for and retrieve relevant policy-related documents. Overton retrieves data from governments, think tanks, NGOs, and IGOs. In total, 117 of the 244 original results, dated between 2009 and 2023, were analyzed. Every selected document included theme keywords, policymaking department(s), date, and document type. These elements were used for quantitative data description and visualization, and qualitative content analysis revealed patterns and main themes regarding DHC in Flanders. Four main conclusions can be drawn. First, it is obvious from the timeframe that DHC is a new topic in Flanders, with still limited attention; 2014, 2016 and 2017 were the years with the most documents, yet even this amounts to only 12 documents. In addition, many documents mentioned DHC without much depth and painted it as a future scenario with a lot of uncertainty around it. Most of the issuing government departments had a link either to energy or climate (e.g. the Flemish Environmental Agency) or to policy (e.g. the Socio-Economic Council of Flanders). Second, DHC is mentioned most within an 'Environment and Sustainability' context, followed by 'General Policy and Regulation'. This is intuitive, as DHC is perceived as a sustainable heating and cooling technique and this analysis comprises policy-related documents. Third, Flanders seems mostly interested in using waste or residual heat as a heating source for DHC; the harbors and waste incineration plants are identified as potential and promising supply sources. This approach tries to reconcile environmental and economic incentives. Last, local councils are assigned a central role, and the initiative is mostly taken by them. The policy documents and policy advice demonstrate that Flanders opts for a bottom-up organization. As DHC is very dependent on local conditions, this seems a logical step. Nevertheless, it can impede smaller councils from creating DHC networks and slow down the systematic and fast implementation of DHC throughout Flanders.
Keywords: district heating and cooling, Flanders, Overton database, policy analysis
Procedia PDF Downloads 44
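A minimal sketch of the descriptive tallying step (documents per year and per theme) using pandas; the records below are invented stand-ins for the Overton metadata fields the study lists (keywords, department, date, document type).

```python
# Sketch of the quantitative description: count policy documents
# per publication year and per theme label.
import pandas as pd

docs = pd.DataFrame({
    "year":  [2014, 2016, 2016, 2017, 2021, 2023],
    "theme": ["Environment and Sustainability",
              "General Policy and Regulation",
              "Environment and Sustainability",
              "Environment and Sustainability",
              "General Policy and Regulation",
              "Environment and Sustainability"],
})

per_year  = docs.groupby("year").size()
per_theme = docs["theme"].value_counts()
print(per_year, per_theme, sep="\n")
```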
51 Flexible Design Solutions for Complex Free-Form Geometries Aimed to Optimize Performance and Resource Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type one with connections forming an adaptive 3D surface, using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms and swarm intelligence, each of these procedures having limitations which make them applicable only in certain cases. The paper presents the design process stages and the shape-grammar-type algorithm. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic onto different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.
Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
Procedia PDF Downloads 377
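To make the generate-then-select principle concrete, here is a toy Python sketch using an L-system-style rewriting rule as the generative procedure; the rules, symbol meanings and scoring function are invented for illustration and are not the paper's algorithm.

```python
# Minimal sketch of the generate-then-select loop: a rewriting rule
# produces form variants, a toy criterion picks among them.

def generate(axiom: str, rules: dict[str, str], iterations: int) -> str:
    """Repeatedly rewrite the axiom; each symbol could encode a
    geometric operation (extrude, branch, rotate...)."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def score(candidate: str) -> float:
    """Toy selection criterion: favour branching ('[') but penalise
    overall size, standing in for a technical/aesthetic evaluation."""
    return candidate.count("[") - 0.01 * len(candidate)

rule_sets = [
    {"F": "F[+F]F"},          # sparse branching
    {"F": "F[+F][-F]F"},      # symmetric branching
    {"F": "FF[+F]"},          # elongating
]

candidates = [generate("F", r, iterations=4) for r in rule_sets]
best = max(candidates, key=score)
print("chosen variant length:", len(best), "score:", round(score(best), 2))
```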
50 The Structuring of the Economics of Brazilian Innovation and the Institutional Proposal for Legal Management in Global Conformity to Treat Technological Risks
Authors: Daniela Pellin, Wilson Engelmann
Abstract:
Brazil has sought to accelerate its development through technology and innovation in response to global influences, which it has absorbed into its internal management practices. To this end, it enacted the Brazilian Law of Innovation 13.243/2016. However, the Law overestimates economic aspects, and its application will not take the stakeholders and the technological risks into account, because these receive no legal treatment. Economic exploitation and technological risks must be controlled within the limits of the democratic system, in order to achieve better social development and to help economic agents make decisions that conform with global directions. The research understands this to be a problem that must be faced, given the social particularities of the country, because the North American Triple Helix theory, consolidated in developed countries, has been imported literally, with negative consequences when applied in developing countries. Because of this symptomatic scenario, it is necessary to create adjustments to conduct the management of the Law alongside social democratic interests, in order to increase the country's development. To this end, the Government will have to adopt certain measures, side by side with universities, civil society and companies: promoting informational transparency, seeking partnerships, creating a Comfort Letter document in preparation to ensure operations, jointly elaborating a Manual of Good Practices, and providing accountability and data dissemination. The universities must likewise promote informational transparency, draw up partnership contracts, generate revenue and develop information. In addition, civil society must analyze the proposals received and discuss them in order to give its opinion. Finally, companies have to provide public and transparent information about investments and economic benefits, risks and the innovations produced. As a general objective, the research intends to demonstrate that the efficient deployment of the helix will be possible if the innovative decision-making process passes through institutional logic. As specific objectives, the American influence must undergo some modifications to better suit the economic-legal incentives and so potentiate the development of the social system. The hypothesis points to an institutional model for application to the legal system that can be elaborated based on the emerging characteristics of the country, in such a way that technological risks can be foreseen and global conformity achieved, with attention to the full development of society as proposed by the researchers. The method of approach is systemic-constructivist, with bibliographical review, data collection and analysis, and the construction of an institutional and democratic model for the management of the Law.
Keywords: development, governance of law, institutionalization, triple helix
Procedia PDF Downloads 140
49 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web
Authors: Aayushi Somani, Siba P. Samal
Abstract:
Three-dimensional (3D) meshes are data structures that store the geometric information of an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies capture high-resolution samples, which leads to high-resolution meshes. While high-resolution meshes give better-quality rendering and hence are often used, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies, such as WebGL and WebVR, has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to native client and browser applications due to high network latency. There is also an inherent delay in loading WebGL pages containing large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices with their limited resources. One of the solutions conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our approach is a two-step one: random accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes; we then invoke data parallelism on these sub-meshes for their compression. The subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can completely revolutionize the way e-commerce and virtual reality technology work on consumer electronic devices. Objects can be compressed on the server and transmitted over the network, and progressive decompression and rendering can be performed on the client device. The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a smoother user experience. The approach can also be used in WebVR for widely used activities like virtual reality shopping, watching movies and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (compressed size is ~10-15% of the original mesh), processing time (20-22% improvement over the serial implementation) and quality of user experience in the web browser.
Keywords: 3D compression, 3D mesh, 3D web, Chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR
Procedia PDF Downloads 170
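A minimal sketch of the partition-then-compress step in Python, with zlib on packed vertex buffers standing in for the actual progressive mesh codec; the split strategy, sizes and part count are illustrative assumptions.

```python
# Sketch of the first step described above: partition a mesh into
# sub-meshes and compress them in parallel (placeholder codec).
import multiprocessing as mp
import zlib
import numpy as np

def compress_submesh(vertices: np.ndarray) -> bytes:
    """Compress one sub-mesh's vertex buffer (zlib stands in for the
    real progressive codec)."""
    return zlib.compress(vertices.astype(np.float32).tobytes(), level=6)

def partition(vertices: np.ndarray, n_parts: int) -> list[np.ndarray]:
    """Naive spatial split along the x-axis into n_parts sub-meshes."""
    order = np.argsort(vertices[:, 0])
    return [vertices[idx] for idx in np.array_split(order, n_parts)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mesh = rng.random((100_000, 3))            # hypothetical vertex cloud
    parts = partition(mesh, n_parts=8)
    with mp.Pool() as pool:                    # data parallelism over parts
        blobs = pool.map(compress_submesh, parts)
    ratio = sum(len(b) for b in blobs) / mesh.nbytes
    print(f"compressed to {ratio:.1%} of original")
```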
48 Analyzing Consumer Preferences and Brand Differentiation in the Notebook Market via Social Media Insights and Expert Evaluations
Authors: Mohammadreza Bakhtiari, Mehrdad Maghsoudi, Hamidreza Bakhtiari
Abstract:
This study investigates consumer behavior in the notebook computer market by integrating social media sentiment analysis with expert evaluations. The rapid evolution of the notebook industry has intensified competition among manufacturers, necessitating a deeper understanding of consumer priorities. Social media platforms, particularly Twitter, have become valuable sources for capturing real-time user feedback. In this research, sentiment analysis was performed on Twitter data gathered over the last two years, focusing on seven major notebook brands. The PyABSA framework was utilized to extract sentiments associated with various notebook components, including performance, design, battery life, and price. Expert evaluations, conducted using fuzzy logic, were incorporated to assess the impact of these sentiments on purchase behavior. To provide actionable insights, the TOPSIS method was employed to prioritize notebook features based on a combination of consumer sentiments and expert opinions. The findings consistently highlight price, display quality, and core performance components, such as RAM and CPU, as top priorities across brands. However, lower-priority features, such as webcams and cooling fans, present opportunities for manufacturers to innovate and differentiate their products. The analysis also reveals subtle but significant brand-specific variations, offering targeted insights for marketing and product development strategies. For example, Lenovo's strong performance in display quality points to a competitive edge, while Microsoft's lower ranking in battery life indicates a potential area for R&D investment. This hybrid methodology demonstrates the value of combining big data analytics with expert evaluations, offering a comprehensive framework for understanding consumer behavior in the notebook market. The study emphasizes the importance of aligning product development and marketing strategies with evolving consumer preferences, ensuring competitiveness in a dynamic market. It also underscores the potential for innovation in seemingly less important features, providing companies with opportunities to create unique selling points. By bridging the gap between consumer expectations and product offerings, this research equips manufacturers with the tools needed to remain agile in responding to market trends and enhancing customer satisfaction.
Keywords: consumer behavior, customer preferences, laptop industry, notebook computers, social media analytics, TOPSIS
Procedia PDF Downloads 24
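For readers unfamiliar with TOPSIS, it ranks alternatives by their relative closeness to an ideal solution. The numpy sketch below shows the standard steps on an invented decision matrix; the features, scores and weights are assumptions, not the study's data.

```python
# Minimal sketch of the TOPSIS ranking step: rows are notebook features,
# columns are criteria (e.g. sentiment score, expert rating), all
# treated here as benefit criteria.
import numpy as np

X = np.array([           # sentiment, expert rating (invented)
    [0.82, 0.90],        # display quality
    [0.75, 0.95],        # CPU/RAM performance
    [0.60, 0.70],        # battery life
    [0.30, 0.40],        # webcam
])
w = np.array([0.6, 0.4])             # criterion weights (illustrative)

R = X / np.linalg.norm(X, axis=0)    # vector-normalise each criterion
V = R * w                            # weighted normalised matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal
d_neg = np.linalg.norm(V - anti,  axis=1)   # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)
print(np.argsort(-closeness))        # feature priority order, best first
```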
47 Diagnosis of Intermittent High Vibration Peaks in Industrial Gas Turbine Using Advanced Vibrations Analysis
Authors: Abubakar Rashid, Muhammad Saad, Faheem Ahmed
Abstract:
This paper provides a comprehensive study of the diagnosis of intermittent high vibrations on an industrial gas turbine using detailed vibration analysis, followed by its rectification. Engro Polymer & Chemicals Limited, a chlor-vinyl complex located in Pakistan, has a captive combined-cycle power plant with two 28 MW gas turbines (made by Hitachi) and one 15 MW steam turbine. In 2018, the organization faced an issue of high vibrations on one of the gas turbines. These high vibration peaks appeared intermittently on both the compressor's drive-end (DE) bearing and the turbine's non-drive-end (NDE) bearing. The amplitude of the high vibration peaks was 150-170% of baseline values on the DE bearing and 200-300% on the NDE bearing. In one of these episodes, the gas turbine tripped on its "High Vibrations Trip" logic, actuated at 155 µm. Limited instrumentation is available on the machine, which is monitored with a GE Bently Nevada 3300 system having two proximity probes installed at the turbine NDE, the compressor DE, and the generator DE and NDE bearings. The machine's transient ramp-up and steady-state data were collected using ADRE SXP and DSPI 408. Since only one keyphasor is installed on the turbine's high-speed shaft, a derived drive keyphasor was configured in ADRE to obtain the low-speed shaft rpm required for data analysis. Analysis of the Bode plots, shaft centerline plot, polar plot, and orbit plots showed evident rubbing on the turbine's NDE along with increased clearance of the turbine's NDE radial bearing. The bearing was then inspected, and heavy deposits of carbonized coke were found on the labyrinth seals of the bearing housing, with clear rubbing marks on the shaft and housing covering 20-25 degrees of the inner radius of the labyrinth seals. The collected coke sample was tested in the laboratory and found to be residue of the lube oil in the bearing housing. After detailed inspection and cleaning of the shaft journal area and bearing housing, a new radial bearing was installed. Before assembling the bearing housing, the bearing cooling and sealing air lines were also cleaned, as inadequate flow of cooling and sealing air can accelerate coke formation in the bearing housing. The machine was then brought back online, and data were collected again using ADRE SXP and DSPI 408 for health analysis. The vibrations were found to be in the acceptable zone per ISO standard 7919-3, while all other parameters were also within the vendor-defined range. As a lesson learned from this case, a revised operating and maintenance regime has also been proposed to enhance the machine's reliability.
Keywords: ADRE, bearing, gas turbine, GE Bently Nevada, Hitachi, vibration
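A toy sketch of the screening arithmetic in the case description (percent-of-baseline amplitude and the 155 µm trip check); the baseline and measured values below are hypothetical, not plant data.

```python
# Toy sketch of the screening arithmetic described above: express measured
# vibration as a percentage of baseline and test the 155 um trip threshold.
# Baseline and measured amplitudes (um) are hypothetical, not plant data.
TRIP_LIMIT_UM = 155.0

readings = {
    "compressor DE": {"baseline": 45.0, "measured": 72.0},
    "turbine NDE":   {"baseline": 55.0, "measured": 160.0},
}

for bearing, r in readings.items():
    pct = 100.0 * r["measured"] / r["baseline"]
    tripped = r["measured"] >= TRIP_LIMIT_UM
    print(f"{bearing}: {pct:.0f}% of baseline"
          + (" -> HIGH VIBRATION TRIP" if tripped else ""))
```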
Procedia PDF Downloads 146
46 Evaluating Urban City Indices: A Study for Investigating Functional Domains, Indicators and Integration Methods
Authors: Fatih Gundogan, Fatih Kafali, Abdullah Karadag, Alper Baloglu, Ersoy Pehlivan, Mustafa Eruyar, Osman Bayram, Orhan Karademiroglu, Wasim Shoman
Abstract:
Nowadays many cities around the world are investing their efforts and resources to facilitate their citizens' lives and make cities more livable and sustainable by implementing the newly emerged phenomenon of the smart city. For this purpose, research institutions prepare and publish smart city indices or benchmarking reports aiming to measure a city's current 'smartness' status. Several functional domains and various indicators, along with different selection and calculation methods, are found within such indices and reports. The selection criteria vary for each institution, resulting in inconsistency in ranking and evaluation. This research aims to evaluate the impact of selecting such functional domains, indicators, and calculation methods on the resulting rank. For that purpose, six functional domains, i.e., Environment, Mobility, Economy, People, Living, and Governance, were selected, covering 19 focus areas and 41 sub-focus (variable) areas. 60 out of 191 indicators were also selected according to several criteria; these were identified through an extensive literature review of 13 well-known global indices and research studies and the ISO 37120 standard for the sustainable development of communities. The values of the identified indicators were obtained from reliable sources for ten cities. The values of each indicator for the selected cities were normalized and standardized to objectively investigate the impact of the chosen indicators. Moreover, the effect of choosing an integration method to represent the values of the indicators for each city was investigated by comparing the results of two of the most widely used methods, i.e., geometric aggregation and fuzzy logic. The essence of these methods is assigning each indicator a weight reflecting its relative significance; however, the two methods resulted in different weights for the same indicator. As a result of this study, the variation in city ranking resulting from each method was investigated and discussed separately. Generally, each method produced a different ranking of the selected cities; however, it was observed that within certain functional areas the rank remained unchanged under both integration methods. Based on the results of the study, it is recommended to utilize a common platform and method to objectively evaluate cities around the world. The common method should provide policymakers with proper tools to evaluate their decisions and investments relative to other cities. Moreover, at least 481 different indicators were found for smart city indices, which is an immense number of indicators to consider, especially for a smart city index. Further work should be devoted to finding mutual indicators that represent the index purpose globally and objectively.
Keywords: functional domain, urban city index, indicator, smart city
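A minimal sketch of one of the two integration methods compared, weighted geometric aggregation after min-max normalisation; the cities, indicator values, and weights are hypothetical, and the fuzzy-logic alternative the paper also evaluates is not shown.

```python
# Minimal sketch of index construction as described above: min-max
# normalisation of raw indicator values followed by weighted geometric
# aggregation. Cities, values and weights are hypothetical; the paper's
# fuzzy-logic alternative is not shown.
import numpy as np

cities = ["City A", "City B", "City C"]
raw = np.array([              # rows: cities; columns: indicators
    [55.0, 0.80, 12.0],
    [70.0, 0.60, 20.0],
    [40.0, 0.90,  8.0],
])
weights = np.array([0.5, 0.3, 0.2])          # must sum to 1

# Min-max normalisation; epsilon keeps the geometric mean defined at 0.
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0)) + 1e-6

scores = np.prod(norm ** weights, axis=1)    # weighted geometric aggregation
for city, s in sorted(zip(cities, scores), key=lambda t: -t[1]):
    print(f"{city}: {s:.3f}")
```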
Procedia PDF Downloads 147
45 Quantum Mechanics as a Limiting Case of Relativistic Mechanics
Authors: Ahmad Almajid
Abstract:
The idea of unifying quantum mechanics with general relativity is still a dream for many researchers, as physics has only two paths, no more: Einstein's path, which is based mainly on particle mechanics, and the path of Paul Dirac and others, which is based on wave mechanics. The incompatibility of the two approaches is due to the radical difference in the initial assumptions and in the mathematical nature of each approach. Logical thinking in modern physics leads us to two problems: (i) in quantum mechanics, despite its success, the problem of measurement and the problem of wave-function interpretation remain obscure; (ii) in special relativity, despite the success of the equivalence of rest mass and energy, the energy becoming infinite at the speed of light is contrary to logic, because the speed of light is not infinite and the mass of the particle is not infinite either. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light, and in order to solve these problems, one must understand well how to move from relativistic mechanics to quantum mechanics, or rather, how to unify them in a way different from Dirac's method, in order to go along with God or Nature since, as Einstein said, "God doesn't play dice." From De Broglie's hypothesis about wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, we arrive at new equations in quantum mechanics. In this paper, the two problems in modern physics mentioned above are solved; it can be said that this new approach to quantum mechanics will enable us to unify it with general relativity quite simply. If experiments prove the validity of the results of this research, we will be able in the future to transport matter at speeds close to the speed of light. Finally, this research yielded three important results: (1) the Lorentz quantum factor; (2) Planck energy as a limiting case of Einstein energy; (3) real quantum mechanics, in which new equations for quantum mechanics match and exceed Dirac's equations; these equations were reached in a completely different way from Dirac's method and show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax; while Bohr suggested that if particles are not observed they are in a probabilistic state, Einstein made his famous claim ("God does not play dice"). Thus, Einstein was right, especially when he did not accept the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. However, the results of our research indicate that God really does not play dice; when the electron disappears, it turns into amicable particles or an elastic medium, according to the obvious equations above. Likewise, Bohr was also right when he indicated that there must be a science like quantum mechanics to monitor and study the motion of subatomic particles, but the picture in front of him was blurry and unclear, so he resorted to the probabilistic interpretation.
Keywords: lorentz quantum factor, new, planck's energy as a limiting case of einstein's energy, real quantum mechanics, new equations for quantum mechanics
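For reference, these are the textbook special-relativistic relations the abstract is reacting to; the paper's new equations are not reproduced in the abstract, so only the standard formulas are shown.

```latex
% Standard special-relativistic relations referenced above (not the
% paper's new equations): the Lorentz factor and the total energy of a
% massive particle, which diverges as v approaches c.
\[
  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
  E = \gamma\, m_{0} c^{2} \longrightarrow \infty
  \quad \text{as } v \to c .
\]
```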
Procedia PDF Downloads 78
44 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator to allow the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory's efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error; it can be difficult to pinpoint the cause and even more difficult to predict issues that may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory on a given day or at a given hour. If we could model scenarios using past information and known variables, pathology laboratories could initiate resource preparations, e.g., printing specimen labels or activating a sufficient number of technicians. This would expedite the clinical workload and processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e., the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated, and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic-light colour-coding system is used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This allows pathologists to see clearly where there are issues and bottlenecks in the process. Graphs are also used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late, and in error. Clicking on potentially late samples displays more detailed information about those samples, the tests that still need to be performed on them, and their urgency level. This allows any issues to be resolved quickly; in the case of potentially late samples, it could help ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages, and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. 'Bots' would be used to control the flow of specimens through each step of the process. Like existing software-agent technology, these bots would be configurable in order to simulate different situations that may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at each step of the process, for example validating test results.
Keywords: laboratory-process, optimization, pathology, computer simulation, workflow
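A minimal sketch of the traffic-light classification logic described above; the paper implements the simulator in JavaScript, and the load thresholds and stage data here are hypothetical.

```python
# Minimal sketch of the traffic-light flow classification described above.
# The paper implements this in JavaScript inside a single-page web app;
# the thresholds and stage loads below are hypothetical.
def flow_status(specimens_at_stage: int, capacity: int) -> str:
    """Map a stage's load to the green/orange/red coding in the design."""
    load = specimens_at_stage / capacity
    if load < 0.7:
        return "green"    # normal flow
    if load < 0.9:
        return "orange"   # slow flow
    return "red"          # critical flow

stages = {"ordered": (40, 100), "testing": (85, 100), "validating": (95, 100)}
for stage, (n, cap) in stages.items():
    print(f"{stage}: {flow_status(n, cap)}")
```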
Procedia PDF Downloads 286
43 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one solution to this problem. Removing the redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes considerable time for both fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes of the core. In this paper, the hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as input; the output of the algorithm is a superreduct, i.e., a reduct with some additional, removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. The algorithm described above has two disadvantages: (i) it generates a superreduct instead of a reduct, and (ii) the additional first stage may be unnecessary if the core is empty. But for systems focused on fast computation of the reduct, the first disadvantage is not the key problem, and the core calculation can be achieved with a combinational logic block, which adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called the 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in C and run on a PC, and the execution times of the reduct calculation in hardware and in software were compared. The results show an increase in the speed of data processing.
Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
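A software sketch of the two-stage algorithm (not the FPGA design): stage one extracts the core as the singleton entries of the discernibility matrix, and stage two greedily enriches the core until every entry is covered, yielding a superreduct. The toy decision table is hypothetical; note that frequency here is counted over uncovered discernibility entries (a common greedy covering heuristic), whereas the paper counts attribute occurrences in the decision table.

```python
# Software sketch (not the FPGA design) of the two-stage greedy algorithm
# described above. Stage 1: core = singleton entries of the discernibility
# matrix. Stage 2: greedily add the attribute most frequent among uncovered
# entries until all are covered, producing a superreduct.
from collections import Counter

def discernibility(table, decision):
    """Attribute sets discerning object pairs with different decisions."""
    entries = []
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            if decision[i] != decision[j]:
                diff = {a for a, (x, y) in enumerate(zip(table[i], table[j]))
                        if x != y}
                if diff:
                    entries.append(diff)
    return entries

def superreduct(table, decision):
    entries = discernibility(table, decision)
    core = {next(iter(e)) for e in entries if len(e) == 1}   # stage 1
    chosen = set(core)
    uncovered = [e for e in entries if not (e & chosen)]
    while uncovered:                                          # stage 2
        freq = Counter(a for e in uncovered for a in e)
        chosen.add(freq.most_common(1)[0][0])
        uncovered = [e for e in uncovered if not (e & chosen)]
    return core, chosen

# Hypothetical decision table: rows of condition-attribute values.
table = [(0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 0, 1)]
decision = [0, 1, 0, 1]
print(superreduct(table, decision))   # (core, superreduct)
```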
Procedia PDF Downloads 219
42 Nature of Cities: Ontological Dimension of the Urban
Authors: Ana Cristina García-Luna Romero
Abstract:
This document seeks to reflect on the urban project from its conceptual identity root. In the first instance, a proposal is made on how the city project is sustained from the conceptual root, from the logos: it opens a way to assimilate the imagination, and what we imagine becomes a reality. In this way, firstly, the need to use language as a vehicle for transmitting the stories that sustain us as humanity can be seen as an important social factor, one that enables social behavior. Secondly, the need to attend to written language as a mechanism of power, as a means to consolidate a dominant ideology or a political position, is raised; since it served to carry out the modernization project, the differences between the real city and the lettered city are therefore addressed. Thus, the consolidated urban-architectural project is based on logos, the project, and planning. Considering the importance of materiality and its relation to subjective well-being, contextualized from a socio-urban approach, we ask how we can look at something that is doubtful. From the perspective of philosophy, truth is considered to be nothing more than a matter of correspondence between the observer and the observed. To understand beyond the relativity of the gaze, it is necessary to set out different perspectives, since understanding depends on what is observed and how it is critically analyzed. Therefore, the analysis of materiality as a political field rests, in this research, on the principles in transgenesis: the principles of communication, representativeness, security, health, malleability, availability of potentiality or development, conservation, sustainability, economy, harmony, stability, accessibility, justice, legibility, significance, consistency, joint responsibility, connectivity, and beauty, among others. The (urban) human being acts because they want to live in a certain way: in a community, in a fair way, with opportunity for development, with the possibility of managing the environment according to their needs, etc. In order to comply with this principle, it is necessary to design strategies based on the principles in transgenesis, which must be named, defined, understood, and socialized by urban beings, by companies, and by themselves. In this way, the technical status of the city in the neoliberal present creates extraordinary conditions for reflecting on an almost emergency scenario produced by the impact of cities; far from being limited to resilience proposals, such reflection must address the urban process that the present social model has generated. Can we therefore rethink the paradigm of the perception of quality of life in the current neoliberal model in the production of the character of public space related to the practices of being urban? What we are trying to do in this document is to build a framework to study the logic under which the practices of the social system that give meaning to public space are developed, the implications of the phenomena of the inscription of action and materialization (and their results for political action between the social and the technical system), and finally how we can improve the quality of life of individuals through urban space.
Keywords: cities, nature, society, urban quality of life
Procedia PDF Downloads 124
41 p-Type Multilayer MoS₂ Enabled by Plasma Doping for Ultraviolet Photodetectors Application
Authors: Xiao-Mei Zhang, Sian-Hong Tseng, Ming-Yen Lu
Abstract:
Two-dimensional (2D) transition metal dichalcogenides (TMDCs), such as MoS₂, have attracted considerable attention owing to the unique optical and electronic properties related to their 2D ultrathin atomic layer structure. MoS₂ is becoming prevalent in post-silicon digital electronics and in highly efficient optoelectronics due to its extremely low thickness and its tunable band gap (Eg = 1-2 eV). For low-power, high-performance complementary logic applications, both p- and n-type MoS₂ FETs (NFETs and PFETs) must be developed. NFETs with an electron accumulation channel can be obtained using unintentionally doped n-type MoS₂. However, the fabrication of MoS₂ FETs with complementary p-type characteristics is challenging due to the significant difficulty of injecting holes into its inversion channel. Plasma treatments with different species (including CF₄, SF₆, O₂, and CHF₃) have also been found to achieve the desired property modifications of MoS₂. In this work, we demonstrated a p-type multilayer MoS₂ enabled by selective-area doping using CHF₃ plasma treatment. Compared with single-layer MoS₂, multilayer MoS₂ can carry a higher drive current due to its lower band gap and multiple conduction channels. Moreover, it has three times the density of states at its conduction band minimum. Large-area growth of MoS₂ films on a 300 nm thick SiO₂/Si substrate is carried out by thermal decomposition of ammonium tetrathiomolybdate, (NH₄)₂MoS₄, in a tube furnace. A two-step annealing process is conducted to synthesize the MoS₂ films. In the first step, the temperature is set to 280 °C for 30 min in an N₂-rich environment at 1.8 Torr to transform (NH₄)₂MoS₄ into MoS₃. To further reduce MoS₃ to MoS₂, a second annealing step is performed at 750 °C for 30 min in a reducing atmosphere consisting of 90% Ar and 10% H₂ at 1.8 Torr. The grown MoS₂ films are subjected to out-of-plane doping by CHF₃ plasma treatment using a dry-etching system (ULVAC NLD-570). The radio-frequency power of the system is set to 100 W and the pressure to 7.5 mTorr; the final thickness of the treated samples is obtained by etching for 30 s. Back-gated MoS₂ PFETs are presented with an on/off current ratio on the order of 10³ and a field-effect mobility of 65.2 cm²V⁻¹s⁻¹. The MoS₂ PFET photodetector exhibited ultraviolet (UV) photodetection capability with a rapid response time of 37 ms and modulation of the generated photocurrent by the back-gate voltage. This work suggests the potential application of mildly plasma-doped p-type multilayer MoS₂ in UV photodetectors for environmental monitoring, human health monitoring, and biological analysis.
Keywords: photodetection, p-type doping, multilayers, MoS₂
Procedia PDF Downloads 104
40 The Importance of SEEQ in Teaching Evaluation of Undergraduate Engineering Education in India
Authors: Aabha Chaubey, Bani Bhattacharya
Abstract:
Evaluation of the quality of teaching in engineering education in India needs to be conducted on a continuous basis to achieve the best teaching quality in technical education. Quality teaching is an influential factor in technical education which largely impacts the learning outcomes of students. The present study is not exclusively theory-driven, but it draws on various specific concepts and constructs in the domain of technical education. These include teaching and learning in higher education, teacher effectiveness, and teacher evaluation and performance management in higher education. Student Evaluation of Education Quality (SEEQ) was proposed as one of the instruments for evaluating teaching quality in engineering education. SEEQ is a popular, standard instrument widely utilized all over the world, with established validity and reliability in the educational world. The present study was designed to evaluate teaching quality through SEEQ in the context of technical education in India, including its validity and reliability based on the collected data. The multidimensionality of SEEQ, present in every teaching and learning process, makes it well suited to collecting students' feedback regarding the quality of instruction and the instructor. SEEQ comprises 9 original constructs, i.e., learning value, teacher enthusiasm, organization, group interaction, individual rapport, breadth of coverage, assessment, assignments, and an overall rating of the particular course and instructor, with a total of 33 items. In the present study, a total of 350 first-year undergraduate students from the Indian Institute of Technology Kharagpur (IIT Kharagpur, India) were included in evaluating the importance of SEEQ. They belonged to four different courses from different streams of engineering studies. The validity and reliability of SEEQ were established on the basis of the collected data, using Confirmatory Factor Analysis (CFA) in Analysis of Moment Structures (AMOS) and Cronbach's alpha in SPSS for the examination of internal consistency. The evaluation of the effectiveness of SEEQ in CFA is based on fit indices such as CMIN/df, CFI, GFI, AGFI, and RMSEA readings. The major findings of this study showed fit indices of ChiSq = 993.664, df = 390, ChiSq/df = 2.548, GFI = 0.782, AGFI = 0.736, CFI = 0.848, RMSEA = 0.062, TLI = 0.945, RMR = 0.029, PCLOSE = 0.006. The final analysis of the fit indices indicated positive construct validity and stability; high reliability was also observed, indicating internal consistency. Thus, the study suggests the effectiveness of SEEQ as a quality evaluation instrument for the teaching-learning process in engineering education in India. It is therefore expected that the continuation of this research in engineering education may contribute to the betterment of the quality of technical education in India. It is also expected that this study will provide an empirical and theoretical logic for locating the construct or factor related to teaching that has the greatest impact on the teaching and learning process in a particular course or stream in engineering education.
Keywords: confirmatory factor analysis, engineering education, SEEQ, teaching and learning process
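A sketch of the internal-consistency check mentioned above, computing Cronbach's alpha directly from its definition; the study used SPSS and AMOS, and the response matrix here is a hypothetical stand-in for the 350 x 33 matrix of SEEQ item scores.

```python
# Sketch of the internal-consistency check mentioned above: Cronbach's
# alpha computed from its definition (the study itself used SPSS/AMOS).
# The simulated response matrix is a hypothetical stand-in for SEEQ scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
base = rng.integers(2, 6, size=(350, 1))          # shared underlying trait
scores = np.clip(base + rng.integers(-1, 2, size=(350, 33)), 1, 5)
print(f"alpha = {cronbach_alpha(scores.astype(float)):.2f}")
```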
Procedia PDF Downloads 421
39 Electrical Degradation of GaN-based p-channel HFETs Under Dynamic Electrical Stress
Authors: Xuerui Niu, Bolin Wang, Xinchuang Zhang, Xiaohua Ma, Bin Hou, Ling Yang
Abstract:
The application of discrete GaN-based power switches requires the collaboration of silicon-based peripheral circuit structures. However, the packages and interconnections between the Si and GaN devices can introduce parasitic effects into the circuit, which have a great impact on GaN power transistors. GaN-based monolithic power integration technology is an emerging solution that can improve the stability of circuits and allow GaN-based devices to achieve more functions. Complementary logic circuits consisting of GaN-based E-mode p-channel heterostructure field-effect transistors (p-HFETs) and E-mode n-channel HEMTs can serve as the gate drivers. E-mode p-HFETs with a recessed gate have attracted increasing interest because of their low leakage current and large gate swing. However, they suffer from a poor interface between the gate dielectric and the polarized nitride layers. The reliability of p-HFETs is analyzed and discussed in this work. In circuit applications, an inverter is always operated with a dynamic gate voltage (VGS) rather than a constant VGS. Therefore, dynamic electrical stress has been simulated to resemble the operating conditions of E-mode p-HFETs. The dynamic electrical stress condition is as follows: VGS is a square waveform switching from -5 V to 0 V, VDS is fixed, and the source is grounded. The frequency of the square waveform is 100 kHz, with a rise/fall time of 100 ns and a duty ratio of 50%. The effective stress time is 1000 s. A number of stress tests were carried out, with the stress briefly interrupted to measure the linear and saturation IDS-VGS characteristics. As VGS switches from -5 V to 0 V with VDS = 0 V, the devices are under a negative-bias-instability (NBI) condition: holes are trapped at the interface of the oxide layer and the GaN channel layer, which results in a reduction of VTH. The negative shift of VTH is pronounced during the first 10 s and then changes only slightly with further stress time. However, a different phenomenon is observed when VDS is reduced to -5 V: VTH shifts negatively during stress, and the variation in VTH increases with time, unlike the case of VDS = 0 V. Two mechanisms exist in this condition. On the one hand, the electric field in the gate region is influenced by the drain voltage, so the hole-trapping behavior in the gate region changes and the impact of the gate voltage is weakened. On the other hand, a large drain voltage can induce hot-hole generation and lead to serious hot-carrier stress (HCS) degradation over time. The poor-quality interface between the oxide layer and the GaN channel layer in the gate region is a major contributor to the high density of interface traps, which greatly influences device reliability. These results emphasize that improved etching and pretreatment processes need to be developed so that high-performance GaN complementary logic with enhanced stability can be achieved.
Keywords: GaN-based E-mode p-HFETs, dynamic electric stress, threshold voltage, monolithic power integration technology
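A sketch generating the stress waveform as specified above (100 kHz square wave from -5 V to 0 V, 100 ns edges, 50% duty), e.g., for feeding a circuit simulator; the sampling grid is arbitrary, and this is an illustration of the stated stress condition, not the authors' test setup.

```python
# Sketch of the dynamic gate-stress waveform described above: a 100 kHz
# square wave from -5 V to 0 V with 100 ns rise/fall edges and 50% duty.
# The 10 ns sampling step is arbitrary.
import numpy as np

def gate_stress_waveform(t: np.ndarray,
                         v_low=-5.0, v_high=0.0,
                         freq=100e3, edge=100e-9, duty=0.5) -> np.ndarray:
    period = 1.0 / freq
    tau = t % period
    t_high = duty * period
    v = np.full_like(t, v_low)
    rising = tau < edge                          # v_low -> v_high ramp
    v[rising] = v_low + (v_high - v_low) * tau[rising] / edge
    flat_high = (tau >= edge) & (tau < t_high)
    v[flat_high] = v_high
    falling = (tau >= t_high) & (tau < t_high + edge)
    v[falling] = v_high + (v_low - v_high) * (tau[falling] - t_high) / edge
    return v                                     # elsewhere stays at v_low

t = np.linspace(0, 2e-5, 2001)                   # two periods, 10 ns step
vgs = gate_stress_waveform(t)
print(vgs.min(), vgs.max())                      # -5.0 ... 0.0
```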
Procedia PDF Downloads 93
38 Control of Belts for Classification of Geometric Figures by Artificial Vision
Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez
Abstract:
The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the obtaining, processing, and analysis of any type of information, especially information obtained through digital images. Artificial vision is currently used in manufacturing for quality control and production, as these processes can be realized through counting algorithms and the positioning and recognition of objects, measured by a single camera or more. On the other hand, companies use assembly lines formed by conveyor systems with actuators on them to move pieces from one location to another during production. These devices must be programmed in advance for good performance and must follow a programmed logic routine. Nowadays production, quality, and the fast completion of the different stages and processes in the production chain of any product or service are the main targets of every industry. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each figure having a different color, and to link it with a group of conveyor systems that sort the figures into cubicles, which are likewise distinguished by color. The project is based on artificial vision, so the methodology must be strict; it is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator linked with the OpenCV libraries; together, these tools are used to build the program that identifies colors and shapes directly from the camera on the computer. 1.2 Image acquisition: to start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's webcam or by a specialized camera. 1.3 The recognition of RGB colors is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect shapes it is necessary to segment the images: the first step converts the image from RGB to grayscale to work with the dark tones of the image; the image is then binarized, leaving the figure in white on a black background; finally, we find the contours of the figure and count its edges to identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which, through the actuators, classify the figures into their respective cubicles. Conclusions: the OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics for any process. With the program developed in this project, any type of assembly line can be optimized, because images of the environment can be obtained and the process made more accurate.
Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB
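A sketch of the shape-detection steps 1.3-1.4 using OpenCV's Python bindings (the project itself uses Qt Creator with OpenCV in C++); the image path is a hypothetical placeholder, and the threshold assumes light figures on a dark background.

```python
# Sketch of steps 1.3-1.4 above using OpenCV's Python bindings (the project
# uses Qt Creator with OpenCV in C++). The image path is hypothetical, and
# THRESH_BINARY assumes figures brighter than the background.
import cv2

def classify_shapes(path: str) -> list:
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)            # to grayscale
    _, binary = cv2.threshold(gray, 127, 255,
                              cv2.THRESH_BINARY)             # binarize
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    shapes = []
    for c in contours:
        # Approximate the contour and count its vertices (edges).
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        if len(approx) == 3:
            shapes.append("triangle")
        elif len(approx) == 4:
            shapes.append("square")
        else:
            shapes.append("circle")                          # many vertices
    return shapes

print(classify_shapes("figures.png"))
```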
Procedia PDF Downloads 378
37 Journey to Inclusive School: Description of Crucial Sensitive Concepts in the Context of Situational Analysis
Authors: Denisa Denglerova, Radim Sip
Abstract:
Academic sources as well as international agreements and national documents define inclusion in terms of several criteria: equal opportunities, fulfilling individual needs, development of human resources, and community participation. In order for these criteria to be met, the community must be cohesive. Community cohesion, a relatively new concept, is not determined by homogeneity but by the acceptance of diversity among the community members and the utilisation of its positive potential. This brings us to a central category of inclusion: appreciating diversity and using it to positive effect. School diversity is a real phenomenon which schools need to tackle more and more often, as indicated by the number of publications focused on diversity in schools. These sources present recent analyses of the use of identity as a tool for coping with the demands of a diversified society. The aim of this study is to identify and describe in detail the processes taking place in selected schools which contribute to their pro-inclusive character. The research is designed around a multiple case study of three pro-inclusive schools. Paradigmatically speaking, the research is rooted in situational epistemology. This is also related to the overall framework of interpretation, for which we use innovative methods of situational analysis. In terms of specific research outcomes, this manifests itself in replacing the idea of an “objective theory” with the idea of a “detailed cartography of a social world”. The cartographic approach directs both the logic of data collection and the choice of methods for their analysis and interpretation. The research results include the detection of the following sensitive concepts: Key persons. All participants can contribute to promoting an inclusion-friendly environment; however, some do so with greater motivation than others. These could include school management, teachers with a strong vision of equality, or school counsellors. They have a significant effect on the transformation of the school and are themselves deeply convinced that inclusion is necessary. Accordingly, they select suitable co-workers; they also inspire some of the other co-workers to make changes, leading by example. Employees with strongly opposing views gradually leave the school, and new members of staff are introduced to the concept of inclusion and openness from the beginning. Manifestations of school openness in working with diversity on all important levels. By this we mean positive engagement with diversity in the relationships between “traditional” school participants (directors, teachers, pupils), in school-parent relationships, and in relationships between schools and the broader community, in terms of teaching methods as well as the ways the school culture affects the school environment. Other important detected concepts that significantly help to form a pro-inclusive environment in the school are individual and parallel classes; the freedom and responsibility of both pupils and teachers, manifested on the didactic level by tendencies towards an open curriculum; and ways of asserting discipline in the school environment.
Keywords: inclusion, diversity, education, sensitive concept, situational analysis
Procedia PDF Downloads 199