Search results for: classical physics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1427

1187 Interdisciplinary Approach in Vocational Training for Orthopaedic Surgery

Authors: Mihail Nagea, Olivera Lupescu, Elena Taina Avramescu, Cristina Patru

Abstract:

Classical education of orthopedic surgeons involves lectures, self-study, workshops and cadaver dissections, and sometimes supervised practical training within surgery, which quite often leaves young surgeons with the feeling of being unable to apply what they have learned, especially in surgical practice. The purpose of this paper is to present a different approach from the classical one, which enhances the practical skills of orthopedic trainees and prepares them for future practice. The paper presents the content of the research project 2015-1-RO01-KA202-015230, ERASMUS+ VET ‘Collaborative learning for enhancing practical skills for patient-focused interventions in gait rehabilitation after orthopedic surgery’, which, using e-learning as a basic tool, delivers to the trainees not only courses but especially practical information through videos and case scenarios, including gait analysis, in order to build patient-focused therapeutic plans adapted to the characteristics of each patient. The outcome of this project is to enhance practical skills in orthopedic surgery, and the results are evaluated from the answers to the questionnaires, but especially from the reactions within the case scenarios. The participants will thus follow the idea that any mistake in solving the cases might represent a failure in treating a real patient. This modern approach, besides using interactivity to evaluate the theoretical and practical knowledge of the trainee, increases the sense of responsibility, as well as the ability to react properly in real cases.

Keywords: interdisciplinary approach, gait analysis, orthopedic surgery, vocational training

Procedia PDF Downloads 220
1186 Analysis of Pollution in Agricultural Land Using Decagon EM-50 and Rock Magnetism Method

Authors: Adinda Syifa Azhari, Eleonora Agustine, Dini Fitriani

Abstract:

This measurement has been done to analyze the impact of industrial pollution on the environment. Our research aims to identify soil that has been polluted by industrial activity around the area, especially in Sumedang, West Java. Physical parameters such as total dissolved solids, volumetric water content, bulk electrical conductivity and frequency dependence (FD), measured with the Decagon EM-50, show that the soil has been polluted. The Decagon EM-50 is a geophysical environmental instrument used to interpret soil conditions. The experiment gave the following results for these physical parameters: volumetric water content (m³/m³) = 0.154–0.384; bulk electrical conductivity (dS/m) = 0.29–1.11; dielectric permittivity (DP) = 77.636–78.339. Based on these data, we conclude that the area has, in fact, been contaminated by dangerous materials. VWC is a physical parameter that indicates the water content of the soil. The data show pollution of the soil at the site, characterized by high pH, Total Dissolved Solids (TDS) and Electrical Conductivity (EC) and low Frequency Dependence (FD); this means the soil is alkaline, coarse-grained and has a high salt concentration.

Keywords: Decagon EM 50, electrical conductivity, industrial textiles, land, pollution

Procedia PDF Downloads 359
1185 Corrosion Behavior of CS1018 in Various CO2 Capture Solvents

Authors: Aida Rafat, Ramazan Kahraman, Mert Atilhan

Abstract:

The aggressive corrosion behavior of conventional amine solvents is one of the main barriers against large-scale commercialization of the amine absorption process for carbon capture applications. Novel CO2 absorbents that exhibit minimal corrosivity under operating conditions are essential to lower corrosion damage and ensure more robustness in the capture plant. This work investigated the corrosion behavior of carbon steel CS1018 in various CO2 absorbent solvents. The tested solvents included the classical amines MEA, DEA and MDEA, the piperazine-activated solvents MEA/PZ, MDEA/PZ and MEA/MDEA/PZ, as well as mixtures of MEA and Room Temperature Ionic Liquids (RTIL), namely MEA/[C4MIM][BF4] and MEA/[C4MIM][Otf]. The electrochemical polarization technique was used to determine the system corrosiveness in terms of corrosion rate and polarization behavior. The process parameters of interest were CO2 loading and solution temperature. Electrochemical results showed that the corrosivity order of the classical amines at 40°C is MDEA > MEA > DEA, whereas at 80°C the corrosivity ranking changes to MEA > DEA > MDEA. Corrosivity rankings were mainly governed by the CO2 absorption capacity at the test temperature. The corrosivity ranking for activated amines at 80°C was MEA/PZ > MDEA/PZ > MEA/MDEA/PZ. Piperazine addition seemed to have a dual advantage in terms of enhancing CO2 absorption capacity as well as suppressing corrosion. For MEA/RTIL mixtures, the preliminary results showed that the partial replacement of the aqueous phase in the MEA solution by the more stable, nonvolatile RTIL solvents reduced corrosion rates considerably.
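
The abstract does not quote the relations used to turn polarization data into corrosion rates; for background only, the standard textbook expressions typically used with such measurements (Stern-Geary and the ASTM G102 conversion) are sketched below. They are not taken from the paper.

```latex
% Standard relations typically used with polarization measurements (background only).
% Stern-Geary: corrosion current density from the polarization resistance R_p,
% with beta_a, beta_c the anodic and cathodic Tafel slopes.
\[
  i_{\mathrm{corr}} = \frac{B}{R_p}, \qquad
  B = \frac{\beta_a \beta_c}{2.303\,(\beta_a + \beta_c)} .
\]
% Corrosion rate (ASTM G102 form), with EW the equivalent weight and rho the density:
\[
  \mathrm{CR}\;[\mathrm{mm/yr}] = K_1\,\frac{i_{\mathrm{corr}}}{\rho}\,EW,
  \qquad K_1 = 3.27 \times 10^{-3}\ \mathrm{mm\,g\,\mu A^{-1}\,cm^{-1}\,yr^{-1}} .
\]
```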

Keywords: corrosion, amines, CO2 capture, piperazine, ionic liquids

Procedia PDF Downloads 435
1184 A Study of Learning Achievement for Heat Transfer by Using Experimental Sets of Convection with the Predict-Observe-Explain Teaching Technique

Authors: Wanlapa Boonsod, Nisachon Yangprasong, Udomsak Kitthawee

Abstract:

Thermal physics education is a complicated and challenging topic to discuss in any classroom. As a result, most students tend to be uninterested in learning this topic. In the current study, a convection experiment set was devised to show how heat can be transferred by a convection system to a thermoelectric plate until an LED flashes. This research aimed to 1) create a natural convection experimental set, 2) study learning achievement on the convection experimental set with the predict-observe-explain (POE) technique, and 3) study satisfaction with the convection experimental set used with the POE technique. The sample was chosen by purposive sampling and comprised 28 students in grade 11 at Patumkongka School in Bangkok, Thailand. The primary research instrument was the lesson plan for the POE technique on heat transfer using the convection experimental set. The instruments used to collect data included a heat transfer achievement test for convection, a satisfaction questionnaire administered after the learning activity, and the POE technique for heat transfer using the convection experimental set. The research format comprised a one-group pretest-posttest design. The data were analyzed with the GeoGebra program, and the statistics used in the research were mean, standard deviation and the t-test for dependent samples. The convection experimental set was composed of a thermoelectric element with its top side attached to a heat sink and its other side attached to a stainless steel plate; electrical current was indicated by the flashing of a 5 V LED, and the entire thermoelectric assembly was set up on top of a box and heated by an alcohol burner. Learning achievement after instruction with the POE technique and the natural convection experimental set was statistically significantly higher than before learning at the 0.01 level. Satisfaction with POE for physics learning of heat transfer using the convection experimental set was at a high level (4.83 out of 5.00).
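
As a rough illustration of the dependent-samples t-test used to compare pre- and post-test scores in a one-group pretest-posttest design, the following Python sketch runs a paired t-test on hypothetical scores. The sample size, scores and variable names are invented for illustration; they are not the study's data.

```python
# Minimal sketch of the paired (dependent-samples) t-test used in a
# one-group pretest-posttest design. The scores below are invented
# purely for illustration; they are not the study's data.
import numpy as np
from scipy import stats

pre = np.array([12, 9, 14, 10, 11, 8, 13, 10, 9, 12])      # hypothetical pre-test scores
post = np.array([16, 13, 17, 15, 14, 12, 18, 14, 13, 16])  # hypothetical post-test scores

t_stat, p_value = stats.ttest_rel(post, pre)                # paired t-test
print(f"mean gain = {np.mean(post - pre):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")               # compare p against the 0.01 level
```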

Keywords: convection, heat transfer, physics education, POE

Procedia PDF Downloads 187
1183 An Attempt to Improve Students' Understanding of Thermal Conductivity Using Thermal Cameras

Authors: Mariana Faria Brito Francisquini

Abstract:

Many thermal phenomena are present and play a substantial role in our daily lives. This presence makes the study of this area at both high school and university levels a widely explored topic in the literature. However, many concepts important to a meaningful understanding of the world are neglected in favor of a traditional approach built on senseless algebraic problems. In this work, we intend to show how the introduction of new technologies in the classroom, namely thermal cameras, can work in our favor to build a clearer understanding of many of these concepts, such as thermal conductivity. The use of thermal cameras in the classroom tends to diminish the everlasting abstractness of thermal phenomena, as they enable us to visualize something that happens right before our eyes yet cannot be seen. In our study, we provide the same amount of heat to metallic cylindrical rods of the same length but different materials in order to study the thermal conductivity of each one. In this sense, the thermal camera allows us to visualize the increase in temperature along each rod in real time, enabling us to infer how heat is being transferred from one part of the rod to another. Therefore, we intend to show how this approach can contribute to exposing students to more enriching, intellectually prolific scenarios than those provided by traditional approaches.
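
The abstract only describes the classroom demonstration; the sketch below illustrates numerically why rods of different materials warm up along their length at different rates, using an explicit finite-difference solution of the 1D heat equation. The geometry, boundary conditions and approximate material values are illustrative assumptions, not the demonstration setup itself.

```python
# Illustrative sketch (not from the paper): explicit finite-difference solution of the
# 1D heat equation dT/dt = alpha * d2T/dx2 for two rods heated at one end, showing why
# a high-diffusivity rod (copper) warms along its length faster than a low-diffusivity
# one (stainless steel). Material values are approximate textbook numbers.
import numpy as np

def heated_rod(alpha, length=0.30, nx=61, t_end=60.0, T0=25.0, T_hot=100.0):
    dx = length / (nx - 1)
    dt = 0.4 * dx**2 / alpha          # keep the explicit scheme stable (dt <= dx^2 / (2 alpha))
    T = np.full(nx, T0)
    T[0] = T_hot                      # one end held at the heat-source temperature
    for _ in range(int(t_end / dt)):
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                 # insulated far end
    return T

copper = heated_rod(alpha=1.11e-4)    # thermal diffusivity of copper, ~1.1e-4 m^2/s
steel = heated_rod(alpha=4.0e-6)      # stainless steel, ~4e-6 m^2/s
mid = len(copper) // 2
print(f"temperature at mid-rod after 60 s: copper ~ {copper[mid]:.1f} C, steel ~ {steel[mid]:.1f} C")
```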

Keywords: teaching physics, thermal cameras, thermal conductivity, thermal physics

Procedia PDF Downloads 248
1182 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia

Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu

Abstract:

Background: HIV/AIDS remains a major public health problem in Ethiopia, heavily affecting people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real dataset application focused on determining predictors of HIV patient survival. Methods: Parametric survival models based on the exponential, Weibull, log-normal, log-logistic, Gompertz and generalized gamma distributions were considered. A simulation study was carried out with two different prior specifications, informative and noninformative. A retrospective cohort study was implemented for HIV-infected patients under Highly Active Antiretroviral Therapy in Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, of whom 52.19% were female and 47.81% male. According to the Kaplan-Meier survival estimates for the two sex groups, females showed better survival times than their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 (27.81%) deaths and 231 (72.19%) censored individuals were registered. The average baseline cluster of differentiation 4 (CD4) cell count of the HIV/AIDS patients was 126.01, but after a three-year antiretroviral therapy follow-up the average CD4 cell count was 305.74, which was quite encouraging. Age, functional status, tuberculosis screening, past opportunistic infection, baseline CD4 cell count, World Health Organization clinical stage, sex, marital status, employment status, occupation type and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard errors of all covariates in the Bayesian log-normal survival model were smaller than in the classical one. Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when subjective data analysis was performed by considering expert opinions and historical knowledge about the parameters. Conclusions: HIV/AIDS patient mortality could be reduced through timely antiretroviral therapy with special attention to the potential factors. Moreover, the Bayesian log-normal survival model was preferable to the classical log-normal survival model for determining predictors of HIV patient survival.
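
As a sketch of how the classical (frequentist) arm of such a comparison can be set up in Python, the snippet below fits Weibull and log-normal parametric survival models to right-censored data using the lifelines library (assumed to be installed). The durations and censoring flags are simulated stand-ins, not the Alamata cohort, and the Bayesian counterpart is not shown.

```python
# Illustrative sketch (simulated data, not the study's): fitting classical parametric
# survival models with right-censoring, as in the frequentist arm of the comparison.
import numpy as np
from lifelines import WeibullFitter, LogNormalFitter

rng = np.random.default_rng(0)
n = 320
true_times = rng.lognormal(mean=4.3, sigma=0.8, size=n)    # hypothetical survival times (months)
censor_times = rng.uniform(1, 120, size=n)                  # hypothetical end-of-follow-up times
durations = np.minimum(true_times, censor_times)
observed = (true_times <= censor_times).astype(int)         # 1 = death observed, 0 = censored

wf = WeibullFitter().fit(durations, event_observed=observed)
lnf = LogNormalFitter().fit(durations, event_observed=observed)

# Compare fits via AIC = 2k - 2*log-likelihood (both models have 2 parameters).
for name, m in [("Weibull", wf), ("Log-normal", lnf)]:
    print(f"{name}: log-likelihood = {m.log_likelihood_:.1f}, AIC = {4 - 2 * m.log_likelihood_:.1f}")
print(f"Log-normal parameters: mu = {lnf.mu_:.2f}, sigma = {lnf.sigma_:.2f}")
```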

Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models

Procedia PDF Downloads 160
1181 From Colonial Outpost to Cultural India: Folk Epics of India

Authors: Jyoti Brahma

Abstract:

Folk epics of India are found in various Indian languages. The study of folk epics and their importance in folkloristic research in India came into prominence only during the nineteenth century, when British administrators and missionaries collected and documented folk epics from various parts of the country. The paper is an attempt to investigate how the colonial outpost appears to have penetrated the interior of Indian land and society and triggered the Indian Renaissance. It takes into account the composition of the epics of India and the attention they received during the nineteenth century, which in turn gave rise to the national consciousness shaping the culture of India. Composed as oral traditions, these folk epics are now seen as repositories of historical consciousness, whereas in earlier times societies without literacy were said to be without history. There is therefore an urgent need to re-examine the British impact on Indian literary traditions. The Bhakti poets, through their nuanced responses in their efforts to change the behavior of Indian society, give us a perfect example of the deferment of any clear-cut distinction between the folk and the classical in the context of India. Their work evades a pure categorization and classification as classical and constitutes part of the folk traditions of the cultural heritage of India. Therefore, the ethical question of what is ontologically known as ordinary discourse in the case of “folk” forms, metaphors and folk language gains importance once more. The paper thus also seeks to outline the significant factors responsible for shaping the destiny of folklore in South India, particularly in the four political states of the Indian Union: Andhra Pradesh, Karnataka, Kerala and Tamil Nadu, which could be termed South Indian “cultural zones”.

Keywords: colonial, folk, folklore, tradition

Procedia PDF Downloads 288
1180 Consideration of Magnetic Lines of Force as Magnets Produced by Percussion Waves

Authors: Angel Pérez Sánchez

Abstract:

Background: Considering magnetic lines of force as a vector magnetic current was introduced by convention around 1830. But this leads to a dead end in traditional physics, and quantum explanations must be invoked to explain the magnetic phenomenon. However, a study of magnetic lines as percussive waves leads to other paths capable of interpreting magnetism through traditional physics. Methodology: The building block used in the experiment is the observation that two parallel current-carrying cables attract each other if the currents flow in the same direction, together with its application at a microscopic level inside magnets. Significance: Considering magnetic lines as magnets themselves would mean a paradigm shift in the study of magnetism and open the way to solutions of mysteries of magnetism until now only revealed by quantum mechanics. Major findings: discovering how a magnetic field is created, reasoning out how magnetic attraction and repulsion work, understanding how magnets behave when they are split, and revealing the impossibility of a magnetic monopole. All of this is presented as if it were a symphony in which all the notes fit together perfectly to create a beautiful, smart, and simple work.

Keywords: magnetic lines of force, magnetic field, magnetic attraction and repulsion, magnet split, magnetic monopole, magnetic lines of force as magnets, magnetic lines of force as waves

Procedia PDF Downloads 42
1179 Indenyl and Allyl Palladates: Synthesis, Bonding, and Anticancer Activity

Authors: T. Scattolin, E. Cavarzerani, F. Visentin, F. Rizzolio

Abstract:

Organopalladium compounds have recently attracted attention for their high stability even under physiological conditions and, above all, for their remarkable in vitro cytotoxicity towards cisplatin-resistant cell lines. Among the organopalladium derivatives, those bearing at least one N-heterocyclic carbene ligand (NHC) and the Pd(II)-η³-allyl fragment have exhibited IC₅₀ values in the micro- and sub-micromolar range towards several cancer cell lines in vitro and, in some cases, selectivity towards cancerous vs. non-tumorigenic cells. Herein, a selection of allyl and indenyl palladates were synthesized using a solvent-free method consisting of grinding the corresponding palladium precursors with different saturated and unsaturated azolium salts. All compounds have been fully characterized by NMR, XRD and elemental analyses. The intramolecular H,Cl interaction has been elucidated and quantified using the Voronoi Deformation Density scheme. Most of the complexes showed excellent cytotoxicity towards ovarian cancer cell lines, with IC₅₀ values comparable to or even lower than cisplatin. Interestingly, the potent anticancer activity was also confirmed in a high-grade serous ovarian cancer (HGSOC) patient-derived tumoroid, with a clear superiority of this class of compounds over classical platinum-based agents. Finally, preliminary enzyme inhibition studies of the synthesized palladate complexes against the model enzyme thioredoxin reductase (TrxR) show that the compounds have high activity, comparable to or even higher than auranofin and classical Au(I) NHC complexes. Based on such promising data, further in vitro and in vivo experiments and in-depth mechanistic studies are ongoing in our laboratories.

Keywords: anticancer activity, palladium complexes, organoids, indenyl and allyl ligands

Procedia PDF Downloads 68
1178 Cluster Analysis of Students’ Learning Satisfaction

Authors: Purevdolgor Luvsantseren, Ajnai Luvsan-Ish, Oyuntsetseg Sandag, Javzmaa Tsend, Akhit Tileubai, Baasandorj Chilhaasuren, Jargalbat Puntsagdash, Galbadrakh Chuluunbaatar

Abstract:

One of the indicators of the quality of university services is student satisfaction. Aim: We aimed to study the level of satisfaction of first-year premedical students with the Medical Physics course using cluster analysis. Materials and Methods: In the framework of this goal, questionnaires were collected from a total of 324 students who studied the medical physics course in the first year of the premedical program at the Mongolian National University of Medical Sciences. When determining the level of satisfaction, the answers were recorded on five levels: "excellent", "good", "medium", "bad" and "very bad". A total of 39 questionnaire items were collected from students: 8 for course evaluation, 19 for teacher evaluation, and 12 for student evaluation. From the survey, a database with 39 fields and 324 records was created. Results: Cluster analysis of this database was performed in MATLAB and R using the k-means method of data mining. The Hopkins statistic calculated on the database gave values of 0.88, 0.87, and 0.97, which shows that cluster analysis methods can be used. The course evaluation sub-base is divided into three clusters: cluster I has 150 objects with a "good" rating (46.2%), cluster II has 119 objects with a "medium" rating (36.7%), and cluster III has 54 objects with a "good" rating (16.6%). The teacher evaluation sub-base is divided into three clusters: cluster II has 179 objects with a "good" rating (55.2%), cluster III has 108 objects with an "average" rating (33.3%), and cluster I has 36 objects with an "excellent" rating (11.1%). The student evaluation sub-base is divided into two clusters: cluster II has 215 objects with an "excellent" rating (66.3%), and cluster I has 108 objects with an "excellent" rating (33.3%). Evaluating the resulting clusters with the silhouette coefficient gave 0.32 for the course evaluation clustering, 0.31 for the teacher evaluation clustering, and 0.30 for the student evaluation clustering, indicating acceptable cluster quality. Conclusion: Cluster analysis of the medical physics course gave "good" 46.2%, "medium" 36.7% and "bad" 16.6% in the course evaluation model; "good" 55.2%, "medium" 33.3% and "bad" 11.1% in the teacher evaluation model; and "good" 66.3% and "bad" 33.3% in the student evaluation model.
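
The k-means plus silhouette workflow described above can be sketched as follows in Python with scikit-learn, on random stand-in data. The study's own analysis was run in MATLAB and R on the 324-record questionnaire database; the array shape and rating scale below are only assumptions for illustration.

```python
# Sketch of the k-means / silhouette workflow described above, on random stand-in data.
# Nothing here reproduces the study's 324-record questionnaire database.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
X = rng.integers(1, 6, size=(324, 8)).astype(float)   # 324 respondents x 8 course-evaluation items (1-5 scale)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_

sil = silhouette_score(X, labels)                      # values around 0.3 were reported for the real clusterings
sizes = np.bincount(labels)
print(f"cluster sizes: {sizes}, shares: {np.round(sizes / len(X) * 100, 1)} %")
print(f"silhouette coefficient: {sil:.2f}")
```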

Keywords: questionnaire, data mining, k-means method, silhouette coefficient

Procedia PDF Downloads 12
1177 The Development, Composition, and Implementation of Vocalises as a Method of Technical Training for the Adult Musical Theatre Singer

Authors: Casey Keenan Joiner, Shayna Tayloe

Abstract:

Classical voice training for the novice singer has long relied on the guidance and instruction of vocalise collections, such as those written and compiled by Marchesi, Lütgen, Vaccai, and Lamperti. These vocalise collections purport to encourage healthy vocal habits and instill technical longevity in both aspiring and established singers, though their scope has long been somewhat confined to the classical idiom. For pedagogues and students specializing in other vocal genres, such as musical theatre and CCM (contemporary commercial music), low-impact and pertinent vocal training aids are in short supply, and much of the suggested literature derives from classical methodology. While the tenets of healthy vocal production remain ubiquitous, specific stylistic needs and technical emphases differ from genre to genre and may require a specified extension of vocal acuity. As musical theatre continues to grow in popularity at both the professional and collegiate levels, the need for specialized training grows as well. Pedagogical literature geared specifically towards musical theatre (MT) singing and vocal production, while relatively uncommon, is readily accessible to the contemporary educator. Practitioners such as Norman Spivey, Mary Saunders Barton, Claudia Friedlander, Wendy Leborgne, and Marci Rosenberg continue to publish relevant research in the field of musical theatre voice pedagogy and have successfully identified many common MT vocal faults, their subsequent diagnoses, and their eventual corrections. Where classical methodology would suggest specific vocalises or training exercises to maintain corrected vocal posture following successful fault diagnosis, musical theatre finds itself without a relevant body of work towards which to transition. By analyzing the existing vocalise literature by means of a specialized set of parameters, including but not limited to melodic variation, rhythmic complexity, vowel utilization, and technical targeting, we have composed a set of vocalises meant specifically to address the training and conditioning of adult musical theatre voices. These vocalises target many pedagogical tenets of the musical theatre genre, including but not limited to thyroarytenoid-dominant production, twang resonance, lateral vowel formation, and “belt-mix.” By implementing these vocalises in the musical theatre voice studio, pedagogues can efficiently communicate proper musical theatre vocal posture and kinesthetic connection to their students, regardless of age or level of experience. The composition of these vocalises serves MT pedagogues on both a technical level as well as a sociological one. MT is a relative newcomer on the collegiate stage, and the academization of musical theatre methodologies has been a slow and arduous process. The conflation of classical and MT techniques and training methods has long plagued the world of voice pedagogy, and teachers often find themselves in positions of “cross-training,” that is, teaching students of both genres in one combined voice studio. As MT continues to establish itself on academic platforms worldwide, genre-specific literature and focused studies are both rare and invaluable. To ensure that modern students receive exacting and definitive training in their chosen fields, it becomes increasingly necessary for genres such as musical theatre to boast specified literature, and a collection of musical theatre-specific vocalises only aids in this effort.
This collection of musical theatre vocalises is the first of its kind and provides genre-specific studios with a basis upon which to grow healthy, balanced voices built for the harsh conditions of the modern theatre stage.

Keywords: voice pedagogy, targeted methodology, musical theatre, singing

Procedia PDF Downloads 127
1176 A Study of Algebraic Structure Involving Banach Space through Q-Analogue

Authors: Abdul Hakim Khan

Abstract:

The aim of the present paper is to study the Banach space and the combinatorial algebraic structure of R. It is further aimed to study the algebraic structure of the set of all q-extensions of classical formulas and functions for 0 < q < 1.
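
The abstract does not spell out the q-extensions in question; as background only, the standard q-analogue of a number and of the factorial, which underlie most q-extensions of classical formulas for 0 < q < 1, are sketched below (reference definitions, not taken from the paper).

```latex
% Standard q-analogue definitions (background only; not quoted from the paper).
\[
  [n]_q = \frac{1 - q^{\,n}}{1 - q} = 1 + q + q^{2} + \cdots + q^{\,n-1},
  \qquad 0 < q < 1,
\]
\[
  [n]_q! = \prod_{k=1}^{n} [k]_q , \qquad
  \lim_{q \to 1^{-}} [n]_q = n ,
\]
% so a q-extension of a classical formula reduces to the classical formula as q -> 1.
```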

Keywords: integral functions, q-extensions, q-numbers of metric space, algebraic structure of R and Banach space

Procedia PDF Downloads 546
1175 Electron Impact Ionization Cross-Sections for e-C₅H₅N₅ Scattering

Authors: Manoj Kumar

Abstract:

Ionization cross sections of molecules due to electron impact play an important role in chemical processes in various branches of applied physics, such as radiation chemistry, gas discharges, plasma etching in semiconductors, planetary upper-atmospheric physics, mass spectrometry, etc. In the present work, we have calculated the total ionization cross sections for adenine (C₅H₅N₅), a biologically important molecule, by electron impact in the incident electron energy range from the ionization threshold to 2 keV, employing the well-known Jain-Khare semiempirical formulation based on the Bethe and Möller cross sections. In the absence of experimental results, the present results are in good qualitative as well as quantitative agreement with the available theoretical results. The present results give us confidence for further investigation of complex bio-molecules with better accuracy. The present method can deduce reliable cross-section data for complex targets with adequate accuracy and may facilitate the incorporation of the calculated cross-sections into atomic and molecular cross-section data sets for modeling codes and other applications.

Keywords: electron impact ionization cross-sections, oscillator strength, Jain-Khare semiempirical approach

Procedia PDF Downloads 83
1174 Using Variation Theory in a Design-Based Approach to Improve Learning Outcomes of Teachers' Use of Video and Live Experiments in Swedish Upper Secondary School

Authors: Andreas Johansson

Abstract:

Conceptual understanding needs to be grounded in observation of physical phenomena, experiences or metaphors. Observation of physical phenomena using demonstration experiments has a long tradition within physics education, and students need to develop mental models to relate the observations to concepts from scientific theories. This study investigates how live and video experiments involving an acoustic trap to visualize particle-field interaction, field properties and particle properties can help develop students' mental models, and how they can be used differently to realize their potential as teaching tools. Initially, they were treated as analogues and the lesson designs were kept identical. With a design-based approach, the experimental and video designs, as well as best practices for each teaching tool, were then developed in iterations. Variation theory was used as a theoretical framework to analyze the planned and the realized patterns of variation and invariance, in order to explain learning outcomes as measured by a pre-posttest consisting of conceptual multiple-choice questions inspired by the Force Concept Inventory and the Force and Motion Conceptual Evaluation. Interviews with students and teachers were used to inform the design of experiments and videos in each iteration. The lesson designs and the live and video experiments have been developed to help teachers improve student learning and make school physics more interesting by involving experimental setups that are usually out of reach, and to bridge the gap between what happens in classrooms and in science research. As students' conceptual knowledge also raises their interest in physics, the aim is to increase their chances of pursuing careers within science, technology, engineering or mathematics.

Keywords: acoustic trap, design-based research, experiments, variation theory

Procedia PDF Downloads 56
1173 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable hazard rate shapes was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal, bathtub-shaped, or reversed-bathtub-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other 3-parameter parametric survival distributions such as the exponentiated Weibull distribution, the 3-parameter lognormal distribution, the 3-parameter gamma distribution, the 3-parameter Weibull distribution, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competing distributions based on the goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set are also carried out.
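
A sketch of how the classical maximum-likelihood fits and an information-criterion comparison of some of the baseline sub-models could look in Python is given below. scipy's `fisk` is the ordinary 2-parameter log-logistic; the data are simulated and uncensored, and the proposed generalized distribution itself is not implemented here.

```python
# Sketch (simulated, uncensored data): ML fits of three of the baseline lifetime models
# named above and an AIC comparison. The generalized log-logistic itself is not
# implemented here; scipy's `fisk` is the ordinary 2-parameter log-logistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.fisk.rvs(c=2.5, scale=20.0, size=500, random_state=rng)  # stand-in survival times

candidates = {
    "log-logistic (fisk)": stats.fisk,
    "Weibull": stats.weibull_min,
    "Burr XII": stats.burr12,
}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)                    # fix location at 0 for lifetime data
    ll = np.sum(dist.logpdf(data, *params))
    k = len(params) - 1                                # free parameters (location is fixed)
    print(f"{name:20s} log-likelihood = {ll:8.1f}  AIC = {2 * k - 2 * ll:8.1f}")
```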

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 166
1172 On Lie-Central Derivations and Almost Inner Lie-Derivations of Leibniz Algebras

Authors: Natalia Pacheco Rego

Abstract:

The Liezation functor is a map from the category of Leibniz algebras to the category of Lie algebras, which assigns to a Leibniz algebra the Lie algebra given by the quotient of the Leibniz algebra by the ideal spanned by the square elements of the Leibniz algebra. This functor is left adjoint to the inclusion functor that considers a Lie algebra as a Leibniz algebra. This environment fits in the framework of central extensions and commutators in semi-abelian categories with respect to a Birkhoff subcategory, where classical or absolute notions are relative to the abelianization functor. Classical properties of Leibniz algebras (properties relative to the abelianization functor) were adapted to the relative setting (with respect to the Liezation functor); in general, absolute properties have corresponding relative ones, but not all absolute properties immediately hold in the relative case, so new requirements are needed. Following this line of research, an analysis of central derivations of Leibniz algebras relative to the Liezation functor, called Lie-derivations, was conducted, and a characterization of Lie-stem Leibniz algebras by their Lie-central derivations was obtained. In this paper, we present an overview of these results, and we analyze some new properties concerning Lie-central derivations and almost inner Lie-derivations. Namely, a Leibniz algebra is a vector space equipped with a bilinear bracket operation satisfying the Leibniz identity. We define the Lie-bracket by [x, y]_Lie = [x, y] + [y, x], for all x, y. The Lie-center of a Leibniz algebra is the two-sided ideal of elements that annihilate all the elements of the Leibniz algebra through the Lie-bracket. A Lie-derivation is a linear map which acts as a derivation with respect to the Lie-bracket. Obviously, usual derivations are Lie-derivations, but the converse is not true in general. A Lie-derivation is called a Lie-central derivation if its image is contained in the Lie-center. A Lie-derivation is called an almost inner Lie-derivation if the image of an element x is contained in the Lie-commutator of x and the Leibniz algebra. The main results we present in this talk refer to the conditions under which Lie-central derivations and almost inner Lie-derivations coincide.
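
Restating the definitions above in symbols (a compact summary of what the abstract describes, with the notation chosen here for readability rather than taken from the paper):

```latex
% Compact restatement of the definitions given in the abstract (notation chosen here).
\[
  [x,y]_{\mathrm{Lie}} \;=\; [x,y] + [y,x], \qquad x, y \in \mathfrak{q}
  \quad (\mathfrak{q}\ \text{a Leibniz algebra}),
\]
\[
  Z_{\mathrm{Lie}}(\mathfrak{q}) \;=\; \{\, z \in \mathfrak{q} : [z,x]_{\mathrm{Lie}} = 0
  \ \text{for all}\ x \in \mathfrak{q} \,\}.
\]
% A Lie-derivation d satisfies d([x,y]_Lie) = [d(x),y]_Lie + [x,d(y)]_Lie;
% it is Lie-central if d(q) is contained in Z_Lie(q),
% and almost inner if d(x) lies in [x, q]_Lie for every x in q.
```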

Keywords: almost inner Lie-derivation, Lie-center, Lie-central derivation, Lie-derivation

Procedia PDF Downloads 112
1171 Two-Photon-Exchange Effects in the Electromagnetic Production of Pions

Authors: Hui-Yun Cao, Hai-Qing Zhou

Abstract:

High-precision measurements and experiments play an increasingly important role in particle physics and atomic physics. To analyse the precise experimental data sets, correspondingly precise and reliable theoretical calculations are necessary. Until now, the form factors of elementary constituents such as the pion and the proton have remained attractive issues in current Quantum Chromodynamics (QCD). In this work, the two-photon-exchange (TPE) effects in ep→enπ⁺ at small -t are discussed within a hadronic model. Under the pion dominance approximation and in the limit mₑ→0, the TPE contribution to the amplitude can be described by a scalar function. We calculate the TPE contributions to the amplitude and to the unpolarized differential cross section, with only the elastic intermediate state considered. The results show that the TPE corrections to the unpolarized differential cross section are about -4% to -20% at Q² = 1-1.6 GeV². After applying the TPE corrections to the experimental data sets of the unpolarized differential cross section, we analyze the TPE corrections to the separated cross sections σ(L,T,LT,TT). We find that the TPE corrections (at Q² = 1-1.6 GeV²) to σL are about -10% to -30%, to σT are about 20%, and to σ(LT,TT) are much larger. From these analyses, we conclude that the TPE contributions in ep→enπ⁺ at small -t are important for extracting the separated cross sections σ(L,T,LT,TT) and the electromagnetic form factor of π⁺ in the experimental analysis.

Keywords: differential cross section, form factor, hadronic, two-photon

Procedia PDF Downloads 101
1170 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when their algorithm performed better than the best known classical algorithm at the time for Max-Cut graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmark and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover's or Shor's, highlights to the world the potential that quantum computing holds. It also presents the prospect of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of qubits, which restricts the scale of the problem that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
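
A minimal sketch of the kind of evolutionary loop described above is shown below, evolving a QAOA-style angle vector (gammas and betas) against a black-box cost. A cheap analytic stand-in replaces the quantum expectation value of the Max-Cut cost Hamiltonian, so the population size, mutation scheme and cost function are illustrative assumptions, not the paper's actual hybrid implementation.

```python
# Minimal sketch of an evolutionary optimizer for QAOA-style angle vectors.
# A cheap analytic stand-in replaces the quantum expectation value of the
# Max-Cut cost Hamiltonian, which is what the real hybrid loop would evaluate.
import numpy as np

rng = np.random.default_rng(0)
p = 3                                   # QAOA depth -> 2*p angles (gammas and betas)
dim = 2 * p

def surrogate_cost(angles: np.ndarray) -> float:
    # Stand-in for -<C> from a QAOA circuit simulation; lower is better.
    return float(np.sum(np.cos(angles) ** 2) + 0.1 * np.sum(angles ** 2))

def evolve(pop_size=40, generations=200, sigma=0.3, elite_frac=0.25):
    pop = rng.uniform(-np.pi, np.pi, size=(pop_size, dim))
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        costs = np.array([surrogate_cost(ind) for ind in pop])   # embarrassingly parallel step
        elite = pop[np.argsort(costs)[:n_elite]]                 # selection
        children = elite[rng.integers(n_elite, size=pop_size - n_elite)]
        children = children + rng.normal(0.0, sigma, size=children.shape)  # Gaussian mutation
        pop = np.vstack([elite, children])
    costs = np.array([surrogate_cost(ind) for ind in pop])
    best = pop[np.argmin(costs)]
    return best, costs.min()

best_angles, best_cost = evolve()
print("best surrogate cost:", round(best_cost, 4))
print("best angles:", np.round(best_angles, 3))
```

Because each individual's cost is evaluated independently, the cost-evaluation step is exactly the part that can be parallelized, which is the speed-up argument made in the abstract.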

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 31
1169 Properties of Modified Dry Masonry Mixtures for Effective Masonry Units

Authors: Vyacheslav S. Semenov, Tamara A. Rozovskaya

Abstract:

The paper is devoted to the development of dry light-weight mixtures with hollow ceramic microspheres (CMS) for masonry works. For one-layer fencing structures built from effective masonry units, the use of “warm” masonry mortars is necessary. The light-weight masonry mortars currently in use do not provide the required grade strength and thermal uniformity of the fencing structures because of their high average density. CMS are an effective light-weight aggregate for such mortars. The influence of the CMS dosage on the physical-mechanical parameters and the technological properties of the masonry mortars was studied. The optimal mixture compositions have been obtained and their main properties have been determined. The influence of an air-entraining admixture and redispersible polymer powders on the average density and physical-mechanical parameters of the masonry mortars was also studied. Optimal compositions of light-weight dry masonry mixtures with CMS have been suggested.

Keywords: dry mortar mixtures, light-weight dry mixtures, hollow ceramics microspheres, masonry mortars, “warm” mortars, air-entraining admixture, redispersible polymer powders

Procedia PDF Downloads 475
1168 Status Report of the GERDA Phase II Startup

Authors: Valerio D’Andrea

Abstract:

The GERmanium Detector Array (GERDA) experiment, located at the Laboratori Nazionali del Gran Sasso (LNGS) of INFN, searches for the neutrinoless double beta decay (0νββ) of ⁷⁶Ge. Germanium diodes enriched to ∼86% in the double beta emitter ⁷⁶Ge (enrGe) are exposed, being both source and detector of 0νββ decay. Neutrinoless double beta decay is considered a powerful probe to address still-open issues in the neutrino sector of the Standard Model of particle physics and beyond. Since 2013, just after the completion of the first part of its experimental program (Phase I), the GERDA setup has been upgraded to perform its next step in the 0νββ searches (Phase II). Phase II aims to reach a sensitivity to the 0νββ decay half-life larger than 10²⁶ yr in about 3 years of physics data taking. This requires exposing a detector mass of about 35 kg of enrGe with a background index of about 10⁻³ cts/(keV·kg·yr). One of the main new implementations is the liquid argon scintillation light read-out, used to veto events that deposit their energy only partially in the Ge and partly in the surrounding LAr. In this paper, the expected goals of GERDA Phase II, the upgrade work and a few selected features from the 2015 commissioning and 2016 calibration runs will be presented. The main Phase I achievements will also be reviewed.

Keywords: GERDA, double beta decay, LNGS, germanium

Procedia PDF Downloads 343
1167 Standard Model-Like Higgs Decay into Displaced Heavy Neutrino Pairs in U(1)' Models

Authors: E. Accomando, L. Delle Rose, S. Moretti, E. Olaiya, C. Shepherd-Themistocleous

Abstract:

Heavy sterile neutrinos are almost ubiquitous in the class of Beyond Standard Model scenarios aimed at addressing the puzzle that emerged from the discovery of neutrino flavour oscillations, hence the need to explain their masses. In particular, they are necessary in a U(1)’ enlarged Standard Model (SM). We show that these heavy neutrinos can be rather long-lived producing distinctive displaced vertices and tracks. Indeed, depending on the actual decay length, they can decay inside a Large Hadron Collider (LHC) detector far from the main interaction point and can be identified in the inner tracking system or the muon chambers, emulated here through the Compact Muon Solenoid (CMS) detector parameters. Among the possible production modes of such heavy neutrino, we focus on their pair production mechanism in the SM Higgs decay, eventually yielding displaced lepton signatures following the heavy neutrino decays into weak gauge bosons. By employing well-established triggers available for the CMS detector and using the data collected by the end of the LHC Run 2, these signatures would prove to be accessible with negligibly small background. Finally, we highlight the importance that the exploitation of new triggers, specifically, displaced tri-lepton ones, could have for this displaced vertex search.

Keywords: beyond the standard model, displaced vertex, Higgs physics, neutrino physics

Procedia PDF Downloads 110
1166 Agency Beyond Metaphysics of Subjectivity

Authors: Erik Kuravsky

Abstract:

One of the problems with a post-structuralist account of agency is that it appears to reject the freedom of an acting subject, thus seeming to deny the very phenomenon of agency. However, this is only a problem if we think that human beings can be agents exclusively in terms of being subjects, that is, if we think agency subjectively. Indeed, we tend to understand traditional theories of human freedom (e.g., Plato's or Kant's) in terms of a peculiar ability of the subject. The paper proposes to de-subjectivize agency with the help of Heidegger's later thought. To do so, it argues that classical theories of agency may indeed be interpreted as subject-oriented (sometimes even by their authors), but do not have to be read as such. Namely, the claim is that what makes agency what it is, what is essential in agency, is not its belongingness to a subject, but its ontological configuration. We may say that agency "happens," and that there are very specific ontological characteristics to this happening. The argument of the paper is that we can find these characteristics in the classical accounts of agency and that these characteristics are sufficient to distinguish human freedom from other natural phenomena. In particular, the paper proposes to think of agency not as one of the human being's characteristics, but as an ontological event in which human beings take part. Namely, agency is a (non-human) characteristic of the different modes in which the experienceable existence of beings is determined by Being. To be an agent then is to participate in such ontological determination. What enables this participation is the ways human beings non-thematically understand the ontological difference. For example, for Plato, one acts freely only if one is led by an idea of the good, while for Kant the imperative for free action is categorical. The agency of an agent is thus dependent on the differentiation between ideas/categories and beings met in experience – one is "free" from contingent sensibility in terms of what is different from it ontologically. In this light, modern dependence on subjectivity is evident in the fact that the ontological difference is thought of as belonging to one's thinking, consciousness etc. That is, it is taken subjectively. A non-subjective account of agency, on the other hand, requires thinking this difference as belonging to Being itself, and thinking of human beings as a medium within which the non-human force of ontological differentiation occurs.

Keywords: Heidegger, freedom, agency, poststructuralism

Procedia PDF Downloads 171
1165 Applying the Crystal Model Approach on Light Nuclei for Calculating Radii and Density Distribution

Authors: A. Amar

Abstract:

A new model, namely the crystal model, has been modified to calculate the radius and density distribution of light nuclei up to ⁸Be. The crystal model has been adapted from solid-state physics, using the analogy between the distribution of nucleons in a nucleus and the distribution of atoms in a crystal. The model provides an analytical expression for the radius, while the density distribution of light nuclei is obtained from the analogy with the crystal lattice. The distribution of nucleons over the crystal has been discussed in a general form. The equation used to calculate the binding energy was taken from the solid-state model of repulsive and attractive forces, with the number of protons taken to control the repulsive force, while the atomic number was responsible for the attractive force. The parameter calculated from the crystal model was found to be proportional to the radius of the nucleus. The density distribution of light nuclei was taken as a sum of two cluster distributions, as in the ⁶Li = alpha + deuteron configuration. The radii and density distributions obtained were tested by double-folding calculations for d+⁶,⁷Li with the M3Y nucleon-nucleon interaction, and good agreement was obtained for both the radius and the density distribution of light nuclei. The model failed to reproduce the radius of ⁹Be, so modifications should be made to overcome this discrepancy.
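
The abstract invokes a double-folding test of the obtained densities; for reference, the standard double-folding potential used in such tests with the M3Y effective interaction has the textbook form below (not quoted from the paper).

```latex
% Standard double-folding potential (textbook form; not quoted from the abstract).
\[
  V_{F}(\mathbf{R}) \;=\;
  \int \! d^{3}r_{1} \int \! d^{3}r_{2}\;
  \rho_{1}(\mathbf{r}_{1})\,\rho_{2}(\mathbf{r}_{2})\,
  v_{NN}\!\left(\mathbf{R} + \mathbf{r}_{2} - \mathbf{r}_{1}\right),
\]
% where rho_1, rho_2 are the projectile and target density distributions
% (here the deuteron and the 6,7Li densities from the crystal model) and
% v_NN is the effective nucleon-nucleon interaction (e.g., M3Y).
```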

Keywords: nuclear physics, nuclear lattice, study of the nucleus as a crystal, light nuclei up to ⁸Be

Procedia PDF Downloads 142
1164 Experiment-Based Teaching Method for the Varying Frictional Coefficient

Authors: Mihaly Homostrei, Tamas Simon, Dorottya Schnider

Abstract:

The topic of oscillation in physics is one of the key ideas and is usually taught based on the concept of harmonic oscillation. It can be an interesting activity to deal with a frictional oscillator in advanced high school classes or in university courses. Its mechanics are investigated in this research, which shows that the motion of the frictional oscillator is more complicated than that of a simple harmonic oscillator. The physics of the applied model in this study is interesting and useful for undergraduate students. The study presents a well-known physical system, which is mostly discussed theoretically in high school and at university. The ideal frictional oscillator is normally used as an example of harmonic oscillatory motion, as its theory relies on a constant coefficient of sliding friction. The structure of the system is simple: a rod with a homogeneous mass distribution is placed on two identical rotating cylinders set at the same height so that they are horizontally aligned, rotating at the same angular velocity but in opposite directions. Based on this setup, one can easily show that the equation of motion describes a harmonic oscillation, by considering the magnitudes of the normal forces in the system as functions of the position and relating the frictional forces, with a constant coefficient of friction, to them. Therefore, the whole description of the model relies on simple Newtonian mechanics, which is accessible to students even in high school. On the other hand, the behaviour of the frictional oscillator described does not seem to be so straightforward after all; experiments show that the simple harmonic oscillation cannot be observed in all cases, and the system performs a much more complex movement, whereby the rod settles into a non-harmonic oscillation with a nonzero stable amplitude after an unconventional damping effect. The stable amplitude, in this case, means that the position function of the rod converges to a harmonic oscillation with a constant amplitude. This leads to the idea of a more complex model which can describe the motion of the rod in a more accurate way. The main difference from the original equation of motion is that the frictional coefficient varies with the relative velocity. This dependence on velocity has been investigated in many research articles as well; however, this specific problem can demonstrate the key concept of the varying friction coefficient and its importance in an interesting and demonstrative way. The position function of the rod is described by a more complicated and non-trivial, yet more precise, equation than the usual harmonic-oscillation description of the movement. The study discusses the structure of the measurements related to the frictional oscillator, the qualitative and quantitative derivation of the theory, and the comparison of the final theoretical function with the measured position function in time. The project provides useful materials and knowledge for undergraduate students and a new perspective in university physics education.
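
To make the setup above concrete, the sketch below numerically integrates the rod-on-counter-rotating-cylinders equation of motion with a velocity-dependent friction coefficient. The geometry, the Stribeck-like form of μ(v) and all numerical values are illustrative assumptions, not the authors' model; with a constant μ the same code reproduces simple harmonic motion.

```python
# Illustrative sketch (assumed geometry and friction law, not the authors' model):
# a rod resting on two counter-rotating cylinders at x = -d and x = +d, whose tops
# move toward the centre with surface speed u. Normal forces follow from torque
# balance, and the friction coefficient is taken to depend on the sliding speed.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81      # m/s^2
d = 0.10      # half-distance between the cylinders, m (assumed)
u = 0.50      # surface speed of the cylinders, m/s (assumed)

def mu(v_rel):
    # Hypothetical velocity-dependent friction law: a static-like value at low sliding
    # speed decaying toward a kinetic value (Stribeck-like shape).
    return 0.25 + 0.15 * np.exp(-abs(v_rel) / 0.2)

def rhs(t, y):
    x, v = y
    n1 = g * (d - x) / (2 * d)          # normal force per unit mass, left cylinder
    n2 = g * (d + x) / (2 * d)          # right cylinder
    v1, v2 = v - u, v + u               # sliding velocities of the rod on each cylinder
    a = -mu(v1) * n1 * np.sign(v1) - mu(v2) * n2 * np.sign(v2)
    return [v, a]

sol = solve_ivp(rhs, (0.0, 20.0), [0.05, 0.0], max_step=1e-3)  # start 5 cm off-centre
x = sol.y[0]
print("peak-to-peak amplitude over the last few seconds:", round(x[sol.t > 15].ptp(), 4), "m")
# With constant mu the motion is simple harmonic; with the decaying mu(v) assumed here
# the amplitude tends toward a roughly constant nonzero value instead of staying fixed,
# qualitatively like the stable-amplitude behaviour described in the abstract.
```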

Keywords: friction, frictional coefficient, non-harmonic oscillator, physics education

Procedia PDF Downloads 171
1163 Non-Singular Gravitational Collapse of a Homogeneous Scalar Field in Deformed Phase Space

Authors: Amir Hadi Ziaie

Abstract:

In the present work, we revisit the collapse process of a spherically symmetric homogeneous scalar field (in an FRW background) minimally coupled to gravity, when phase-space deformations are taken into account. Such a deformation is mathematically introduced as a particular type of noncommutativity between the canonical momenta of the scale factor and of the scalar field. In the absence of such a deformation, the collapse culminates in a spacetime singularity. However, when the phase space is deformed, we find that the singularity is removed by a non-singular bounce, beyond which the collapsing cloud re-expands to infinity. More precisely, for negative values of the deformation parameter, we identify the appearance of a negative pressure, which decelerates the collapse and finally prevents the formation of the singularity. While in the un-deformed case the horizon curve monotonically decreases to finally cover the singularity, in the deformed case the horizon has a minimum value that depends on the deformation parameter and the initial configuration of the collapse. Such a setting predicts a threshold mass for black hole formation in stellar collapse and manifests the role of non-commutative geometry in physics, especially in stellar collapse and supernova explosions.

Keywords: gravitational collapse, non-commutative geometry, spacetime singularity, black hole physics

Procedia PDF Downloads 316
1162 Historical Geography of Lykaonia Region

Authors: Asuman Baldiran, Erdener Pehlivan

Abstract:

In this study, the origin of the name Lykaonia and the geographical area defined as the Lykaonia Region are discussed. In this context, information concerning the settlements of the Paleolithic, Neolithic and Chalcolithic Ages is given. In particular, the settlements belonging to the Classical Age are localized, and brief information about the history of these settlements is provided. In the light of this information, the roads of the ancient period in the region are evaluated.

Keywords: ancient cities, central anatolia, historical geography, Lykaonia region

Procedia PDF Downloads 351
1161 Pharmaceutical Applications of Newton's Second Law and Disc Inertia

Authors: Nicholas Jensen

Abstract:

As the effort to create new drugs to treat rare conditions cost-effectively intensifies, there is a need to ensure maximum efficiency in the manufacturing process. This includes the creation of ultracompact treatment forms, which can best be achieved via applications of fundamental laws of physics. This paper reports an experiment exploring the relationship between the forms of Newton's 2ⁿᵈ Law appropriate to linear motion and to transversal architraves. The moment of inertia of three discs was determined by experiment and compared with previous data derived from a theoretical relationship; the method used was to attach the discs to a moment arm. Comparing the results with those obtained from previous experiments, they are found to be consistent with the first law of thermodynamics. It was further found that Newton's 2ⁿᵈ law violates the second law of thermodynamics. The purpose of this experiment was to explore the relationship between the forms of Newton's 2ⁿᵈ Law appropriate to linear motion and to the application of torque as a twisting force, which is determined by the position vector r and the force vector F. Substituting the angular form for the linear one, the angular acceleration is the linear acceleration divided by the radius r of the moment arm. The nevrological analogy of Newton's 2ⁿᵈ Law suggests that these findings can contribute to a fuller understanding of thermodynamics in relation to viscosity. Implications for the pharmaceutical industry will be seen to be fruitful from these findings.
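
For reference, the standard rotational relations that the abstract appeals to when relating disc inertia, torque and linear acceleration are the textbook formulas below; they are background only, not results from the paper.

```latex
% Textbook rotational analogues of Newton's second law invoked above (background only).
\[
  \tau = I\,\alpha, \qquad
  I_{\text{disc}} = \tfrac{1}{2} M R^{2}, \qquad
  \alpha = \frac{a}{r},
\]
% so a torque tau applied through a moment arm of radius r produces a linear
% acceleration a = tau * r / I at that radius.
```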

Keywords: Newtonian physics, inertia, viscosity, pharmaceutical applications

Procedia PDF Downloads 90
1160 Dielectric Spectroscopy Investigation of Hydrophobic Silica Aerogel

Authors: Deniz Bozoglu, Deniz Deger, Kemal Ulutas, Sahin Yakut

Abstract:

In recent years, silica aerogels have attracted great attention due to their outstanding properties and their wide variety of potential applications, such as microelectronics, nuclear and high-energy physics, optics and acoustics, superconductivity, and space physics. Hydrophobic silica aerogels were successfully synthesized in one step by surface modification at ambient pressure. FT-IR results confirmed that Si-OH groups were successfully converted into hydrophobic, non-polar Si-CH3 groups by surface modification using trimethylchlorosilane (TMCS) as a co-precursor. Using an Alpha-A High-Resolution Dielectric, Conductivity and Impedance Analyzer, the AC conductivity of the samples was examined in the temperature range 293-423 K and measured over the frequency range 1-10⁶ Hz. The characteristic relaxation time decreases with increasing temperature. The AC conductivity follows the relation σ_AC(ω) = σ_total − σ_DC = Aω^s at frequencies higher than 10 Hz, and the dominant conduction mechanism is found to obey the Correlated Barrier Hopping (CBH) mechanism. At frequencies lower than 10 Hz, the electrical conduction is found to be in accordance with a DC conduction mechanism. Activation energies were obtained from the AC conductivity results, and two relaxation regions were observed.
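
As an illustration of how the exponent s of the σ_AC(ω) = Aω^s regime can be extracted from a conductivity spectrum, the sketch below fits the power law to synthetic data. The generated values, the assumed s, the DC plateau and the noise level are invented stand-ins; they are not the measured aerogel spectra.

```python
# Sketch: extracting the exponent s of the high-frequency power law sigma_AC = A * omega**s
# from a conductivity spectrum. The data below are synthetic stand-ins, not measurements.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
freq = np.logspace(1, 6, 60)                 # 10 Hz .. 1 MHz
omega = 2 * np.pi * freq
sigma_dc = 1e-9                              # assumed DC plateau, S/cm
sigma_total = sigma_dc + 2e-12 * omega**0.78 * (1 + 0.05 * rng.normal(size=omega.size))

def power_law(omega, A, s):
    return A * omega**s

sigma_ac = sigma_total - sigma_dc            # subtract the DC contribution first
(A_fit, s_fit), _ = curve_fit(power_law, omega, sigma_ac, p0=(1e-12, 0.8))
print(f"fitted A = {A_fit:.2e}, s = {s_fit:.2f}  (s < 1 is consistent with CBH-type hopping)")
```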

Keywords: aerogel, synthesis, dielectric constant, dielectric loss, relaxation time

Procedia PDF Downloads 169
1159 A Predictive Model for Turbulence Evolution and Mixing Using Machine Learning

Authors: Yuhang Wang, Jorg Schluter, Sergiy Shelyag

Abstract:

The high cost associated with high-resolution computational fluid dynamics (CFD) is one of the main challenges that inhibit the design, development, and optimisation of new combustion systems adapted for renewable fuels. In this study, we propose a physics-guided CNN-based model to predict turbulence evolution and mixing without requiring a traditional CFD solver. The model architecture is built upon U-Net and the inception module, while a physics-guided loss function is designed by introducing two additional physical constraints to allow for the conservation of both mass and pressure over the entire predicted flow fields. Then, the model is trained on the Large Eddy Simulation (LES) results of a natural turbulent mixing layer with two different Reynolds number cases (Re = 3000 and 30000). As a result, the model prediction shows an excellent agreement with the corresponding CFD solutions in terms of both spatial distributions and temporal evolution of turbulent mixing. Such promising model prediction performance opens up the possibilities of doing accurate high-resolution manifold-based combustion simulations at a low computational cost for accelerating the iterative design process of new combustion systems.
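
The physics-guided loss described above combines a data term with penalties enforcing the two conservation constraints; a minimal sketch of what such a loss could look like is given below. The divergence-based mass term, the mean-pressure term, the field layout and the weights are illustrative guesses at the idea, not the paper's actual loss function.

```python
# Minimal sketch of a physics-guided loss: data fidelity plus soft penalties for
# mass conservation (zero velocity divergence on a 2D grid) and for conserving the
# mean pressure. The exact constraints and weights used in the paper may differ.
import torch
import torch.nn.functional as F

def physics_guided_loss(pred, target, lambda_mass=0.1, lambda_p=0.1):
    # pred, target: tensors of shape (batch, 3, H, W) holding (u, v, p) fields.
    data_loss = F.mse_loss(pred, target)

    u, v, p = pred[:, 0], pred[:, 1], pred[:, 2]
    # central finite differences in the interior of the grid
    du_dx = (u[:, 1:-1, 2:] - u[:, 1:-1, :-2]) / 2.0
    dv_dy = (v[:, 2:, 1:-1] - v[:, :-2, 1:-1]) / 2.0
    mass_residual = du_dx + dv_dy                      # continuity: div(u) should vanish
    mass_loss = torch.mean(mass_residual ** 2)

    # soft constraint: mean pressure of the prediction should match the target's
    p_loss = (p.mean(dim=(1, 2)) - target[:, 2].mean(dim=(1, 2))).pow(2).mean()

    return data_loss + lambda_mass * mass_loss + lambda_p * p_loss

# usage sketch with random stand-in fields
pred = torch.randn(4, 3, 64, 64, requires_grad=True)
target = torch.randn(4, 3, 64, 64)
loss = physics_guided_loss(pred, target)
loss.backward()
print(float(loss))
```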

Keywords: computational fluid dynamics, turbulence, machine learning, combustion modelling

Procedia PDF Downloads 49
1158 Evaluation of the Sustainability of Greek Vernacular Architecture in Different Climate Zones: Architectural Typology and Building Physics

Authors: Christina Kalogirou

Abstract:

Investigating the integration of bioclimatic design into vernacular architecture could lead to interesting results regarding the preservation of cultural heritage while enhancing the energy efficiency of historic buildings. Furthermore, the recognized principles and systems of bioclimatic design in vernacular settlements could be applied to modern architecture and thus to new buildings in such areas. This study introduces a way of categorizing distinct technologies and design principles of bioclimatic design based on a thoughtful consideration of the various climatic zones and environments in Greece (mountainous areas, islands and lowlands). For this purpose, various types of dwellings are evaluated for their response to climate, regarding the layout of the buildings (orientation, shape of the floor plans, semi-open spaces), the site planning, the openings (size, position, protection), the building envelope (walls: construction materials and thickness; roof construction detailing) and the migratory living pattern according to seasonal needs. As a result, various passive design principles that could be adapted to current architectural practice in such areas, in order to optimize the relationship between site, building, climate and energy efficiency, are proposed.

Keywords: bioclimatic design, building physics, climatic zones, energy efficiency, vernacular architecture

Procedia PDF Downloads 351