Search results for: statistical physics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4407

3897 Estimating Knowledge Flow Patterns of Business Method Patents with a Hidden Markov Model

Authors: Yoonjung An, Yongtae Park

Abstract:

Knowledge flows are a critical source of faster technological progress and stronger economic growth. They have accelerated dramatically with the establishment of patent systems, in which each patent is required by law to disclose sufficient technical information for the invention to be recreated. Patent analysis has therefore been widely used to investigate technological knowledge flows. However, the existing research is limited in both subject and approach. In particular, most previous studies did not cover business method (BM) patents, although these drive knowledge flows just as other patents do. In addition, these studies usually focus on static analysis of knowledge flows; some incorporate the time dimension, yet they still fail to trace the truly dynamic process of knowledge flows. We therefore investigate dynamic patterns of knowledge flows driven by BM patents using a Hidden Markov Model (HMM). An HMM is a popular statistical tool for modeling a wide range of time series data, with no general theoretical limit on statistical pattern classification. It thus enables characterizing knowledge patterns that may differ by patent, sector, country, and so on. We run the model on sets of backward citations and forward citations to compare the patterns of knowledge utilization and knowledge dissemination.
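The core HMM computation can be illustrated with a minimal forward-algorithm sketch. The two-state start, transition, and emission probabilities below are invented purely for illustration (e.g. hidden "knowledge-flow" states emitting low/high citation years) and are not values from the paper:

```python
# Minimal sketch of an HMM sequence likelihood via the forward algorithm.
# All probabilities below are illustrative assumptions, not data from the study.

def forward_likelihood(obs, start_p, trans_p, emit_p):
    """Probability of a discrete observation sequence under an HMM."""
    n_states = len(start_p)
    # Initialise with the first observation.
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n_states)]
    # Fold in each subsequent observation.
    for o in obs[1:]:
        alpha = [
            emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in range(n_states))
            for s in range(n_states)
        ]
    return sum(alpha)

# Two hidden states; observation 0 = low-citation year, 1 = high-citation year.
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]
p = forward_likelihood([0, 1, 1], start, trans, emit)
```

In practice the parameters would be estimated from citation sequences (e.g. via Baum-Welch) rather than fixed by hand.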

Keywords: business method patents, dynamic pattern, Hidden-Markov Model, knowledge flow

Procedia PDF Downloads 321
3896 Effects of Process Parameter Variation on the Surface Roughness of Rapid Prototyped Samples Using Design of Experiments

Authors: R. Noorani, K. Peerless, J. Mandrell, A. Lopez, R. Dalberto, M. Alzebaq

Abstract:

Rapid prototyping (RP) is an additive manufacturing technology used in industry that works by systematically depositing layers of working material to construct larger, computer-modeled parts. A key challenge associated with this technology is that RP parts often feature undesirable levels of surface roughness for certain applications. To combat this phenomenon, an experimental technique called Design of Experiments (DOE) can be employed during the growth procedure to statistically analyze which RP growth parameters are most influential to part surface roughness. Utilizing DOE to identify such factors is important because it is a technique that can be used to optimize a manufacturing process, which saves time and money and increases product quality. In this study, a four-factor/two-level DOE experiment was performed to investigate the effect of temperature, layer thickness, infill percentage, and infill speed on the surface roughness of RP prototypes. Samples were grown using the sixteen possible growth combinations of a four-factor/two-level study, and surface roughness data were then gathered for each set of factors. After applying DOE statistical analysis to these data, it was determined that layer thickness played the most significant role in prototype surface roughness.
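The main-effect calculation in a 2^4 full-factorial study of this kind can be sketched as follows. The response function below is synthetic (chosen so that layer thickness dominates, mirroring the study's finding) and stands in for the measured roughness values:

```python
# Hedged sketch of main-effect estimation for a 2^4 full-factorial DOE.
# The "roughness" response is a synthetic stand-in, not the study's data.
from itertools import product

factors = ["temperature", "layer_thickness", "infill_pct", "infill_speed"]

def roughness(levels):
    # Synthetic response; each level is coded -1 (low) or +1 (high).
    t, h, f, s = levels
    return 10 + 4.0 * h + 0.5 * t + 0.2 * f - 0.1 * s

runs = list(product([-1, 1], repeat=4))          # all 16 factor combinations
responses = [roughness(r) for r in runs]

def main_effect(i):
    # Mean response at the high level minus mean response at the low level.
    hi = [y for r, y in zip(runs, responses) if r[i] == 1]
    lo = [y for r, y in zip(runs, responses) if r[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {f: main_effect(i) for i, f in enumerate(factors)}
```

Ranking the absolute effects identifies the most influential factor, here layer thickness by construction.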

Keywords: rapid prototyping, surface roughness, design of experiments, statistical analysis, factors and levels

Procedia PDF Downloads 257
3895 Dual Duality for Unifying Spacetime and Internal Symmetry

Authors: David C. Ni

Abstract:

The current efforts toward a Grand Unification Theory (GUT) can be classified into General Relativity, Quantum Mechanics, String Theory, and related formalisms. The geometric approaches for extending General Relativity establish global and local invariance embedded into metric formalisms, whereby additional dimensions are constructed for unifying canonical formulations, such as the Hamiltonian and Lagrangian formulations. The approaches extending Quantum Mechanics adopt the symmetry principle to formulate algebra-group theories, which evolved from the Maxwell formulation to the Yang-Mills non-abelian gauge formulation and thereafter manifested the Standard Model. This thread of efforts has been constructing super-symmetry for mapping fermion and boson as well as gluon and graviton. The efforts of String Theory have been evolving toward so-called gauge/gravity correspondence, particularly the equivalence between type IIB string theory compactified on AdS5 × S5 and N = 4 supersymmetric Yang-Mills theory. Other efforts adopt cross-breeding approaches of the above three formalisms, as well as competing formalisms; nevertheless, the related symmetries, dualities, and correspondences are outlined as principles and techniques, even though these terminologies are defined diversely and often generally coined as duality. In this paper, we first classify these dualities from the perspective of physics. We then examine the hierarchical structure of the classes from a mathematical perspective, referring to the Coleman-Mandula theorem, Hidden Local Symmetry, Groupoid-Categorization, and others.
Based on Fundamental Theorems of Algebra, we argue that, rather than imposing effective constraints on different algebras and their related extensions, which are mainly constructed by self-breeding or self-mapping methodologies for sustaining invariance, a new addition is needed. We propose a momentum-angular momentum duality at the level of electromagnetic duality for rationalizing the duality algebras, and then characterize this duality numerically in an attempt to address some unsolved problems in physics and astrophysics.

Keywords: general relativity, quantum mechanics, string theory, duality, symmetry, correspondence, algebra, momentum-angular-momentum

Procedia PDF Downloads 390
3894 Premalignant and Malignant Lesions of Uterine Polyps: Analysis at a University Hospital

Authors: Manjunath A. P., Al-Ajmi G. M., Al Shukri M., Girija S

Abstract:

Introduction: This study aimed to compare the ability of hysteroscopy and ultrasonography to diagnose uterine polyps, and to correlate the ultrasonographic and hysteroscopic findings with various clinical factors and with the histopathology of uterine polyps. Methods: This is a retrospective study conducted at the Department of Obstetrics and Gynaecology at Sultan Qaboos University Hospital from 2014 to 2019. All women undergoing hysteroscopy for suspected uterine polyps were included. All relevant data were obtained from the electronic patient record and analysed using SPSS. Results: A total of 77 eligible women were analysed. The mean age of the patients was 40 years. The clinical risk factors obesity, hypertension, and diabetes mellitus showed no statistically significant association with the presence of uterine polyps (p-value > 0.05). Although 20 women (52.6%) with uterine polyps had a thickened endometrium (>11 mm), there was no statistical association (p-value > 0.05). The sensitivity and specificity of ultrasonography in the detection of uterine polyps were 39% and 65%, respectively, whereas for hysteroscopy they were 89% and 20%, respectively. The prevalence of malignant and premalignant lesions was 1.85% and 7.4%, respectively. Conclusion: This study found that obesity, hypertension, and diabetes mellitus were not associated with the presence of uterine polyps. There was no association between a thick endometrium and uterine polyps. Sensitivity is higher for hysteroscopy, whereas specificity is higher for sonography in detecting uterine polyps. The prevalence of malignancy was very low in uterine polyps.
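The sensitivity and specificity figures quoted above come from comparing each test against histopathology as the reference standard. A minimal sketch of the calculation, using illustrative counts rather than the study's raw data:

```python
# Sensitivity/specificity from a 2x2 confusion table against a reference
# standard. The counts below are illustrative only, not the study's data.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positives among all diseased
    specificity = tn / (tn + fp)   # true negatives among all non-diseased
    return sensitivity, specificity

# e.g. a test that detects 39 of 100 true polyps and clears 65 of 100 non-polyps
sens, spec = sens_spec(tp=39, fn=61, tn=65, fp=35)
```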

Keywords: endometrial polyps, hysteroscopy, ultrasonography, premalignant, malignant

Procedia PDF Downloads 125
3893 Potential Energy Expectation Value for Lithium Excited State (1s2s3s)

Authors: Khalil H. Al-Bayati, G. Nasma, Hussein Ban H. Adel

Abstract:

The purpose of the present work is to calculate the expectation value of the potential energy for different spin states (ααα ≡ βββ, αβα ≡ βαβ) and compare it with that for the spin states (αββ, ααβ) for the lithium excited state (1s2s3s) and Li-like ions (Be+, B+2), using a Hartree-Fock wave function and a partitioning technique. The inter-particle expectation value shows linear behaviour with atomic number, and for each atom and ion it follows the trend ααα < ααβ < αββ < αβα.

Keywords: lithium excited state, potential energy, 1s2s3s, mathematical physics

Procedia PDF Downloads 477
3892 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and the AD level; in this situation, both should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on the overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not significantly change the accuracy of the estimates. Additionally, combining IPD and AD moderates the bias of the treatment-effect estimates, as IPD tends to overestimate the treatment effects, while AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
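The two accuracy metrics used to compare the estimators can be sketched directly from their definitions. The simulated effect estimates and true value below are invented for illustration:

```python
# Sketch of percentage relative bias (PRB) and root mean-square error (RMSE)
# for a set of simulated estimates. Values below are illustrative only.
import math

def prb(estimates, true_value):
    """Percentage relative bias of the mean estimate."""
    mean_est = sum(estimates) / len(estimates)
    return 100.0 * (mean_est - true_value) / true_value

def rmse(estimates, true_value):
    """Root mean-square error of the estimates around the true value."""
    return math.sqrt(sum((e - true_value) ** 2 for e in estimates) / len(estimates))

true_effect = 0.5
simulated = [0.52, 0.47, 0.55, 0.49, 0.51]   # e.g. estimates from 5 simulation runs
```

Coverage probability would additionally count how often the simulated confidence intervals contain the true effect.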

Keywords: aggregate data, combined-level data, individual patient data, meta-analysis

Procedia PDF Downloads 363
3891 Thermal Behavior of a Ventilated Façade Using Perforated Ceramic Bricks

Authors: Helena López-Moreno, Antoni Rodríguez-Sánchez, Carmen Viñas-Arrebola, Cesar Porras-Amores

Abstract:

The ventilated façade has great advantages compared with traditional façades, as it reduces air-conditioning thermal loads thanks to the stack effect induced by solar radiation in the air chamber. Optimizing energy consumption with a ventilated façade is possible not only in newly built buildings but also in existing ones, opening the field of implementation to energy retrofitting works. In this sense, the following three façade prototypes were designed, built, and analyzed in this research: a non-ventilated façade (NVF), a slightly ventilated façade (SLVF), and a strongly ventilated façade (STVF). The construction characteristics of the three façades are based on the Spanish building regulation, the "Technical Building Code". The façades were monitored with type-K thermocouples on a representative day of the summer season in Madrid (Spain). Moreover, an analysis of variance (ANOVA) with repeated measures was designed to study the thermal lag in the ventilated and non-ventilated façades. Results show that the STVF façade presents higher levels of thermal inertia, as the thermal lag is reduced by up to 100% (daily mean) compared with the non-ventilated façade. In addition, the statistical analysis shows that increasing the size of the ventilation holes in the STVF façade does not significantly improve the thermal lag (p > 0.05) compared with the SLVF façade.

Keywords: ventilated façade, energy efficiency, thermal behavior, statistical analysis

Procedia PDF Downloads 477
3890 Group Sequential Covariate-Adjusted Response Adaptive Designs for Survival Outcomes

Authors: Yaxian Chen, Yeonhee Park

Abstract:

Driven by evolving FDA recommendations, modern clinical trials demand innovative designs that strike a balance between statistical rigor and ethical considerations. Covariate-adjusted response-adaptive (CARA) designs bridge this gap by utilizing patient attributes and responses to skew treatment allocation in favor of the treatment that is best for an individual patient's profile. However, existing CARA designs for survival outcomes often hinge on specific parametric models, constraining their applicability in clinical practice. In this article, we address this limitation by introducing a CARA design for survival outcomes (CARAS) based on the Cox model and a variance estimator. This method addresses model misspecification and enhances the flexibility of the design. We also propose a group sequential overlap-weighted log-rank test to preserve the type I error rate in the context of group sequential trials. Extensive simulation studies demonstrate the clinical benefit, statistical efficiency, and robustness to model misspecification of the proposed method compared to traditional randomized controlled trial designs and response-adaptive randomization designs.

Keywords: cox model, log-rank test, optimal allocation ratio, overlap weight, survival outcome

Procedia PDF Downloads 53
3889 Resistance and Sub-Resistances of RC Beams Subjected to Multiple Failure Modes

Authors: F. Sangiorgio, J. Silfwerbrand, G. Mancini

Abstract:

Geometric and mechanical properties all influence the resistance of RC structures and may, for certain combinations of property values, increase the risk of a brittle failure of the whole system. This paper presents a statistical and probabilistic investigation of the resistance of RC beams designed according to Eurocodes 2 and 8 and subjected to multiple failure modes, under both the natural variation of material properties and the uncertainty associated with cross-section and transverse reinforcement geometry. A full probabilistic model based on the JCSS Probabilistic Model Code is derived. Different beams are studied through material nonlinear analysis via Monte Carlo simulations. The resistance model is consistent with Eurocode 2. Both a multivariate statistical evaluation and a data-clustering analysis of the outcomes are then performed. Results show that the ultimate load behaviour of RC beams subjected to flexural and shear failure modes seems to be mainly influenced by the combination of the mechanical properties of the longitudinal reinforcement and stirrups and the tensile strength of concrete, the latter of which appears to affect the overall response of the system in a nonlinear way. The model uncertainty of the resistance model used in the analysis undoubtedly plays an important role in interpreting the results.

Keywords: modelling, Monte Carlo simulations, probabilistic models, data clustering, reinforced concrete members, structural design

Procedia PDF Downloads 463
3888 A Machine Learning Approach for Anomaly Detection in Environmental IoT-Driven Wastewater Purification Systems

Authors: Giovanni Cicceri, Roberta Maisano, Nathalie Morey, Salvatore Distefano

Abstract:

The main goal of this paper is to present a solution for a water purification system based on an Environmental Internet of Things (EIoT) platform, which monitors and controls water quality, and on machine learning (ML) models that support decision making and speed up the water purification processes. A real case study was implemented by deploying an EIoT platform and a network of devices, called Gramb meters and belonging to the Gramb project, on wastewater purification systems located in Calabria, in the south of Italy. The data thus collected are used to control wastewater quality, detect anomalies, and predict the behaviour of the purification system. To this end, three different statistical and machine learning models were adopted and compared: Autoregressive Integrated Moving Average (ARIMA), a Long Short-Term Memory (LSTM) autoencoder, and Facebook Prophet (FP). The results demonstrate that the ML solution (LSTM) outperforms the classical statistical approaches (ARIMA, FP) in terms of accuracy, efficiency, and effectiveness in monitoring and controlling the wastewater purification processes.
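The kind of anomaly flagging such models enable can be illustrated with a much simpler baseline: scoring each sensor reading by its distance from a rolling mean in rolling-standard-deviation units. The readings below are synthetic and the thresholding rule is an assumption, not the paper's method:

```python
# Hedged sketch of residual-based anomaly flagging on a sensor time series.
# The readings and the 3-sigma rolling rule are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(series, window=5, k=3.0):
    flags = []
    for i in range(len(series)):
        hist = series[max(0, i - window):i]     # trailing window, excluding i
        if len(hist) < 2:
            flags.append(False)                 # not enough history yet
            continue
        mu, sigma = mean(hist), stdev(hist)
        flags.append(sigma > 0 and abs(series[i] - mu) > k * sigma)
    return flags

readings = [7.0, 7.1, 7.0, 6.9, 7.1, 7.0, 9.5, 7.1]   # e.g. pH; 9.5 is a spike
flags = flag_anomalies(readings)
```

An ARIMA or LSTM model replaces the rolling mean with a learned one-step forecast, flagging readings whose residual is too large.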

Keywords: environmental internet of things, EIoT, machine learning, anomaly detection, environment monitoring

Procedia PDF Downloads 144
3887 Computer Software for Calculating Electron Mobility of Semiconductor Compounds: Case Study for n-GaN

Authors: Emad A. Ahmed

Abstract:

Computer software to calculate electron mobility with respect to different scattering mechanisms has been developed. The software adopts a complete Graphical User Interface (GUI) approach, and its interface was designed in Microsoft Visual Basic 6.0. As a case study, the electron mobility of n-GaN was computed using this software. The behaviour of the mobility of n-GaN under the elastic scattering processes, and its dependence on temperature and doping concentration, are discussed. The results agree with other available theoretical and experimental data.
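When independent elastic scattering mechanisms limit the mobility, the standard way to combine them is Matthiessen's rule, 1/μ_total = Σ 1/μ_i. A minimal sketch, with placeholder component mobilities rather than computed n-GaN values:

```python
# Combining per-mechanism mobility limits via Matthiessen's rule.
# The component mobilities below are placeholders, not n-GaN results.

def total_mobility(mobilities):
    """Total mobility (cm^2/V.s) from independent scattering-limited values."""
    return 1.0 / sum(1.0 / m for m in mobilities)

# e.g. ionized-impurity, acoustic-phonon, and polar-optical-phonon limited values
mu = total_mobility([2000.0, 3000.0, 6000.0])
```

The rule shows why the smallest component mobility (the strongest scattering mechanism) dominates the total.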

Keywords: electron mobility, relaxation time, GaN, scattering, computer software, computational physics

Procedia PDF Downloads 657
3886 Cognitions of Physical Education Supervisors and Teachers for Conceptions of Effective Teaching Related to the Concerns Theory

Authors: Ali M. Alsagheir

Abstract:

Effective teaching is considered one of the fields of teaching research, and its fundamental aim is to identify the most successful practices that make teaching fruitful. Undoubtedly, these methods are of common concern to all parties involved in the educational process, such as instructors, directors, and parents. This study aimed to determine the cognitions of physical education supervisors and teachers regarding conceptions of effective teaching according to the concerns theory. A questionnaire was used to collect the data; the sample contained 230 teachers and supervisors. The results showed that, for the study sample, the average of the effective-teaching conception expressions decreases with progress through the stages of teaching development in general. The study showed no statistically significant difference between teachers and supervisors on the cores of teaching principles and teaching tasks, although there were statistically significant differences on the core of teaching achievements between supervisors and teachers, in favor of supervisors. The study concluded with recommendations that can help increase the effectiveness of teaching, such as: setting clear and specific standards of teaching effectiveness on which teachers' performance is based; constructing practical courses that equip both supervisors and teachers with the skills and strategies of effective teaching; and treating children's achievement as an important factor and a strong indicator of the effectiveness of teaching and learning.

Keywords: concerns theory, effective teaching, physical education, supervisors, teachers

Procedia PDF Downloads 399
3885 Exploring Fertility Dynamics in the MENA Region: Distribution, Determinants, and Temporal Trends

Authors: Dena Alhaloul

Abstract:

The Middle East and North Africa (MENA) region is characterized by diverse cultures, economies, and social structures. Fertility rates in MENA have seen significant changes over time, with variations among countries and subregions. Understanding fertility patterns in this region is essential due to its impact on demographic dynamics, healthcare, labor markets, and social policies. Rising or declining fertility rates have far-reaching consequences for the region's socioeconomic development. The main thrust of this study is to comprehensively examine fertility rates in the MENA region: to understand their distribution, determinants, and temporal trends; to provide insights into the factors influencing fertility decisions; to assess how fertility rates have evolved over time; and potentially to develop statistical models that characterize these trends. Methodologically, the study uses descriptive statistics to summarize and visualize fertility-rate data, regression analyses to identify determinants of fertility rates, and statistical modeling to characterize temporal trends. The research will contribute to a deeper understanding of fertility dynamics in the MENA region, shedding light on the distribution of fertility rates, their determinants, and historical trends.

Keywords: fertility, distribution, modeling, regression

Procedia PDF Downloads 64
3884 The Evaluation of Complete Blood Cell Count-Based Inflammatory Markers in Pediatric Obesity and Metabolic Syndrome

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity is defined as a severe chronic disease characterized by a low-grade inflammatory state. Inflammatory markers have therefore gained utmost importance in the evaluation of obesity and metabolic syndrome (MetS), a disease characterized by central obesity, elevated blood pressure, increased fasting blood glucose, and elevated triglycerides or reduced high density lipoprotein cholesterol (HDL-C) values. Several inflammatory markers based upon the complete blood cell count (CBC) are available. In this study, we asked which inflammatory marker best captures the differences between the various obesity groups. 514 pediatric individuals were recruited: 132 children with MetS, 155 morbid obese (MO), 90 obese (OB), 38 overweight (OW), and 99 children with normal BMI (N-BMI) were included in the scope of this study. Obesity groups were constituted using the age- and sex-dependent body mass index (BMI) percentiles tabulated by the World Health Organization. MetS components were determined in order to identify children with MetS. CBCs were obtained using an automated hematology analyzer, and HDL-C analysis was performed. Using the CBC parameters and HDL-C values, the ratio markers of inflammation were calculated: the neutrophil-to-lymphocyte ratio (NLR), derived neutrophil-to-lymphocyte ratio (dNLR), platelet-to-lymphocyte ratio (PLR), lymphocyte-to-monocyte ratio (LMR), and monocyte-to-HDL-C ratio (MHR). Statistical analyses were performed, with p < 0.05 considered significant. There was no statistically significant difference among the groups in terms of platelet count, neutrophil count, lymphocyte count, monocyte count, or NLR. PLR differed significantly between OW and N-BMI as well as MetS. Monocyte-to-HDL-C values exhibited statistical significance between MetS and the N-BMI, OB, and MO groups. HDL-C values differed between MetS and the N-BMI, OW, OB, and MO groups.
MHR exhibited the best performance among the CBC-based inflammatory markers. On the other hand, when MHR was compared to HDL-C alone, HDL-C provided the more valuable information; this parameter therefore retains its value from the diagnostic point of view. Our results suggest that MHR can serve as an inflammatory marker in the evaluation of pediatric MetS, but its predictive value was not superior to that of HDL-C in the evaluation of obesity.
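The ratio markers compared above are simple quotients of routine blood-count values. A minimal sketch, with illustrative counts rather than patient data:

```python
# CBC-based inflammatory ratio markers. The blood-count values below are
# illustrative only, not patient data from the study.

def ratio_markers(neut, lymph, mono, plt, wbc, hdl_c):
    return {
        "NLR": neut / lymph,                 # neutrophil-to-lymphocyte ratio
        "dNLR": neut / (wbc - neut),         # derived NLR
        "PLR": plt / lymph,                  # platelet-to-lymphocyte ratio
        "LMR": lymph / mono,                 # lymphocyte-to-monocyte ratio
        "MHR": mono / hdl_c,                 # monocyte-to-HDL-C ratio
    }

# counts in 10^3/uL, HDL-C in mg/dL
markers = ratio_markers(neut=4.2, lymph=2.1, mono=0.6, plt=280.0, wbc=7.5, hdl_c=40.0)
```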

Keywords: children, complete blood cell count, high density lipoprotein cholesterol, metabolic syndrome, obesity

Procedia PDF Downloads 119
3883 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and a learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, this can amount to a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and the bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on four different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work; it pertains to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO levels.

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 18
3882 Analysis of Operating Speed on Four-Lane Divided Highways under Mixed Traffic Conditions

Authors: Chaitanya Varma, Arpan Mehar

Abstract:

The present study demonstrates a procedure to analyse speed data collected on various four-lane divided sections in India. Field data were collected at different straight and curved sections on rural highways with the help of a radar speed gun and a video camera. The data collected at the sections were analysed, and parameters pertaining to the speed distributions were estimated. Different statistical distributions were fitted to the vehicle-type speed data and to the mixed-traffic speed data. It was found that the vehicle-type speed data follow either the normal or the log-normal distribution, whereas the mixed-traffic speed data follow more than one type of statistical distribution; the most common fits observed for the mixed-traffic speed data were the Beta and Weibull distributions. Separate operating speed models based on traffic and roadway geometric parameters are proposed in the present study, including models with traffic parameters and curve-geometry parameters. Two different operating speed models, with the variables 1/R and ln(R), were proposed and found to be realistic over different ranges of curve radius. The models developed in the present study are simple and realistic and can be used for forecasting operating speed on four-lane highways.
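Choosing between a normal and a log-normal fit for a speed sample can be sketched by comparing maximum Gaussian log-likelihoods of the raw speeds and of their logarithms (the log-normal likelihood needs the Jacobian term). The sample speeds below are synthetic, not the study's field data:

```python
# Hedged sketch: normal vs log-normal fit comparison by maximum log-likelihood.
# The speed sample (km/h) is synthetic, purely for illustration.
import math
from statistics import mean, pstdev

def normal_loglik(xs):
    """Gaussian log-likelihood at the maximum-likelihood mean and std."""
    mu, sigma = mean(xs), pstdev(xs)
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

speeds = [52.0, 61.0, 58.0, 70.0, 65.0, 55.0, 80.0, 62.0]
ll_normal = normal_loglik(speeds)
# Log-normal fit = normal fit on log-speeds, plus the change-of-variable term.
ll_lognormal = (normal_loglik([math.log(s) for s in speeds])
                - sum(math.log(s) for s in speeds))
better = "log-normal" if ll_lognormal > ll_normal else "normal"
```

With more than two candidate families (e.g. Beta, Weibull), the same comparison is usually done via AIC/BIC or goodness-of-fit tests.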

Keywords: highway, mixed traffic flow, modeling, operating speed

Procedia PDF Downloads 456
3881 Role of DatScan in the Diagnosis of Parkinson's Disease

Authors: Shraddha Gopal, Jayam Lazarus

Abstract:

Aims: To study the referral practice and the impact of the DAT scan in the diagnosis or exclusion of Parkinson's disease. Settings and design: A retrospective study. Materials and methods: A retrospective study of the results of 60 patients who were referred for a DAT scan over a period of 2 years from the Department of Neurology at Northern Lincolnshire and Goole NHS Trust. The reason for DAT scan referral was noted under 5 categories against Parkinson's disease: drug-induced Parkinsonism; essential tremor; diagnostic dilemma; not responding to Parkinson's treatment; and others. We assessed the number of patients who were diagnosed with Parkinson's disease against the number in whom Parkinson's disease was excluded or an alternative diagnosis was made. Statistical methods: Microsoft Excel was used for data collection and statistical analysis. Results: 30 of the 60 scans were performed to confirm the diagnosis of early Parkinson's disease, 13 to differentiate essential tremor from Parkinsonism, 6 to exclude drug-induced Parkinsonism, 5 to look for an alternative diagnosis as the patients were not responding to anti-Parkinson medication, and 6 indications were outside the recommended guidelines. 55% of cases had a diagnosis of Parkinson's disease confirmed, and 43.33% had Parkinson's disease excluded. 33 of the 60 scans showed bilateral abnormalities and confirmed the clinical diagnosis of Parkinson's disease. Conclusion: The DAT scan provides valuable information, confirming Parkinson's disease in 55% of patients and excluding the diagnosis, thereby aiding an alternative diagnosis, in 43.33% of patients.

Keywords: DATSCAN, Parkinson's disease, diagnosis, essential tremors

Procedia PDF Downloads 214
3880 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under a heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the source scaling relations vs. seismic moment Mo for the modeled rupture area S, as well as the average slip Dave and the slip asperity area Sa, with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with, or are located on an outer edge of, the large-slip areas; (2) ruptures have a tendency to initiate in small-Dc areas; and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise-time.

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 139
3879 Comparison of Statistical Variables for Vaccinated and Unvaccinated Children in Measles Cases in Khyber Pakhtunkhwa

Authors: Inayatullah Khan, Afzal Khan, Hamzullah Khan, Afzal Khan

Abstract:

Objectives: The objective of this study was to compare different statistical variables for vaccinated and unvaccinated children in measles cases. Material and Methods: This cross-sectional comparative study was conducted at the Isolation Ward, Department of Paediatrics, Lady Reading Hospital (LRH), Peshawar, from April 2012 to March 2013. A total of 566 admitted cases of measles were enrolled. Data regarding age, sex, address, vaccination status, measles contact, hospital stay and outcome were collected and recorded on a proforma. History of measles vaccination was ascertained either by checking vaccination cards or by parental recall. Results: Of the 566 cases of measles, 211 (39%) were vaccinated and 345 (61%) were unvaccinated. Three hundred and ten (54.80%) patients were male and 256 (45.20%) were female, a male-to-female ratio of 1.2:1. The age range was 1 to 14 years, with a mean age (±SD) of 3.2 ± 2 years. The majority (371, 65.5%) of the patients were 1-3 years old. Mean hospital stay was 3.08 days, with a range of 1-10 days and a standard deviation of 1.15. A history of measles contact was present in 393 (69.4%) cases. Forty-eight patients died, giving a mortality rate of 8.5%. Conclusion: The majority of children in Khyber Pakhtunkhwa are unvaccinated and unprotected against measles. Among the cases, 39% were vaccinated children who nevertheless contracted measles, which indicates measles vaccine failure. This figure is clearly higher than the accepted failure rate for measles vaccine (2-10%).
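The headline rates above follow directly from the reported counts; a quick sanity check (counts taken verbatim from the abstract) can be sketched as:

```python
# Counts reported in the abstract
total = 566
males, females = 310, 256
deaths = 48

mortality_rate = 100.0 * deaths / total   # reported as 8.5%
sex_ratio = males / females               # reported as 1.2:1
```

Both derived figures round to the values stated in the Results section.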

Keywords: measles, vaccination, immunity, population

Procedia PDF Downloads 433
3878 Transformation of Health Communication Literacy in Information Technology during the Pandemic in 2019-2022

Authors: K. Y. S. Putri, Heri Fathurahman, Yuki Surisita, Widi Sagita, Kiki Dwi Arviani

Abstract:

Society needs the assistance of academics in understanding and becoming skilled in health communication literacy. Information technology develops very fast, while health communication literacy skills for obtaining health information during the pandemic have not kept pace with that development. The research question is whether health communication literacy influenced the use of information technology for health information during the pandemic in Indonesia, and the purpose of the study is to determine that influence. The study draws on the concepts of health communication literacy and information technology and is supported by previous research. A quantitative method was used, with data collected by disseminating questionnaires. The validity and reliability tests of the instrument were positive, so the analysis could proceed to the next statistical stage. Descriptive results show positive values on all dimensions of the health communication literacy variable and on all dimensions of information technology. Statistical tests show that health communication literacy has a strong influence on information technology: respondents with high health communication literacy also have high information technology skills. Health communication literacy for obtaining health information during the 2019-2022 pandemic is therefore needed. The practical recommendation is that academics are still very much needed by the community in its development during the pandemic.

Keywords: health information, health information needs, literacy health communication, information technology

Procedia PDF Downloads 127
3877 Examining the Attitudes of Pre-School Teachers towards Values Education in Terms of Gender, School Type, Professional Seniority and Location

Authors: Hatice Karakoyun, Mustafa Akdag

Abstract:

This study was conducted to examine the attitudes of pre-school teachers towards values education, using a general survey model. The study group consisted of 108 pre-school teachers working in Diyarbakır, Turkey. The Values Education Attitude Scale (VEAS) developed by Yaşaroğlu (2014) was used. To describe the sociodemographic structure, percentage and frequency values were examined. The Kolmogorov-Smirnov test and the histograms of the score distributions were examined to determine which statistical analyses should be applied, and the distribution was found not to be normal. Thus, the Mann-Whitney U test, a nonparametric statistical technique, was used to test differences in the scale scores across the independent variables. According to the analyses, pre-school teachers' attitudes toward values education are positive. The item with the highest average indicates that pre-school teachers think values education is very important for students' and children's futures. The independent variables covered by the scale (gender, seniority, age group, education, school type, school location) had no significant effect on the attitude scores of the pre-school teachers who participated in the study.
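The Mann-Whitney U test used above compares rank sums of two independent groups. A minimal pure-Python sketch of the U statistic (illustrative only, not the authors' SPSS-style analysis) is:

```python
def average_ranks(values):
    """1-based ranks with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of tied rank positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """U statistic for two independent samples (lists of scores)."""
    r = average_ranks(list(x) + list(y))
    r1 = sum(r[:len(x)])               # rank sum of the first group
    u1 = r1 - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1
    return min(u1, u2)
```

A small U (relative to its null expectation) indicates that one group's scores tend to be systematically lower than the other's, which is the comparison applied to the teachers' attitude scores across gender, school type, and the other grouping variables.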

Keywords: attitude scale, pedagogy, pre-school teacher, values education

Procedia PDF Downloads 236
3876 A Semantic and Concise Structure to Represent Human Actions

Authors: Tobias Strübing, Fatemeh Ziaeetabar

Abstract:

Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, we need a semantic framework. For this purpose, the Semantic Event Chain (SEC) method was introduced, which considers touching and non-touching relations between manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates information on static (e.g. top, bottom) and dynamic spatial relations (e.g. moving apart, getting closer) between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a large matrix with thirty rows and a massive set of spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used for the category of manipulation actions, which involve at most the two hands. Here, we would like to extend this approach to a whole-body action descriptor and build a joint activity representation structure. For this purpose, we perform a statistical analysis to modify the current eSEC by summarizing it while preserving its features, and introduce a new version called the enhanced eSEC (e2SEC). This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, and 2) shrinking the set of possible semantic spatial relations. To achieve the first, we computed the importance of each matrix row statistically, to see whether a particular row can be removed while all manipulations remain distinguishable from each other. For the second, we examined which semantic spatial relations can be merged without compromising the distinctness of the predefined manipulation actions.
By performing these analyses, we obtained the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations, and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure for representing actions, has a substantial impact on the recognition and prediction of complex actions, as well as on interactions between humans and robots. It also creates a comprehensive platform for integration with body-limb descriptors and dramatically increases system performance, especially in complex real-time applications such as human-robot interaction prediction.
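The row-removal criterion described above, namely dropping a descriptor row only if every pair of manipulations remains distinguishable, can be sketched as follows. The toy matrices and relation symbols are hypothetical stand-ins, not the actual thirty-row eSEC descriptors:

```python
def still_distinguishable(actions, drop_row):
    """Return True if removing descriptor row `drop_row` keeps all
    action matrices pairwise distinct (toy version of the e2SEC
    row-reduction criterion)."""
    reduced = {
        name: tuple(row for i, row in enumerate(matrix) if i != drop_row)
        for name, matrix in actions.items()
    }
    # all reduced matrices must still be unique
    return len(set(reduced.values())) == len(reduced)

# Hypothetical two-row descriptors ("T" = touching, "N" = not touching)
toy_actions = {
    "push": (("T", "N"), ("N", "N")),
    "pull": (("T", "N"), ("T", "T")),
}
```

Here row 0 is identical across both toy actions and could be dropped, while dropping row 1 would collapse "push" and "pull" into the same descriptor, so it must be kept.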

Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis

Procedia PDF Downloads 117
3875 Simulation of Reflectometry in Alborz Tokamak

Authors: S. Kohestani, R. Amrollahi, P. Daryabor

Abstract:

Microwave diagnostics such as reflectometry are receiving growing attention in magnetic confinement fusion research. In order to obtain a better understanding of plasma confinement physics, more detailed measurements of the density profile and its fluctuations may be required. A 2D full-wave simulation of ordinary-mode propagation has been written in an effort to model effects seen in reflectometry experiments. The code uses the finite-difference time-domain method with a perfectly-matched-layer absorbing boundary to solve Maxwell's equations. The code has been used to simulate the reflectometer measurement in the Alborz Tokamak.
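The paper's 2D FDTD/PML code is not reproduced here; as a hedged illustration of the scheme's core, a minimal 1D vacuum FDTD loop might look like the following, with fixed walls on a deliberately oversized grid standing in for the perfectly-matched layer:

```python
import math

def fdtd_1d(n_cells=600, n_steps=500, src=100, probe=250):
    """Minimal 1D vacuum FDTD (Yee staggering, Courant number 0.5).
    The grid is large enough that wall reflections never reach the
    probe within n_steps; the actual code uses a PML instead."""
    ez = [0.0] * n_cells   # electric field on integer grid points
    hy = [0.0] * n_cells   # magnetic field on half-integer points
    trace = []
    for t in range(n_steps):
        for i in range(n_cells - 1):            # H update from curl of E
            hy[i] += 0.5 * (ez[i + 1] - ez[i])
        for i in range(1, n_cells - 1):         # E update from curl of H
            ez[i] += 0.5 * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-((t - 40) / 12.0) ** 2)  # soft Gaussian source
        trace.append(ez[probe])                 # "reflectometer" probe signal
    return ez, trace
```

The launched pulse propagates at half a cell per step and registers at the probe after roughly 300 steps; in the 2D O-mode case the same update structure is applied to the full Maxwell curl equations with a plasma current term.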

Keywords: reflectometry, simulation, ordinary mode, tokamak

Procedia PDF Downloads 413
3874 Blue Eyes and Blonde Hair in Mass Media: A News Discourse Analysis of Western Media on the News Coverage of Ukraine

Authors: Zahra Mehrabbeygi

Abstract:

This research analyzes discourse variety and news image-making in Western media in the coverage of the Russian army's invasion of Ukraine. It examines the news coverage of Ukraine from February 2022 to May 2022 in five Western media outlets: BBC, CBS, NBC, Al Jazeera, and The Telegraph. The research attempts to uncover the news policies of these five news organizations during the Ukraine-Russia war. Critical theories of news, such as framing, media imperialism, image-making, discourse, and ideology, were applied to achieve this goal. The methodology uses Van Dijk's discourse exploration method, based on discourse analysis. The statistical population comprises all the news about racial discrimination published during the mentioned period; after surveying this population with targeted sampling, the researcher randomly selected ten news items for analysis. The findings show that the Western media share similarities in their texts in lexical items, polarization, citations, and the persons and institutions cited. The findings also point to presuppositions, connotations, and components of consensus and underlying predicates at the outset, middle, and end of events. The reaction of some Western media not only shows their bewilderment but also exposes prejudices rooted in racism.

Keywords: news discourse analysis, western media, racial discrimination, Ukraine-Russia war

Procedia PDF Downloads 90
3873 The Grand Unified Theory of Everything as a Generalization of the Standard Model, Called the General Standard Model

Authors: Amir Deljoo

Abstract:

The endeavor to comprehend existence has been a center of human thought across different disciplines, and now takes form in physics as the search for a theory of everything. Here, after a brief review of the basic frameworks of thought, and a history of thought from ancient times to the present, a logical methodology is presented based on a core axiom, after which a function, a proto-field, and then coordinates are explained. Afterwards, a generalization of the Standard Model is proposed, the General Standard Model, which is believed to be the basis of the unified theory of everything.

Keywords: general relativity, grand unified theory, quantum mechanics, standard model, theory of everything

Procedia PDF Downloads 91
3872 A Statistical-Algorithmic Approach for the Design and Evaluation of a Fresnel Solar Concentrator-Receiver System

Authors: Hassan Qandil

Abstract:

Using a statistical algorithm incorporated in MATLAB, four types of non-imaging Fresnel lenses are designed; spot-flat, linear-flat, dome-shaped and semi-cylindrical-shaped. The optimization employs a statistical ray-tracing methodology of the incident light, mainly considering effects of chromatic aberration, varying focal lengths, solar inclination and azimuth angles, lens and receiver apertures, and the optimum number of prism grooves. While adopting an equal-groove-width assumption for the Poly-methyl-methacrylate (PMMA) prisms, the main target is to maximize the ray intensity on the receiver’s aperture and therefore achieve higher values of heat flux. The algorithm outputs prism angles and 2D sketches. 3D drawings are then generated via AutoCAD and linked to COMSOL Multiphysics software to simulate the lenses under solar ray conditions, which provides optical and thermal analysis at both the lens’ and the receiver’s apertures, with conditions set per Dallas, TX weather data. Once the lenses’ characterization is finalized, receivers are designed based on the optimized lens aperture size. Several cavity shapes; including triangular, arc-shaped and trapezoidal, are tested while coupled with a variety of receiver materials, working fluids, heat transfer mechanisms, and enclosure designs. A vacuum-reflective enclosure is also simulated for an enhanced thermal absorption efficiency. Each receiver type is simulated via COMSOL while coupled with the optimized lens. A lab-scale prototype for the optimum lens-receiver configuration is then fabricated for experimental evaluation. Application-based testing is also performed for the selected configuration, including that of a photovoltaic-thermal cogeneration system and solar furnace system. Finally, some future research work is pointed out, including the coupling of the collector-receiver system with an end-user power generator, and the use of a multi-layered genetic algorithm for comparative studies.
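The statistical ray-tracing idea can be illustrated with a heavily simplified Monte Carlo sketch: parallel rays are aimed at the focal point of a spot lens, a Gaussian angular error stands in for chromatic aberration and groove imperfection, and the fraction of rays landing inside the receiver aperture is estimated. All geometry and parameter values below are illustrative assumptions, not the paper's MATLAB implementation:

```python
import math
import random

def receiver_hit_fraction(focal_len=1.0, receiver_radius=0.05,
                          angular_error=0.01, n_rays=20000, seed=1):
    """Estimate the fraction of traced rays that land inside the
    receiver aperture on the focal plane."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        # under ideal focusing the entry point over the lens aperture
        # cancels out; only the angular error displaces the landing point
        err = rng.gauss(0.0, angular_error)   # aberration stand-in (rad)
        theta = 2.0 * math.pi * rng.random()  # random error direction
        miss = focal_len * math.tan(err)      # lateral miss distance
        x, y = miss * math.cos(theta), miss * math.sin(theta)
        if math.hypot(x, y) <= receiver_radius:
            hits += 1
    return hits / n_rays
```

Sweeping `angular_error` (or receiver aperture) in such a loop mirrors how the intensity on the receiver can be maximized over the design parameters before handing the optimized geometry to the COMSOL thermal model.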

Keywords: COMSOL, concentrator, energy, fresnel, optics, renewable, solar

Procedia PDF Downloads 142
3871 Investigating the Relationship between Place Attachment and Sustainable Development of Urban Spaces

Authors: Hamid Reza Zeraatpisheh, Ali Akbar Heidari, Soleiman Mohammadi Doust

Abstract:

This study has examined the relationship between place attachment and the sustainable development of urban spaces. To do this, the components of place identity, emotional attachment, place attachment, and social bonding, which together constitute place attachment, were measured by a standardized questionnaire in three domains: the cognitive (place identity), the affective (emotional attachment), and the behavioral (place attachment and social bonding). To measure sustainable development, its three components of society, economy and environment were considered. The study is descriptive. The assessment instrument for place attachment is the standard questionnaire of Safarnia; to measure sustainable development, a questionnaire was constructed by the researcher based on the combined theoretical framework. The statistical population of this research is the city of Shiraz, and the statistical sample is Hafeziyeh. SPSS software was used to analyze the data, and the results of both descriptive and inferential statistics were examined. In the inferential statistics, the Pearson correlation coefficient was used to test the hypotheses. In this study, the place attachment variable is high and sustainable development is also at a high level. These results suggest a positive relationship between attachment to place and sustainable development.
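The Pearson correlation coefficient used in the inferential analysis can be sketched in pure Python; the sample scores below are hypothetical, not data from the Hafeziyeh survey:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

A coefficient near +1 across respondents' place-attachment and sustainable-development scores would correspond to the positive relationship reported above.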

Keywords: place attachment, sustainable development, economy-society-environment, Hafez's tomb

Procedia PDF Downloads 692
3870 Quantitative Structure–Activity Relationship Analysis of Some Benzimidazole Derivatives by Linear Multivariate Method

Authors: Strahinja Z. Kovačević, Lidija R. Jevrić, Sanja O. Podunavac Kuzmanović

Abstract:

The relationship between the antibacterial activity of eighteen different substituted benzimidazole derivatives and their molecular characteristics was studied using a chemometric QSAR (Quantitative Structure–Activity Relationships) approach. QSAR analysis was carried out on inhibitory activity towards Staphylococcus aureus, using molecular descriptors as well as the minimum inhibitory concentration (MIC). Molecular descriptors were calculated from the optimized structures. Principal component analysis (PCA) followed by hierarchical cluster analysis (HCA) and multiple linear regression (MLR) was performed in order to select the molecular descriptors that best describe the antibacterial behavior of the investigated compounds, and to determine the similarities between molecules. The HCA grouped the molecules into separate clusters with similar inhibitory activity. PCA showed a classification of molecules very similar to the HCA, and displayed which descriptors contribute to that classification. MLR equations that represent MIC as a function of the in silico molecular descriptors were established. The statistical significance of the estimated models was confirmed by standard statistical measures and cross-validation parameters (SD = 0.0816, F = 46.27, R = 0.9791, R2CV = 0.8266, R2adj = 0.9379, PRESS = 0.1116). These parameters indicate the possibility of applying the established chemometric models in predicting the antibacterial behaviour of the studied derivatives and structurally very similar compounds.
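The cross-validated R² (R2CV) and PRESS reported above come from leave-one-out prediction: each molecule is held out, the regression is refit on the rest, and the squared prediction errors are accumulated. A single-descriptor sketch of that computation (toy data, not the benzimidazole set) is:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def press_loo(xs, ys):
    """Leave-one-out PRESS: refit without each point, sum the squared
    prediction errors for the held-out points."""
    press = 0.0
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        press += (ys[i] - (a * xs[i] + b)) ** 2
    return press

def q2_loo(xs, ys):
    """Cross-validated R^2 (often written R2CV or Q^2)."""
    my = sum(ys) / len(ys)
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - press_loo(xs, ys) / ss_tot
```

The paper's MLR case is the multivariate analogue: the same held-out refitting is applied to the descriptor matrix, yielding the R2CV = 0.8266 and PRESS = 0.1116 quoted above.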

Keywords: antibacterial, benzimidazole, molecular descriptors, QSAR

Procedia PDF Downloads 357
3869 The 1st Personal Pronouns as Evasive Devices in the 2016 Taiwanese Presidential Debate

Authors: Yan-Chi Chen

Abstract:

This study aims to investigate the 1st personal pronouns as evasive devices used by presidential candidates in the 2016 Taiwanese Presidential Debate within the framework of critical discourse analysis (CDA). It finds that the personal pronoun ‘I’ is the most frequent personal pronoun in the debate. Generally speaking, the first personal pronouns were used most in the presidential debate, compared with the second and third personal pronouns. Hence, a further quantitative analysis was conducted to explore the correlation between the frequencies of the two 1st personal pronouns and the other pronouns. Results show that the number of occurrences of the personal pronoun ‘I’ increases from 26 to 49 over the course of the debate, while the personal pronoun ‘we’ decreases from 43 to 15. Though the personal pronoun ‘I’ seems to show the stronger tendency in pronominal choice, statistical evidence demonstrates that the personal pronoun ‘we’ has the greater statistical significance (p<0.0002), compared with that of ‘I’ (p<0.0116). The comparatively small p-value of the personal pronoun ‘we’ means it has a stronger correlation with the overall pronominal choice, and the personal pronoun ‘we’ is more likely to be chosen than the personal pronoun ‘I’. Therefore, this study concludes that pronominal choice varies with different evasive strategies. The ingrained functions of these personal pronouns are mainly categorized as ‘agreement’ and ‘justification’: the personal pronoun ‘we’ is preferred in agreement evasive strategies, and ‘I’ is used for justifying oneself. In addition, the personal pronoun ‘we’ can function as both an inclusive and an exclusive personal pronoun, which gives ‘we’ further functions not limited to agreement evasive strategies. In conclusion, although the personal pronoun ‘I’ has the most occurrences, the personal pronoun ‘we’ is more closely related to the first-pronoun choices.

Keywords: critical discourse analysis (CDA), evasive devices, the 1st personal pronouns, the 2016 Taiwanese Presidential Debate

Procedia PDF Downloads 155
3868 Detecting Earnings Management via Statistical and Neural Network Techniques

Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie

Abstract:

Predicting earnings management is vital for capital market participants, financial analysts and managers. The aim of this research is to answer the following question: Is there a significant difference between the regression model and neural network models in predicting earnings management, and which one yields a superior prediction? In approaching this question, a Linear Regression (LR) model was compared with two neural networks: a Multi-Layer Perceptron (MLP) and a Generalized Regression Neural Network (GRNN). The population of this study includes 94 companies listed on the Tehran Stock Exchange (TSE) from 2003 to 2011. After the results of all models were acquired, ANOVA was applied to test the hypotheses. In general, the statistical results showed that the precision of the GRNN did not differ significantly from that of the MLP. However, the mean square errors of the MLP and GRNN differed significantly from that of the multivariable LR model. These findings support the notion of nonlinear behavior in earnings management. Therefore, it is more appropriate for capital market participants to analyze earnings management with neural network techniques rather than linear regression models.
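A GRNN is, at its core, Nadaraya-Watson kernel regression: a Gaussian-weighted average of the training targets. The sketch below contrasts its in-sample fit with a straight-line fit on deliberately nonlinear synthetic data, illustrating, under those toy assumptions, why a nonlinear model can outperform linear regression on data with nonlinear structure:

```python
import math

def linear_mse(xs, ys):
    """In-sample mean squared error of an ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return sum((y - (slope * x + intercept)) ** 2
               for x, y in zip(xs, ys)) / n

def grnn_predict(x, xs, ys, sigma=0.1):
    """GRNN prediction: Gaussian-kernel weighted average of targets."""
    w = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2)) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

def grnn_mse(xs, ys, sigma=0.1):
    """In-sample mean squared error of the GRNN fit."""
    return sum((y - grnn_predict(x, xs, ys, sigma)) ** 2
               for x, y in zip(xs, ys)) / len(xs)
```

On a quadratic relationship the linear fit leaves most of the variance unexplained while the kernel regression tracks the curve, which parallels the paper's finding that the MLP and GRNN achieve significantly lower mean square error than the LR model.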

Keywords: earnings management, generalized regression neural network, multi-layer perceptron, Tehran stock exchange

Procedia PDF Downloads 414