Search results for: transverse flux PM linear machine
611 Investigation of the Effects of Aerobic Exercise Programs on Hematological Parameters of Sedentary People
Authors: Sanjeev Kumar, Swati Choudhary
Abstract:
Background: A variety of studies warn that sedentary lifestyles can contribute to many preventable causes of death. This study was undertaken to determine the effects of two types of aerobic training programs on the erythrocytes, leukocytes, hemoglobin concentration (Hb), platelets and hematocrit of sedentary people (N=60) aged 20 to 30 years. Methods: All subjects were randomly divided into three groups, i.e., two experimental groups (aerobic dance and cardio fitness) and a control group, each comprising 10 males and 10 females. The experimental groups underwent 60 minutes of training 5 times a week for 12 weeks, whereas the control group did not participate in any training program beyond their daily routine. The aerobic dance group performed exercises such as step-touch, side-to-side, V-step, and hand and body movements; the cardio fitness group performed exercises on modern fitness equipment such as the treadmill, elliptical trainer, stationary bike and rowing machine. The rating of perceived exertion (RPE) scale developed by Gunnar Borg was used to monitor workout intensity. The aerobic programs comprised low-impact (weeks 0-4, perceived exertion 6 to 12), moderate-impact (weeks 4-8, perceived exertion 12 to 16) and high-impact (weeks 8-12, perceived exertion 16 to 20) phases. Results: To test the effectiveness of the training programs, the paired t-test was used; a significant difference (p<0.05) was observed in erythrocytes, hemoglobin concentration, platelets and hematocrit, but no significant effect of training was found on leukocytes (p>0.05). The paired t-test also showed no effect of time in the control group in any case (p>0.05). Analysis of covariance was then used to determine which program was more effective: the F value was significant for erythrocytes, hemoglobin concentration, platelets and hematocrit (p<0.05). Since the F value was significant for these hematological parameters, Fisher's least significant difference test was applied, and the post hoc mean comparisons indicated that the experimental groups (aerobic dance and cardio fitness) differed significantly from the control group in erythrocytes, hemoglobin concentration, platelets and hematocrit, while no significant difference was found between the aerobic dance and cardio fitness groups in any case. Thus, it may be concluded that, in general, both aerobic training programs had adequate effects on all the hematological parameters except leukocytes.
Keywords: aerobic dance, cardio fitness, hematological variables, rating of perceived exertion scale
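As a rough illustration of the statistics used above, the sketch below runs a paired t-test on hypothetical pre/post hemoglobin values with SciPy; the numbers are placeholders, not the study's data.

```python
# Minimal sketch of a paired t-test on hypothetical pre/post hemoglobin (g/dL).
import numpy as np
from scipy import stats

pre  = np.array([13.1, 12.8, 14.0, 13.5, 12.9, 13.7, 14.2, 13.0, 12.6, 13.8])
post = np.array([13.9, 13.4, 14.6, 14.1, 13.2, 14.5, 14.8, 13.6, 13.1, 14.4])

t_stat, p_value = stats.ttest_rel(pre, post)  # paired (repeated-measures) t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant pre/post change (p < 0.05)")
```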
Procedia PDF Downloads 272
610 Modelling Soil Inherent Wind Erodibility Using Artificial Intelligence and Hybrid Techniques
Authors: Abbas Ahmadi, Bijan Raie, Mohammad Reza Neyshabouri, Mohammad Ali Ghorbani, Farrokh Asadzadeh
Abstract:
In recent years, vast areas of Urmia Lake in Dasht-e-Tabriz have dried up, exposing saline sediments on the surface and leaving the lake's coastal areas highly susceptible to wind erosion. This study was conducted to investigate wind erosion and its relationship to soil physicochemical properties, and to model wind erodibility (WE) using artificial intelligence techniques. For this purpose, 96 soil samples were collected from 0-5 cm depth over 414,000 hectares using a stratified random sampling method. To measure WE, all samples (<8 mm) were exposed to 5 different wind velocities (9.5, 11, 12.5, 14.1 and 15 m s-1 at a height of 20 cm) in a wind tunnel, and the relationship of WE with soil physicochemical properties was evaluated. According to the results, WE varied from 9.98 to 76.69 (g m-2 min-1)/(m s-1) with a mean of 10.21 and a coefficient of variation of 94.5%, showing relatively high variation in the studied area. WE was significantly (P<0.01) affected by soil physical properties, including mean weight diameter, erodible fraction (secondary particles smaller than 0.85 mm) and the percentages of the secondary particle size classes 2-4.75, 1.7-2 and 0.1-0.25 mm. Results showed that mean weight diameter, erodible fraction and the percentage of the 0.1-0.25 mm size class demonstrated the strongest relationships with WE (coefficients of determination of 0.69, 0.67 and 0.68, respectively). This study also compared the efficiency of multiple linear regression (MLR), gene expression programming (GEP), an artificial neural network (MLP), an artificial neural network tuned by a genetic algorithm (MLP-GA) and an artificial neural network tuned by the whale optimization algorithm (MLP-WOA) in predicting soil wind erodibility in Dasht-e-Tabriz. Among the 32 measured soil variables, the percentages of fine sand, the 1.7-2.0 and 0.1-0.25 mm size classes (secondary particles) and organic carbon were selected as model inputs by stepwise regression. Findings showed MLP-WOA to be the most powerful artificial intelligence technique (R2=0.87, NSE=0.87, ME=0.11 and RMSE=2.9) for predicting soil wind erodibility in the study area, followed by MLP-GA, MLP, GEP and MLR; the differences between these methods were significant according to the MGN test. Based on these findings, MLP-WOA may be used as a promising method to predict soil wind erodibility in the study area.
Keywords: wind erosion, erodible fraction, gene expression programming, artificial neural network
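For readers who want to reproduce the modeling idea, here is a hedged sketch of a plain MLP regressor in scikit-learn on synthetic stand-in features; the paper's MLP-WOA additionally tunes the network with a whale optimization algorithm, which is not reproduced here.

```python
# Hedged sketch: MLP regression of wind erodibility on synthetic placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# Assumed inputs: fine sand %, 1.7-2.0 mm class %, 0.1-0.25 mm class %, organic carbon
X = rng.uniform(0, 100, size=(96, 4))
y = 10 + 0.3 * X[:, 2] - 0.1 * X[:, 3] + rng.normal(0, 2, 96)  # synthetic WE

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
mlp.fit(X_tr, y_tr)
pred = mlp.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f}")
```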
Procedia PDF Downloads 71
609 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research. A number of studies have been carried out in an attempt to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the tumor process in breast cancer; however, existing mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases, which may help to improve the prediction accuracy of breast cancer progression, using an original mathematical model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects the relations between PT and MTS; 3) analyzing the scope of application of CoM-IV; 4) implementing the model as a software tool. CoM-IV is based on an exponential tumor growth model, consists of a system of deterministic nonlinear and linear equations, and corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and primary metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for primary metastases; 3) the 'visible period' for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes a forecast using only current patient data, while the others are based on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of primary metastases appearance; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of survival in BC and facilitate optimization of diagnostic tests. The following are calculated by CoM-IV: the number of doublings for the 'non-visible' and 'visible' growth periods of primary metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of primary metastases. CoM-IV enables, for the first time, prediction of the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoM-IV correctly describes primary tumor and primary distant metastases growth of stage IV (T1-4N0-3M1) disease with (N1-3) or without (N0) regional metastases in lymph nodes; b) it facilitates understanding of the appearance period and manifestation of primary metastases.
Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival
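The doubling arithmetic that CoM-IV reports can be illustrated with a short sketch; it assumes plain exponential growth and uses made-up volumes, not the model's actual equations.

```python
# Sketch of exponential-growth bookkeeping: doublings and doubling time.
import math

def n_doublings(v0: float, v1: float) -> float:
    """Number of volume doublings to grow from v0 to v1."""
    return math.log2(v1 / v0)

def doubling_time(v0: float, v1: float, days: float) -> float:
    """Tumor volume doubling time (days) under exponential growth."""
    return days / n_doublings(v0, v1)

# Hypothetical example: 0.5 cm^3 -> 4.0 cm^3 over 360 days
print(n_doublings(0.5, 4.0))         # 3.0 doublings
print(doubling_time(0.5, 4.0, 360))  # 120 days per doubling
```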
Procedia PDF Downloads 334
608 Management of Permits and Regulatory Compliance Obligations for the East African Crude Oil Pipeline Project
Authors: Ezra Kavana
Abstract:
This article analyses the role that East African countries play in enforcing crude oil pipeline regulations. The paper finds that countries are more likely to have responsibility for enforcing these regulations if they have larger networks of gathering and transmission lines and if their citizens are more liberal and more pro-environment. Pipeline operations, transportation costs, new pipeline construction and environmental effects are all heavily controlled, and all facets of pipeline systems and the facilities connected to them are governed by statutory bodies. To support the project manager on new pipeline projects, companies building and running these pipelines typically engage personnel and consultants who specialize in these permitting processes. The primary permits that may be necessary for pipelines carrying different commodities are discussed in this paper. National, regional and local municipalities each issue their own permits. Through its right-of-way group, the contractor's project compliance leadership is typically directly responsible for obtaining those permits, which are usually issued by government agencies. The full list of local permits needed for a planned pipeline can only be established after a careful field investigation. A country's government regulates pipelines that are entirely within its borders; with a few exceptions, state regulations governing ratemaking and safety have been enacted to be consistent with national regulatory requirements. Countries that produce a lot of energy are typically more involved in regulating pipelines than countries that produce little or no energy, so it is important to research the several government agencies that regulate pipeline transportation in order to identify the proper regulatory authority. Additionally, it is crucial that the scope determination of a planned project involve external professionals with experience in linear facilities, or the company's own pipeline construction and environmental professionals, to identify and obtain any necessary design clearances, permits or approvals; these professionals can offer precise estimates of the costs and the time needed to process the necessary permits. Governments with a stronger energy sector, on the other hand, are less likely to take on control; however, neither pipeline performance nor national enforcement activity is significantly affected by whether a government has taken on control. Financial fines are the most efficient government enforcement instrument because they greatly reduce incidents and property damage.
Keywords: crude oil, pipeline, regulatory compliance, construction permits
Procedia PDF Downloads 96
607 Surge in U.S. Citizens' Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale
Authors: Marco Sewald
Abstract:
Compared with the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to immigration figures, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the lists and numbers published by the U.S. government seem incomplete, with many cases not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how large the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. Since there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose as to why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. Since it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the expatriation process, the chosen methodology is Structural Equation Modeling (SEM), in a first step by re-using recent surveys conducted by different researchers among U.S. citizens residing abroad: surveys questioning the personal situation in the context of tax, compliance, citizenship and the likelihood of repatriating to the U.S. In general, SEM allows: (1) representing, estimating and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; (4) modeling measurement error, i.e., the degree to which observable variables describe the latent variables. Moreover, SEM is appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered a high correlation, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one desired result. Since every SEM comprises two parts, (1) the measurement model (outer model) and (2) the structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and obtain the desired results.
Keywords: expatriation of U.S. citizens, SEM, structural equation modeling, validating
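As a hedged sketch only, assuming the Python semopy package and hypothetical item names (not the actual re-used questionnaires), an SEM with a measurement (outer) and structural (inner) part could be specified roughly as follows.

```python
# Hedged sketch of an SEM specification in semopy; all variable names are
# hypothetical survey items, and "survey_responses.csv" is a placeholder file.
import pandas as pd
import semopy

desc = """
# measurement (outer) model: latent =~ observed items
TaxBurden   =~ item_tax1 + item_tax2 + item_tax3
Compliance  =~ item_comp1 + item_comp2
ExpatIntent =~ item_int1 + item_int2
# structural (inner) model: latent regressions
ExpatIntent ~ TaxBurden + Compliance
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical data file
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # parameter estimates and standard errors
```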
Procedia PDF Downloads 221
606 Television, Internet, and Internet Social Media Direct-to-Consumer Prescription Medication Advertisements: Intention and Behavior to Seek Additional Prescription Medication Information
Authors: Joshua Fogel, Rivka Herzog
Abstract:
Although direct-to-consumer prescription medication advertisements (DTCA) are viewed or heard in many venues, there does not appear to be any research on internet social media DTCA. We study the association of traditional media DTCA and digital media DTCA, including the internet social media of YouTube, Facebook, and Twitter, with three different outcomes: one intentions outcome and two behavior outcomes. The intentions outcome was the level of agreement with seeking additional information about a prescription medication after seeing a DTCA. One behavior outcome was the level of agreement with having obtained additional information about a prescription medication after seeing a DTCA; the other was the frequency of obtaining such additional information. Surveys were completed by 635 college students. Predictors included demographic variables, theory of planned behavior variables, health variables, and advertisements seen or heard. In the behavior analyses, intentions and sources for seeking additional prescription drug information were included as additional predictors. Multivariate linear regression analyses were conducted. We found that increased age was associated with increased behavior, women were associated with increased intentions, and Hispanic race/ethnicity was associated with decreased behavior. For the theory of planned behavior variables, increased attitudes were associated with increased intentions, increased social norms were associated with increased intentions and behavior, and increased intentions were associated with increased behavior. Very good perceived health was associated with increased intentions. Advertisements seen in spam mail were associated with decreased intentions. Advertisements seen on traditional or cable television were associated with decreased behavior, while advertisements seen on television watched over the internet were associated with increased behavior. The source of seeking additional information by reading internet print content was associated with increased behavior. No internet social media advertisements were associated with either intentions or behavior. In conclusion, pharmaceutical brand managers and marketers should consider these findings when tailoring their DTCA campaigns and directing their DTCA budget towards young adults such as college students. They need to reconsider the current approach to traditional television DTCA and consider dedicating a larger advertising budget to internet television DTCA. Although internet social media is a popular place to advertise, the expenditure does not appear worthwhile for DTCA targeting young adults such as college students.
Keywords: brand managers, direct-to-consumer advertising, internet, social media
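A minimal sketch of such a multiple linear regression in statsmodels, on synthetic stand-in variables, might look like this.

```python
# Hedged sketch: OLS regression of a behavior outcome on synthetic predictors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 635  # sample size reported in the abstract
df = pd.DataFrame({
    "age": rng.integers(18, 30, n).astype(float),
    "attitude": rng.normal(0, 1, n),
    "social_norms": rng.normal(0, 1, n),
    "intentions": rng.normal(0, 1, n),
})
# Synthetic behavior outcome loosely mimicking the reported associations
y = 0.4 * df["social_norms"] + 0.5 * df["intentions"] + rng.normal(0, 1, n)

X = sm.add_constant(df)       # add intercept term
fit = sm.OLS(y, X).fit()
print(fit.summary())
```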
Procedia PDF Downloads 265
605 An Experimental Study of Scalar Implicature Processing in Chinese
Authors: Liu Si, Wang Chunmei, Liu Huangmei
Abstract:
A prominent component of the semantic versus pragmatic debate, scalar implicature (SI) has been gaining great attention ever since it was proposed by Horn. The constant debate is between the structural and the pragmatic approach. The former claims that the generation of SI is costless, automatic, and dependent mostly on the structural properties of sentences, whereas the latter advocates both that such generation is largely dependent upon context and that the process is costly. Many experiments, among which Katsos's text comprehension experiments are influential, have been designed and conducted to verify these views, but the results are not conclusive. Besides, most of the experiments were conducted with English language materials. Katsos conducted one off-line and three on-line text comprehension experiments, in which the previous shortcomings were addressed to a certain extent, and the conclusion favored the pragmatic approach. We intend to test the results of Katsos's experiments on Chinese scalar implicature. Four experiments, in both off-line and on-line conditions, will be conducted to examine the generation and response time of SI for Chinese "yixie" (some) and "quanbu (dou)" (all), in order to find out whether the structural or the pragmatic approach can be sustained. The study mainly aims to answer the following questions: (1) Can SI be generated in the upper- and lower-bound contexts, as Katsos confirmed, when Chinese language materials are used in the experiment? (2) Can SI first be generated and then cancelled, as the default view claims, or can it not be generated at all in a neutral context when Chinese language materials are used? (3) Is SI generation costless or costly in terms of processing resources? (4) In line with the SI generation process, what conclusion can be drawn about the cognitive processing model of language meaning? Is it a parallel model or a linear model, or is it a dynamic and hierarchical model? Based on previous theoretical debates and experimental conflicts, it may be presumed that SI in Chinese might be generated in upper-bound contexts; moreover, response times might be faster in upper-bound than in lower-bound contexts, with SI generation in neutral contexts the slowest. Finally, it is expected that the processing model of SI cannot be verified by either a purely structural or a purely pragmatic approach: it is, rather, a dynamic and complex processing mechanism in which the interaction of language forms, ad hoc context, mental context, background knowledge, speakers' interaction, etc., is involved.
Keywords: cognitive linguistics, pragmatics, scalar implicature, experimental study, Chinese language
Procedia PDF Downloads 361
604 Gut Microbiota in Patients with Opioid Use Disorder: A 12-Week Follow-Up Study
Authors: Sheng-Yu Lee
Abstract:
Aim: Opioid use disorder is often characterized by repetitive drug-seeking and drug-taking behaviors with severe public health consequences. Animal models have shown that opioid-induced perturbations in the gut microbiota causally relate to neuroinflammation, deficits in reward responding, and opioid tolerance. Therefore, we propose that dysbiosis of the gut microbiota may be associated with the pathogenesis of opioid dependence. In this study, we explored the differences in gut microbiota between patients and normal controls, and in patients before and after initiation of a 12-week methadone treatment program. Methods: Patients with opioid use disorder aged between 20 and 65 years were recruited from the methadone maintenance outpatient clinics of two medical centers in southern Taiwan. Healthy controls without any family history of major psychiatric disorders (schizophrenia, bipolar disorder and major depressive disorder) were recruited from the community. After initial screening, 15 patients with opioid use disorder joined the study for the initial evaluation (Week 0); 12 of them completed the 12-week follow-up while receiving methadone treatment and ceasing heroin use (Week 12). Fecal samples were collected from the patients at baseline and at the end of the 12th week; a one-time fecal sample was collected from the healthy controls. The microbiota of the fecal samples was investigated using 16S rRNA V3V4 amplicon sequencing, followed by bioinformatic and statistical analyses. Results: We found no significant differences in species diversity in opioid-dependent patients between Week 0 and Week 12, nor between patients at either time point and controls. For beta diversity, using principal component analysis, we found no significant differences between patients at Week 0 and Week 12; however, both patient groups showed significant differences compared to controls (P=0.011). Furthermore, linear discriminant analysis effect size (LEfSe) was used to identify differentially enriched bacteria between opioid use patients and healthy controls. Compared to controls, the relative abundances of Lactobacillaceae Lactobacillus, Megasphaera Megasphaerahexanoica and Caecibacter Caecibactermassiliensis were increased in patients at Week 0, while Coriobacteriales Atopobiaceae, Acidaminococcus Acidaminococcusintestini and Tractidigestivibacter Tractidigestivibacterscatoligenes were increased in patients at Week 12. Conclusion: We suggest that the gut microbiome community may be linked to opioid use disorder, and such differences may not be reversed even after 12 weeks of cessation of opioid use.
Keywords: opioid use disorder, gut microbiota, methadone treatment, follow-up study
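As a small illustration of the alpha-diversity side of such an analysis, the sketch below computes the Shannon index on hypothetical taxon counts; the study's actual pipeline (16S processing, LEfSe) is far more involved.

```python
# Sketch: Shannon alpha-diversity on hypothetical 16S taxon count vectors.
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p * ln p) over nonzero taxa."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

patient_w0  = [120, 45, 300, 8, 60]   # hypothetical taxon counts, Week 0
patient_w12 = [100, 80, 150, 40, 90]  # hypothetical taxon counts, Week 12
print(f"Week 0:  H' = {shannon(patient_w0):.3f}")
print(f"Week 12: H' = {shannon(patient_w12):.3f}")
```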
Procedia PDF Downloads 106
603 Postmortem Genetic Testing in Sudden and Unexpected Deaths Using Next-Generation Sequencing
Authors: Eriko Ochiai, Fumiko Satoh, Keiko Miyashita, Yu Kakimoto, Motoki Osawa
Abstract:
Sudden and unexpected deaths from unknown causes occur in infants and youths. Recently, molecular links between a portion of these deaths and several genetic diseases have been examined postmortem. For instance, hereditary long QT syndrome and Brugada syndrome are occasionally fatal through critical ventricular tachyarrhythmia. Because there are a large number of target genes responsible for such diseases, conventional analysis using the Sanger method has been laborious. In this report, we attempted to analyze sudden deaths comprehensively using next generation sequencing (NGS). Multiplex PCR of each subject's DNA was performed using the Ion AmpliSeq Library Kit 2.0 and the Ion AmpliSeq Inherited Disease Panel (Life Technologies). After the library was constructed by emulsion PCR, the amplicons were sequenced with 500 flows on the Ion Personal Genome Machine System (Life Technologies) according to the manufacturer's instructions. SNPs and indels were called from the sequence reads mapped to the hg19 reference sequence. This project has been approved by the ethical committee of Tokai University School of Medicine. As a representative case, the molecular analysis of a 40-year-old male who had received a diagnosis of Brugada syndrome demonstrated a total of 584 SNPs or indels. Non-synonymous and frameshift nucleotide substitutions were selected in the coding regions of the heart-disease-related genes ANK2, AKAP9, CACNA1C, DSC2, KCNQ1, MYLK, SCN1B, and STARD3. In particular, a c.629T>C transition in exon 3 of the SCN1B gene, resulting in a leucine-to-proline substitution at position 210 (L210P), is predicted to be "damaging" by the SIFT program. Because this mutation has not been reported previously, it was unclear whether the substitution is pathogenic. Sudden death in which the cause of death cannot be determined constitutes one of the most important unsolved subjects in forensic pathology. The Ion AmpliSeq Inherited Disease Panel can amplify the exons of 328 genes at one time. We recognized the difficulty of selecting the true causal variant from a large number of candidates, but postmortem genetic testing using NGS analysis remains a valuable diagnostic approach. We are now extending this analysis to suspected SIDS subjects and young sudden death victims.
Keywords: postmortem genetic testing, sudden death, SIDS, next generation sequencing
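A hedged sketch of the variant triage step might look as follows; the column names are hypothetical annotation fields, not the actual panel output.

```python
# Hedged sketch: filter annotated NGS calls to non-synonymous/frameshift
# variants in cardiac genes predicted "damaging" by SIFT.
import pandas as pd

variants = pd.read_csv("annotated_variants.csv")  # hypothetical export

cardiac_genes = {"ANK2", "AKAP9", "CACNA1C", "DSC2", "KCNQ1",
                 "MYLK", "SCN1B", "STARD3"}
keep_effects = {"missense", "frameshift"}

candidates = variants[
    variants["gene"].isin(cardiac_genes)
    & variants["effect"].isin(keep_effects)
    & (variants["sift_prediction"] == "damaging")
]
print(candidates[["gene", "hgvs_c", "hgvs_p", "sift_prediction"]])
```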
Procedia PDF Downloads 358
602 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism
Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape
Abstract:
Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modelling was applied using machine learning algorithms to analyse motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15%, compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism. These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.
Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders
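As one hedged example of how such kinematic data can be reduced to a gait metric, the sketch below estimates gait cycle times from peaks in a synthetic vertical acceleration trace; the sampling rate and thresholds are assumptions.

```python
# Sketch: gait cycle time from heel-strike peaks in a (synthetic) acceleration trace.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                              # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
acc = np.sin(2 * np.pi * 0.9 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)

peaks, _ = find_peaks(acc, height=0.5, distance=int(0.5 * fs))  # candidate strikes
cycle_times = np.diff(peaks) / fs       # stride (gait cycle) durations, seconds
print(f"mean gait cycle time: {cycle_times.mean():.2f} s "
      f"(CV = {100 * cycle_times.std() / cycle_times.mean():.1f}%)")
```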
Procedia PDF Downloads 23
601 Performance Evaluation of the CSAN Pronto Point-of-Care Whole Blood Analyzer for Regular Hematological Monitoring During Clozapine Treatment
Authors: Farzana Esmailkassam, Usakorn Kunanuvat, Zahraa Mohammed Ali
Abstract:
Objective: A key barrier in clozapine treatment of treatment-resistant schizophrenia (TRS) is the frequent blood draws required to monitor for neutropenia, the main drug side effect; WBC and ANC monitoring must occur throughout treatment. Accurate WBC and ANC counts are necessary for the clinical decision to halt, modify or continue clozapine treatment. The CSAN Pronto point-of-care (POC) analyzer generates white blood cell (WBC) and absolute neutrophil counts (ANC) through image analysis of capillary blood, and POC monitoring offers significant advantages over central laboratory testing. This study evaluated the performance of the CSAN Pronto against the Beckman DxH900 laboratory hematology analyzer. Methods: Forty venous samples (EDTA whole blood) with varying WBC and ANC concentrations, as established on the DxH900 analyzer, were tested in duplicate on three CSAN Pronto analyzers. Additionally, venous and capillary samples were concomitantly collected from 20 volunteers and assessed on the CSAN Pronto and the DxH900. The analytical performance, including precision using liquid quality controls (QCs) as well as patient samples near the medical decision points, and linearity using mixes of high and low patient samples to create five concentrations, was also evaluated. Results: In the precision study on QCs and whole blood, WBC and ANC showed CVs within the limits established according to manufacturer and laboratory acceptability standards. WBC and ANC were found to be linear across the measurement range with a correlation of 0.99. WBC and ANC from all three analyzers correlated well with the DxH900 in venous samples across the tested ranges, with a correlation of >0.95. The mean bias in ANC on the CSAN Pronto versus the DxH900 was 0.07 × 10⁹ cells/L (95% LOA -0.25 to 0.49) for concentrations <4.0 × 10⁹ cells/L, which includes the decision-making cut-offs for continuing clozapine treatment. The mean bias in WBC was 0.34 × 10⁹ cells/L (95% LOA -0.13 to 0.72) for concentrations <5.0 × 10⁹ cells/L. The mean bias was larger (-11% for ANC, 5% for WBC) at higher concentrations. The correlation between capillary and venous samples showed more variability, with a mean bias of 0.20 × 10⁹ cells/L for ANC. Conclusions: The CSAN Pronto showed acceptable performance in WBC and ANC measurements from venous and capillary samples and was approved for clinical use. This testing will facilitate treatment decisions and improve clozapine uptake and compliance.
Keywords: absolute neutrophil counts, clozapine, point of care, white blood cells
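The mean-bias and 95% limits-of-agreement figures quoted above follow the usual Bland-Altman arithmetic, sketched here on hypothetical paired ANC values.

```python
# Sketch: mean bias and 95% limits of agreement on hypothetical paired
# ANC values (x10^9 cells/L); not the study's measurements.
import numpy as np

dxh900 = np.array([1.8, 2.4, 3.1, 0.9, 1.5, 2.0, 3.6, 1.2])  # reference analyzer
pronto = np.array([1.9, 2.5, 3.0, 1.0, 1.7, 2.1, 3.7, 1.3])  # POC analyzer

diff = pronto - dxh900
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)  # half-width of the 95% limits of agreement
print(f"mean bias = {bias:.2f} (95% LOA {bias - loa:.2f} to {bias + loa:.2f})")
```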
Procedia PDF Downloads 94
600 A Reduced Ablation Model for Laser Cutting and Laser Drilling
Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz
Abstract:
In laser cutting, as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape (the shape of the cut faces or the hole shape, respectively) approaches a so-called asymptotic shape, such that it changes only slightly, or not at all, with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, and its underlying mechanism numerically implemented, tested and clearly confirmed by comparison with experimental data. In detail, there is now a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires far fewer resources, such that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders; the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of the calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine, time-intensive experiments can be reduced, and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs. This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation; such simultaneous optimization of multiple parameters is scarcely possible by experimental means. This means that new methods in manufacturing, such as self-optimization, can be executed much faster. However, the software's potential does not stop there: time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.
Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling
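As a hedged sketch of the metamodeling idea (not of the reduced ablation model itself), a surrogate can be fitted over a sampled process map and queried for an optimum; the response function and parameter names below are stand-ins.

```python
# Hedged sketch: Gaussian-process surrogate over a sampled process map
# (two laser parameters -> one quality criterion), then a grid search.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)
X = rng.uniform([0.5, 10.0], [5.0, 200.0], size=(60, 2))  # power (kW), speed (mm/s)
y = -(X[:, 0] - 2.5) ** 2 - ((X[:, 1] - 80) / 40) ** 2     # stand-in quality metric

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

grid = np.array([[p, v] for p in np.linspace(0.5, 5, 50)
                        for v in np.linspace(10, 200, 50)])
pred = gp.predict(grid)
print("surrogate optimum at (power, speed) =", grid[pred.argmax()])
```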
Procedia PDF Downloads 214
599 Using Photogrammetric Techniques to Map the Mars Surface
Authors: Ahmed Elaksher, Islam Omar
Abstract:
For many years, the surface of Mars has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to gain some insights into this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images to generate a more accurate and reliable surface model of Mars. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets and employed in co-registering the datasets using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: ground control points (GCPs), used to estimate the transformation parameters through least squares adjustment, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters. Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters; further increasing the number of GCPs did not improve the results significantly. Using the 3D-to-2D transformation parameters provided two to three meters accuracy. The best results were obtained with the DLT transformation model; here, too, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model, as it provides the required accuracy for ASPRS large-scale mapping standards, provided that well-distributed sets of GCPs are used. The model is simple to apply and does not need substantial computations.
Keywords: Mars, photogrammetry, MOLA, HiRISE
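For illustration, the DLT estimation reduces to a linear least-squares problem; the sketch below assumes placeholder coordinates and the standard 11-parameter form x = (L1 X + L2 Y + L3 Z + L4)/(L9 X + L10 Y + L11 Z + 1), and analogously for y.

```python
# Sketch: estimate the 11 DLT parameters from GCPs by linear least squares.
import numpy as np

def fit_dlt(obj_pts, img_pts):
    """obj_pts: (n,3) ground coords; img_pts: (n,2) image coords; n >= 6."""
    rows, rhs = [], []
    for (X, Y, Z), (x, y) in zip(obj_pts, img_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        rhs.extend([x, y])
    L, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return L  # parameters L1..L11

def project(L, X, Y, Z):
    """Map a ground point to image coordinates with fitted DLT parameters."""
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    return ((L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den,
            (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den)
```

Residuals at withheld check points (projected minus digitized coordinates) then give the RMSE figures reported above.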
Procedia PDF Downloads 57
598 Engine Thrust Estimation by Strain Gauging of Engine Mount Assembly
Authors: Rohit Vashistha, Amit Kumar Gupta, G. P. Ravishankar, Mahesh P. Padwale
Abstract:
Accurate thrust measurement is required for aircraft during takeoff and after a ski-jump. In a developmental aircraft, takeoff from a ship is extremely critical, and the thrust produced by the engine should be known to the pilot before takeoff so that the takeoff can be aborted, and an accident avoided, if the thrust produced is insufficient. After a ski-jump, the thrust produced by the engine must be known because the horizontal speed of the aircraft is less than the normal takeoff speed; the engine should be able to produce enough thrust to bring the airframe to the nominal horizontal takeoff speed within the prescribed time limit. Contemporary low-bypass gas turbine engines generally have three mounts, of which the two side mounts transfer the engine thrust to the airframe; the third mount takes only the weight component and no thrust component. In the present method of thrust estimation, the two side mounts are strain gauged, and the strain produced at various power settings is used to estimate the thrust produced by the engine. A quarter Wheatstone bridge is used to acquire the strain data. The engine mount assembly was tested on a universal testing machine to determine the equivalent elasticity of the assembly; this elasticity value is used in the analytical approach to estimate engine thrust. The estimated thrust is compared with the load cell thrust data from the test bed, and the experimental strain data are also compared with strain data obtained from FEM analysis. Experimental setup: The strain gauges were mounted on the tapered portion of the engine mount sleeve at two diametrically opposite locations, both in the horizontal plane. In this way, the strain gauges did not pick up any strain due to the weight of the engine (except negligible strain due to the material's Poisson's ratio) or the hoop stress; only the third mount's strain gauge shows strain when the engine is not running, i.e., strain due to the weight of the engine. When the engine starts running, all the load is taken by the side mounts: the strain gauge on the forward side of the sleeve showed a compressive strain, and the strain gauge on the rear side showed a tensile strain. Results and conclusion: The analytical calculation shows that the hoop stresses dominate the bending stress. The thrust estimated by strain gauging shows better accuracy at higher power settings than at lower ones: the accuracy of the estimated thrust at the maximum power setting is 99.7%, whereas at the lower power setting it is 78%.
Keywords: engine mounts, finite element analysis, strain gauge, stress
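A hedged sketch of the measurement chain, using the standard linearized quarter-bridge formula and an assumed equivalent mount stiffness (all numbers are placeholders, not the paper's calibration), might look like this.

```python
# Hedged sketch: strain from a quarter Wheatstone bridge (linear small-strain
# approximation), then thrust via an assumed equivalent mount stiffness.
GF = 2.0      # gauge factor (typical foil gauge)
V_EXC = 5.0   # bridge excitation voltage (V)

def strain_from_bridge(v_out: float) -> float:
    """Quarter-bridge, linearized: eps ~= 4*Vout/(GF*Vexc)."""
    return 4.0 * v_out / (GF * V_EXC)

def thrust_from_strain(eps: float, k_equiv: float) -> float:
    """Thrust via equivalent elasticity of the mount assembly (N per unit strain)."""
    return eps * k_equiv

eps = strain_from_bridge(2.5e-3)  # 2.5 mV bridge output (placeholder)
print(f"strain = {eps:.2e}, thrust ~ {thrust_from_strain(eps, 5.0e7):.0f} N")
```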
Procedia PDF Downloads 481
597 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics
Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic
Abstract:
Lake Victoria is the second largest freshwater body in the world, located in East Africa, with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentrations and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual precipitation, evaporation, inflow and outflow data were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations were validated using MATLAB and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except in a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical and hydrodynamic model can evaluate the effects of wind stress exerted on the lake surface, which affects the lake water level. The model can also evaluate the effects of expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress
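As a hedged, lumped sketch of the water balance and the linear outflow control law mentioned above (parameter values are rough placeholders, not the model's calibration):

```python
# Hedged sketch: daily lumped water balance with a linear outflow law
# responding to mean water level. All parameter values are placeholders.
A = 68_800e6                 # lake surface area (m^2)
H_REF, C_OUT = 11.0, 2.0e4   # reference level (m), outflow gain (m^2/s); assumed

def step_day(volume, rain_m, evap_m, q_in):
    """Advance one day. rain_m/evap_m in m/day, q_in in m^3/s."""
    level = volume / A                              # mean water level
    q_out = max(0.0, C_OUT * (level - H_REF))       # linear control law
    dv_dt = (rain_m - evap_m) * A / 86_400.0 + q_in - q_out
    return volume + dv_dt * 86_400.0

v = 11.0 * A                 # start at the reference level
for _ in range(365):
    v = step_day(v, rain_m=3.3e-3, evap_m=3.2e-3, q_in=1_100.0)
print(f"mean level after one year: {v / A:.3f} m")
```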
Procedia PDF Downloads 227
596 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices
Authors: Alena Kulikova, Tatjana Kanonire
Abstract:
Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. It has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions; for example, it is required by psychological, medical and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality intelligence measurement tools. It is therefore not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, the critical review by [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-and-pencil testing was the only diagnostic procedure available [6]; it has some significant limitations that affect the reliability of the obtained data [7] and increase time costs. Another problem with measuring IQ is that classical linear-structured tests do not fully allow measurement of a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools. However, as in any rapidly developing field, psychometrics does not at the moment offer ready-made and straightforward solutions and requires additional research. In our presentation, we discuss the strengths and weaknesses of the current approaches to intelligence measurement and highlight "points of growth" for creating a test in accordance with modern psychometrics. Is it possible to create an instrument that uses the achievements of modern psychometrics while remaining valid and practically oriented, and what would be the possible limitations of such an instrument? The theoretical framework and study design for creating and validating an original Russian comprehensive computer test measuring intellectual development in school-age children will be presented.
Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing
Procedia PDF Downloads 80
595 Storm-Runoff Simulation Approaches for External Natural Catchments of Urban Sewer Systems
Authors: Joachim F. Sartor
Abstract:
According to German guidelines, external natural catchments are larger sub-catchments without significant portions of impervious area, which possess a surface drainage system and empty into a sewer network. Basically, such catchments should be disconnected from sewer networks, particularly from combined systems. If this is not possible due to local conditions, their flow hydrographs have to be considered in the design of sewer systems, because the impact may be significant. Since there is a lack of sufficient measurements of storm-runoff events for such catchments, and hence of verified simulation methods to analyze their design flows, German standards give only general advice and demand special consideration in such cases. Compared to urban sub-catchments, external natural catchments exhibit greatly different flow characteristics: with increasing area size, their hydrological behavior approaches that of rural catchments, e.g., sub-surface flow may prevail and lag times are comparably long. The literature offers few observed peak flow values and only simple (mostly empirical) approaches for Central Europe; most of them are at least helpful for cross-checking results achieved by simulation lacking calibration. Using storm-runoff data from five monitored rural watersheds in the west of Germany, with catchment areas between 0.33 and 1.07 km², the author investigated, by multiple event simulation, three different approaches to determining the rainfall excess: the modified SCS variable runoff coefficient methods by Lutz and Zaiß, as well as the soil moisture model by Ostrowski. Selection criteria for storm events from continuous precipitation data were taken from the recommendations of M 165, and the runoff concentration method (parallel cascades of linear reservoirs) from a DWA working report to which the author had contributed. In general, the two runoff coefficient methods showed results of sufficient accuracy for most practical purposes. The soil moisture model showed no significantly better results, at least not to a degree that would justify the additional data collection its parameter determination requires. In particular, the typical convective summer events after long dry periods, which are often decisive for sewer networks (not so much for rivers), showed discrepancies between simulated and measured flow hydrographs.
Keywords: external natural catchments, sewer network design, storm-runoff modelling, urban drainage
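For orientation, the sketch below implements the classic SCS curve-number rainfall-excess formula, the family from which the modified Lutz and Zaiß runoff coefficient methods derive; their specific modifications are not reproduced here.

```python
# Sketch: classic SCS curve-number rainfall-excess computation (units: mm).
def scs_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff Q from event rainfall P and curve number CN."""
    s = 25_400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = ia_ratio * s           # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_runoff(p_mm=40.0, cn=75))  # ~4.9 mm of rainfall excess
```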
Procedia PDF Downloads 151
594 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams
Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin
Abstract:
Fire incidents have steadily increased in recent years according to the National Emergency Management Agency of South Korea. Even though most fire incidents with property damage have occurred in buildings, rehabilitation has often not been carried out with proper consideration of structural safety. Therefore, this study aims at evaluating the effects of rehabilitation on fire-damaged normal strength concrete beams through experiments and finite element analyses. For the experiments, reinforced concrete beams were fabricated with a design concrete strength of 21 MPa, using two different cover thicknesses of 40 mm and 50 mm. After curing, the fabricated beams were heated for 1 hour or 2 hours according to the ISO-834 standard time-temperature curve. Rehabilitation was carried out by removing the damaged part of the cover thickness and filling the removed part with polymeric mortar. Both the fire-damaged beams and the rehabilitated beams were tested with a four-point loading system to observe the structural behaviors and the rehabilitation effect. To verify the experiments, finite element (FE) models for structural analysis were generated using the commercial software ABAQUS 6.10-3. For the rehabilitated beam models, integrated temperature-structural analyses were performed in advance to obtain the geometries of the fire-damaged beams; the rehabilitated part was then added with the material properties of the polymeric mortar. Three-dimensional continuum brick elements were used for both the temperature and structural analyses. The same loading and boundary conditions as in the experiments were applied to the rehabilitated beam models, and nonlinear geometrical analyses were performed. Test results showed that the maximum loads of the rehabilitated beams were 8~10% higher than those of the non-rehabilitated beams and even 1~6% higher than those of the non-fire-damaged beams. The stiffness of the rehabilitated beams was also greater than that of the non-rehabilitated beams, but smaller than that of the non-fire-damaged beams. In addition, the structural behaviors predicted by the analyses showed a good rehabilitation effect, and the predicted load-deflection curves were similar to the experimental results. From this study, both the experimental and analytical results demonstrated a good rehabilitation effect for the fire-damaged normal strength concrete beams. In the future, the proposed analytical method can be used to accurately predict the structural behaviors of rehabilitated and fire-damaged concrete beams without the time- and cost-consuming experimental process.
Keywords: fire, normal strength concrete, rehabilitation, reinforced concrete beam
Procedia PDF Downloads 508
593 Experimental Investigation of Cutting Forces and Temperature in Bone Drilling
Authors: Vishwanath Mali, Hemant Warhatkar, Raju Pawade
Abstract:
Drilling of bone has always been challenging for surgeons due to the adverse effects it may impart to bone tissues. Force has to be applied manually by the surgeon while performing conventional bone drilling, which may lead to permanent death of bone tissues and nerves. During bone drilling, the temperature of the bone tissues can rise above 47 ⁰C, causing thermal osteonecrosis, which results in screw loosening and subsequent implant failure. An attempt has been made here to study the input drilling parameters and surgical drill bit geometry affecting bone health during bone drilling. A one-factor-at-a-time (OFAT) method was used to plan the experiments. The input drilling parameters studied were spindle speed and feed rate; the drill bit geometry parameters studied were point angle and helix angle; the output variables were drilling thrust force and bone temperature. The experiments were conducted on goat femur bone at a room temperature of 30 ⁰C. For the measurement of thrust forces, a KISTLER cutting force dynamometer Type 9257BA was used, and NI LabVIEW software was used for continuous temperature data acquisition. A fixture for holding the bone specimen during the drilling operation was made on an RPT machine, and the bone specimens were preserved in a deep freezer (LABTOP make) at -40 ⁰C. For the drilling parameters, it was observed that at a constant feed rate, thrust force and temperature decrease as spindle speed increases, while at a constant spindle speed, thrust force and temperature increase as feed rate increases. For the drill bit geometry, at a constant helix angle, thrust force and temperature increase as point angle increases, while at a constant point angle, thrust force and temperature decrease as helix angle increases. Hence, it is concluded that temperature increases as thrust force increases. For the drilling parameters, the lowest thrust force and temperature, 35.55 N and 36.04 ⁰C respectively, were recorded at a spindle speed of 2000 rpm and a feed rate of 0.04 mm/rev. For the drill bit geometry parameters, the lowest thrust force and temperature, 40.81 N and 34 ⁰C respectively, were recorded at a point angle of 70⁰ and a helix angle of 25⁰. Hence, to avoid thermal necrosis of bone, it is recommended to use a higher spindle speed, a lower feed rate, a low point angle and a high helix angle. The hard nature of cortical bone contributes to a greater rise in temperature, whereas a considerable drop in temperature is observed during cancellous bone drilling.
Keywords: bone drilling, helix angle, point angle, thrust force, temperature, thermal necrosis
Procedia PDF Downloads 309
592 Generalized Up-Downlink Transmission Using Black-White Hole Entanglement Generated by Two-Level System Circuit
Authors: Muhammad Arif Jalil, Xaythavay Luangvilay, Montree Bunruangses, Somchat Sonasang, Preecha Yupapin
Abstract:
Black and white holes form the entangled pair ⟨BH│WH⟩, where a white hole occurs when the particle moves at the same speed as light. The entangled black-white hole pair is at the center, with a radian between the gap. When the speed of particle motion is slower than light, the black hole is gravitational (positive gravity) and the white hole is smaller than the black hole. On the downstream side, the black hole of the entangled pair outside the gap grows until the white hole disappears, which is the emptiness paradox. On the upstream side, when moving faster than light, white holes form time tunnels and the black holes become smaller; moving faster and further still, the black hole disappears and becomes a wormhole (singularity) that is only a white hole in emptiness. This research studies the use of black and white holes generated by a two-level system circuit as carriers for communication transmission, from which high capability and capacity of data transmission can be obtained. The black and white hole pair can be generated by the two-level system circuit when the speed of a particle in the circuit equals the speed of light. The black hole forms when the particle speed increases from below to equal the light speed, while the white hole is established when the particle comes back down from faster than light. They are bound as the entangled pair, signal and idler, ⟨Signal│Idler⟩, together with the virtual ones for the white hole, which has an angular displacement of half of a π radian. A two-level system is made from an electronic circuit to create black and white holes bound by entangled bits that are immune to, or cloning-free from, thieves. The procedure starts by creating wave-particle behavior; when its speed equals that of light, a black hole sits in the middle of the entangled pair, which forms the two-bit gate. The required information can be input into the system and wrapped by the black hole carrier. A time tunnel occurs when the wave-particle speed is faster than light, at which point the entangled pair collapses and the transmitted information sits safely in the time tunnel. The required time and space can be modulated via the input for the downlink operation. The downlink is established when the particle speed, in its frequency (energy) form, is brought down and the particle enters the entangled gap, where this time a white hole is established. The information for the required destination is wrapped by the white hole and retrieved by the clients at the destination. The black and white holes then disappear, and the information can be recovered and used.
Keywords: cloning free, time machine, teleportation, two-level system
Procedia PDF Downloads 74
591 Cr (VI) Adsorption on Ce0.25Zr0.75O2.nH2O: Kinetics and Thermodynamics
Authors: Carlos Alberto Rivera-corredor, Angie Dayana Vargas-Ceballos, Edison Gilpavas, Izabela Dobrosz-Gómez, Miguel Ángel Gómez-García
Abstract:
Hexavalent chromium, Cr (VI), is present in the effluents of different industries such as electroplating, mining, leather tanning, etc. This compound is of great academic and industrial concern because of its toxic and carcinogenic behavior; its discharge into water sources causes serious problems for both the environment and the health of animals and humans. The amount of Cr (VI) in industrial wastewaters ranges from 0.5 to 270,000 mg L-1. According to the Colombian standard for water quality (NTC-813-2010), the maximum allowed concentration of Cr (VI) in drinking water is 0.05 mg L-1. To comply with this limit, it is essential that industries treat their effluents to reduce the Cr (VI) to acceptable levels. Numerous methods have been reported for removing metal ions from aqueous solutions, such as reduction, ion exchange, electrodialysis, etc. Adsorption has become a promising method for the purification of metal ions in water, since it is an economical and efficient technology. Adsorbent selection and the kinetic and thermodynamic study of the adsorption conditions are key to the development of a suitable adsorption technology. Ce0.25Zr0.75O2.nH2O presents the highest adsorption capacity among a series of hydrated mixed oxides Ce1-xZrxO2 (x = 0, 0.25, 0.5, 0.75, 1). This work presents the kinetic and thermodynamic study of Cr (VI) adsorption on Ce0.25Zr0.75O2.nH2O. Experiments were performed under the following conditions: initial Cr (VI) concentration = 25, 50 and 100 mg L-1, pH = 2, adsorbent load = 4 g L-1, stirring time = 60 min, temperature = 20, 28 and 40 °C. The Cr (VI) concentration was estimated spectrophotometrically by the diphenylcarbazide method, monitoring the absorbance at 540 nm. The Cr (VI) adsorption on hydrated Ce0.25Zr0.75O2.nH2O was analyzed using pseudo-first and pseudo-second order kinetic models, and the Langmuir and Freundlich models were used to fit the experimental data. The convergence between the experimental values and those predicted by the model, expressed as the linear regression correlation coefficient (R2), was employed as the model selection criterion. The adsorption process followed the pseudo-second order kinetic model and obeyed the Langmuir isotherm model. The thermodynamic parameters were calculated as ΔH° = 9.04 kJ mol-1, ΔS° = 0.03 kJ mol-1 K-1 and ΔG° = -0.35 kJ mol-1, indicating the endothermic and spontaneous nature of the adsorption process, governed by physisorption interactions.
Keywords: adsorption, hexavalent chromium, kinetics, thermodynamics
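A short sketch of fitting the Langmuir isotherm q = qm·K·Ce/(1 + K·Ce) by nonlinear least squares, on illustrative data points (not the study's measurements):

```python
# Sketch: Langmuir isotherm fit with non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, k):
    return qm * k * ce / (1.0 + k * ce)

ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])    # equilibrium conc. (mg L-1), illustrative
qe = np.array([4.8, 9.1, 13.5, 17.2, 19.6])    # adsorbed amount (mg g-1), illustrative

(qm, k), _ = curve_fit(langmuir, ce, qe, p0=(20.0, 0.1))
residuals = qe - langmuir(ce, qm, k)
r2 = 1 - (residuals ** 2).sum() / ((qe - qe.mean()) ** 2).sum()
print(f"qm = {qm:.1f} mg g-1, K = {k:.3f} L mg-1, R2 = {r2:.3f}")
```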
Procedia PDF Downloads 299590 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction
Authors: Mohammad Ghahramani, Fahimeh Saei Manesh
Abstract:
Winning a soccer game is based on thorough and deep analysis of the ongoing match. On the other hand, giant gambling companies are in vital need of such analysis to reduce their losses to their customers. In this research work, we perform deep, real-time analysis of every soccer match around the world, distinguishing our work from others by focusing on particular seasons, teams and partial analytics. Our contributions are presented in the platform called “Analyst Masters.” First, we introduce the various sources of information available for soccer analysis for teams around the world, which helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of soccer matches and their odds, before and in-play, are represented in image format versus time, including the halftime. Local Binary Patterns (LBP) are then employed to extract features from the image. Our analyses reveal remarkably interesting features and rules once a soccer match has reached sufficient stability. For example, our “8-minute rule” implies that if 'Team A' scores a goal and can maintain the lead for at least 8 minutes in a stable match, the match will end in their favor. We could also make accurate pre-match predictions of whether fewer or more than 2.5 goals would be scored. We use Gradient Boosting Trees (GBT) to extract highly relevant features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage to check its properties, such as bettors’ and punters’ behavior and its statistical data, before issuing the prediction. The proposed method was trained on 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can earn over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market. Top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our database of more than 240,000 soccer matches collected since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.Keywords: soccer, analytics, machine learning, database
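To illustrate the LBP step described above, here is a minimal sketch using scikit-image to extract a Local Binary Pattern histogram from a match-statistics image. The array contents and the LBP parameters (radius, number of neighbors) are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Hypothetical grayscale "match image": rows = statistics (odds, shots,
# possession, ...), columns = minutes 0..90 including the halftime.
rng = np.random.default_rng(0)
match_image = rng.random((32, 91))

# Uniform LBP with 8 neighbors at radius 1 (illustrative choice).
P, R = 8, 1
lbp = local_binary_pattern(match_image, P, R, method="uniform")

# Normalized histogram of LBP codes -> fixed-length feature vector
# that a GBT/decision-tree stage can consume.
n_bins = P + 2  # the "uniform" method yields P + 2 distinct codes
hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
print(hist.round(3))
```

Each match then reduces to one such fixed-length vector regardless of its duration, which is what makes image-style texture features convenient for downstream tree-based classifiers.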
Procedia PDF Downloads 238589 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELT) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume of space (JWST) or into an even smaller volume (standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested Neural Networks (NN) for wavefront sensing, especially convolutional NN, which are well known as non-linear, image-friendly problem solvers. Aims: We study in this paper the prospect of using NN to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, will also serve as the wavefront sensor. In this work, we study a point source, i.e., the Point Spread Function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, i.e., below lambda/50, for 40-100 nm RMS of input WFE) with a computation time below 30 ms, which translates to a small computational burden. These results motivate further study with larger aberrations and noise.Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
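A minimal sketch of the focal-plane approach described above: a small VGG-style convolutional network regressing segment piston coefficients directly from PSF images. The depth, image size, and number of segments are illustrative assumptions, not the authors' configuration, and the placeholder data stands in for PSFs that would normally come from an optical model of the segmented pupil:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_SEGMENTS = 6   # assumed number of deployable segments
IMG = 64         # assumed PSF crop size in pixels

# VGG-style blocks: stacked 3x3 convolutions + max pooling,
# ending in a linear regression head (one output per segment piston).
model = keras.Sequential([
    keras.Input(shape=(IMG, IMG, 1)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_SEGMENTS),  # piston per segment, e.g. in nm
])
model.compile(optimizer="adam", loss="mse")

# Placeholder training data: simulated PSFs and their known pistons.
x = np.random.rand(256, IMG, IMG, 1).astype("float32")
y = np.random.uniform(-100, 100, (256, N_SEGMENTS)).astype("float32")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```

A network this small is consistent with the tight CubeSat computational budget: inference is a single forward pass, well within the sub-30 ms regime the authors report.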
Procedia PDF Downloads 104588 Life Satisfaction of Non-Luxembourgish and Native Luxembourgish Postgraduate Students
Authors: Chrysoula Karathanasi, Senad Karavdic, Angela Odero, Michèle Baumann
Abstract:
Economic determinants are not the only factors that impact life conditions; maintaining a good level of life satisfaction (LS) is also an important current challenge. In Luxembourg, university students receive financial aid from the government. They are then registered at the Centre for Documentation and Information on Higher Education (CEDIES). Luxembourg is built on migration, with almost half its population consisting of foreigners. On this basis, our research aims to analyze the associations between mental health factors (health satisfaction, psychological quality of life, worry), perceived financial situation, career attitudes (adaptability, optimism, knowledge, planning) and LS for non-Luxembourgish and native postgraduate students. Between 2012 and 2013, postgraduates registered at CEDIES were contacted by post and asked to participate in an online survey in either English or French. The study population comprised 644 respondents. Our statistical analysis excluded those born abroad who had Luxembourgish citizenship and those born in Luxembourg who did not have citizenship. Two groups were formed: one consisting of 147 non-Luxembourgish students and the other of 284 natives. A single item measured LS (1 = not at all satisfied to 10 = very satisfied). Bivariate tests, correlations and multiple linear regression models were used, in which only significant relationships (p<0.05) were integrated. No differences in LS were found between the two groups (7.8/10 non-Luxembourgish; 8.0/10 natives); both were higher than the European indicator of 7.2/10 (for 25-34 year-olds). Non-Luxembourgish students were older than natives (29.3 vs. 26.3 years), perceived their financial situation as more difficult, and a higher percentage of their parents had an education level above a Bachelor's degree (father 59.2% vs. 44.6% for natives; mother 51.4% vs. 33.7% for natives). In addition, the father's education was related to postgraduates' LS: the higher its level, the greater its contribution to LS. For native students, higher health satisfaction and career optimism scores were associated with higher LS scores. For both groups, LS was linked to mental health-related factors, perception of their financial situation, career optimism, adaptability and planning. The higher the psychological quality of life score, the greater the postgraduates' LS. Good health and positive attitudes related to the job market enhanced their LS indicator.Keywords: career attributes, father's education level, life satisfaction, mental health
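A minimal sketch of the kind of multiple linear regression reported above, using statsmodels on a hypothetical data frame; the variable names and data are illustrative assumptions, not the CEDIES survey data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey-like data frame (not the CEDIES data).
rng = np.random.default_rng(1)
n = 284
df = pd.DataFrame({
    "life_satisfaction": rng.uniform(4, 10, n),
    "psych_qol": rng.uniform(40, 90, n),           # psychological quality of life
    "health_satisfaction": rng.uniform(1, 10, n),
    "career_optimism": rng.uniform(1, 5, n),
    "financial_difficulty": rng.integers(0, 2, n), # perceived financial situation
})

# Only predictors significant at p < 0.05 would be retained
# in the final model, as the authors describe.
model = smf.ols(
    "life_satisfaction ~ psych_qol + health_satisfaction"
    " + career_optimism + financial_difficulty",
    data=df,
).fit()
print(model.summary().tables[1])  # coefficients and p-values
```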
Procedia PDF Downloads 371587 Effect of Laser Ablation OTR Films on the Storability of Endive and Pak Choi by Baby Vegetables in Modified Atmosphere Condition
Authors: In-Lee Choi, Min Jae Jeong, Jun Pill Baek, Ho-Min Kang
Abstract:
As vegetable consumption trends have changed, convenient forms such as fresh-cut vegetables, sprouts, and baby vegetables are increasingly used rather than whole vegetables. The selected baby vegetables contain various functional compounds, but they have a short shelf life. This study was conducted to improve storability by using suitable laser ablation OTR (oxygen transmission rate) films. The baby vegetables for this research, endive (Cichorium endivia L.) and pak choi (Brassica rapa chinensis), around 10 cm in height, were cultivated in a glass greenhouse for 3 weeks. Harvested endive and pak choi were stored at 8 ℃ for 5 days, packed in PP (polypropylene) containers covered with different types of laser ablation OTR films (DaeRyung Co., Ltd.) of 1,300 cc, 10,000 cc, 20,000 cc, and 40,000 cc/m2•day•atm, plus a control (perforated film), sealed with a heat sealing machine (SC200-IP, Kumkang, Korea). All samples were replicated 5 times. Statistical analysis was carried out using Microsoft Excel 2010, and results were expressed with standard deviations. The maximum fresh weight loss rate of both baby vegetables was less than 0.3% under the treated films. In contrast, the control showed a weight loss rate of around 3.0% on the final storage day, with a corresponding loss of marketable quantity. Endive showed a maximum carbon dioxide content below 2.0% in the 20,000 cc and 40,000 cc treatments. Oxygen content was maintained between 17 and 20% in endive and between 19 and 20% in pak choi. The ethylene concentration of both vegetables was slightly lower in the 20,000 cc treatment than in the others on the final storage day, without statistical significance. For hardness, the 40,000 cc film showed a slightly higher value for both baby vegetables, without statistical significance. Visual quality was good in the 10,000 cc and 20,000 cc treatments for endive and pak choi, and no off-flavor appeared in either vegetable. The chlorophyll (SPAD-502, Minolta, Japan) value of endive remained similar to the initial value in all treatments except 20,000 cc, which was slightly lower. The chlorophyll value of pak choi decreased in all treatments compared with the initial value but did not differ significantly among them. Leaf color (CR-400, Minolta, Japan) changed significantly in the 40,000 cc treatment for endive. In the case of pak choi, all treatments started yellowing, as shown by increasing Hunter b values, with the control increasing substantially. Based on these results, the 10,000 cc film was the most suitable packaging film for storing endive and the 20,000 cc film for pak choi with good quality.Keywords: carbon dioxide, shelf-life, visual quality, pak choi
Procedia PDF Downloads 789586 Integration of “FAIR” Data Principles in Longitudinal Mental Health Research in Africa: Lessons from a Landscape Analysis
Authors: Bylhah Mugotitsa, Jim Todd, Agnes Kiragga, Jay Greenfield, Evans Omondi, Lukoye Atwoli, Reinpeter Momanyi
Abstract:
The INSPIRE network aims to build an open, ethical, sustainable, and FAIR (Findable, Accessible, Interoperable, Reusable) data science platform, particularly for longitudinal mental health (MH) data. While studies have been done at the clinical and population levels, there still exist limitations in data and research in LMICs, which pose a risk of underrepresentation of mental disorders. It is vital to examine the existing longitudinal MH data, focusing on how FAIR the datasets are. This landscape analysis aimed to provide both an overall level of evidence of the availability of longitudinal datasets and the degree of consistency of the longitudinal studies conducted. Utilizing AI prompts proved instrumental in streamlining the analysis process, facilitating access, crafting code snippets, and categorizing and analyzing extensive data repositories related to depression, anxiety, and psychosis in Africa. Leveraging artificial intelligence (AI), we filtered through over 18,000 scientific papers spanning from 1970 to 2023. This AI-driven approach enabled the identification of 228 longitudinal research papers meeting the inclusion criteria. Quality assurance revealed 10% incorrectly identified articles and 2 duplicates; the retained studies underscored the prevalence of longitudinal MH research in South Africa, with a focus on depression. From the analysis, evaluating data and metadata adherence to FAIR principles remains crucial for enhancing the accessibility and quality of MH research in Africa. While AI has the potential to enhance research processes, challenges such as privacy concerns and data security risks must be addressed. Ethical and equity considerations in data sharing and reuse are also vital. There is a need for collaborative efforts across disciplinary and national boundaries to improve the Findability and Accessibility of data. Current efforts should also focus on creating integrated data resources and tools to improve the Interoperability and Reusability of MH data. Practical steps for researchers include careful study planning, data preservation, machine-actionable metadata, and promoting data reuse to advance science and improve equity. Metrics and recognition should be established to incentivize adherence to FAIR principles in MH research.Keywords: longitudinal mental health research, data sharing, fair data principles, Africa, landscape analysis
Procedia PDF Downloads 89585 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis
Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante
Abstract:
The systems that record patient care information, known as Electronic Medical Records (EMRs), and those that monitor patients' vital signs, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Several studies have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis in patients under vital signs monitoring. Sepsis is an organic dysfunction caused by a dysregulated patient response to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Preceding works usually combined medical, statistical, mathematical, and computational models to develop early detection methods, seeking higher accuracy with the smallest number of variables. Among other techniques, we found studies using survival analysis, expert systems, machine learning, and deep learning that reached strong results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated as the median of all patients' variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector) and the second derivative (acceleration vector) of the variables to evaluate their behavior. We then construct a prediction model based on a Long Short-Term Memory (LSTM) network, including these derivatives as explanatory variables. The accuracy of prediction 6 hours before sepsis onset reached 83.24% considering only the vital signs; by including the position, velocity, and acceleration vectors, we obtained 94.96%. The data are collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.Keywords: dynamic analysis, long short-term memory, prediction, sepsis
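A minimal sketch of the feature construction described above: hourly vital-sign positions augmented with first and second discrete derivatives, fed into a small LSTM. The window length, number of vital signs, layer sizes, and placeholder data are illustrative assumptions, not the study's configuration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_VITALS = 6   # assumed number of monitored vital signs
WINDOW = 24    # assumed hours of history per sample

def with_derivatives(x):
    """Stack position, velocity, and acceleration along the feature axis.
    x: (samples, hours, vitals) hourly vital-sign trajectories."""
    v = np.gradient(x, axis=1)   # first derivative (velocity vector)
    a = np.gradient(v, axis=1)   # second derivative (acceleration vector)
    return np.concatenate([x, v, a], axis=-1)

# Placeholder trajectories and labels (1 = sepsis within 6 hours).
rng = np.random.default_rng(0)
x = rng.random((512, WINDOW, N_VITALS)).astype("float32")
y = rng.integers(0, 2, 512).astype("float32")
features = with_derivatives(x)  # shape (512, 24, 18)

model = keras.Sequential([
    keras.Input(shape=(WINDOW, 3 * N_VITALS)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(features, y, epochs=1, batch_size=64, verbose=0)
```

Concatenating the derivatives triples the feature dimension but lets the network see not just where a patient is in the vital-sign space but how fast and in what direction they are moving toward the sepsis target point, which is what the reported accuracy gain (83.24% to 94.96%) attributes value to.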
Procedia PDF Downloads 125584 The Meaning System of Tense: A Systemic Functional Approach
Authors: Cunyu Zhang
Abstract:
A literature review of studies related to tense reveals disagreements on the definition and existence of Chinese tense. Influenced by research on English, which regards tense as a grammatical category based on verbal inflections, some Chinese researchers claim that there is no tense in Chinese, as no verbal inflections are involved. Meanwhile, other Chinese researchers hold that Chinese still has tense although its verbs are non-inflectional, on the grounds that Chinese lexical expressions can convey temporal meaning. We assume that the reasons for these disagreements about Chinese tense lie in the fact that all previous studies prefer to view language “from below”, meaning that expressions of tense are their core concern. However, there are about 6,000 languages with distinct expressions all over the world. Hence, if language studies concentrate only on expressions, it becomes more difficult to understand the nature of language. By contrast, the functions of languages are similar; otherwise, human beings could not communicate with each other. Therefore, we believe it is necessary to conduct a theoretical study of Chinese tense within the framework of SFL, which holds that language is a system in which meaning is central while form is merely the realization of meaning. In addition, SFL is a general linguistic theory providing a universal framework for languages all over the world. Therefore, based on Systemic Functional Linguistics, the paper first redefines tense as a deictic semantic category for describing the speaker's temporal location of processes and the relevant temporal relations. With reference to this definition, this study explores the meaning system of tense. It is proposed that tense expresses four kinds of meaning, namely interpersonal, experiential, logical and textual meanings. From the interpersonal angle, tense helps to exchange temporal information between the speaker and the listener, the temporal information being the speaker's anchoring of the process concerned in the past, present or future. From the experiential angle, tense plays a role in the speaker's temporal locating of material, mental, relational, existential, behavioral and verbal processes. From the logical angle, tense denotes the temporal relations at the two levels of clause and clause complex, and such relations fall into simultaneity, anteriority and posteriority. From the textual angle, tense refers to the temporal relations at the level of text, and the temporal relations in question concern linear serial relations and synchronous serial relations.Keywords: Chinese, meaning system, Systemic Functional Linguistics, tense
Procedia PDF Downloads 420583 Application of Deep Learning and Ensemble Methods for Biomarker Discovery in Diabetic Nephropathy through Fibrosis and Propionate Metabolism Pathways
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Diabetic nephropathy (DN) is a major complication of diabetes, with fibrosis and propionate metabolism playing critical roles in its progression. Identifying biomarkers linked to these pathways may provide novel insights into DN diagnosis and treatment. This study aims to identify biomarkers associated with fibrosis and propionate metabolism in DN, analyze the biological pathways and regulatory mechanisms of these biomarkers, and develop a machine learning model to predict DN-related biomarkers and validate their functional roles. Publicly available transcriptome datasets related to DN (GSE96804 and GSE104948) were obtained from the GEO database (https://www.ncbi.nlm.nih.gov/gds), and 924 propionate metabolism-related genes (PMRGs) and 656 fibrosis-related genes (FRGs) were identified. The analysis began with the extraction of DN differentially expressed genes (DN-DEGs) and propionate metabolism-related DEGs (PM-DEGs), followed by their intersection with the fibrosis-related genes to identify key intersected genes. Instead of relying on traditional models, we employed a combination of deep neural networks (DNNs) and ensemble methods such as Gradient Boosting Machines (GBM) and XGBoost to enhance feature selection and biomarker discovery. Recursive feature elimination (RFE) was coupled with these advanced algorithms to refine the selection of the most critical biomarkers. Functional validation was conducted using convolutional neural networks (CNN) for gene set enrichment and immunoinfiltration analysis, revealing seven significant biomarkers: SLC37A4, ACOX2, GPD1, ACE2, SLC9A3, AGT, and PLG. These biomarkers are involved in critical biological processes such as fatty acid metabolism and glomerular development, providing a mechanistic link to DN progression. Furthermore, a TF–miRNA–mRNA regulatory network was constructed using natural language processing models to identify 8 transcription factors and 60 miRNAs that regulate these biomarkers, while a drug–gene interaction network revealed potential therapeutic targets such as UROKINASE–PLG and ATENOLOL–AGT. This integrative approach, leveraging deep learning and ensemble models, not only enhances the accuracy of biomarker discovery but also offers new perspectives on DN diagnosis and treatment, specifically targeting the fibrosis and propionate metabolism pathways.Keywords: diabetic nephropathy, deep neural networks, gradient boosting machines (GBM), XGBoost
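A minimal sketch of the feature-selection step described above: recursive feature elimination wrapped around an XGBoost classifier on an expression matrix. The gene names, matrix shape, and hyperparameters are illustrative assumptions, not the study's pipeline:

```python
import numpy as np
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

# Hypothetical expression matrix: samples x candidate intersected genes.
rng = np.random.default_rng(0)
n_samples, n_genes = 120, 40
X = rng.random((n_samples, n_genes))
y = rng.integers(0, 2, n_samples)              # 1 = DN, 0 = control
genes = [f"GENE_{i}" for i in range(n_genes)]  # placeholder identifiers

# RFE repeatedly fits the booster and drops the least important genes
# (by feature importance) until the requested number remains.
selector = RFE(
    estimator=XGBClassifier(n_estimators=100, max_depth=3,
                            eval_metric="logloss"),
    n_features_to_select=7,  # matching the seven biomarkers reported
    step=2,                  # genes removed per iteration
)
selector.fit(X, y)
selected = [g for g, keep in zip(genes, selector.support_) if keep]
print(selected)
```

Coupling RFE with a nonlinear booster, rather than the linear estimators it is often paired with, lets the elimination criterion reflect interaction effects among genes, which is the stated motivation for combining RFE with GBM/XGBoost here.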
Procedia PDF Downloads 8582 Learning Resources as Determinants for Improving Teaching and Learning Process in Nigerian Universities
Authors: Abdulmutallib U. Baraya, Aishatu M. Chadi, Zainab A. Aliyu, Agatha Samson
Abstract:
Learning resources is the field of study that investigates the process of analyzing, designing, developing, implementing, and evaluating learning materials, learners, and the learning process in order to improve teaching and learning in university-level education, which is essential for empowering students and the various sectors of Nigeria's economy to succeed in a fast-changing global economy. Innovation in the information age of the 21st century is the use of educational technologies in the classroom for instructional delivery; it involves appropriate educational technologies such as smart boards, computers, projectors, and other projected materials to facilitate learning and improve performance. The study examined learning resources as determinants for improving the teaching and learning process at Abubakar Tafawa Balewa University (ATBU), Bauchi, Bauchi State of Nigeria. Three objectives, three research questions and three null hypotheses guided the study. The study adopted a survey research design. The population of the study was 880 lecturers. A sample of 260 was obtained using the Research Advisors' table for determining sample size, and 250 respondents were proportionately selected from the seven faculties. The instrument used for data collection was a structured questionnaire, validated by two experts. The reliability coefficient of the instrument stood at 0.81, which is acceptable. The researchers, assisted by six research assistants, distributed and collected the questionnaire with a 75% return rate. Data were analyzed using means and standard deviations to answer the research questions, whereas simple linear regression was used to test the null hypotheses at a 0.05 level of significance. The findings revealed that physical facilities and digital technology tools significantly improved the teaching and learning process, while consumables, supplies and equipment did not significantly improve it in the faculties. It was recommended that lecturers in the various faculties strengthen and sustain the use of digital technology tools, that the available physical facilities continue to be properly maintained, and that university management, as a matter of priority, continue to adequately fund and frequently upgrade equipment, consumables and supplies to enhance the effectiveness of the teaching and learning process.Keywords: education, facilities, learning-resources, technology-tools
Procedia PDF Downloads 23