Search results for: statistical computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4842

672 Childhood Cataract: A Socio-Clinical Study at a Public Sector Tertiary Eye Care Centre in India

Authors: Deepak Jugran, Rajesh Gill

Abstract:

Purpose: To study the demographic, sociological, gender and clinical profile of children presenting with childhood cataract at a public sector tertiary eye care centre in India. Methodology: The design of the study is retrospective, using hospital-based data available with the Central Registration Department of PGIMER, Chandigarh. The majority of childhood cataract cases in the region are reported at this hospital, although not every case of childhood cataract reaches PGIMER, Chandigarh. Nevertheless, this study constitutes pioneering research in India, covering five years of data on childhood cataract patients who visited the Advanced Eye Centre, PGIMER, Chandigarh, from 1.1.2015 to 31.12.2019. SPSS version 23 was used for all statistical calculations. Results: A total of 354 children presented with childhood cataract from 1.1.2015 to 31.12.2019. Of the 354 children, 248 (70%) were male and 106 (30%) were female. In spite of two flagship programmes for the eradication of cataract, namely the National Programme for Control of Blindness (NPCB) and Ayushman Bharat (PM-JAY), no children received any financial assistance from these two programmes. Fully 99% of these children belonged to poor families, and in most of these families the mothers were housewives and were not employed outside the home. These interim results will soon be conveyed to the Government of India so that a suitable mechanism can be evolved to address this pertinent issue. Further, the disproportionate ratio of male to female children in this study is an area of concern, as it is not known whether the prevalence of childhood cataract is lower in female children or whether female children are not being presented to the hospital on time by their families. Conclusion: The World Health Organization (WHO) has categorized childhood blindness resulting from cataract as a priority area and urged all member countries to develop institutionalized mechanisms for its early detection, diagnosis and management.
Childhood cataract is an emerging and major cause of preventable and avoidable childhood blindness, especially in low- and middle-income countries. In their formative years, children require a sound physical, mental and emotional state, and the absence of any one of these can severely dent their future growth. Recent estimates suggest that India could suffer an economic loss of US$12 billion (Rs. 88,000 crores) due to blindness, and almost 35% of cases of blindness are preventable and avoidable if detected at an early age. Besides reporting these results to policy makers, synchronized efforts are needed for the early detection and management of avoidable causes of childhood blindness such as childhood cataract.
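The male-to-female imbalance reported above (248 vs. 106) can be checked against an equal-split expectation with a chi-square goodness-of-fit test. The abstract does not name this test, so the sketch below is only an illustration of how such a comparison might be computed:

```python
import math

def chi_square_equal_split(males, females):
    """Chi-square goodness-of-fit against a 50:50 sex ratio (df = 1).
    For one degree of freedom, p = erfc(sqrt(chi2 / 2))."""
    expected = (males + females) / 2
    chi2 = ((males - expected) ** 2 + (females - expected) ** 2) / expected
    return chi2, math.erfc(math.sqrt(chi2 / 2))

chi2, p = chi_square_equal_split(248, 106)  # counts from the abstract
```

With these counts the imbalance is far too large to attribute to chance under a 50:50 assumption, though, as the authors note, the statistic alone cannot distinguish lower prevalence from delayed presentation of female children.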

Keywords: childhood blindness, cataract, WHO, NPCB

Procedia PDF Downloads 95
671 The Antioxidant Gel Mask Supplies of Bitter Melon's Extract (Momordica charantia Linn.)

Authors: N. S. Risqina, G. Edijanti, P. S. Nurita, L. Endang, R. A. Siti, R. Tri

Abstract:

Skin is an important and vital organ and also a mirror of health and life. Facial skin care is one of the main emphases in obtaining beautiful, healthy, and fresh skin. Phenolic compounds show antioxidant, antimutagenic, antitumor, anti-inflammatory, and anticancer potential. Flavonoids are a group of polyphenolic compounds that scavenge free radicals, inhibit oxidative and hydrolytic enzymes, and act as anti-inflammatory agents. Bitter melon (Momordica charantia Linn.) is a plant that contains flavonoids and phenolics with antioxidant activity; its strong antioxidant activity can counteract the free radicals that cause premature aging. Gel masks belong to the deep-cleansing class of cosmetics, which work in depth and can lift dead skin cells. The antioxidant activity of the extract and of the gel mask was measured using the DPPH method. The IC50 value of the ethanol extract of bitter melon fruit was 287.932 ppm. For the gel mask preparation containing bitter melon fruit extract, the antioxidant effectiveness was tested using the DPPH method by measuring the inhibition of DPPH with a UV spectrophotometer at the wavelength of maximum absorbance of the DPPH solution. Tests were conducted at the beginning and end of the evaluation (day 0 and day 28). The purpose of this study was to determine the antioxidant activity of the bitter melon extract and of the ethanol-extract gel mask at varying concentrations, i.e., 1×IC100 (0.295%), 2×IC100 (0.590%) and 4×IC100 (1.180%). The physical properties of the preparation were evaluated on days 0, 7, 14, 21, and 28, and the antioxidant activity on days 0 and 28. Data were analyzed using one-way ANOVA to determine differences in the physical properties of each formula.
The statistical results showed that differences in formula and storage time affect adhesion, dispersive power, drying time and pH, as shown by a significance value of p < 0.05, but longer storage does not affect pH (p > 0.05). The antioxidant test showed that there are differences in antioxidant activity among all formulas. The measured antioxidant activity (IC50) of the bitter melon fruit extract gel mask on day 0 at concentrations of 0.295%, 0.590%, and 1.180% was 124,209.277 ppm, 83,819.223 ppm and 47,323.592 ppm, respectively, whereas on day 28 the corresponding values were 130,411.495 ppm, 95,561.645 ppm and 53,239.806 ppm. The conclusion drawn is that there is antioxidant activity in the gel mask preparation of bitter melon fruit extract.
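The IC50 values above come from DPPH inhibition measurements; a common way to obtain an IC50 is a linear fit of % inhibition against concentration, solved for the concentration giving 50% inhibition. A minimal sketch follows; the paper's exact regression procedure is not stated, and the sample data points here are hypothetical:

```python
def ic50_linear(concs, inhibitions):
    """Least-squares line through (concentration, % inhibition) points,
    solved for the concentration giving 50% inhibition."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(inhibitions) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(concs, inhibitions))
             / sum((x - mx) ** 2 for x in concs))
    intercept = my - slope * mx
    return (50.0 - intercept) / slope

# Hypothetical dilution series: % inhibition rising linearly with ppm.
ic50 = ic50_linear([50.0, 100.0, 150.0, 200.0], [20.0, 30.0, 40.0, 50.0])
```

Note that a linear fit is only valid over the roughly linear portion of the dose-response curve; sigmoidal fits are often used when the full range is measured.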

Keywords: antioxidant, bitter melon, gel mask, IC50

Procedia PDF Downloads 460
670 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak

Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi

Abstract:

This paper investigates the software capability and computer-aided engineering (CAE) method of modelling the transient heat transfer processes occurring in the vehicle underhood region during the vehicle thermal soak phase. The heat retained from the soak period is beneficial to cold start, with reduced friction loss, for the second 14°C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, and therefore provides benefits for both CO₂ emission reduction and fuel economy. When a vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is the key factor in the thermal simulation of the engine bay for obtaining accurate fluid and metal temperature cool-down trajectories and predicting the temperatures at the end of the soak period. Method development has been investigated in this study on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer thermal transient modelling method for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved as inherently transient by the Lattice-Boltzmann Method (LBM) using the PowerFlow software. This was further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behaviour on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with vehicle testing data for the key fluid (coolant, oil) and metal temperatures.
The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data, and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and further embedded in holistic studies of vehicle energy modelling and thermal management. It was also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and that the flow development during this stage is vital for accurately predicting the heat transfer coefficients for heat retention modelling. The developed method has demonstrated software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfactory accuracy and efficient computing time. The CAE method developed will allow the design of engine encapsulations for improving fuel consumption and reducing CO₂ emissions to be integrated in a timely and robust manner, aiding the development of low-carbon transport technologies.
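The cool-down trajectories described above can be pictured with a toy lumped-capacitance (Newton cooling) model. This is only a zero-dimensional illustration of heat retention during soak, not the coupled LBM/PowerTHERM method the paper uses, and every parameter value below is hypothetical except the 14°C soak ambient taken from the WLTP description above:

```python
import math

h_a = 15.0        # effective h*A, W/K (assumed)
m_c = 40_000.0    # effective thermal mass m*c, J/K (assumed)
t_amb = 14.0      # soak ambient temperature, deg C (from the WLTP cycle)
t0 = 90.0         # initial coolant temperature, deg C (assumed)

dt = 60.0                          # time step, s
temps = [t0]
for _ in range(9 * 60):            # 9-hour soak in 1-minute steps
    t = temps[-1]
    # Explicit Euler step of dT/dt = -(hA / mc) * (T - T_ambient)
    temps.append(t + dt * (-h_a / m_c) * (t - t_amb))

# Closed-form exponential decay toward ambient, for comparison.
analytic = t_amb + (t0 - t_amb) * math.exp(-h_a / m_c * 9 * 3600)
```

A real soak model would need temperature-dependent (buoyancy-driven) heat transfer coefficients, which is precisely why the paper couples the flow and thermal solvers.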

Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak

Procedia PDF Downloads 139
669 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas, such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such inverse problems results, after discretization, in very ill-conditioned linear systems of equations; the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number acts as an amplifier of uncertainties in the data during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas, such as numerical optimization, where the use of interior point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to arbitrary complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required in iterative algorithms for solving a system of linear equations.
This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solving schemes, such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some well-known pathological test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residue, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters (such as the extreme values, the mean, the variance, …) of the solution of a linear system prior to its resolution. Such an approach, if it were to prove efficient, would be a source of information on the solution of a system of linear equations.
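As a point of reference for the Kaczmarz matrices mentioned above, here is a minimal pure-Python sketch of the classical Kaczmarz row-projection iteration for a consistent system Ax = b; this is the textbook method, not the authors' preconditioned stochastic variant:

```python
def kaczmarz(A, b, sweeps=500):
    """Classical Kaczmarz: cyclically project the iterate onto the
    hyperplane of each row equation <a_i, x> = b_i."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            dot = sum(a * xi for a, xi in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            step = (b_i - dot) / norm2
            x = [xi + step * a for xi, a in zip(x, a_i)]
    return x
```

For consistent systems the iteration converges from any starting point, but the rate degrades as the condition number grows, which is what motivates the preconditioning studied in the abstract.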

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 121
668 Implications of Human Cytomegalovirus as a Protective Factor in the Pathogenesis of Breast Cancer

Authors: Marissa Dallara, Amalia Ardeljan, Lexi Frankel, Nadia Obaed, Naureen Rashid, Omar Rashid

Abstract:

Human Cytomegalovirus (HCMV) is a ubiquitous virus that remains latent in approximately 60% of individuals in developed countries. Viral load is kept at a minimum by the robust immune response produced in most individuals, who remain asymptomatic. HCMV has recently been implicated in cancer research because it may impose oncomodulatory effects on the tumor cells it infects, which could have an impact on the progression of cancer. HCMV has been implicated in the increased pathogenicity of certain cancers such as gliomas, but in contrast, it can also exhibit anti-tumor activity. HCMV seropositivity has been recorded in tumor cells, which may have implications for decreased pathogenesis in certain forms of cancer, such as leukemia, as well as increased pathogenesis in others. This study aimed to investigate the correlation between cytomegalovirus and the incidence of breast cancer. Methods: The data used in this project were extracted from a Health Insurance Portability and Accountability Act (HIPAA) compliant national database to compare patients infected with cytomegalovirus against patients not infected, using ICD-10 and ICD-9 codes. Permission to utilize the database was given by Holy Cross Health, Fort Lauderdale, for the purpose of academic research. Data analysis was conducted using standard statistical methods. Results: The query was analyzed for dates ranging from January 2010 to December 2019, which resulted in 14,309 patients in each of the infected and control groups. The two groups were matched by age range and CCI score. The incidence of breast cancer was 1.642% (235 patients) in the cytomegalovirus group compared to 4.752% (680 patients) in the control group. The difference was statistically significant, with a p-value of less than 2.2×10⁻¹⁶ and an odds ratio of 0.43 (95% confidence interval 0.40 to 0.48).
Investigation into the effects of HCMV treatment modalities, including valganciclovir, cidofovir, and foscarnet, on breast cancer in both groups was conducted, but the numbers were insufficient to yield any statistically significant correlations. Conclusion: This study demonstrates a statistically significant correlation between cytomegalovirus and a reduced incidence of breast cancer. If HCMV can exert anti-tumor effects on breast cancer and inhibit growth, it could potentially be used to formulate immunotherapy that targets various types of breast cancer. Further evaluation is warranted to assess the implications of cytomegalovirus in reducing the incidence of breast cancer.
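The odds ratio and confidence interval reported above are standard outputs of a 2×2 contingency-table analysis. A sketch of the usual Woolf (log-OR) method follows; the counts in the example are hypothetical, and the database's exact matching and adjustment pipeline is not reproduced here:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf 95% CI for a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, purely to illustrate the calculation.
or_, lo, hi = odds_ratio_ci(10, 90, 20, 80)
```

An odds ratio below 1 with a confidence interval excluding 1, as in the study, indicates a statistically significant protective association.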

Keywords: human cytomegalovirus, breast cancer, immunotherapy, anti-tumor

Procedia PDF Downloads 193
667 Metabolic Profiling in Breast Cancer Applying Micro-Sampling of Biological Fluids and Analysis by Gas Chromatography – Mass Spectrometry

Authors: Mónica P. Cala, Juan S. Carreño, Roland J.W. Meesters

Abstract:

Recently, the collection of biological fluids on special filter papers has become a popular micro-sampling technique. In particular, the dried blood spot (DBS) micro-sampling technique has gained much attention and is currently applied in various life sciences research areas. As a result of this popularity, DBS is not only competing intensively with venous blood sampling but is at this moment widely applied in numerous bioanalytical assays, in particular in the screening of inherited metabolic diseases, in pharmacokinetic modeling and in therapeutic drug monitoring. Recently, micro-sampling techniques were also introduced in the "omics" areas, including metabolomics. For a metabolic profiling study, we applied micro-sampling of biological fluids (blood and plasma) from healthy controls and from women with breast cancer. From blood samples, dried blood and plasma samples were prepared by spotting 8 µL of sample onto pre-cut 5-mm paper disks, followed by drying of the disks for 100 minutes. The dried disks were then extracted with 100 µL of methanol. From liquid blood and plasma samples, 40 µL were deproteinized with methanol, followed by centrifugation and collection of the supernatants. Supernatants and extracts were evaporated to dryness under nitrogen gas, and the residues were derivatized with O-methoxyamine and MSTFA. C17:0 methyl ester in heptane (10 ppm) was used as internal standard. Deconvolution and alignment of full-scan (m/z 50-500) MS data were done with the AMDIS and SpectConnect (http://spectconnect.mit.edu) software, respectively. Statistical data analysis was done by Principal Component Analysis (PCA) using the R software. The results obtained from our preliminary study indicate that the use of dried blood/plasma on paper disks could be a powerful new tool in metabolic profiling. Many of the metabolites observed in plasma (liquid/dried) were also positively identified in whole blood samples (liquid/dried).
Whole blood could be a potential substitute matrix for plasma in metabolomic profiling studies, as could micro-sampling techniques for the collection of samples in clinical studies. It was concluded that the separation of the different sample methodologies (liquid vs. dried) observed by PCA was due to the different sample treatment protocols applied. More experiments need to be done to confirm the observations obtained, and a more rigorous validation of these micro-sampling techniques is needed. The novelty of our approach lies in the application of different biological fluid micro-sampling techniques for metabolic profiling.
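PCA, as used above for the preliminary data analysis, projects samples onto the directions of maximal variance. The study performed this in R; purely as an illustration of the underlying computation, here is a bare-bones sketch that extracts the leading principal component by power iteration on the covariance matrix:

```python
def first_principal_component(rows, iters=200):
    """Leading principal component (unit vector) via power iteration on
    the sample covariance matrix -- a minimal stand-in for full PCA."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]
    # Sample covariance matrix (d x d).
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Real metabolomics PCA would first scale the variables and retain several components; libraries handle this, but the dominant-eigenvector idea is the same.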

Keywords: biofluids, breast cancer, metabolic profiling, micro-sampling

Procedia PDF Downloads 401
666 Communication of Expected Survival Time to Cancer Patients: How It Is Done and How It Should Be Done

Authors: Geir Kirkebøen

Abstract:

Most patients with serious diagnoses want to know their prognosis, in particular their expected survival time. As part of the informed consent process, physicians are legally obligated to communicate such information to patients. However, there is no established (evidence-based) 'best practice' for how to do this. The two questions explored in this study are: How do physicians communicate expected survival time to patients, and how should it be done? We explored the first, descriptive question in a study with Norwegian oncologists as participants. The study had a scenario part and a survey part. In the scenario part, the doctors were asked to imagine that a patient, recently diagnosed with a serious cancer diagnosis, had asked them: 'How long can I expect to live with such a diagnosis? I want an honest answer from you!' The doctors were to assume that the diagnosis was certain and that, from an extensive recent study, they had optimal statistical knowledge, described in detail as a right-skewed survival curve, about how long patients with this kind of diagnosis could be expected to live. The main finding was that very few of the oncologists would explain to the patient the variation in survival time described by the survival curve. The majority would not give the patient an answer at all. Of those who gave an answer, the typical answer was that survival time varies a lot, that it is hard to say in a specific case, that we will come back to it later, etc. The survey part of the study clearly indicates that the main reason why the oncologists would not deliver the mortality prognosis was discomfort with its uncertainty. The scenario part of the study confirmed this finding: the majority of the oncologists explicitly used the uncertainty, the variation in survival time, as a reason not to give the patient an answer. Many studies show that patients want realistic information about their mortality prognosis, and that they should be given hope.
The question then is how to communicate the uncertainty of the prognosis in a way that is both realistic and optimistic, that is, hopeful. Based on psychological research, our hypothesis is that the best way to do this is to explicitly describe the variation in survival time, the (usually) right-skewed survival curve of the prognosis, and to emphasize to the patient the (small) possibility of being a 'lucky outlier'. We tested this hypothesis in two scenario studies with lay people as participants. The data clearly show that people prefer to receive expected survival time as a median value together with explicit information about the survival curve's right skewness (e.g., concrete examples of 'positive outliers'), and that communicating expected survival time this way not only provides people with hope, but also gives them a more realistic understanding compared with the typical way expected survival time is communicated. Our data indicate that it is not the existence of the uncertainty regarding the mortality prognosis that is the problem for patients, but how this uncertainty is, or is not, communicated and explained.
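The effect of right skew described above can be illustrated numerically. Assuming, purely for illustration, a lognormal survival-time distribution with a 12-month median (the abstract specifies no distribution or parameters), the mean exceeds the median and a non-trivial fraction of patients survive past twice the median:

```python
import math
import random

random.seed(1)
median = 12.0      # median survival in months (hypothetical)
sigma = 0.8        # lognormal shape parameter, sets the right tail (hypothetical)
mu = math.log(median)

# Simulate survival times from the right-skewed distribution.
times = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

mean = sum(times) / len(times)
# Fraction of 'lucky outliers' surviving beyond twice the median.
lucky = sum(t > 2 * median for t in times) / len(times)
```

Under these assumptions the mean is pulled well above the median by the long right tail, which is exactly the gap between 'expected survival' as a single number and the fuller picture the survival curve conveys.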

Keywords: cancer patients, decision psychology, doctor-patient communication, mortality prognosis

Procedia PDF Downloads 311
665 Postoperative Radiotherapy in Cancers of the Larynx: Experience of the Emir Abdelkader Cancer Center of Oran, about 89 Cases

Authors: Taleb Lotfi, Benarbia Maheidine, Allam Hamza, Boutira Fatima, Boukerche Abdelbaki

Abstract:

Introduction and purpose of the study: This is a retrospective single-center study with the analytical aim of determining the prognostic factors for relapse in patients treated with radiotherapy after total laryngectomy with lymph node dissection for laryngeal cancer at the Emir Abdelkader cancer center in Oran (Algeria). Material and methods: During the study period from January 2014 to December 2018, eighty-nine patients (n=89) with squamous cell carcinoma of the larynx were treated with postoperative radiotherapy. Relapse-free survival was studied in univariate analysis according to pre-treatment criteria using Kaplan-Meier survival curves. We performed a univariate analysis to identify relapse factors; statistically significant factors were then studied in multifactorial analysis according to the Cox model. Results and statistical analysis: The average age was 62.7 years (40-86 years). The tumor was a squamous cell carcinoma in all cases. Postoperatively, the tumor was classified as pT3 or pT4 in 93.3% of patients. Histological lymph node involvement was found in 36 cases (40.4%), with capsular rupture in 39% of cases, while the limits of surgical excision were microscopically infiltrated in 11 patients (12.3%). Chemotherapy concomitant with radiotherapy was used in 67.4% of patients. With a median follow-up of 57 months (23 to 104 months), the probabilities of relapse-free survival and five-year overall survival were 71.2% and 72.4%, respectively. The factors correlated with a high risk of relapse were locally advanced tumor stage pT4 (p=0.001), tumor site in case of subglottic extension (p=0.0003), infiltrated surgical limits R1 (p=0.001), lymph node involvement (p=0.002), particularly in the event of lymph node capsular rupture (p=0.0003), as well as the time between surgery and adjuvant radiotherapy (p=0.001).
However, in the subgroup analysis, the major prognostic factors for disease-free survival were subglottic tumor extension (p=0.001) and the time from surgery to adjuvant radiotherapy (p=0.005). Conclusion: Combined surgery and postoperative radiation therapy are effective treatment modalities in the management of laryngeal cancer. Close cooperation of the entire cervicofacial oncology team is essential, as expressed during a multidisciplinary consultation meeting, with the need to respect the time between surgery and radiotherapy.
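The Kaplan-Meier curves used in the univariate analysis above are built from the product-limit estimator, which multiplies conditional survival fractions at each event time while dropping censored patients from the risk set. A minimal sketch (illustrative only, not the center's actual analysis code):

```python
def kaplan_meier(observations):
    """Kaplan-Meier product-limit estimate.
    observations: list of (time, event) pairs, event=True for relapse/death,
    False for censoring. Returns [(time, S(time))] at each event time."""
    obs = sorted(observations)
    n = len(obs)
    s, curve, i = 1.0, [], 0
    while i < n:
        t = obs[i][0]
        at_risk = n - i                       # subjects still in the risk set
        deaths = sum(1 for time, event in obs if time == t and event)
        while i < n and obs[i][0] == t:       # consume all ties at time t
            i += 1
        if deaths:
            s *= 1 - deaths / at_risk         # conditional survival at t
            curve.append((t, s))
    return curve
```

For example, with one relapse at month 1, a censoring at month 2 and a relapse at month 3 among three patients, the estimate steps from 1 to 2/3 and then to 0.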

Keywords: laryngeal cancer, laryngectomy, postoperative radiotherapy, survival

Procedia PDF Downloads 91
664 Meat Qualities and Death on Arrival (DOA) of Broiler Chickens Transported in a Brazilian Tropical Conditions

Authors: Arlan S. Freitas, Leila M. Carvalho, Adriana L. Soares, Arnoud Neto, Marta S. Madruga, Elza I. Ida, Massami Shimokomaki

Abstract:

The objective of this work was to evaluate the influence of the microclimatic profile of broiler transport trucks under commercial conditions on breast meat quality and DOA (death on arrival) in the tropical Brazilian North East region, where the season is routinely divided into dry and wet periods; the temperature remains fairly constant and the relative humidity changes accordingly. Three loads of 4,100 broilers, 47 days old, were monitored from farm to slaughterhouse over a distance of 4.3 km, in the morning on rainy days of October 2015. The profile of the environmental variables inside the container truck throughout the journey was obtained by installing thermo-anemometers in 6 different locations, monitoring the heat index (HI), air velocity (AV), temperature (T), and relative humidity (RH). Meat quality was evaluated by determining the occurrence of PSE (pale, soft, exudative) meat and DFD (dark, firm, dry) meat. The percentage of birds DOA per loaded truck was determined by counting the dead broilers during the hanging step at the slaughtering plant. The analysis of variance was performed using statistical software (Statistica 8 for Windows, Statsoft 2007, Tulsa, OK, USA). The Tukey significance test (P<0.05) was applied to compare means from the microenvironmental data, PSE, DFD and DOA. Fillet samples were collected at 24 h post mortem for pH and color (L*, a* and b*) determination through the CIELAB system. Results showed the occurrence of 2.98% PSE, 0.66% DFD and only 0.016% DOA; overall, the most uncomfortable container location was the front inferior position of the truck, presenting 6.25% PSE. DFD values of 2.0% were obtained from birds located at the central and inferior rear locations. These values were unexpected in comparison to results obtained in our laboratories in previous experiments carried out in a southern state of the country; the results reported herein were lower in every aspect.
A reasonable explanation would be the shorter distance, the wet conditions throughout the roughly 15-20 min journeys, and the lower T and RH values observed in samples taken from the rear location, where the higher DFD values were obtained. These facts suggest that the animals were not under heat-stress conditions but in fact under cold-stress conditions, as indicated by the DFD results in association with the lower number of DOA.
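The analysis of variance used above compares between-group to within-group variability. As a minimal sketch of the one-way ANOVA F statistic (the Tukey post-hoc step and the Statistica implementation are not reproduced, and the groups shown are hypothetical):

```python
def one_way_anova_f(groups):
    """F statistic for one-way ANOVA:
    F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical measurements from two truck locations.
f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom indicates that at least one group mean differs; Tukey's test then identifies which pairs differ.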

Keywords: cold stress, DFD, microclimatic profile, PSE

Procedia PDF Downloads 225
663 Fillet Chemical Composition of Sharpsnout Seabream (Diplodus puntazzo) from Wild and Cage-Cultured Conditions

Authors: Oğuz Taşbozan, Celal Erbaş, Şefik Surhan Tabakoğlu, Mahmut Ali Gökçe

Abstract:

Polyunsaturated fatty acids (PUFAs), and particularly the levels and ratios of ω-3 and ω-6 fatty acids, are important for biological functions in humans and are recognized as essential components of the human diet. From many different points of view, consumers wonder about the nutritional composition of fish raised in culture conditions compared with fish caught from the wild. Therefore, the aim of this study was to investigate the chemical composition of cage-cultured and wild sharpsnout seabream, an economically important fish species preferred by consumers in Turkey. The fish were caught from the wild or obtained from cage-culture commercial companies. Eight fish were obtained for each group; the average weights of the samples were 245.8±13.5 g for cultured and 149.4±13.3 g for wild specimens. All samples were stored in a freezer (-18 °C), and analyses were carried out in triplicate using homogenized boneless fish fillets. Proximate compositions (protein, ash, moisture and lipid) were determined. The fatty acid composition was analyzed with a GC Clarus 500 with auto sampler (Perkin-Elmer, USA). Statistically significant differences in proximate composition were found between the cage-cultured and wild samples of sharpsnout seabream. The saturated fatty acid (SFA), monounsaturated fatty acid (MUFA) and PUFA amounts of cultured and wild sharpsnout seabream were also significantly different, and the ω3/ω6 ratio was higher in the cultured group. In particular, the protein and lipid levels of the cultured samples were significantly higher than those of their wild counterparts. One reason for this is that cultured fish are exposed to continuous feeding, which has a direct effect on their body lipid content. The fatty acid composition of fish differs depending on a variety of factors, including species, diet, environmental factors and whether they are farmed or wild.
The higher levels of MUFA in the cultured fish may be explained by the high content of monoenoic fatty acids in the feed of cultured fish, as in some other species. The ω3/ω6 ratio is a good index for comparing the relative nutritional value of fish oils; in our study, the cultured sharpsnout seabream appears to be the more nutritious in terms of ω3/ω6. Acknowledgement: This work was supported by the Scientific Research Project Unit of the University of Cukurova, Turkey, under grant no. FBA-2016-5780.

Keywords: Diplodus puntazzo, cage cultured, PUFA, fatty acid

Procedia PDF Downloads 254
662 The Opinions of Counselor Candidates' regarding Universal Values in Marriage Relationship

Authors: Seval Kizildag, Ozge Can Aran

Abstract:

The effective intervention of counselors in conflicts between spouses may be effective in increasing the quality of the marital relationship. At this point, it is necessary for counselors to first consider their own value systems and then reflect these correctly in the counseling process. For this reason, it is important first of all to determine the needs of counselors. Starting from this point of view, this study aims to reveal the perspective of counselor candidates on universal values in the marriage relationship. The study group was formed by criterion sampling, one of the purposive sampling methods. The criteria were being a candidate in the counseling area and having knowledge of the concepts of the Marriage and Family Counseling course, because candidate students' comprehensive knowledge of the field and mastery of the concepts of marriage and family counseling strengthen the findings of this study. For this reason, 61 counselor candidates, 32 (52%) female and 29 (48%) male, who were about to graduate from a university in south-east Turkey and who had taken a Marriage and Family Counseling course, voluntarily participated in the study. The average age of the counselor candidates was 23. The parents of these candidates had married through arranged marriage (70%), dating (13%), marriage between relatives (8%), friend circles (7%) and custom (2%). The data were collected through a Demographic Information Form and a form titled 'Universal Values Form in Marriage', which consists of six questions prepared by the researchers. After the data were transferred to the computer, the necessary statistical evaluations were made. Qualitative data analysis was used on the data obtained in the study.
The six basic values determined under the name 'Six Pillars of Character', covering trustworthiness, respect, responsibility, fairness, caring, and citizenship, were used as the basis, and frequency values were calculated through content analysis. According to the findings of the study, the value most students found most important in the marriage relationship was trustworthiness, while the value they found least important was citizenship consciousness. In terms of frequency, the counselor candidates most strongly associated trustworthiness with 'loyalty' (33%), respect with 'no violence' (23%), responsibility with 'gender roles and spouses doing their own part' (35%), fairness with 'impartiality' (25%), caring with 'being helpful' (25%), and citizenship with 'love of country' (14%) and 'respect for the laws' (14%). It is believed that these results will contribute to arrangements for developing counseling skills regarding values in marriage and family counseling curricula.

Keywords: caring, citizenship, counselor candidate, fairness, marriage relationship, respect, responsibility, trustworthiness, value system

Procedia PDF Downloads 262
661 Assessment of Influence of Short-Lasting Whole-Body Vibration on the Proprioception of Lower Limbs

Authors: Sebastian Wójtowicz, Anna Mosiołek, Anna Słupik, Zbigniew Wroński, Dariusz Białoszewski

Abstract:

Introduction: In whole-body vibration (WBV), a high-frequency mechanical stimulus is generated by a vibration plate and transferred through bone, muscle, and connective tissue to the whole body. Research has shown that vibration plate training carried out over a long period improves neuromuscular facilitation, especially in the afferent neural pathways responsible for conducting vibration and proprioceptive stimuli, as well as muscle function, balance, and proprioception. The vibration stimulus is suggested to briefly inhibit the conduction of afferent signals from proprioceptors and may hinder the maintenance of body balance. The purpose of this study was to evaluate the effect of a single set of exercises combined with whole-body vibration on proprioception. Material and Methods: The study enrolled 60 people aged 19-24 years, divided into a test group (group A) and a control group (group B). Both groups consisted of 30 persons and performed the same set of exercises on a vibration plate. In group A, a vibration frequency of 20 Hz and an amplitude of 3 mm were used, whereas the vibration plate was turned off while the control group exercised. All participants performed six dynamic 30-second exercises with a 60-second rest period between them, involving the large muscle groups of the trunk, pelvis, and lower limbs. Measurements were taken before and immediately after the exercises. The proprioception of the lower limbs was measured in a closed kinematic chain using a Humac 360®. Participants performed three squats with biofeedback in a defined range of motion and then three squats without biofeedback, which were measured; the final result was the average of the three measurements. Statistical analysis was performed using Statistica 10.0 PL software.
Results: There were no significant differences between the groups, either before or after the exercise (p > 0.05). Proprioception did not change in either group A or group B. Conclusions: 1. No deterioration in proprioception was observed immediately after the vibration stimulus. This suggests that vibration-induced blockage of proprioceptive stimuli conduction may have only a short-lasting effect, occurring only in the presence of the vibration stimulus. 2. Short-term use of vibration appears to be safe for patients with proprioceptive impairment, since the treatment does not decrease proprioception. 3. The results should be supplemented with an evaluation of proprioception while vibration stimuli are being applied; moreover, the effects of the vibration parameters used in the exercises should be evaluated.

Keywords: joint position sense, proprioception, squat, whole body vibration

Procedia PDF Downloads 453
660 Investigate the Competencies Required for Sustainable Entrepreneurship Development in Agricultural Higher Education

Authors: Ehsan Moradi, Parisa Paikhaste, Amir Alam Beigi, Seyedeh Somayeh Bathaei

Abstract:

The need for entrepreneurial sustainability is as important as the entrepreneurship category itself. By transferring competencies within a sustainable entrepreneurship framework, entrepreneurship education can make a significant contribution to the effectiveness of businesses, especially for start-up entrepreneurs. This study analyzes the competencies essential to students' development of sustainable entrepreneurship. In terms of its nature it is an applied causal study, and in terms of data collection it is a field study. The main purpose of this research project is to study and explain the dimensions of sustainable entrepreneurship competencies among agricultural students. The statistical population consists of 730 junior and senior undergraduate students of the Campus of Agriculture and Natural Resources, University of Tehran. The sample size was determined to be 120 using Cochran's formula, and the convenience sampling method was used. Face validity, construct validity, and diagnostic methods were used to evaluate the validity of the research tool, and Cronbach's alpha and composite reliability were used to evaluate its reliability. Structural equation modeling (SEM) with the confirmatory factor analysis (CFA) method was used to prepare a measurement model for data processing. The results showed that seven key dimensions shape sustainable entrepreneurial development competencies: systems thinking competence (STC), embracing diversity and interdisciplinarity (EDI), foresighted thinking competence (FTC), normative competence (NC), action competence (AC), interpersonal competence (IC), and strategic management competence (SMC). Acquiring skills in SMC, by building the ability to plan for sustainable entrepreneurship through the relevant mechanisms, can improve students' entrepreneurship through the adoption of a sustainability attitude.
Regarding AC, alongside increasing students' analytical ability concerning social and environmental needs and challenges, more attention should be paid to curriculum updates and to the relationship between the curriculum and its content, for example through programs promoting an entrepreneurship culture. In the field of EDI, it was found that the success of entrepreneurs in terms of sustainability, and the business sustainability of start-up entrepreneurs, depends on their interdisciplinary thinking. It was also found that STC plays an important role in explaining the relationship between sustainability and entrepreneurship. Therefore, focusing on these competencies in agricultural education to train start-up entrepreneurs can lead to sustainable entrepreneurship in the agricultural higher education system.
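The sample-size step described above can be sketched numerically. The snippet below implements Cochran's formula with the finite-population correction for the N = 730 population; the z-score, proportion, and margin-of-error values are illustrative assumptions (the conventional 5% margin yields about 252, so the reported n = 120 implies a wider margin of error, roughly 8%):

```python
import math

def cochran_n(N, z=1.96, p=0.5, e=0.05):
    """Cochran's sample size with finite-population correction for population N."""
    n0 = (z ** 2) * p * (1 - p) / e ** 2       # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # correct for the finite population

print(cochran_n(730, e=0.05))  # conventional 5% margin of error -> 252
print(cochran_n(730, e=0.08))  # wider 8% margin of error -> 125
```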

Keywords: sustainable entrepreneurship, entrepreneurship education, competency, agricultural higher education

Procedia PDF Downloads 129
659 Possibilities and Prospects for the Development of the Agricultural Insurance Market (The Example of Georgia)

Authors: Nino Damenia

Abstract:

The agricultural sector plays an important role in the development of Georgia's economy; it contributes to employment and food security, but it faces various types of risks that may lead to heavy financial losses. Agricultural insurance is one of the means of combating agricultural risks. The paper discusses the agricultural insurance experience of countries (European countries and the USA) that have successfully implemented agricultural insurance programs. Analysis of international cases shows that a well-designed and well-implemented agro-insurance system can bring significant benefits to farmers, insurance companies, and the economy as a whole. Against this background, the Government of Georgia recognized the importance of agro-insurance and took important steps toward its development. In 2014, in cooperation with insurance companies, an agro-insurance program was introduced with the purpose of increasing the availability of insurance for farmers and stimulating the agro-insurance market. Despite this step forward, challenges remain, such as farmers' awareness, insufficient infrastructure for data collection and risk assessment, and the involvement of insurance companies. With the support of the government and stakeholders, it is possible to overcome the existing challenges and establish a strong and effective agro-insurance system. Objectives. The purpose of the research is to analyze the development trends of the agricultural insurance market, to identify the main factors affecting its growth, and to develop recommendations on development prospects for Georgia. Methodologies. The research uses mixed methods, combining qualitative and quantitative research techniques. The qualitative method includes a study of the literature by Georgian and foreign economists, covering the challenges, opportunities, and legislative and regulatory frameworks of agricultural insurance.
Quantitative analysis involves collecting data from stakeholders and then analyzing it. The paper also uses methods of synthesis, comparison, and statistical analysis of the agricultural insurance markets in Georgia, Europe, and the USA. Conclusions. The main results of the research are as follows: the insurance market was analyzed and its main functions identified; the essence, features, and functions of agricultural insurance were analyzed; the European and US agricultural insurance markets were researched; the stages of formation and development of Georgia's agricultural insurance market were studied and its importance for the country's agricultural sector determined; and the role of the state in developing agro-insurance was analyzed, with development prospects established based on a study of current trends in Georgia's agro-insurance market.

Keywords: agricultural insurance, agriculture, agricultural insurance program, risk

Procedia PDF Downloads 45
658 Preliminary Seismic Vulnerability Assessment of Existing Historic Masonry Building in Pristina, Kosovo

Authors: Florim Grajcevci, Flamur Grajcevci, Fatos Tahiri, Hamdi Kurteshi

Abstract:

The territory of Kosova lies in one of the most seismic-prone regions in Europe. Earthquakes are therefore not rare in Kosova, and when they have occurred, the consequences have been rather destructive. The importance of assessing the seismic resistance of existing masonry structures has drawn strong and growing interest in recent years. Related engineering topics, including vulnerability, building loss, and risk assessment, are also of particular interest, since this rapidly developing field concerns the great impact of earthquakes on socioeconomic life in seismic-prone areas such as Kosova and Prishtina. Such work for the city of Prishtina may serve as a real basis for possible interventions in historic buildings such as museums, mosques, and old residential buildings, in order to adequately strengthen and/or repair them and reduce the seismic risk to acceptable limits. The procedures of the vulnerability assessment of building structures concentrate on the structural system, capacity, layout shape, and response parameters. These parameters indicate the expected performance of very important existing buildings in terms of vulnerability and overall behavior during earthquake excitation. The structural systems of existing historical buildings in Prishtina are dominantly unreinforced brick or stone masonry, with very high risk potential from the earthquakes expected in the region. Therefore, statistical analysis based on observed damage, such as deformations, cracks, deflections, and critical building elements, provides more reliable and accurate results for regional assessments. An analytical technique was used to develop a preliminary evaluation methodology for assessing the seismic vulnerability of these structures. One of the main objectives is to identify the buildings that are highly vulnerable to damage caused by inadequate seismic response.
Hence, the damage scores obtained from the derived vulnerability functions will be used to categorize the evaluated buildings as "stable", "intermediate", and "unstable". The vulnerability functions are generated based on the basic damage-inducing parameters, namely number of stories (S), lateral stiffness (LS), capacity curve of the total building structure (CCBS), interstory drift (IS), and overhang ratio (OR).

Keywords: vulnerability, ductility, seismic microzone, energy efficiency

Procedia PDF Downloads 395
657 A Construction Management Tool: Determining a Project Schedule Typical Behaviors Using Cluster Analysis

Authors: Natalia Rudeli, Elisabeth Viles, Adrian Santilli

Abstract:

Delays in the construction industry are a global phenomenon, and many construction projects experience extensive delays exceeding the initially estimated completion time. The main purpose of this study is to identify typical behaviors of construction project schedules in order to develop a prognosis and management tool. Knowing a construction project's schedule tendency enables evidence-based decision-making, allowing resolutions to be made before delays occur. This study presents an innovative approach that uses the cluster analysis method to support predictions during earned value analyses. A clustering analysis was used to predict the future behavior of schedules and of the principal Earned Value Management (EVM) and Earned Schedule (ES) indexes in construction projects. The analysis was made using a database of 90 different construction projects and was validated with additional data extracted from the literature and with another 15 contrasting projects. For all projects, planned and executed schedules were collected, and the principal EVM and ES indexes were calculated. A complete linkage classification method was used; in this way, the cluster analysis considers that the distance (or similarity) between two clusters must be measured by their most disparate elements, i.e., the distance is given by the maximum span among their components. Finally, through the use of the EVM and ES indexes and Tukey and Fisher pairwise comparisons, the statistical dissimilarity was verified and four clusters were obtained. Construction projects show an average delay of 35% of their planned completion time. Furthermore, four typical behaviors were found, and for each of the obtained clusters, the interim milestones and the necessary rhythms of construction were identified.
In general, the detected typical behaviors are: (1) projects that achieve a 5% work advance in the first two tenths and maintain a constant rhythm until completion (greater than 10% for each remaining tenth), finishing within the initially estimated time; (2) projects that start with an adequate construction rate but suffer minor delays, culminating in a total delay of almost 27% of the planned time; (3) projects that start with performance below the planned rate and end up with an average delay of 64%; and (4) projects that begin with poor performance, suffer great delays, and end up with an average delay of 120% of the planned completion time. The obtained clusters compose a tool to identify the behavior of new construction projects by comparing their current work performance to the validated database, thus allowing the correction of initial estimations towards more accurate completion schedules.
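The clustering step can be sketched as follows: complete-linkage hierarchical clustering is applied to cumulative-progress curves, and the dendrogram is cut into four clusters. The curves and their delay factors below are synthetic illustrations, not the authors' 90-project database:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
tenths = np.linspace(0.1, 1.0, 10)          # planned time, in tenths

# Four illustrative schedule behaviors: on time, ~27%, ~64%, ~120% delay,
# three noisy projects per behavior (12 progress curves in total)
delay_factors = [1.0, 1.27, 1.64, 2.2]
curves = np.vstack([
    np.minimum(tenths / f, 1.0) + rng.normal(0, 0.01, 10)
    for f in delay_factors for _ in range(3)
])

# Complete linkage: cluster distance = distance between most disparate members
Z = linkage(pdist(curves), method="complete")
labels = fcluster(Z, t=4, criterion="maxclust")
print(labels)  # each of the 12 projects assigned to one of 4 behavior clusters
```

New projects could then be assigned to the nearest cluster to anticipate their likely completion delay.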

Keywords: cluster analysis, construction management, earned value, schedule

Procedia PDF Downloads 249
656 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death worldwide; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain together with changes in the ST segment and T wave of the ECG occur shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. Using the pre-inflation and occlusion recordings, ECG features critical to the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST/T-derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by grid search and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters to obtain optimal classification performance.
As a result of applying the developed classification technique to real ECG recordings, it is shown that the proposed technique provides highly reliable detection of anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy state, the detection of acute myocardial ischemia based only on recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination thresholds and numbers of ECG segments, the probability of detection and the probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for GMM-based classification. Moreover, the comparison between the performances of SVM- and GMM-based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
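The GMM-based detection step can be sketched as follows: fit a mixture model to features from one state, then flag segments whose log-likelihood falls below a threshold chosen for a target false-alarm rate, in the Neyman-Pearson spirit. The feature values below are synthetic stand-ins, not the authors' ST/T features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, size=(500, 2))   # stand-in non-ischemic features
ischemic = rng.normal(4.0, 1.0, size=(50, 2))    # stand-in ischemic features

# Model the joint pdf of the baseline features with a two-component GMM
gmm = GaussianMixture(n_components=2, random_state=0).fit(baseline)

# Neyman-Pearson style threshold: allow ~5% false alarms on baseline data
alpha = 0.05
threshold = np.quantile(gmm.score_samples(baseline), alpha)

def is_outlier(x):
    """Flag segments whose log-likelihood under the baseline model is too low."""
    return gmm.score_samples(x) < threshold

print(is_outlier(ischemic).mean())   # detection rate on the shifted data
print(is_outlier(baseline).mean())   # false-alarm rate, ~alpha by construction
```

Sweeping `alpha` over a range of values traces out the ROC curve described in the abstract.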

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 150
655 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach

Authors: Nina Ponikvar, Katja Zajc Kejžar

Abstract:

While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied based on one of two mainstream approaches: the multiplier approach or the value chain approach. Consequently, the comparability of empirical results is limited due to the use of different methodological approaches in the literature, and it is often unclear on which criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by these two most frequent methodological approaches to valuing the economic effects of cultural heritage at the macroeconomic level. We show that the multiplier approach provides a systematic, theory-based view of economic impacts but requires more data and analysis, whereas the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of its contribution that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method over- or underestimates the actual economic impact of cultural heritage, since there is a risk that the direct effects are overestimated and double counted while not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complements.
Consequently, a direct comparison of the estimated impacts is not possible and should not be made, due to the different scope. To illustrate the difference in the impact assessment of cultural heritage, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added, and employment. The empirical results clearly show that the estimate of a sector's economic impact using the multiplier approach is more conservative, while the estimates based on the value chain approach capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value chain approach, the indirect economic effect of the "narrow" heritage sectors is amplified by the impact of cultural heritage activities on other sectors: every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors, and each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
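The multiplier logic above can be illustrated with a toy Leontief input-output model; the three-sector technical-coefficient matrix below is a made-up example, not Slovenian data. Output multipliers are the column sums of the Leontief inverse, and the indirect effect is the multiplier minus the initial euro of direct demand:

```python
import numpy as np

# Toy technical-coefficient matrix A: A[i, j] = euros of sector i's output
# needed per euro of sector j's output (illustrative values only)
A = np.array([
    [0.10, 0.20, 0.00],
    [0.20, 0.10, 0.10],
    [0.00, 0.10, 0.20],
])

# Leontief inverse: total (direct + indirect) output per euro of final demand
L = np.linalg.inv(np.eye(3) - A)
multipliers = L.sum(axis=0)      # type I output multipliers, one per sector
indirect = multipliers - 1.0     # effect beyond the initial euro of demand

print(multipliers.round(3))
```

In this framework, the abstract's figure of 0.14 euros of indirect effect per euro corresponds to an output multiplier of 1.14 for the cultural heritage sector.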

Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia

Procedia PDF Downloads 65
654 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring

Authors: Younghoon Kim, Seoung Bum Kim

Abstract:

One-class classification plays an important role in detecting outliers and abnormalities among normal observations. Previous research has made several attempts to extend one-class classification techniques to statistical process control problems. In most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of the limitation of one-class classification techniques based on convex optimization, the proportion of abnormal observations cannot be made exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, which convex optimization cannot satisfy. This limitation is undesirable from both theoretical and practical perspectives for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on mixed integer programming, which can solve problems formulated with both continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of enclosed normal observations equals a constant value specified by the user. By modifying this constant, users can exactly control the proportion of normal data described by the spherically shaped boundary; thus, the proportion of abnormal observations can be made theoretically equal to the expected Type I error rate in the Phase I process. Moreover, analogous to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart based on the proposed algorithm is also introduced.
This chart uses a monitoring statistic that characterizes the degree to which a point is abnormal, as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated data and real process data from a thin-film transistor liquid crystal display process.
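A minimal sketch of the core idea, under the simplifying assumption that the sphere's center is fixed at the data mean (the authors' formulation is more general): binary variables select exactly k of n points to enclose, and the squared radius is minimized as a mixed integer program with `scipy.optimize.milp`:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(7)
X = rng.normal(size=(20, 2))            # Phase I "normal" observations
k = 18                                  # enclose 18/20 -> 10% Type I error rate

center = X.mean(axis=0)                 # simplifying assumption: fixed center
d = ((X - center) ** 2).sum(axis=1)     # squared distances become constants
n = len(d)

# Variables: z_1..z_n (binary, 1 = point inside sphere), R2 (squared radius)
c = np.r_[np.zeros(n), 1.0]             # objective: minimize R2

# d_i * z_i - R2 <= 0: every selected point must fit inside the sphere
cover = LinearConstraint(np.hstack([np.diag(d), -np.ones((n, 1))]), -np.inf, 0.0)
# sum z_i == k: exactly k points are enclosed
count = LinearConstraint(np.r_[np.ones(n), 0.0][None, :], k, k)

res = milp(c, constraints=[cover, count],
           integrality=np.r_[np.ones(n), 0.0],
           bounds=Bounds(np.r_[np.zeros(n), 0.0], np.r_[np.ones(n), np.inf]))
print(res.x[-1])  # optimal R2: the k-th smallest squared distance
```

With the center fixed, the optimum reduces to the k-th smallest squared distance, which makes the exact Type I error control easy to verify; the kernelized, variable-center version in the abstract is substantially harder.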

Keywords: control chart, mixed integer programming, one-class classification, support vector data description

Procedia PDF Downloads 165
653 Efficacy and Safety of Updated Target Therapies for Treatment of Platinum-Resistant Recurrent Ovarian Cancer

Authors: John Hang Leung, Shyh-Yau Wang, Hei-Tung Yip, Fion, Ho Tsung-chin, Agnes LF Chan

Abstract:

Objectives: Platinum-resistant ovarian cancer has a short overall survival of 9-12 months and limited treatment options. The combination of immunotherapy and targeted therapy appears to be a promising treatment option for patients with ovarian cancer, particularly for patients with platinum-resistant recurrent ovarian cancer (PRrOC). However, there are no direct head-to-head clinical trials comparing their efficacy and toxicity. We therefore used a network meta-analysis to compare, directly and indirectly, seven newer immunotherapies or targeted therapies combined with chemotherapy in platinum-resistant relapsed ovarian cancer, including antibody-drug conjugates, PD-1 (programmed death-1) and PD-L1 (programmed death-ligand 1) inhibitors, PARP (poly ADP-ribose polymerase) inhibitors, TKIs (tyrosine kinase inhibitors), and antiangiogenic agents. Methods: We searched the PubMed, EMBASE (Excerpta Medica Database), and Cochrane Library electronic databases for phase II and III trials involving PRrOC patients treated with immunotherapy or targeted therapy plus chemotherapy. The quality of the included trials was assessed using the GRADE method. The primary outcome compared was progression-free survival; the secondary outcomes were overall survival and safety. Results: Seven randomized controlled trials involving a total of 2058 PRrOC patients were included in this analysis. Bevacizumab plus chemotherapy showed statistically significant differences in PFS (progression-free survival), but not OS (overall survival), compared with all targeted and immunotherapy regimens of interest; however, according to the heatmap analysis, bevacizumab plus chemotherapy carried a statistically significant risk of ≥grade 3 SAEs (severe adverse events), particularly hematological severe adverse events (neutropenia, anemia, leukopenia, and thrombocytopenia). Conclusions: Bevacizumab plus chemotherapy resulted in better PFS than all regimens of interest for the treatment of PRrOC.
However, bevacizumab plus chemotherapy was associated with a statistically greater risk of SAEs, particularly hematological SAEs.

Keywords: platinum-resistant recurrent ovarian cancer, network meta-analysis, immune checkpoint inhibitors, target therapy, antiangiogenic agents

Procedia PDF Downloads 65
652 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics

Authors: Janne Engblom, Elias Oikarinen

Abstract:

A panel dataset follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special class of panel data models is dynamic in nature. A complication of a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates, and several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond generalized method of moments (GMM) estimator, an extension of the Arellano-Bond model in which past values, and different transformations of past values, of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano-Bover/Blundell-Bond estimator augments Arellano-Bond by making the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations, the original equation and the transformed one, and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically using the Arellano-Bover/Blundell-Bond estimation technique together with ordinary OLS. The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models.
The Arellano-Bover/Blundell-Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data for 14 Finnish cities over 1988-2012, estimates of short-run housing price dynamics differed considerably depending on the model and instruments used. In particular, the choice of instrumental variables caused variation in the model estimates and in their statistical significance, which was especially clear when comparing OLS estimates with those of the different dynamic panel data models. Estimates provided by dynamic panel data models were more in line with the theory of housing price dynamics.
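The endogeneity problem that motivates the system GMM estimator can be demonstrated with a small simulation (synthetic data, not the Finnish city panel): pooled OLS on the lagged dependent variable is biased upward because the fixed effects are correlated with the lag, while the within (fixed-effects) estimator suffers the downward Nickell bias of roughly -(1+rho)/(T-1):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, rho = 200, 26, 0.5                 # synthetic panel; true AR coefficient 0.5
mu = rng.normal(0.0, 1.0, N)             # individual fixed effects

# Simulate y_it = rho * y_{i,t-1} + mu_i + eps_it, started near stationarity
y = np.empty((N, T))
y[:, 0] = mu / (1 - rho) + rng.normal(0.0, 1.0, N) / np.sqrt(1 - rho ** 2)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + mu + rng.normal(0.0, 1.0, N)

ylag, ycur = y[:, :-1].ravel(), y[:, 1:].ravel()

# Pooled OLS ignores mu_i, which is correlated with the lag: upward bias
xl = ylag - ylag.mean()
rho_pooled = (xl @ (ycur - ycur.mean())) / (xl @ xl)

# Within estimator demeans per individual: downward Nickell bias
yl_d = (y[:, :-1] - y[:, :-1].mean(axis=1, keepdims=True)).ravel()
yc_d = (y[:, 1:] - y[:, 1:].mean(axis=1, keepdims=True)).ravel()
rho_within = (yl_d @ yc_d) / (yl_d @ yl_d)

print(rho_pooled, rho_within)            # biased up vs. biased down from 0.5
```

System GMM instruments the lag with past levels and differences precisely to escape this bias sandwich.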

Keywords: dynamic model, fixed effects, panel data, price dynamics

Procedia PDF Downloads 1463
651 IFN-γ and IL-2 Assess the Therapeutic Response in Anti-Tuberculosis Patients at Jamot Hospital Yaounde, Cameroon

Authors: Alexandra Emmanuelle Membangbi, Jacky Njiki Bikoï, Esther Del-florence Moni Ndedi, Marie Joseph Nkodo Mindimi, Donatien Serge Mbaga, Elsa Nguiffo Makue, André Chris Mikangue Mbongue, Martha Mesembe, George Ikomey Mondinde, Eric Walter Perfura-yone, Sara Honorine Riwom Essama

Abstract:

Background: Tuberculosis (TB) is one of the top lethal infectious diseases worldwide. In recent years, interferon-γ (IFN-γ) release assays (IGRAs) have been established as routine tests for diagnosing TB infection. However, assessment of the IFN-γ produced fails to distinguish active TB (ATB) from latent TB infection (LTBI), especially in TB-epidemic areas. In addition to IFN-γ, interleukin-2 (IL-2), another cytokine secreted by activated T cells, is also involved in the immune response against Mycobacterium tuberculosis. The aim of the study was to assess the capacity of IFN-γ and IL-2 to evaluate the therapeutic response of patients on anti-tuberculosis treatment. Material and Methods: We conducted a cross-sectional study in the Pneumology Departments of the Jamot Hospital in Yaoundé between May and August 2021. After informed consent was signed, sociodemographic data, as well as 5 mL of blood drawn at the crook of the elbow, were collected from each participant. Sixty-one subjects were selected (n = 61) and divided into 4 groups as follows: group 1, resistant tuberculosis (n = 13); group 2, active tuberculosis (n = 19); group 3, cured tuberculosis (n = 16); and group 4, presumed healthy persons (n = 13). The cytokines of interest were determined using an indirect enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's recommendations. P-values < 0.05 were interpreted as statistically significant. All statistical calculations were performed using SPSS version 22.0. Results: The results showed that men were more often infected (14/61; 31.8%), with a high presence in the active and resistant TB groups. The mean age was 41.3 ± 13.1 years (95% CI = [38.2-44.7]), and the age group with the highest infection rate was 31 to 40 years.
The mean IL-2 and IFN-γ levels were, respectively, 327.6 ± 160.6 pg/mL and 26.6 ± 13.0 pg/mL in active tuberculosis patients, 251.1 ± 30.9 pg/mL and 21.4 ± 9.2 pg/mL in patients with resistant tuberculosis, 149.3 ± 93.3 pg/mL and 17.9 ± 9.4 pg/mL in cured patients, and 15.1 ± 8.4 pg/mL and 5.3 ± 2.6 pg/mL in participants presumed healthy (p < 0.0001). Significant differences in IFN-γ and IL-2 levels were observed between the different groups. Conclusion: Monitoring the serum levels of IFN-γ and IL-2 would be useful for evaluating the therapeutic response of anti-tuberculosis patients, particularly when the two cytokines are assessed in combination, which could improve the accuracy of routine examinations.
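The group comparison the authors report can be sketched with a one-way ANOVA. The data below are synthetic, generated only to match the group sizes and the approximate IL-2 means and SDs quoted in the abstract; the study itself used SPSS on its own raw values.

```python
import numpy as np
from scipy import stats

# Synthetic IL-2 values (pg/mL) mimicking the four study groups.
rng = np.random.default_rng(1)
groups = {
    "resistant": rng.normal(251.1, 30.9, 13),
    "active":    rng.normal(327.6, 160.6, 19),
    "cured":     rng.normal(149.3, 93.3, 16),
    "healthy":   rng.normal(15.1, 8.4, 13),
}
# One-way ANOVA across the four groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")  # p < 0.05 -> group means differ
```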

Keywords: antibiotic therapy, interferon gamma, interleukin 2, tuberculosis

Procedia PDF Downloads 98
650 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails

Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali

Abstract:

When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. These can only be measured offline due to the large number of features and the accuracy requirements. The risk of producing components outside the tolerances is minimized but not eliminated by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and are at the expense of productivity but remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems in combination with machine learning and artificial intelligence, in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products - actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper, therefore, uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise, it would not be possible to forecast components in real-time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). The run of such measuring programs alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative. 
Over a period of 2 months, all measurement data (> 200 measurements per variant) were collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured across the car seat rail variants was reduced by over 80%. Specifically, direct correlations were demonstrated for almost 100 of an average of 125 characteristics across the four products. A further 10 features correlate via indirect relationships, so the number of features required for a prediction could be reduced to fewer than 20. A correlation factor > 0.8 was required for all correlations.
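The reduction step can be sketched as follows (synthetic data, not the CMM measurements): compute the pairwise Pearson correlation matrix and keep only one representative of each group of features whose absolute correlation exceeds the 0.8 threshold named above.

```python
import numpy as np
import pandas as pd

# Synthetic example: 15 measured features driven by only 5 independent
# underlying quantities, plus small measurement noise.
rng = np.random.default_rng(2)
base = rng.normal(size=(200, 5))                      # independent "drivers"
noise = 0.1 * rng.normal(size=(200, 15))
data = pd.DataFrame(base[:, rng.integers(0, 5, 15)] + noise,
                    columns=[f"feat_{i}" for i in range(15)])

# Greedy filter: keep a feature only if it is not strongly correlated
# (|r| > 0.8) with any feature already kept.
corr = data.corr().abs()
kept = []
for col in corr.columns:
    if all(corr.loc[col, k] <= 0.8 for k in kept):
        kept.append(col)
print(len(kept), "of", len(corr.columns), "features kept")
```

With redundant features collapsed this way, only the kept representatives need to be measured inline, mirroring the >80% reduction reported.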

Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regressions analysis

Procedia PDF Downloads 22
649 Epoxomicin Affects Proliferating Neural Progenitor Cells of Rat

Authors: Bahaa Eldin A. Fouda, Khaled N. Yossef, Mohamed Elhosseny, Ahmed Lotfy, Mohamed Salama, Mohamed Sobh

Abstract:

Developmental neurotoxicity (DNT) entails the toxic effects imparted by various chemicals on the brain during early childhood. As human brains are vulnerable during this period, various chemicals can exert their maximum effects on the brain in early childhood. Some toxicants, e.g. lead, have been confirmed to induce developmental toxic effects on the CNS; however, most agents cannot be identified with certainty due to the limitations of the predictive toxicology models used. A novel alternative method that can overcome most of the limitations of conventional techniques is the 3D neurosphere system. This in vitro system can recapitulate most of the changes occurring during brain development, making it an ideal model for predicting neurotoxic effects. In the present study, we verified the possible DNT of epoxomicin, a naturally occurring selective proteasome inhibitor with anti-inflammatory activity. Neural progenitor cells were isolated from rat embryos (E14) extracted from placental tissue. The cortices were aseptically dissected out from the brains of the fetuses, and the tissues were triturated by repeated passage through a fire-polished constricted Pasteur pipette. The dispersed tissues were allowed to settle for 3 min. The supernatant was then transferred to a fresh tube and centrifuged at 1,000 g for 5 min. The pellet was placed in Hank's balanced salt solution and cultured as free-floating neurospheres in proliferation medium. Two doses of epoxomicin (1 µM and 10 µM) were applied to the cultured neurospheres for a period of 14 days. For proliferation analysis, spheres were cultured in proliferation medium; after 0, 4, 5, 11, and 14 days, sphere size was determined by software analysis. The diameter of each neurosphere was measured and exported to an Excel file for further statistical analysis.
For viability analysis, trypsin-EDTA solution was added to the neurospheres for 3 min to dissociate them into a single-cell suspension, and viability was then evaluated by the Trypan Blue exclusion test. Epoxomicin was found to affect the proliferation and viability of the neurospheres; these effects were positively correlated with dose and with time. This study confirms the DNT effects of epoxomicin in the 3D neurosphere model. The effects on proliferation suggest possible gross morphologic changes, while the decrease in viability suggests possible focal lesions on exposure to epoxomicin during early childhood.
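The diameter measurements exported for statistical analysis can be summarised per dose and time point with a simple grouped aggregation. The table below is hypothetical, illustrative data only (the study's actual measurements were exported to Excel), and the column names are assumptions.

```python
import pandas as pd

# Hypothetical neurosphere diameter measurements (µm) per dose and day.
df = pd.DataFrame({
    "dose_uM": [0, 0, 1, 1, 10, 10] * 2,
    "day":     [4] * 6 + [14] * 6,
    "diameter_um": [120, 130, 100, 105, 80, 85,
                    260, 270, 180, 190, 110, 115],
})
# Mean and SD of sphere diameter for each dose/day combination.
summary = df.groupby(["dose_uM", "day"])["diameter_um"].agg(["mean", "std"])
print(summary)
```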

Keywords: neural progenitor cells, epoxomicin, neurosphere, medical and health sciences

Procedia PDF Downloads 411
648 Obstacles and Ways-Forward to Upgrading Nigeria Basic Nursing Schools: A Survey of Perception of Teaching Hospitals’ Nurse Trainers and Stakeholders

Authors: Chijioke Oliver Nwodoh, Jonah Ikechukwu Eze, Loretta Chika Ukwuaba, Ifeoma Ndubuisi, Ada Carol Nwaneri, Ijeoma Lewechi Okoronkwo

Abstract:

The presence of a nursing workforce with unequal qualifications and status in Nigeria has undermined the growth of the nursing profession in the country. Upgrading the existing basic and post-basic nursing schools to degree-awarding institutions in Nigeria is a way forward to solving this inequality problem, and Nigerian teaching hospitals are in a vantage position for this project due to the supportive structure and manpower already existing in those hospitals. What the nurse trainers and stakeholders of the teaching hospitals may hold for or against the upgrading is a determining factor for the project, but this is not clear and has not been investigated in Nigeria. The study investigated the perception of nurse trainers and stakeholders of teaching hospitals in Enugu State of Nigeria on the obstacles and ways forward to upgrading nursing schools to degree-awarding institutions in Nigeria. The study specifically elicited what the subjects view as obstacles to upgrading basic and post-basic nursing schools to degree-awarding institutions and ascertained their suggestions on possible ways of overcoming those obstacles. Utilizing a cross-sectional descriptive design and a purposive sampling procedure, 78 accessible subjects out of a total population of 87 were used for the study. The data generated from the subjects were analyzed using frequencies, percentages, and means for the research questions, and Pearson's chi-square for the hypotheses, with the aid of the Statistical Package for the Social Sciences, version 20.0. The results showed that lack of an extant policy, lack of funds, and disunity among policy makers and stakeholders of the nursing profession are the main obstacles to the upgrading. However, the respondents did not regard items such as the stakeholders and nurse trainers of basic and post-basic schools of nursing, or the fear of admitting and producing poor-quality nurses, as obstacles to the upgrading project.
Institution of the upgrading policy by the Nursing and Midwifery Council of Nigeria, funding, awareness creation for the upgrading, and unity among policy makers and stakeholders of the nursing profession are the major possible ways to overcome the obstacles. The difference in the subjects' perceptions between the two hospitals was statistically insignificant (p > 0.05). It is recommended that the policy makers and stakeholders of nursing in Nigeria unite and liaise with the Federal Ministries of Health and Education on modalities for actualizing the upgrading of nursing schools to degree-awarding institutions in Nigeria.
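The hospital-comparison test reported above is a standard Pearson chi-square on a contingency table. The counts below are illustrative only, not the survey's raw data; they are chosen so the two hospitals' response proportions are similar, reproducing the kind of non-significant result the study found.

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 table: rows = hospital A / hospital B,
# columns = respondents who agree / disagree that an item is an obstacle.
table = [[30, 10],
         [28, 10]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")  # p > 0.05 -> not significant
```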

Keywords: nurse trainers, obstacles, perception, stakeholders, teaching hospital, upgrading basic nursing schools, ways-forward

Procedia PDF Downloads 134
647 Agreement between Basal Metabolic Rate Measured by Bioelectrical Impedance Analysis and Estimated by Prediction Equations in Obese Groups

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Basal metabolic rate (BMR) is a widely used and accepted measure of energy expenditure. Its principal determinant is body mass; however, it is also correlated with a variety of other factors. The objective of this study was to measure BMR and compare it with the values obtained from predictive equations in adults classified according to their body mass index (BMI). 276 adults were included in the scope of this study, and their age, height, and weight were recorded. Five groups were formed based on BMI. Group 1 (n = 85) was composed of individuals with BMI values between 18.5 and 24.9 kg/m2. Those with BMI values from 25.0 to 29.9 kg/m2 constituted Group 2 (n = 90). Individuals with 30.0-34.9 kg/m2, 35.0-39.9 kg/m2, and > 40.0 kg/m2 were included in Groups 3 (n = 53), 4 (n = 28), and 5 (n = 20), respectively. The most commonly used equations were selected for comparison with the measured BMR values; the values were calculated using four prediction equations, namely those introduced by the Food and Agriculture Organization (FAO)/World Health Organization (WHO)/United Nations University (UNU), Harris and Benedict, Owen, and Mifflin. Descriptive statistics, ANOVA, post-hoc Tukey, and Pearson's correlation tests were performed with a statistical program designed for Windows (SPSS, version 16.0); p values smaller than 0.05 were accepted as statistically significant. The means ± SD of Groups 1, 2, 3, 4, and 5 for measured BMR, in kcal, were 1440.3 ± 210.0, 1618.8 ± 268.6, 1741.1 ± 345.2, 1853.1 ± 351.2, and 2028.0 ± 412.1, respectively. Upon comparison of the group means, differences were highly significant between Group 1 and each of the remaining four groups, and the values increased from Group 2 to Group 5. However, the differences between Groups 2 and 3, Groups 3 and 4, and Groups 4 and 5 were not statistically significant.
These insignificances were lost in the predictive equations proposed by Harris and Benedict, FAO/WHO/UNU, and Owen; for Mifflin, the insignificance was limited to Groups 4 and 5. Upon evaluation of the correlations between measured BMR and the values estimated from the prediction equations, the lowest correlations were observed among individuals within the normal BMI range, and the highest among individuals with BMI values between 30.0 and 34.9 kg/m2. The correlations between measured BMR and the BMR values calculated by FAO/WHO/UNU and by Owen were the same and the highest. In all groups, the highest correlations were observed between the BMR values calculated from the Mifflin and the Harris and Benedict equations, both of which use age as an additional parameter. In conclusion, the close resemblance of the FAO/WHO/UNU and Owen equations was notable; however, the mean values obtained from FAO/WHO/UNU were much closer to the measured BMR values. Besides, the highest correlations were found between BMR calculated from FAO/WHO/UNU and measured BMR. These findings suggest that FAO/WHO/UNU is the most reliable equation and may be used when measured BMR values are not available.
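For reference, the four prediction equations compared above can be written out directly. These are commonly cited forms for adult men (weight w in kg, height h in cm, age a in years); published coefficient sets vary slightly, and the abstract does not state which revisions the authors used, so treat these as representative rather than the study's exact formulas.

```python
def harris_benedict(w, h, a):
    # Revised Harris-Benedict equation for men (kcal/day).
    return 66.5 + 13.75 * w + 5.003 * h - 6.775 * a

def mifflin(w, h, a):
    # Mifflin-St Jeor equation for men (kcal/day).
    return 10 * w + 6.25 * h - 5 * a + 5

def owen(w):
    # Owen equation for men (kcal/day); weight only.
    return 879 + 10.2 * w

def fao_who_unu(w):
    # FAO/WHO/UNU (Schofield) coefficients for men aged 30-60 (kcal/day).
    return 11.6 * w + 879

w, h, a = 80, 175, 40
for name, bmr in [("Harris-Benedict", harris_benedict(w, h, a)),
                  ("Mifflin", mifflin(w, h, a)),
                  ("Owen", owen(w)),
                  ("FAO/WHO/UNU", fao_who_unu(w))]:
    print(f"{name}: {bmr:.0f} kcal/day")
```

Note that only Harris-Benedict and Mifflin take age as a parameter, which is consistent with the abstract's observation that those two equations correlate most strongly with each other.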

Keywords: adult, basal metabolic rate, fao/who/unu, obesity, prediction equations

Procedia PDF Downloads 121
646 Antibacterial Effect of Silver Diamine Fluoride Incorporated in Fissure Sealants

Authors: Nélio Veiga, Paula Ferreira, Tiago Correia, Maria J. Correia, Carlos Pereira, Odete Amaral, Ilídio J. Correia

Abstract:

Introduction: The application of fissure sealants is considered an important primary prevention method in dental medicine. However, the formation of microleakage gaps between the tooth enamel and the applied fissure sealant is one of the most common reasons for the development of dental caries in teeth with fissure sealants. Associating various dental biomaterials may limit their major disadvantages and limitations, with the materials functioning in a complementary manner. The present study consists of the incorporation of a cariostatic agent, silver diamine fluoride (SDF), into a resin-based fissure sealant, followed by a spectrophotometric study of the release kinetics of the combined biomaterials and an assessment of the inhibitory effect on the growth of the reference bacterial strain Streptococcus mutans (S. mutans) in an in vitro study. Materials and Methods: An experimental in vitro study was designed, consisting of the entrapment of SDF (Cariestop® 12% and 30%) into a commercially available fissure sealant (Fissurit®) by photopolymerization and photocrosslinking. The same sealant without SDF was used as a negative control. The effect of the sealants on the growth of S. mutans was determined by the presence of bacterial inhibitory halos in the cultures at the end of the incubation period. To confirm the absence of bacteria on the surface of the materials, scanning electron microscopy (SEM) characterization was performed, and spectrophotometry was applied to analyze the release profile of SDF over time. Results: The obtained results indicate that the association of SDF with a resin-based fissure sealant may increase the inhibition of S. mutans growth. However, no SDF release was noticed during the in vitro release studies, and no statistically significant difference was verified when comparing the inhibitory halo sizes obtained for the test and control groups.
Conclusions: In this study, the entrapment of SDF in the resin-based fissure sealant did not potentiate the antibacterial effect of the fissure sealant or prevent the immediate development of dental caries. Further laboratory research and, afterwards, long-term clinical data are necessary to verify whether the association between these biomaterials is effective and can be considered for use in oral health management. Other methodologies for associating cariostatic agents with sealants should also be addressed.

Keywords: biomaterial, fissure sealant, primary prevention, silver diamine fluoride

Procedia PDF Downloads 249
645 Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level

Authors: James Semmens, James Boyd, Anna Ferrante, Katrina Spilsbury, Sean Randall, Adrian Brown

Abstract:

‘Big data’ is a relatively new concept that describes data so large and complex that they exceed the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources, such as administrative records, electronic health records, health insurance claims, and even smartphone health applications. Health data are viewed in Australia and internationally as highly sensitive, and strict ethical requirements must be met for their use in health research. These requirements differ markedly from those imposed on data use from industry or other government sectors and may reduce the capacity for health data to be incorporated into the real-time demands of the big data environment. This ‘big data revolution’ is increasingly supported by national governments, which have invested significant funds into initiatives designed to develop and capitalize on big data and on methods for data integration using record linkage. The benefits to health of research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million-dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion this initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote-access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data.
The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides the record linkage infrastructure necessary for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best-practice ‘separation principle’ to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, to overcome important legal and privacy constraints. This paper will present the findings of the first ‘Proof of Concept’ project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. This project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research project. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for the planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations.
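The idea of linking records without exchanging names can be illustrated with a toy keyed-hash scheme. This is an illustrative sketch only, not the Centre for Data Linkage's actual protocol (production privacy-preserving linkage typically uses richer encodings such as Bloom filters to tolerate spelling variation); the shared secret and record identifiers below are hypothetical.

```python
import hashlib
import hmac

SECRET = b"shared-linkage-key"   # hypothetical secret shared by the parties

def link_key(surname, given, dob):
    # Normalise identifiers, then compute a keyed hash so the linkage
    # centre can match records without ever seeing the names themselves.
    token = f"{surname.strip().upper()}|{given.strip().upper()}|{dob}"
    return hmac.new(SECRET, token.encode(), hashlib.sha256).hexdigest()

# Each jurisdiction submits only hashed keys with opaque record IDs.
hospital_a = {link_key("Smith", "Jane", "1980-02-01"): "A-001"}
hospital_b = {link_key("smith", "Jane ", "1980-02-01"): "B-117"}

matches = set(hospital_a) & set(hospital_b)
print(len(matches), "cross-border match(es)")
```

Because normalisation happens before hashing, trivial formatting differences ("smith" vs "Smith") still link, while the linkage centre only ever handles opaque digests.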

Keywords: data integration, data linkage, health planning, health services research

Procedia PDF Downloads 210
644 Chemical Composition of Volatiles Emitted from Ziziphus jujuba Miller Collected during Different Growth Stages

Authors: Rose Vanessa Bandeira Reidel, Bernardo Melai, Pier Luigi Cioni, Luisa Pistelli

Abstract:

Ziziphus jujuba Miller is a common species of the Ziziphus genus (Rhamnaceae family), native to the tropics and subtropics and known for its edible fruits, consumed fresh or used in healthy food and as a flavoring and sweetener. Many phytochemicals and biological activities have been described for this species. In this work, the aroma profiles emitted in vivo by whole fresh organs (leaf, flower bud, flower, green and red fruit) were analyzed separately by means of solid-phase micro-extraction (SPME) coupled with gas chromatography-mass spectrometry (GC-MS). The volatiles emitted from the different plant parts were sampled using a Supelco SPME device coated with polydimethylsiloxane (PDMS, 100 µm). Fresh plant material was introduced separately into a glass conical flask and allowed to equilibrate for 20 min. After the equilibration time, the fibre was exposed to the headspace for 15 min at room temperature, then re-inserted into the needle and transferred to the injector of the GC and GC-MS system, where it was desorbed. All the data were submitted to multivariate statistical analysis, which evidenced many differences among the selected plant parts and their developmental stages. A total of 144 compounds were identified, corresponding to 94.6-99.4% of the whole aroma profile of the jujube samples. Sesquiterpene hydrocarbons were the main chemical class of compounds in leaves and were present in similar percentages in flowers and flower buds, with (E,E)-α-farnesene the main constituent in all the cited plant parts. This behavior may be due to a protection mechanism against pathogens and herbivores as well as resistance to abiotic factors. The aroma of green fruits was characterized by a high amount of perillene, while the red fruits released a volatile blend mainly constituted of different monoterpenes. The terpenoid emission of fleshy fruits has an important function in the interaction with animals, including the attraction of seed dispersers, and is related to good fruit quality.
This study provides for the first time the chemical composition of the volatile emissions from different Ziziphus jujuba organs. The SPME analyses of the collected samples showed different emission patterns and can contribute to understanding the plant's ecological interactions and fruit production management.
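The multivariate analysis mentioned above typically takes the form of an ordination such as PCA on the compound-percentage matrix. The sketch below uses synthetic percentages (not the paper's data) and a plain SVD-based PCA to show how samples from two organs separate along the first principal component.

```python
import numpy as np

# Synthetic volatile profiles: rows = samples, columns = compound
# percentages (e.g. sesquiterpenes, perillene, monoterpenes).
rng = np.random.default_rng(3)
leaves = rng.normal([60, 5, 5], 2, size=(5, 3))    # sesquiterpene-rich
fruits = rng.normal([5, 50, 10], 2, size=(5, 3))   # perillene-rich
X = np.vstack([leaves, fruits])

Xc = X - X.mean(axis=0)                  # centre the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                         # scores on the first component
# The two organ groups fall on opposite sides of PC1.
print(pc1[:5].mean() * pc1[5:].mean() < 0)
```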

Keywords: Rhamnaceae, aroma profile, jujube organs, HS-SPME, GC-MS

Procedia PDF Downloads 238
643 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities

Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun

Abstract:

The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential Electric Vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and enhancing EV load profiling and event detection in smart cities. The approach uses IoT devices equipped with infrared cameras to collect thermal images, together with household EV charging profiles from the database of a Thailand utility, and transmits this data to a cloud database for comprehensive analysis. The methodology includes advanced deep learning techniques, namely Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) algorithms, together with feature-based Gaussian mixture models for EV load profiling and event detection. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues and categorizes the severity of detected problems based on a health index for each charging device; it also outperforms existing models in event detection accuracy. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities. The system ensures operational safety and efficiency while also promoting sustainable energy management.
The combination of EV load profiling and event detection through a feature-based Gaussian mixture model aids in identifying unique power consumption patterns among EV owners and outperforms existing models in event detection accuracy. In summary, the research concludes that integrating IoT and deep learning techniques can effectively detect anomalies in residential EV charging and enhance EV load profiling and event detection accuracy. The developed system ensures operational safety and efficiency, contributing to sustainable energy management in smart cities.
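The paper's RNN/LSTM pipeline is beyond a short sketch, but the alarm-raising idea can be illustrated with a much simpler statistical baseline (my own illustration, not the authors' model): flag any interval of a synthetic household charging profile whose reading deviates more than three standard deviations from the profile's typical level.

```python
import numpy as np

# Synthetic charging-power profile: 96 fifteen-minute intervals over a day,
# with one injected fault (e.g. an abnormal overheating draw).
rng = np.random.default_rng(4)
profile = rng.normal(7.0, 0.3, 96)      # kW
profile[40] = 11.5                      # injected anomaly

# Flag intervals more than 3 sigma from the profile's mean.
mu, sigma = profile.mean(), profile.std()
z = np.abs(profile - mu) / sigma
anomalies = np.flatnonzero(z > 3)
print("anomalous intervals:", anomalies)
```

A learned model such as an LSTM replaces the fixed mean with a prediction of what the next reading should be, flagging intervals where the reconstruction error is large; the severity of each flag could then feed a per-device health index as the abstract describes.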

Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids

Procedia PDF Downloads 49