Search results for: control performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21631


1291 The Decision-Making Mechanisms of Tax Regulations

Authors: Nino Pailodze, Malkhaz Sulashvili, Vladimer Kekenadze, Tea Khutsishvili, Irma Makharashvili, Aleksandre Kekenadze

Abstract:

Among the important problems Georgia must solve in the near future, the most important is economic stability, which rests on fiscal policy and the proper definition of its directions. The main source of budget revenue is national income. The State uses taxes, loans and emission to mobilise national income, with taxes as the principal instrument. Besides their fiscal function of filling the budget, tax systems also serve economic and social development and regulate foreign economic relations. A tax is a mandatory, unconditional monetary payment to the budget made by a taxpayer in accordance with the Tax Code, based on the necessary, nonequivalent and gratuitous character of the payment. Taxes are national and local. National taxes are those provided for under the Code, the payment of which is mandatory across the whole territory of Georgia. Local taxes are those provided for under the Code and introduced by normative acts of local self-government representative authorities (within marginal rates), the payment of which is mandatory within the territory of the relevant self-governing unit. National taxes play the leading role in tax systems, but local taxes are also important: a large part of local budgets is formed precisely from local taxes. The national taxes are income tax, profit tax, value added tax (VAT), excise tax and import duty; property tax is a local tax and one of the significant taxes in Georgia. The paper deals with the taxation mechanism that has operated in Georgia, which has a great influence on financial accounting. Comparing foreign legislation with Georgian legislation, we discuss the opportunity of drawing on foreign experience, and we offer recommendations for improving the treatment of the tax system in financial accounting. 
In addition to accounting, which is regulated according to International Accounting Standards, there is tax accounting, which is regulated by the Tax Code and by various legal orders and regulations of the Minister of Finance. Compliance with these rules is controlled by the tax authority, the Revenue Service. The tax burden and tax rates have been directly related to the expenditures of the state from its first day of existence: fiscal policy comprises both state expenditure and taxation decisions. To achieve the most effective mobilisation of funds, the government's primary task is to decide on the rules of taxation. The functions of a tax reveal its substance: taxes have a distribution (fiscal) function and control and regulatory functions. Foreign tax systems evolved under the influence of different economic, political and social conditions, and they differ greatly from one another in their taxes and structure, methods of levying, rates, levels of fiscal authority, tax bases, spheres of action and tax breaks.

Keywords: international accounting standards, financial accounting, tax systems, financial obligations

Procedia PDF Downloads 242
1290 The Effects of Periostin in a Rat Model of Isoproterenol-Mediated Cardiotoxicity

Authors: Mahmut Sozmen, Alparslan Kadir Devrim, Yonca Betil Kabak, Tuba Devrim

Abstract:

Acute myocardial infarction is the leading cause of death worldwide. Mature cardiomyocytes do not have the ability to regenerate; instead, fibrous and granulation tissue proliferates to fill the damaged area. Periostin is an extracellular matrix protein of the fasciclin family that plays an important role in cell adhesion, migration, and the growth of the organism. Periostin stimulates cardiomyocytes while preventing apoptosis. The main objective of this project is to investigate the effects of recombinant murine periostin peptide administration on cardiomyocyte regeneration in a rat model of acute myocardial infarction. The experiment was performed on 84 male rats (6 months old) in 4 groups of 21 rats each. Saline was applied subcutaneously (1 ml/kg) twice at 24-hour intervals to the rats in the control group (Group 1). Recombinant periostin peptide (1 μg/kg) dissolved in saline was applied intraperitoneally in Group 2 on days 1, 3, 7, 14 and 21 (the same schedule as Group 4). Isoproterenol dissolved in saline was applied intraperitoneally (85 mg/kg/day) twice at 24-hour intervals to Groups 3 and 4. Rats in Group 4 further received recombinant periostin peptide (1 μg/kg) dissolved in saline intraperitoneally, starting one day after the final isoproterenol administration, on days 1, 3, 7, 14 and 21. Following the final application of periostin, the rats continued to receive pelleted chow and water ad libitum for a further seven days. At the end of the 7th day, the rats were sacrificed and blood and heart tissue samples were collected for immunohistochemical and biochemical analyses. Angiogenesis in response to tissue damage is a highly dynamic process regulated by signals from the surrounding extracellular matrix and blood serum. In this project, VEGF, ANGPT, bFGF and TGFβ, key factors contributing to cardiomyocyte regeneration, were investigated. 
Additionally, the relationship between mitosis and apoptosis (Bcl-2, Bax, PCNA, Ki-67, Phospho-Histone H3), cell cycle activators and inhibitors (Cyclin D1, D2, A2, Cdc2), and the origin of the regenerating cells (cKit and CD45) were examined. The present results revealed that periostin stimulated cardiomyocyte cell-cycle re-entry in both normal and MCA damaged cardiomyocytes and increased angiogenesis. Thus, periostin contributes to cardiomyocyte regeneration during the healing period following myocardial infarction; a better understanding of its role in this mechanism may improve recovery rates and is expected to address the lack of literature on this subject. Acknowledgement: This project was financially supported by the Turkish Scientific Research Council - Agriculture, Forestry and Veterinary Research Support Group (TÜBİTAK-TOVAG; Project No: 114O734), Ankara, TURKEY.

Keywords: cardiotoxicity, immunohistochemistry, isoproterenol, periostin

Procedia PDF Downloads 233
1289 A Novel Approach to 3D Thrust Vectoring CFD via Mesh Morphing

Authors: Umut Yıldız, Berkin Kurtuluş, Yunus Emre Muslubaş

Abstract:

Thrust vectoring, especially in military aviation, is a concept widely used to improve maneuverability in already agile aircraft. As the concept is fairly new and cost-intensive to design and test, computational methods are useful in easing the preliminary design process. Computational Fluid Dynamics (CFD) can be utilized in many forms to simulate nozzle flow, and various CFD studies exist on both 2D mechanical and 3D injection-based thrust vectoring; 3D mechanical thrust vectoring analyses, however, are still lacking in variety, and the freely available test data are constrained to limited pitch angles and geometries. In this study, based on a test case provided by NASA, both steady and unsteady 3D CFD simulations are conducted to examine the aerodynamic performance of a mechanical thrust vectoring nozzle model and to validate the numerical model. Steady analyses are performed to verify the flow characteristics of the nozzle at pitch angles of 0, 10 and 20 degrees, and the results are compared with experimental data. The pressure data obtained on the inner surface of the nozzle at each specified pitch angle, under flow conditions with pressure ratios of 1.5, 2 and 4 and at azimuthal angles of 0, 45, 90, 135 and 180 degrees, exhibit a high level of agreement with the corresponding experimental results. Building on the insights from the validated steady analyses, unsteady analyses covering a wide range of pitch angles from 0 to 20 degrees are then performed. Throughout these simulations, a mesh morphing method based on a carefully calculated mathematical shape deformation model, which reproduces the vectored nozzle shape exactly at each point of its travel, is employed to dynamically deform the divergent part of the nozzle over time within this pitch angle range. 
The mesh morphing based vectored nozzle shapes were compared with the drawings provided by NASA, ensuring a complete match was achieved. This computational approach allowed for the creation of a comprehensive database of results without the need to generate separate solution domains. The database contains results at every 0.01° increment of nozzle pitch angle. The unsteady analyses, generated using the morphing method, are found to be in excellent agreement with experimental data, further confirming the accuracy of the CFD model.
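The shape-deformation step can be illustrated with a minimal sketch, assuming a hypothetical hinge location and a simple rigid rotation of the divergent-section nodes (the actual NASA geometry and deformation model are more elaborate): the nodes are rotated about a pitch-plane hinge by the nozzle pitch angle, swept in 0.01° increments as in the study.

```python
import numpy as np

def morph_divergent_section(nodes, hinge, pitch_deg):
    """Rotate divergent-section mesh nodes about a hinge point by the
    nozzle pitch angle (rotation about the y-axis; pitch plane = x-z).

    nodes: (N, 3) array of node coordinates; hinge: (3,) pivot point.
    A production morpher would also blend nodes upstream of the hinge
    smoothly (not shown here).
    """
    theta = np.radians(pitch_deg)
    R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0,           1.0, 0.0          ],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
    return (nodes - hinge) @ R.T + hinge

# Sweep the pitch angle in 0.01-degree increments, as in the database.
nodes = np.array([[1.0, 0.0, 0.0], [2.0, 0.5, 0.0]])
hinge = np.array([0.0, 0.0, 0.0])
for pitch in np.arange(0.0, 20.0 + 1e-9, 0.01):
    morphed = morph_divergent_section(nodes, hinge, pitch)
```

Because the deformation is applied to the same solution domain at every step, no remeshing is needed between pitch angles, which is the property that makes the comprehensive results database possible.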

Keywords: thrust vectoring, computational fluid dynamics, 3D mesh morphing, mathematical shape deformation model

Procedia PDF Downloads 81
1288 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method

Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat

Abstract:

Electrical Discharge Machining (EDM) is an unconventional manufacturing process that removes material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the workpiece immersed in dielectric fluid. In this paper, a study is performed on the influence of peak current, pulse-on time, interval time and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness, and the parameters were optimized for maximum MRR at the desired surface roughness. Response Surface Methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions; however, RSM is not free from problems when applied to multi-factor, multi-response situations. A design of experiments (DOE) technique is therefore used to select the optimum machining conditions for machining AISI 4140 by EDM. The purpose of this paper is to determine the optimal factors of the electro-discharge machining (EDM) process and to investigate the feasibility of design of experiment techniques. The workpieces used were rectangular plates of AISI 4140 grade alloy steel. The study of optimized settings of the key machining factors, pulse-on time, gap voltage, flushing pressure, input current and duty cycle, on material removal and surface roughness was carried out using a central composite design (CCD), with the objective of maximizing the material removal rate (MRR). The CCD data are used to develop second-order polynomial models with interaction terms, from which insignificant coefficients are eliminated using Student's t-test and the F-test for goodness of fit. CCD is first used to determine the optimal factors of the EDM process for maximizing MRR. 
The responses are then treated through an objective function to establish the set of key machining factors satisfying the optimization problem of the EDM process. The results demonstrate the good performance of CCD-data-based RSM for optimizing the EDM process.
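The model-building step can be sketched as follows, using synthetic responses and made-up coefficients rather than the paper's measured EDM data: a second-order polynomial with an interaction term is fitted by least squares to face-centred CCD runs for two coded factors.

```python
import numpy as np

def rsm_design_matrix(x1, x2):
    """Second-order RSM model matrix:
    y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Face-centred CCD for two coded factors: 4 corner runs, 4 axial runs,
# 3 centre runs (a toy stand-in for the pulse-on time / current design).
x1 = np.array([-1, -1, 1, 1, -1, 1, 0, 0, 0, 0, 0], dtype=float)
x2 = np.array([-1, 1, -1, 1, 0, 0, -1, 1, 0, 0, 0], dtype=float)

# Synthetic MRR response generated from known coefficients.
y = 5 + 2 * x1 - 1.5 * x2 + 0.8 * x1 * x2 + 0.3 * x1**2

X = rsm_design_matrix(x1, x2)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # fitted b0..b22
```

In a real analysis, the t-statistic of each fitted coefficient (coefficient divided by its standard error) would be compared against a critical value to drop insignificant terms before optimizing the reduced model.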

Keywords: electric discharge machining (EDM), modeling, optimization, CCRD

Procedia PDF Downloads 340
1287 Time-Domain Nuclear Magnetic Resonance as a Potential Analytical Tool to Assess Thermisation in Ewe's Milk

Authors: Alessandra Pardu, Elena Curti, Marco Caredda, Alessio Dedola, Margherita Addis, Massimo Pes, Antonio Pirisi, Tonina Roggio, Sergio Uzzau, Roberto Anedda

Abstract:

Some of the artisanal cheese products of European countries certified as PDO (Protected Designation of Origin) are made from raw milk. To recognise potential frauds (e.g. pasteurisation or thermisation of milk intended for raw-milk cheese production), the alkaline phosphatase (ALP) assay is currently applied, but only for pasteurisation, and it is known to have notable limitations for validating the ALP enzymatic state in non-bovine milk. Such frauds considerably impact customers and certificating institutions, sometimes damaging the product image and causing economic losses for cheesemakers. Robust, validated and univocal analytical methods are therefore needed to allow food control and security bodies to recognise a potential fraud. In an attempt to develop a new reliable method to overcome this issue, Time-Domain Nuclear Magnetic Resonance (TD-NMR) spectroscopy has been applied in the work described here. Daily fresh milk was analysed raw (680.00 µL in each 10-mm NMR glass tube), at least in triplicate. Thermally treated samples were also produced by placing each NMR tube of fresh raw milk in water pre-heated at temperatures from 68°C up to 72°C for up to 3 min, with continuous agitation, and quench-cooling to 25°C in a water and ice bath. Raw and thermally treated samples were analysed in terms of 1H T2 transverse relaxation times with a CPMG sequence (recycle delay: 6 s, interpulse spacing: 0.05 ms, 8000 data points), and quasi-continuous distributions of T2 relaxation times were obtained by CONTIN analysis. In line with previous data collected by high-field NMR techniques, a decrease in the spin-spin relaxation constant T2 of the predominant 1H population was detected in heat-treated milk as compared to raw milk. The decrease in T2 is consistent with changes in chemical exchange and diffusive phenomena, likely associated with changes in the arrangement of milk proteins (i.e. whey proteins and casein) promoted by heat treatment. Furthermore, the experimental data suggest that these molecular alterations depend strictly on the specific heat treatment conditions (temperature/time). Such molecular variations in milk, which are likely transferred to cheese during cheesemaking, highlight the possibility of applying the TD-NMR technique directly to cheese to develop a method for detecting fraud related to thermal treatment of milk in PDO raw-milk cheese. The results suggest that TD-NMR assays might pave a new way to the detailed characterisation of heat treatments of milk.
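The T2 extraction can be sketched with a deliberately simplified example: a mono-exponential fit to a synthetic CPMG echo decay with a hypothetical 120 ms T2 (the study itself uses a CONTIN inverse-Laplace inversion to obtain a full T2 distribution, which this sketch does not attempt).

```python
import numpy as np

# CPMG acquisition parameters taken from the abstract.
tau = 0.05e-3                 # interpulse spacing: 0.05 ms
n_echoes = 8000               # data points
t = 2 * tau * np.arange(1, n_echoes + 1)   # echo times in seconds

# Synthetic mono-exponential decay of the dominant 1H population
# (T2 value is illustrative, not a measured one).
T2_true = 0.120               # 120 ms
signal = np.exp(-t / T2_true)

# ln(S) = -t / T2 is linear in t, so the slope of a log-linear
# fit gives -1/T2.
slope, _ = np.polyfit(t, np.log(signal), 1)
T2_fit = -1.0 / slope
```

A shortened effective T2 fitted this way in a heat-treated sample, relative to the raw-milk reference, is the kind of shift the abstract reports as the fraud marker.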

Keywords: cheese fraud, milk, pasteurisation, TD-NMR

Procedia PDF Downloads 241
1286 A Methodology to Virtualize Technical Engineering Laboratories: MastrLAB-VR

Authors: Ivana Scidà, Francesco Alotto, Anna Osello

Abstract:

Given the importance attached to innovation today, the education sector is evolving thanks to digital technologies. Virtual Reality (VR) is a potential teaching tool offering many advantages in training and education, as it allows learners to acquire theoretical knowledge and practical skills through an immersive experience in less time than the traditional educational process. These assumptions lay the foundations for a new educational environment that is engaging and stimulating for students. Starting from the objective of strengthening the innovative teaching offer and the learning processes, the case study of this research concerns the digitalization of MastrLAB, a High Quality Laboratory (HQL) belonging to the Department of Structural, Building and Geotechnical Engineering (DISEG) of the Polytechnic of Turin, a centre specialized in experimental mechanical tests on traditional and innovative building materials and on the structures made with them. MastrLAB-VR, an innovative training tool, has been developed with the aim of teaching the class, in total safety, the techniques for using the machinery, thus reducing the dangers arising from potentially hazardous activities. The virtual laboratory, dedicated to the students of the Building and Civil Engineering courses of the Polytechnic of Turin, has been designed to simulate realistically the experimental approach to the structural tests foreseen in their courses of study: from tensile tests to relaxation tests, and from steel qualification tests to resilience tests on elements at ambient conditions or at characterizing temperatures. The research proposes a methodology for the virtualization of technical laboratories through the application of Building Information Modelling (BIM), starting from the creation of a digital model. 
The process includes the creation of a stand-alone application which, with Oculus Rift technology, allows the user to explore the environment and interact with objects using joypads. The application has been tested as a prototype on volunteers, assessing their acquisition of the educational notions presented in the experience through a multiple-choice virtual quiz and an overall evaluation report. The results show that MastrLAB-VR is suitable for both beginners and experts and will be adopted experimentally for other laboratories of the University departments.

Keywords: building information modelling, digital learning, education, virtual laboratory, virtual reality

Procedia PDF Downloads 130
1285 A New Perspective in Cervical Dystonia: Neurocognitive Impairment

Authors: Yesim Sucullu Karadag, Pinar Kurt, Sule Bilen, Nese Subutay Oztekin, Fikri Ak

Abstract:

Background: Primary cervical dystonia has been thought to be a purely motor disorder, but recent studies have revealed that patients with dystonia have additional non-motor features; sensory and psychiatric disturbances can be included in the non-motor spectrum of dystonia. The basal ganglia receive inputs from all cortical areas and, through the thalamus, project to several cortical areas, thus participating in circuits linked to motor as well as sensory, emotional and cognitive functions. However, only limited studies indicate cognitive impairment in patients with cervical dystonia, and more evidence on neurocognitive functioning in these patients is required. Objective: This study aims to investigate the neurocognitive profile of cervical dystonia patients in comparison with healthy controls (HC), employing a detailed set of neuropsychological tests in addition to self-reported instruments. Methods: In total, 29 cervical dystonia patients (M/F: 7/22) and 30 HC (M/F: 10/20) were included in the study. Exclusion criteria were depression and lack of informed consent. Standard demographic and educational data and clinical measures (disease duration, disability index) were recorded for all patients. After a careful neurological evaluation, all subjects were given a comprehensive battery of neuropsychological tests: self-report of neuropsychological condition (by visual analogue scale, VAS, 0-100), RAVLT, STROOP, PASAT, TMT, SDMT, JLOT, DST, COWAT, ACTT and FST. Patients and HC were compared regarding demographic and clinical features and neurocognitive tests, and correlations of disease duration and disability index with the self-report VAS were assessed. Results: There was no difference between patients and HC regarding socio-demographic variables such as age, gender and years of education (p = 0.36, 0.436 and 0.869, respectively). 
All patients were assessed at the peak of botulinum toxin effect, and none was taking an anticholinergic agent or benzodiazepine. Dystonia patients showed significantly impaired verbal learning and memory (RAVLT, p<0.001), divided attention and working memory (ACTT, p<0.001), attention speed (TMT-A and B, p=0.008 and 0.050), executive functions (PASAT, p<0.001; SDMT, p=0.001; FST, p<0.001), verbal attention (DST, p=0.001), verbal fluency (COWAT, p<0.001) and visuo-spatial processing (JLOT, p<0.001) in comparison with healthy controls, whereas focused attention (STROOP, spontaneous correction) did not differ between the two groups (p>0.05). No relationship was found between disease duration or disability index and any neurocognitive test. Conclusions: Our study showed that the neurocognitive functions of dystonia patients were worse than those of a control group of similar age, sex and education, independently of clinical measures such as disease duration and disability index. This may result from cortical and subcortical changes in dystonia patients; advanced neuroimaging techniques might help to explain these changes in cervical dystonia patients.

Keywords: cervical dystonia, neurocognitive impairment, neuropsychological test, dystonia disability index

Procedia PDF Downloads 418
1284 DEKA-1 a Dose-Finding Phase 1 Trial: Observing Safety and Biomarkers using DK210 (EGFR) for Inoperable Locally Advanced and/or Metastatic EGFR+ Tumors with Progressive Disease Failing Systemic Therapy

Authors: Spira A., Marabelle A., Kientop D., Moser E., Mumm J.

Abstract:

Background: Both interleukin-2 (IL-2) and interleukin-10 (IL-10) have been extensively studied for their stimulatory function on T cells and their potential to achieve sustainable tumor control in RCC, melanoma, lung, and pancreatic cancer, both as monotherapy and in combination with PD-1 blockers, radiation, and chemotherapy. While approved, IL-2 retains significant toxicity, preventing its widespread use. Significant efforts to uncouple IL-2 toxicity from its anti-tumor function have been unsuccessful, and the early-phase clinical safety observed with PEGylated IL-10 was not confirmed in a blinded Phase 3 trial. Deka Biosciences has engineered a novel molecule coupling wild-type IL-2 to a high-affinity variant of Epstein-Barr virus (EBV) IL-10 via a scaffold (scFv) that binds to epidermal growth factor receptors (EGFR). This patented molecule, termed DK210 (EGFR), is retained at high levels within the tumor microenvironment for days after dosing. In addition to its overlapping and non-redundant anti-tumor functions, IL-10 reduces the risk of IL-2-mediated cytokine release syndrome and inhibits IL-2-mediated regulatory T cell proliferation. Methods: DK210 (EGFR) is being evaluated in an open-label, dose-escalation (Phase 1) study with five monotherapy dose levels (0.025-0.3 mg/kg) and expansion cohorts in combination with PD-1 blockers, radiation, or chemotherapy in patients with advanced solid tumors overexpressing EGFR. Key eligibility criteria include 1) confirmed progressive disease on at least one line of systemic treatment, 2) EGFR overexpression or amplification documented in histology reports, 3) a window of at least 4 weeks or 5 half-lives since the last treatment, and 4) absence of long QT syndrome, multiple myeloma, multiple sclerosis, myasthenia gravis, and uncontrolled infectious, psychiatric, neurologic, or cancer disease. 
Plasma and tissue samples will be investigated for pharmacodynamic and predictive biomarkers and genetic signatures associated with IFN-gamma secretion, with the aim of selecting subjects for treatment in Phase 2. Conclusion: By successfully coupling wild-type IL-2 with a high-affinity IL-10 and targeting it directly to the tumor microenvironment, DK210 (EGFR) has the potential to harness the known anti-cancer promise of IL-2 and IL-10 while reducing immunogenicity and toxicity risks, enabling safe concomitant cytokine treatment with other anti-cancer modalities.

Keywords: cytokine, EGFR overexpression, interleukin-2, interleukin-10, clinical trial

Procedia PDF Downloads 85
1283 Changing Employment Relations Practices in Hong Kong: Cases of Two Multinational Retail Banks since 1997

Authors: Teresa Shuk-Ching Poon

Abstract:

This paper sets out to examine the changing employment relations practices in Hong Kong's retail banking sector over a period of more than 10 years. The major objective of the research is to examine whether, and to what extent, local institutional influences have overshadowed global market forces in shaping strategic management decisions and employment relations practices in Hong Kong, with a view to drawing implications for comparative employment relations studies. In examining the changing pattern of employment relations, the paper adopts the industrial relations strategic choice model (Kochan, McKersie and Cappelli, 1984) as its framework. Four broad aspects of employment relations are examined: work organisation and job design; staffing and labour adjustment; performance appraisal, compensation and employee development; and labour unions and employment relations. Changes in the employment relations practices of two multinational retail banks operating in Hong Kong are examined in detail. The retail banking sector in Hong Kong is chosen as a case because it is a highly competitive segment of the financial service industry, very much susceptible to global market influences, as well illustrated by the fact that Hong Kong was hit hard by both the Asian and the global financial crises. The sector is also subject to increasing institutional influences, especially since the return of Hong Kong's sovereignty to the People's Republic of China (PRC) in 1997. The case study method is used as a research design able to capture the complex institutional and environmental context that is the subject matter examined in the paper. The paper concludes that the operation of retail banks in Hong Kong has been subject to both institutional and global market changes at different points in time. 
Information obtained from the two cases tends to support the conclusion that the relative significance of institutional versus global market factors in influencing the banks' operation and their employment relations practices depends very much on the time at which these influences emerged and on their scale and intensity. The case study highlights the importance of placing comparative employment relations studies within a context in which employment relations practices in different countries, or in different regions or cities within the same country, can be examined and compared over a longer period of time, making the comparison more meaningful.

Keywords: employment relations, institutional influences, global market forces, strategic management decisions, retail banks, Hong Kong

Procedia PDF Downloads 398
1282 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data are dispersed among many servers operating in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies, in terms of the tradeoff between performance and latency, by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have completed their operations. Matrix-matrix multiplication on practically large datasets faces computational and memory-related difficulties, so such operations are carried out on distributed computing platforms. In this work, we study distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. 
We consider the setup in which the identity of the matrix of interest must be kept private from the workers, and we derive the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also study secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected privately from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
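The linear-precoding idea and the recovery threshold can be illustrated with a minimal straggler-tolerant sketch. This is a generic Vandermonde (MDS-style) encoding of a matrix-vector product, not the paper's PSGPD/SGPD construction, and it omits all security and privacy machinery: X is split into k blocks, encoded into n coded blocks, and the product is recoverable from any k of the n worker results.

```python
import numpy as np

def encode_blocks(X, k, n):
    """Split X row-wise into k blocks and form n coded blocks with a
    Vandermonde generator; any k coded blocks determine the originals."""
    blocks = np.split(X, k)
    G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]
    return coded, G

def decode(results, ids, G, k):
    """Recover X @ y from the results of any k distinct workers: any k
    rows of a Vandermonde matrix at distinct nodes are invertible."""
    block_products = np.linalg.solve(G[ids][:, :k], np.stack(results))
    return np.concatenate(list(block_products))

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
y = rng.standard_normal(4)
k, n = 3, 5                                   # tolerate n - k = 2 stragglers

coded, G = encode_blocks(X, k, n)
worker_out = [c @ y for c in coded]           # each worker multiplies its block
survivors = [0, 2, 4]                         # workers 1 and 3 straggle
recovered = decode([worker_out[i] for i in survivors], survivors, G, k)
```

Here the recovery threshold is k = 3: the master can decode as soon as any three of the five workers respond, instead of waiting for all five.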

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 121
1281 Inpatient Glycemic Management Strategies and Their Association with Clinical Outcomes in Hospitalized SARS-CoV-2 Patients

Authors: Thao Nguyen, Maximiliano Hyon, Sany Rajagukguk, Anna Melkonyan

Abstract:

Introduction: Type 2 diabetes is a well-established risk factor for severe SARS-CoV-2 infection. Uncontrolled hyperglycemia in patients with established or newly diagnosed diabetes is associated with poor outcomes, including increased mortality and hospital length of stay. Objectives: Our study aims to compare three glycemic management strategies and their association with clinical outcomes in patients hospitalized for moderate to severe SARS-CoV-2 infection; identifying optimal glycemic management strategies will improve the quality of patient care and patient outcomes. Method: This is a retrospective observational study of patients hospitalized at Adventist Health White Memorial with severe SARS-CoV-2 infection from 11/1/2020 to 02/28/2021. Inclusion criteria were: positive SARS-CoV-2 PCR test, age >18 years, diabetes or random glucose >200 mg/dL on admission, oxygen requirement >4 L/min, and treatment with glucocorticoids. Exclusion criteria were: ICU admission within 24 hours, discharge within five days, death within five days, and pregnancy. The patients were divided into three glycemic management groups: Group 1, managed solely by the primary team; Group 2, by pharmacy; and Group 3, by an endocrinologist. Primary outcomes were average glucose on day 5, change in glucose between days 3 and 5, and average insulin dose on day 5 among groups. Secondary outcomes were ICU upgrade, inpatient mortality, and hospital length of stay. For statistics, we used IBM SPSS, version 28, 2022. Results: Most of the studied patients were Hispanic, older than 60, and obese (BMI >30). This was the first COVID-19 surge with the Delta variant in an unvaccinated population. Mortality was markedly high (>40%), with a longer length of stay (>13 days) and a high ICU transfer rate (18%). Most patients had markedly elevated inflammatory markers (CRP, ferritin, and D-dimer). 
These, in combination with glucocorticoids, resulted in severe hyperglycemia that was difficult to control. Average glucose on day 5 was not significantly different between the primary, pharmacy, and endocrinologist groups (220.5 ± 63.4 vs. 240.9 ± 71.1 vs. 208.6 ± 61.7; P = 0.105). Change in glucose from days 3 to 5 was not significantly different between groups but trended towards favoring the endocrinologist group (-26.6 ± 73.6 vs. 3.8 ± 69.5 vs. -32.2 ± 84.1; P = 0.052). Total daily dose (TDD) of insulin was not significantly different between groups but trended higher in the endocrinologist group (34.6 ± 26.1 vs. 35.2 ± 26.4 vs. 50.5 ± 50.9; P = 0.054). The endocrinologist group used significantly more preprandial insulin than the other groups (91.7% vs. 39.1% vs. 65.9%; P < 0.001), while the pharmacy group used more basal insulin (95.1% vs. 79.5% vs. 79.2%; P = 0.047). There were no differences among groups in the clinical outcomes of length of stay, ICU upgrade, or mortality. Multivariate regression analysis controlled for age, sex, BMI, HbA1c level, renal function, liver function, CRP, D-dimer, and ferritin showed no difference in outcomes among groups. Conclusion: Given the high-risk factors in our population, and despite the efforts of the glycemic management teams, it is unsurprising that no differences in the clinical outcomes of mortality and length of stay were observed.

Keywords: glycemic management, strategies, hospitalized, SARS-CoV-2, outcomes

Procedia PDF Downloads 447
1280 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques

Authors: Imed Feki, Faouzi Msahli

Abstract:

Selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first uses a fuzzy sensitivity criterion that exploits, from experimental data, the relationship between physical parameters and all the sensory quality features for each evaluator. Next, an OWA aggregation procedure is applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. By again applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined, using our proposed percolation technique, to determine the final ranking list. The key idea of the proposed percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It automatically generates a threshold, which can effectively reduce the human subjectivity and arbitrariness involved in choosing thresholds manually. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Having applied the percolation technique to a real example, a well-known finished textile product, stonewashed denim, usually considered the most important quality criterion in jeans’ evaluation, we separated the relevant physical features from the irrelevant ones for each sensory descriptor.
The originality and performance of the proposed relevant feature selection method can be shown by the variability in the number of physical features in the set of selected relevant parameters. Instead of selecting identical numbers of features with a predefined threshold, the proposed method can be adapted to the specific natures of the complex relations between sensory descriptors and physical features, in order to propose lists of relevant features of different sizes for different descriptors. In order to obtain more reliable results for selection of relevant physical features, the percolation technique has been applied for combining the fuzzy global relevancy and OWA global relevancy criteria in order to clearly distinguish scores of the relevant physical features from those of irrelevant ones.
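The OWA aggregation step at the heart of the method can be sketched as below; this is a minimal illustration, and the scores and weight vector are illustrative values, not taken from the study:

```python
def owa(values, weights):
    """Ordered Weighted Averaging: the weights apply to the values
    sorted in descending order, not to particular inputs."""
    if len(values) != len(weights):
        raise ValueError("values and weights must have the same length")
    total = sum(weights)  # normalize so the weights sum to 1
    ordered = sorted(values, reverse=True)
    return sum((w / total) * v for w, v in zip(weights, ordered))

# Aggregating three evaluators' relevancy scores for one physical parameter:
score = owa([0.2, 0.9, 0.5], [0.5, 0.3, 0.2])  # -> 0.64
```

The shape of the weight vector controls how "optimistic" the aggregation is: weights concentrated at the front emphasize the highest relevancy scores, while uniform weights reduce OWA to the plain mean.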

Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique

Procedia PDF Downloads 603
1279 The Role of Macroeconomic Condition and Volatility in Credit Risk: An Empirical Analysis of Credit Default Swap Index Spread on Structural Models in U.S. Market during Post-Crisis Period

Authors: Xu Wang

Abstract:

This research builds linear regressions of the investment-grade and high-yield Credit Default Swap index spreads on U.S. macroeconomic condition and volatility measures, using monthly data from March 2009 to July 2016, to study the relationship between different dimensions of the macroeconomy and overall credit quality. The most significant contribution of this research is systematically examining the individual and joint effects of macroeconomic condition and volatility on CDX spreads by including macroeconomic time series that capture different dimensions of the U.S. economy. Industrial production index growth, non-farm payroll growth, consumer price index growth, the 3-month Treasury rate and consumer sentiment are introduced to capture the condition of real economic activity, employment, inflation, monetary policy and risk aversion, respectively. The conditional variance of each macroeconomic series is constructed using an ARMA-GARCH model and is used to measure macroeconomic volatility. A linear regression model is estimated to capture the relationships between monthly average CDX spreads and the macroeconomic variables. The Newey–West estimator is used to control for autocorrelation and heteroskedasticity in the error terms. Furthermore, a sensitivity factor analysis and a standardized coefficients analysis are conducted, respectively, to compare the sensitivity of CDX spreads to the different macroeconomic variables and to compare the relative effects of macroeconomic condition versus macroeconomic uncertainty. This research shows that macroeconomic condition has a negative effect on the CDX spread, while macroeconomic volatility has a positive effect. Macroeconomic condition and volatility variables can jointly explain more than 70% of the whole variation of the CDX spread. In addition, the sensitivity factor analysis shows that the CDX spread is most sensitive to the Consumer Sentiment index.
Finally, the standardized coefficients analysis shows that both macroeconomic condition and volatility variables are important in determining the CDX spread, but the macroeconomic condition category of variables has more relative importance than the macroeconomic volatility category. This research shows that the CDX spread can reflect the individual and joint effects of macroeconomic condition and volatility, which suggests that individual investors and the government should regard the CDX spread carefully as a measure of overall credit risk, because it is influenced by the macroeconomy. In addition, the significance of the macroeconomic condition and volatility variables, such as the non-farm payroll growth rate and industrial production index growth volatility, suggests that the government should pay more attention to overall credit quality in the market when the macroeconomy is weak or volatile.
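The construction of the volatility measure can be illustrated with a minimal GARCH(1,1) conditional-variance recursion; ARMA filtering of the mean and maximum-likelihood estimation of the parameters are omitted here, and the parameter values are illustrative, not those of the study:

```python
import numpy as np

def garch11_variance(residuals, omega, alpha, beta):
    """GARCH(1,1) recursion: sigma2_t = omega + alpha * eps_{t-1}**2
    + beta * sigma2_{t-1}, started at the unconditional variance
    omega / (1 - alpha - beta)."""
    eps = np.asarray(residuals, dtype=float)
    sigma2 = np.empty_like(eps)
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# e.g. monthly volatility of a macro series' ARMA residuals (hypothetical):
# vol = np.sqrt(garch11_variance(arma_residuals, omega=0.1, alpha=0.1, beta=0.8))
```

In practice the parameters would be fitted jointly with the ARMA mean equation (e.g. by maximum likelihood), and the resulting conditional-variance series would enter the spread regressions as the uncertainty measure.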

Keywords: autoregressive moving average model, credit spread puzzle, credit default swap spread, generalized autoregressive conditional heteroskedasticity model, macroeconomic conditions, macroeconomic uncertainty

Procedia PDF Downloads 165
1278 Festival Gamification: Conceptualization and Scale Development

Authors: Liu Chyong-Ru, Wang Yao-Chin, Huang Wen-Shiung, Tang Wan-Ching

Abstract:

Although gamification has attracted attention and been applied in the tourism industry, limited literature can be found in the tourism academy. Therefore, to contribute knowledge on festival gamification, it becomes essential to start by establishing a Festival Gamification Scale (FGS). This study defines festival gamification as the extent to which a festival involves game elements and game mechanisms. Based on self-determination theory, this study developed an FGS. Through a multi-study method, in study one, five FGS dimensions were sorted through a literature review, followed by twelve in-depth interviews. A total of 296 statements were extracted from the interviews and later narrowed down to 33 items under six dimensions. In study two, 226 survey responses were collected from a cycling festival for exploratory factor analysis, resulting in twenty items under five dimensions. In study three, 253 survey responses were obtained from a marathon festival for confirmatory factor analysis, resulting in the final sixteen items under five dimensions. Then, results of criterion-related validity confirmed the positive effects of these five dimensions on flow experience. In study four, to examine the model extension of the developed five-dimensional, 16-item FGS, which includes the dimensions of relatedness, mastery, competence, fun, and narratives, a cross-validation analysis was performed using 219 survey responses from a religious festival. For the tourism academy, the FGS could further be applied in other sub-fields such as destinations, theme parks, cruise trips, or resorts. The FGS serves as a starting point for examining the mechanism of festival gamification in changing tourists’ attitudes and behaviors. Future studies could follow up on the FGS by testing outcomes of festival gamification or by examining moderating effects that enhance those outcomes.
On the other hand, although the FGS has been tested in cycling, marathon, and religious festivals, the research settings are all in Taiwan. Cultural differences in the FGS are another direction for contributing knowledge on festival gamification. This study also offers several valuable practical implications. First, the FGS could be utilized in tourist surveys to evaluate the extent of gamification of a festival. Based on the results of a performance assessment using the FGS, festival management organizations and festival planners could learn the relative scores among the FGS dimensions and plan for future improvement in gamifying the festival. Second, the FGS could be applied in positioning a gamified festival. Festival management organizations and festival planners could first consider the features and type of their festival, and then gamify it by investing resources in the key FGS dimensions.

Keywords: festival gamification, festival tourism, scale development, self-determination theory

Procedia PDF Downloads 144
1277 Authentic Connection between the Deity and the Individual Human Being Is Vital for Psychological, Biological, and Social Health

Authors: Sukran Karatas

Abstract:

Authentic energy network interrelations between the Creator and the creations, as well as from creations to creations, are the most important points for the worlds of physics and metaphysics to unite and work in harmony within human beings. Human beings, on the other hand, have the ability to choose their own lifestyle voluntarily; this, however, includes the automated involuntary spirit, soul and body working systems together with the voluntary actions, which involve personal, cultural and universal, rational or irrational variable values. Therefore, it is necessary for human beings to know the methods of existing authentic energy network connections to be able to communicate, correlate and accommodate the physical and metaphysical entities as a properly functioning unity; this is essential for complete human psychological, biological and social well-being. Authentic knowledge is necessary for human beings to verify the position of self within self and with others, and to regulate conscious and voluntary actions accordingly in order to prevent oppressions and frictions within self and between self and others. Unfortunately, the absence of genuine individual and universal basic knowledge about how to establish an authentic energy network connection within self, with the deity and with the environment is the most problematic issue even in the twenty-first century. The second most problematic issue is how to maintain freedom, equality and justice among human beings during these strictly interwoven network connections, which naturally involve the physical, metaphysical and behavioral actions of the self and the others. The third, and probably the most complicated, problem is the scientific identification and authentication of the deity, which not only gives the choosers full power and control to set their life orders but also establishes perfect physical and metaphysical links as a fully coordinated functional energy network.
This thus indicates that choosing an authentic deity is the key point that influences automated, emotional, and behavioral actions altogether, which shapes human perception, personal actions, and life orders. Therefore, we will be considering the existing ‘four types of energy wave end boundary behaviors’, comprising free-end and fixed-end boundary behaviors, as well as boundary behaviors from a denser medium to a less dense medium and from a less dense medium to a denser medium. Consequently, this article aims to demonstrate that the authentication and choice of deity have an important effect on individual psychological, biological and social health. It is hoped that it will encourage new research in the field of authentic energy network connections, to establish the best position and the most correct interrelation connections with self and others, without violating the authorized orders and borders of one another, in order to live happier and healthier lives together. In addition, the book ‘Deity and Freedom, Equality, Justice in History, Philosophy, Science’ has more detailed information for those interested in this subject.

Keywords: deity, energy network, power, freedom, equality, justice, happiness, sadness, hope, fear, psychology, biology, sociology

Procedia PDF Downloads 343
1276 Building on Previous Microvalving Approaches for Highly Reliable Actuation in Centrifugal Microfluidic Platforms

Authors: Ivan Maguire, Ciprian Briciu, Alan Barrett, Dara Kervick, Jens Ducrèe, Fiona Regan

Abstract:

With the ever-increasing myriad of applications of which microfluidic devices are capable, reliable fluidic actuation development has remained fundamental to the success of these microfluidic platforms. There are a number of approaches which can be taken in order to integrate liquid actuation on microfluidic platforms, which can usually be split into two primary categories; active microvalves and passive microvalves. Active microvalves are microfluidic valves which require a physical parameter change by external, or separate interaction, for actuation to occur. Passive microvalves are microfluidic valves which don’t require external interaction for actuation due to the valve’s natural physical parameters, which can be overcome through sample interaction. The purpose of this paper is to illustrate how further improvements to past microvalve solutions can largely enhance systematic reliability and performance, with both novel active and passive microvalves demonstrated. Covered within this scope will be two alternative and novel microvalve solutions for centrifugal microfluidic platforms; a revamped pneumatic-dissolvable film active microvalve (PAM) strategy and a spray-on Sol-Gel based hydrophobic passive microvalve (HPM) approach. Both the PAM and the HPM mechanisms were demonstrated on a centrifugal microfluidic platform consisting of alternating layers of 1.5 mm poly(methyl methacrylate) (PMMA) (for reagent storage) sheets and ~150 μm pressure sensitive adhesive (PSA) (for microchannel fabrication) sheets. The PAM approach differs from previous SOLUBON™ dissolvable film methods by introducing a more reliable and predictable liquid delivery mechanism to microvalve site, thus significantly reducing premature activation. This approach has also shown excellent synchronicity when performed in a multiplexed form. The HPM method utilises a new spray-on and low curing temperature (70°C) sol-gel material. 
The resultant double-layer coating comprises a PMMA-adherent sol-gel as the bottom layer and an ultra-hydrophobic silica nanoparticle (SNP) film as the top layer. The optimal coating was integrated into microfluidic channels of varying cross-sectional area to assess the consistency of microvalve burst frequencies. It is hoped that these microvalving solutions, which can be easily added to centrifugal microfluidic platforms, will significantly improve automation reliability.

Keywords: centrifugal microfluidics, hydrophobic microvalves, lab-on-a-disc, pneumatic microvalves

Procedia PDF Downloads 187
1275 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass

Authors: Ricardo Torcato, Helder Morais

Abstract:

The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained with repeatability, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance, to crystal processing. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear in the indentation. A mechanical impulse excitation test estimates the Young’s modulus, shear modulus and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process.
The tests were designed based on the Taguchi method to correlate the input parameters (feed rate, tool rotation speed and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (best roughness with cutting forces that do not compromise the material structure or the tool life) using ANOVA. This study was conducted for conventional grinding and for ultrasonic-assisted grinding with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
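The ANOVA step used to relate cutting parameters to the measured responses can be sketched with a one-way F statistic; the groups below are hypothetical readings at three factor levels, not data from the study:

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic across factor-level groups, e.g.
    surface roughness measured at each depth-of-cut level."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    # variation of group means around the grand mean
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    # variation of observations around their own group mean
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (len(groups) - 1)
    ms_within = ss_within / (n_total - len(groups))
    return ms_between / ms_within

# Hypothetical roughness readings at three depth-of-cut levels:
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])  # -> 3.0
```

A large F relative to the F distribution's critical value for the given degrees of freedom indicates that the factor level has a significant effect on the response; in a Taguchi design this is computed per factor to rank their contributions.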

Keywords: CNC machining, crystal glass, cutting forces, hardness

Procedia PDF Downloads 152
1274 Single-Parent Families and Its Impact on the Psycho Child Development in Schools

Authors: Sylvie Sossou, Grégoire Gansou, Ildevert Egue

Abstract:

Introduction: The mission of the family and the school is to educate and train the citizens of the city. But the family’s values, parental roles and respect for life are collapsing in their traditional African form. Indeed, laxity with regard to divorce and liberal ideas about child rearing influence the emotional life of the child. Several causes may contribute to a decline in academic performance. In order to seek a psychological solution to the issue, a study was conducted in 6 schools in the 9th district of Cotonou, a cosmopolitan city of Benin. Objective: To evaluate the impact of single parenthood on the psychological development of the child. Materials and Methods: Questionnaires and interviews were used to gather verbal information. The questionnaires were administered to parents and to children (schoolchildren in the 4th, 5th and 6th forms) aged 7 to 12 years living in lone parenthood. The interviews were done with teachers and school leaders. We identified 209 cases of children living with a 'single parent' and 68 single parents. Results: Of the 209 children surveyed, the results showed that 116 children were cut off from the relational triangle in early childhood (before 3 years). The psychological effects showed that the separation caused sadness for 52 children, anger for 22, shame for 17, crying in 31 children, fear for 14, and silence in 58 children. Compared with children from complete families, these children experience feelings of aggression in 11.48% of cases; sadness in 30.64%; shame in 5.26%; tears in 6.69%; jealousy in 2.39%; and indifference in 2.87%. The option to get married, for 44.15% of the children, is a challenge to want to give a happy childhood to their offspring; 22.01% feel rejected, there is uncertainty in 11.48% of cases, and 25.36% did not give an answer. 49.76% of the children want to see their family together; 7.65% are against it, to avoid disputes and, in many cases, to save the mother from the father's physical abuse. 27.75% of the ex-partners decline responsibility in the care of the child.
Furthermore, family difficulties affect the intellectual capacities of children: 37.32% of the children have school difficulties related to family problems, despite all the pressure from the single parent to see their child succeed. Single parenthood affects inter-family relations: pressure (33.97%), nervousness (24.88%), overprotection (29.18%) and backbiting (11.96%) mark the lives of these families. Conclusion: At the end of the investigation, the results showed that there is a causal relationship between psychological disorders, children's academic difficulties and the quality of parental relationships. Other cases may exist, but the lack of resources meant that we were limited to 6 schools. Early psychological treatment for these children is needed.

Keywords: single-parent, psycho child, school, Cotonou

Procedia PDF Downloads 387
1273 To Assess the Knowledge, Awareness and Factors Associated With Diabetes Mellitus in Buea, Cameroon

Authors: Franck Acho

Abstract:

Diabetes mellitus is a chronic metabolic disorder and a fast-growing global problem with huge social, health, and economic consequences. It is estimated that in 2010 there were globally 285 million people (approximately 6.4% of the adult population) suffering from this disease. This number is estimated to increase to 430 million in the absence of better control or cure. An ageing population and obesity are the two main reasons for the increase. Diabetes mellitus is a chronic heterogeneous metabolic disorder with a complex pathogenesis. It is characterized by elevated blood glucose levels, or hyperglycemia, which results from abnormalities in either insulin secretion or insulin action or both. Hyperglycemia manifests in various forms with a varied presentation and results in carbohydrate, fat, and protein metabolic dysfunctions. Long-term hyperglycemia often leads to various microvascular and macrovascular diabetic complications, which are mainly responsible for diabetes-associated morbidity and mortality. Hyperglycemia serves as the primary biomarker for the diagnosis of diabetes as well. Furthermore, it has been shown that almost 50% of putative diabetics are not diagnosed until 10 years after the onset of the disease; hence the real prevalence of global diabetes must be astronomically high. This study was conducted in a locality to assess the level of knowledge, awareness and risk factors associated with people living with diabetes mellitus. A month before the screening was conducted, it was advertised in selected churches, on the local community radio, and in relevant WhatsApp groups. A general health talk was delivered by the head of the screening unit to all attendees, who were educated on the procedure to be carried out, with its benefits and any possible discomforts, after which each attendee’s consent was obtained.
Participants selected for the screening were evaluated for any indications of diabetes by taking an adequate history and performing physical examinations for symptoms such as excessive thirst, increased urination, tiredness, hunger, unexplained weight loss, feeling irritable or having other mood changes, blurry vision, slow-healing sores, and getting a lot of infections, such as gum, skin and vaginal infections. Of the 94 participants, the findings show that 78 were female and 16 were male, and 70.21% of participants with diabetes were between the ages of 60 and 69 years. The study found that only 10.63% of respondents declared a good level of knowledge of diabetes. Of the 3 symptoms of diabetes analyzed in this study, high blood sugar (58.5%) and chronic fatigue (36.17%) were the most recognized. Of the 4 diabetes risk factors analyzed in this study, obesity (21.27%) and an unhealthy diet (60.63%) were the most recognized, while only 10.6% of respondents indicated tobacco use. The diabetic foot was the most recognized diabetes complication (50.57%), but some participants indicated vision problems (30.8%) or cardiovascular diseases (20.21%) as diabetes complications.

Keywords: diabetes mellitus, non comunicable disease, general health talk, hyperglycemia

Procedia PDF Downloads 55
1272 Identifying Biomarker Response Patterns to Vitamin D Supplementation in Type 2 Diabetes Using K-means Clustering: A Meta-Analytic Approach to Glycemic and Lipid Profile Modulation

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Background and Aims: This meta-analysis aimed to evaluate the effect of vitamin D supplementation on key metabolic and cardiovascular parameters, such as glycated hemoglobin (HbA1C), fasting blood sugar (FBS), low-density lipoprotein (LDL), high-density lipoprotein (HDL), systolic blood pressure (SBP), and total vitamin D levels in patients with Type 2 diabetes mellitus (T2DM). Methods: A systematic search was performed across databases, including PubMed, Scopus, Embase, Web of Science, Cochrane Library, and ClinicalTrials.gov, from January 1990 to January 2024. A total of 4,177 relevant studies were initially identified. Using an unsupervised K-means clustering algorithm, publications were grouped based on common text features. Maximum entropy classification was then applied to filter studies that matched a pre-identified training set of 139 potentially relevant articles. These selected studies were manually screened for relevance. A parallel manual selection of all initially searched studies was conducted for validation. The final inclusion of studies was based on full-text evaluation, quality assessment, and meta-regression models using random effects. Sensitivity analysis and publication bias assessments were also performed to ensure robustness. Results: The unsupervised K-means clustering algorithm grouped the patients based on their responses to vitamin D supplementation, using key biomarkers such as HbA1C, FBS, LDL, HDL, SBP, and total vitamin D levels. Two primary clusters emerged: one representing patients who experienced significant improvements in these markers and another showing minimal or no change. Patients in the cluster associated with significant improvement exhibited lower HbA1C, FBS, and LDL levels after vitamin D supplementation, while HDL and total vitamin D levels increased. The analysis showed that vitamin D supplementation was particularly effective in reducing HbA1C, FBS, and LDL within this cluster. 
Furthermore, BMI, weight gain, and disease duration were identified as factors that influenced cluster assignment, with patients having lower BMI and shorter disease duration being more likely to belong to the improvement cluster. Conclusion: The findings of this machine learning-assisted meta-analysis confirm that vitamin D supplementation can significantly improve glycemic control and reduce the risk of cardiovascular complications in T2DM patients. The use of automated screening techniques streamlined the process, ensuring the comprehensive evaluation of a large body of evidence while maintaining the validity of traditional manual review processes.
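The clustering of patients by biomarker response can be sketched with a minimal Lloyd's-algorithm k-means; the biomarker-change vectors below are hypothetical toy values, and in practice the features would be standardized and the centroids initialized with k-means++ rather than the naive first-k choice used here:

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Minimal Lloyd's algorithm: assign each patient's biomarker
    vector to its nearest centroid, then recompute the centroids."""
    X = np.asarray(X, dtype=float)
    centroids = X[:k].copy()  # naive init; k-means++ is preferred in practice
    for _ in range(n_iter):
        # Euclidean distance of every point to every centroid -> (n, k)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Hypothetical (HbA1c change, FBS change) after supplementation, six patients:
X = [[-1.2, -30], [-1.0, -25], [-1.1, -28],   # marked improvement
     [0.1, 5], [0.0, -2], [0.2, 3]]           # little or no change
labels, centroids = kmeans(X, k=2)
```

With clearly separated responses, the algorithm recovers the two clusters described in the abstract: one group whose glycemic and lipid markers improve after supplementation, and one that shows minimal change.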

Keywords: HbA1C, T2DM, SBP, FBS

Procedia PDF Downloads 7
1271 Students’ Opinions Related to Virtual Classrooms within the Online Distance Education Graduate Program

Authors: Secil Kaya Gulen

Abstract:

Face-to-face and virtual classrooms emerged under different conditions and in different environments, but with similar purposes, and they have different characteristics. Although virtual classrooms share some facilities with face-to-face classes, such as a program, students, and administrators, they have no walls and corridors. Therefore, students can attend the courses from a distance and can control their own learning spaces. Virtual classrooms are defined as simultaneous online environments where students in different places come together at the same time with the guidance of a teacher. Distance education and virtual classes require different intellectual and managerial skills and models. Therefore, for the effective use of virtual classrooms, their virtual nature should be taken into consideration. One of the most important factors affecting the spread and effective use of virtual classrooms is the perceptions and opinions of students, as one of the main participant groups. Student opinions and recommendations are important in terms of providing information about the fulfillment of expectations. This will help to improve the applications and contribute to more efficient implementations. In this context, the ideas and perceptions of students related to virtual classrooms in general were determined in this study. The advantages and disadvantages of virtual classrooms, their expected contributions to the educational system, and the expected characteristics of virtual classrooms were examined. The students of an online distance education graduate program, in which all the courses are offered through virtual classrooms, were asked for their opinions. The Online Distance Education Graduate Program has 19 students in total. The questionnaire, which consists of open-ended and multiple-choice questions, was sent to these 19 students, and 12 of them answered it. The analysis of the data is presented as frequencies and percentages for each item.
SPSS was used for the multiple-choice questions and NVivo for the open-ended questions. According to the results of the analysis, participants stated that they did not get any training on virtual classes before the courses, but they emphasized that newly enrolled students should be educated about virtual classrooms. In addition, all participants mentioned that virtual classrooms contribute to their personal development and that they want to improve their skills by gaining more experience. The participants, who mainly emphasized the advantages of virtual classrooms, expressed that the dissemination of virtual classrooms will contribute to the Turkish education system. Among the advantages of virtual classrooms, ‘recordable and repeatable lessons’ and ‘eliminating access and transportation costs’ were the most common according to the participants. On the other hand, they mentioned ‘technological features and keyboard usage skills affect attendance’ as the most common disadvantage. The participants' most obvious problem during virtual lectures was a ‘lack of technical support’. Finally, ‘ease of use’, ‘support possibilities’, ‘communication level’ and ‘flexibility’ come to the forefront among the expected features of virtual classrooms. All in all, students' opinions about virtual classrooms seem to be generally positive. Designing and managing virtual classrooms according to the prioritized features will increase student satisfaction and will contribute to more effective applications.

Keywords: distance education, virtual classrooms, higher education, e-learning

Procedia PDF Downloads 268
1270 Critical Core Skills Profiling in the Singaporean Workforce

Authors: Bi Xiao Fang, Tan Bao Zhen

Abstract:

Soft skills, core competencies, and generic competencies are interchangeable terms often used to represent a similar concept. In the Singapore context, such skills are currently referred to as Critical Core Skills (CCS). In 2019, SkillsFuture Singapore (SSG) reviewed the Generic Skills and Competencies (GSC) framework that was first introduced in 2016, culminating in the development of the Critical Core Skills (CCS) framework comprising 16 soft skills classified into three clusters. The CCS framework is part of the Skills Framework, whose stated purpose is to create a common skills language for individuals, employers and training providers. It is also developed with the objectives of building deep skills for a lean workforce, enhancing business competitiveness, and supporting employment and employability. This further helps to facilitate skills recognition and support the design of training programs for skills and career development. According to SSG, every job role requires a set of technical skills and a set of Critical Core Skills to perform well at work, whereby technical skills refer to the skills required to perform the key tasks of the job. There has been an increasing emphasis on soft skills for the future of work. A recent study involving approximately 80 organizations across 28 sectors in Singapore revealed that more enterprises are beginning to recognize that soft skills support their employees’ performance and business competitiveness. Though CCS are of high importance for the development of the workforce’s employability, little attention has been paid to CCS use and profiling across occupations. A better understanding of how CCS are distributed across the economy will thus significantly enhance SSG’s career guidance services, as well as training providers’ services to graduates and workers, and guide organizations in their hiring for soft skills. This CCS profiling study sought to understand how CCS are demanded in different occupations.
To achieve its research objectives, this study adopted a quantitative method to measure CCS use across different occupations in the Singaporean workforce. Based on the CCS framework developed by SSG, the research team adopted a formative approach to developing the CCS profiling tool to measure the importance of, and self-efficacy in, the use of CCS among the Singaporean workforce. Drawing on survey results from 2500 participants, this study profiled respondents into seven occupation groups based on their different patterns of importance and confidence levels in the use of CCS. Each occupation group is labeled according to the most salient and demanded CCS. The CCS in each occupation group that may need further strengthening were also identified. The profiling of CCS use has significant implications for different stakeholders; e.g., employers could leverage the profiling results to hire staff with the soft skills demanded by the job.
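The abstract groups respondents into seven occupation groups by the pattern of their CCS importance and confidence ratings, but does not specify the grouping algorithm. As a purely illustrative sketch (the rating data, cluster count, and k-means choice are assumptions, not the study's method), pattern-based profiling could look like this:

```python
import numpy as np

def kmeans_profiles(scores, k=7, n_iter=50):
    """Group respondents by the pattern of their skill ratings.

    scores: (n_respondents, n_skills) matrix of ratings.
    Returns (labels, centroids); each centroid is the average rating
    profile of one group. Centroids start at evenly spaced rows so the
    result is reproducible.
    """
    scores = np.asarray(scores, dtype=float)
    centroids = scores[np.linspace(0, len(scores) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        # Assign each respondent to the nearest centroid
        d = np.linalg.norm(scores[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean profile of its members
        for j in range(k):
            if np.any(labels == j):  # leave empty clusters unchanged
                centroids[j] = scores[labels == j].mean(axis=0)
    return labels, centroids
```

Each centroid can then be inspected for its most salient skills, mirroring how the study labels each occupation group by its most demanded CCS.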

Keywords: employability, skills profiling, skills measurement, soft skills

Procedia PDF Downloads 93
1269 Determinants of Walking among Middle-Aged and Older Overweight and Obese Adults: Demographic, Health, and Socio-Environmental Factors

Authors: Samuel N. Forjuoh, Marcia G. Ory, Jaewoong Won, Samuel D. Towne, Suojin Wang, Chanam Lee

Abstract:

The public health burden of obesity is well established, as is the influence of physical activity (PA) on the health and wellness of individuals who are obese. This study examined the influence of selected demographic, health, and socio-environmental factors on the walking behaviors of middle-aged and older overweight and obese adults. Online and paper surveys were administered to community-dwelling overweight and obese adults aged ≥ 50 years residing in four cities in central Texas and seen by a family physician in the primary care clinic from October 2013 to June 2014. Descriptive statistics were used to characterize participants’ anthropometric and demographic data as well as their health conditions and walking, socio-environmental, and more broadly defined PA behaviors. Pearson chi-square tests were then used to assess differences between participants who reported walking the recommended ≥ 150 minutes for any purpose in a typical week, as a proxy for meeting the U.S. Centers for Disease Control and Prevention’s PA guidelines, and those who did not. Finally, logistic regression was used to predict walking the recommended ≥ 150 minutes for any purpose, controlling for covariates. The analysis was conducted in 2016. Of the total sample (n=253, survey response rate of 6.8%), the majority were non-Hispanic white (81.7%), married (74.5%), male (53.5%), and reported an annual household income of ≥ $50,000 (65.7%). Approximately half were employed (49.6%) or had at least a college degree (51.8%). Slightly more than 1 in 5 (n=57, 22.5%) reported walking the recommended ≥ 150 minutes for any purpose in a typical week. The strongest predictors of walking the recommended ≥ 150 minutes for any purpose in a typical week in adjusted analysis were related to education and a highly favorable perception of the neighborhood environment. 
Compared to those with a high school diploma or some college, participants with at least a college degree were five times as likely to walk the recommended ≥ 150 minutes for any purpose (OR=5.55, 95% CI=1.79-17.25). Walking the recommended ≥ 150 minutes for any purpose was significantly associated with participants who disagreed that there were many distracted drivers (e.g., on the cell phone while driving) in their neighborhood (OR=4.08, 95% CI=1.47-11.36) and those who agreed that there are sidewalks or protected walkways (e.g., walking trails) in their neighborhood (OR=3.55, 95% CI=1.10-11.49). Those employed were less likely to walk the recommended ≥ 150 minutes for any purpose compared to those unemployed (OR=0.31, 95% CI=0.11-0.85) as were those who reported some difficulty walking for a quarter of a mile (OR=0.19, 95% CI=0.05-0.77). Other socio-environmental factors such as having care-giver responsibilities for elders, someone to walk with, or a dog in the household as well as Walk Score™ were not significantly associated with walking the recommended ≥ 150 minutes for any purpose in a typical week. Neighborhood perception appears to be an important factor associated with the walking behaviors of middle-aged and older overweight and obese individuals. Enhancing the neighborhood environment (e.g., providing walking trails) may promote walking among these individuals.
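The odds ratios reported above come from a logistic regression of the binary walking outcome on the covariates, with OR = exp(β) for each coefficient. A minimal sketch of how such odds ratios are obtained, fitted by Newton-Raphson on synthetic data (the predictor, effect size, and sample here are hypothetical, not the study's):

```python
import numpy as np

def logistic_odds_ratios(X, y, n_iter=25):
    """Fit logistic regression by Newton-Raphson and return odds ratios.

    X: (n, p) design matrix WITHOUT an intercept column; y: 0/1 outcomes.
    Returns exp(beta) for each predictor (intercept excluded).
    """
    Xd = np.column_stack([np.ones(len(X)), X])  # prepend intercept
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))  # predicted probabilities
        W = p * (1.0 - p)                     # IRLS weights
        H = Xd.T @ (Xd * W[:, None])          # Hessian of the log-likelihood
        grad = Xd.T @ (y - p)                 # score vector
        beta += np.linalg.solve(H, grad)      # Newton step
    return np.exp(beta[1:])                   # odds ratios

# Hypothetical illustration: walking more common among college graduates
rng = np.random.default_rng(0)
college = rng.integers(0, 2, 500)            # 1 = college degree
logit = -2.0 + 1.7 * college                 # assumed true log-odds
walks = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(float)
print(logistic_odds_ratios(np.column_stack([college]), walks))
```

An OR above 1 (here roughly exp(1.7) ≈ 5.5, up to sampling noise) indicates the predictor is associated with higher odds of meeting the walking guideline, as in the education finding above.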

Keywords: determinants of walking, obesity, older adults, physical activity

Procedia PDF Downloads 258
1268 The Significance of Picture Mining in the Fashion and Design as a New Research Method

Authors: Katsue Edo, Yu Hiroi

Abstract:

Increasing attention has been paid to using pictures and photographs in social science research since the beginning of the 21st century. Meanwhile, we have been studying the usefulness of Picture Mining, one of the new approaches to such picture-based research. Picture Mining is an exploratory research analysis method that extracts useful information from pictures, photographs, and static or moving images. It is often compared with text mining methods. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, identified as technological and social changes (Edo et al. 2013). Low-priced digital cameras and smartphones, high information transmission speeds, low costs for transferring information, and the high performance and resolution of mobile phone cameras have changed people's photographing behavior. Consequently, most people in developing countries now have little resistance to taking and processing photographs. In these studies, the method of collecting data from respondents is often called 'participant-generated photography' or 'respondent-generated visual imagery', which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). However, there are few systematic and conceptual studies that support the significance of these methods. In recent years we have worked to conceptualize these picture-based research methods and formalize theoretical findings (Edo et al. 2014). Inductively and through case studies, we have identified the fields where Picture Mining is most effective: 1) research in consumer and customer lifestyles; 2) new product development; 3) research in fashion and design. 
Though we have found that Picture Mining will be useful in these fields and areas, we must verify these assumptions. In this study, we focus on the field of fashion and design to determine whether Picture Mining methods are really reliable in this area. To this end, we conducted an empirical study of respondents' attitudes and behavior concerning pictures and photographs. We compared attitudes and picture-taking behavior toward fashion with those toward meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Respondents take pictures of fashion, and upload them to sites such as Facebook and Instagram, less often than pictures of meals and food because of the difficulty of taking them. We conclude that we should be more careful in analyzing pictures in the fashion area, for some bias may still exist even though the environment surrounding pictures has changed drastically in recent years.

Keywords: empirical research, fashion and design, Picture Mining, qualitative research

Procedia PDF Downloads 362
1267 Innovation Management in E-Health Care: The Implementation of New Technologies for Health Care in Europe and the USA

Authors: Dariusz M. Trzmielak, William Bradley Zehner, Elin Oftedal, Ilona Lipka-Matusiak

Abstract:

The use of new technologies should create new value for all stakeholders in the healthcare system. The article focuses on demonstrating that technologies or products typically enable new functionality, a higher standard of service, or a higher level of knowledge and competence for clinicians. It also highlights the key benefits that can be achieved through the use of artificial intelligence, such as relieving clinicians of many tasks and enabling the expansion and greater specialisation of healthcare services. The comparative analysis allowed the authors to create a classification of new technologies in e-health according to health needs and benefits for patients, doctors, and healthcare systems, i.e., the main stakeholders in the implementation of new technologies and products in healthcare. The added value of the development of new technologies in healthcare is assessed. The work is both theoretical and practical in nature. The primary research methods are bibliographic analysis and analysis of research data and the market potential of new solutions for healthcare organisations. The bibliographic analysis is complemented by the authors' case studies of implemented technologies, mostly based on artificial intelligence or telemedicine. In the past, patients were often passive recipients, the end point of the service delivery system, rather than stakeholders in the system. One of the dangers of powerful new technologies is that patients may become even more marginalised. Healthcare will be provided and delivered in an increasingly administrative, programmed way. The doctor may also become a robot, carrying out programmed activities - using 'non-human services'. An alternative approach is to put the patient at the centre, using technologies, products, and services that allow them to design and control technologies based on their own needs. 
An important contribution to the discussion is to open up the different dimensions of the user (carer and patient) and to make them aware of healthcare units implementing new technologies. The authors of this article outline the importance of three types of patients in the successful implementation of new medical solutions. The impact of implemented technologies is analysed based on: 1) "Informed users", who are able to use the technology based on a better understanding of it; 2) "Engaged users" who play an active role in the broader healthcare system as a result of the technology; 3) "Innovative users" who bring their own ideas to the table based on a deeper understanding of healthcare issues. The authors' research hypothesis is that the distinction between informed, engaged, and innovative users has an impact on the perceived and actual quality of healthcare services. The analysis is based on case studies of new solutions implemented in different medical centres. In addition, based on the observations of the Polish author, who is a manager at the largest medical research institute in Poland, with analytical input from American and Norwegian partners, the added value of the implementations for patients, clinicians, and the healthcare system will be demonstrated.

Keywords: innovation, management, medicine, e-health, artificial intelligence

Procedia PDF Downloads 19
1266 Mechanism of Veneer Colouring for Production of Multilaminar Veneer from Plantation-Grown Eucalyptus Globulus

Authors: Ngoc Nguyen

Abstract:

A large plantation estate of Eucalyptus globulus has been established and grown to produce pulpwood. This resource is not well suited to the production of decorative products, principally due to low wood grades and a "dull" appearance, but many trials have already been undertaken on the production of veneer and veneer-based engineered wood products, such as plywood and laminated veneer lumber (LVL). The manufacture of veneer-based products has recently been identified as an unprecedented opportunity to promote higher-value utilisation of plantation resources. However, many uncertainties remain regarding the impacts of the inferior wood quality of young plantation trees on product recovery and value, and with respect to optimal processing techniques. Moreover, the quality of veneer and veneer-based products is far from optimal, as the trees are young and have small diameters, and the veneers show significant colour variation, which affects the added value of the final products. Developing production methods that enhance the appearance of low-quality veneer would provide great potential for the production of high-value wood products such as furniture, joinery, flooring, and other appearance products. One method of enhancing the appearance of low-quality veneer, developed in Italy, involves the production of multilaminar veneer, also named "reconstructed veneer". An important stage of multilaminar production is colouring the veneer, which can be achieved by dyeing it with dyes of different colours depending on the type of appearance product, its design, and market demand. Although veneer dyeing technology is well advanced in Italy, it has focused on poplar veneer from plantations whose wood is characterized by low density, even colour, few defects, and high permeability. Conversely, the majority of plantation eucalypts have medium to high density, many defects, uneven colour, and low permeability. 
Therefore, a detailed study is required to develop dyeing methods suitable for colouring eucalypt veneers. A brown reactive dye is used for the veneer colouring process. Veneers from sapwood and heartwood at two moisture content levels are used in the colouring experiments: green veneer and veneer dried to 12% MC. All samples are treated prior to dyeing. Both soaking (dipping) and vacuum-pressure methods are used in the study so that the results can be compared and the most efficient veneer dyeing method selected. To date, colour measurements using the CIELAB colour system have shown significant differences in the colour of the undyed veneers produced from heartwood. According to the colour measurements, the colour became moderately darker with increasing sodium chloride concentration, compared to control samples. It is too early to identify the most suitable dye solution, as variables such as dye concentration, dyeing temperature, and dyeing time have not yet been tested. Once all trials have been completed using the optimal veneer colouring parameters, the dye will be applied with and without a UV absorbent.
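Colour differences in the CIELAB system, as used for the veneer measurements above, are commonly summarized by the CIE76 ΔE*ab distance between two (L*, a*, b*) readings. A short sketch with hypothetical readings (the numbers are illustrative only, not the study's data):

```python
import math

def delta_e_cielab(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two
    (L*, a*, b*) measurements."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings: undyed control vs. dyed heartwood veneer
control = (62.3, 8.1, 21.4)   # lighter, yellowish-brown
dyed = (48.7, 10.9, 18.2)     # darker after brown reactive dye
print(delta_e_cielab(control, dyed))
```

A lower L* indicates a darker sample, so the "moderately darker" result above corresponds to a drop in L* and a clearly visible ΔE (differences above roughly 2-3 units are generally perceptible).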

Keywords: Eucalyptus globulus, veneer colouring/dyeing, multilaminar veneer, reactive dye

Procedia PDF Downloads 348
1265 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model accounts for the sensitivity of noisy samples and handles imprecision in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. Different input points make unique contributions to the decision surface. The algorithm is parallelized to reduce training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were carried out in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. 
The effect of variability in the prediction and generalization of PFRSVM is examined with respect to values of the parameter C. The method effectively resolves outlier effects and class imbalance and overlap, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
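Two of the ingredients named above can be sketched concisely: the hyperbolic tangent (sigmoid) kernel, and a membership function built from the center and radius of each class that down-weights likely noisy samples. A minimal NumPy sketch, assuming a common distance-based fuzzy membership variant (the paper's exact formulation may differ, e.g. in its kernelized form):

```python
import numpy as np

def tanh_kernel(X, Y, gamma=0.05, coef0=0.0):
    """Hyperbolic tangent (sigmoid) kernel K(x, y) = tanh(gamma * <x, y> + c)."""
    return np.tanh(gamma * (X @ Y.T) + coef0)

def fuzzy_membership(X, y, delta=1e-3):
    """Distance-based fuzzy membership: samples far from their class
    center (likely noise or outliers) receive lower weight, so they
    contribute less to the decision surface during training."""
    m = np.empty(len(X))
    for cls in np.unique(y):
        idx = y == cls
        center = X[idx].mean(axis=0)                  # class center
        d = np.linalg.norm(X[idx] - center, axis=1)   # distance to center
        r = d.max() + 1e-12                           # class radius
        m[idx] = 1.0 - d / (r + delta)                # membership in (0, 1]
    return m
```

In a fuzzy SVM, these memberships typically scale each sample's slack penalty (effectively a per-sample C), which is how noisy points are prevented from dominating the margin.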

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 489
1264 Development of a Culturally Safe Wellbeing Intervention Tool for and with the Inuit in Quebec

Authors: Liliana Gomez Cardona, Echo Parent-Racine, Joy Outerbridge, Arlene Laliberté, Outi Linnaranta

Abstract:

Suicide rates among Inuit in Nunavik are six to eleven times higher than the Canadian average. Colonization, religious missions, residential schools, as well as economic and political marginalization, are factors that have challenged the well-being and mental health of these populations. In psychiatry, screening for mental illness is often done using questionnaires in which patients are asked to report how often they have certain symptoms. However, the Indigenous view of mental wellbeing may not fit well with this approach. Moreover, biomedical treatments do not always meet the needs of Indigenous peoples because they do not account for the culture and traditional healing methods that persist in many communities. The objectives were to assess whether the symptom questionnaires commonly used in psychiatry are appropriate and culturally safe for the Inuit in Quebec, and to identify the most appropriate tool to assess and promote wellbeing, following the process necessary to improve its cultural sensitivity and safety for the Inuit population. This is a qualitative, collaborative, and participatory action research project that respects First Nations and Inuit protocols and the principles of ownership, control, access, and possession (OCAP). Data collection was based on five focus groups with stakeholders working with these populations and members of Indigenous communities. A thematic analysis of the collected data was conducted through an advisory group that led a revision of the content, use, and cultural and conceptual relevance of the instruments. The questionnaires measuring psychiatric symptoms face significant limitations in the local Indigenous context. We present the factors that make these tools not relevant among Inuit. 
Although the scale called Growth and Empowerment Measure (GEM) was originally developed among Indigenous in Australia, the Inuit in Quebec found that this tool comprehends critical aspects of their mental health and wellbeing more respectfully and accurately than questionnaires focused on measuring symptoms. We document the process of cultural adaptation of this tool which was supported by community members to create a culturally safe tool that helps in resilience and empowerment. The cultural adaptation of the GEM provides valuable information about the factors affecting wellbeing and contributes to mental health promotion. This process improves mental health services by giving health care providers useful information about the Inuit population and their clients. We believe that integrating this tool in interventions can help create a bridge to improve communication between the Indigenous cultural perspective of the patient and the biomedical view of health care providers. Further work is needed to confirm the clinical utility of this tool in psychological and psychiatric intervention along with social and community services.

Keywords: cultural adaptation, cultural safety, empowerment, Inuit, mental health, Nunavik, resiliency

Procedia PDF Downloads 118
1263 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, which is the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset poses a protein regression problem, while the second poses a variety classification problem. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. 
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
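Two of the chemometric preprocessing steps listed, standard normal variate (SNV) and detrending, are simple per-spectrum transforms that can be sketched in a few lines of NumPy (Savitzky-Golay smoothing, also listed, is typically applied with `scipy.signal.savgol_filter` and is omitted here):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row)
    to zero mean and unit standard deviation, reducing multiplicative
    scatter effects between samples."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def detrend(spectra):
    """Remove a least-squares straight line from each spectrum,
    suppressing baseline drift across the wavelength axis."""
    n = spectra.shape[1]
    x = np.arange(n)
    coeffs = np.polyfit(x, spectra.T, deg=1)  # one (slope, intercept) per row
    trend = np.outer(coeffs[0], x) + coeffs[1][:, None]
    return spectra - trend
```

In a chemometric pipeline, such transforms are applied to each pixel spectrum before fitting PLS-R or an ANN; part of the appeal of CNNs noted above is that they can often learn comparable corrections directly from the raw spectra.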

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 98
1262 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning

Authors: Jiahao Tian, Michael D. Porter

Abstract:

Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. An accurate temporal estimate of the crime rate would be valuable for achieving this goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression, using an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data: data for which the event time is known only to lie within an interval rather than being observed exactly. This type of data is prevalent in criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction. 
In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach can provide not only an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in allocating limited resources, either by improving existing patrol plans based on the discovered day-of-week clusters or by directing any extra resources that become available.
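The core E-step/M-step idea for interval-censored counts can be sketched as follows: each event whose time is known only to an interval is fractionally allocated across the bins of that interval in proportion to the current intensity, and the intensity is then re-estimated from the expected counts. This is a bare sketch that omits the paper's smoothness penalties over time of day and day of week:

```python
import numpy as np

def em_intensity(intervals, n_bins, n_iter=100):
    """EM estimate of a piecewise-constant Poisson intensity over
    `n_bins` time bins from interval-censored events.

    intervals: list of (start_bin, end_bin) pairs; each event is known
    only to have occurred somewhere in [start_bin, end_bin).
    """
    lam = np.ones(n_bins)                     # flat initial intensity
    for _ in range(n_iter):
        expected = np.zeros(n_bins)
        for a, b in intervals:
            w = lam[a:b] / lam[a:b].sum()     # E-step: allocate the event
            expected[a:b] += w                #   across its interval
        lam = np.maximum(expected, 1e-12)     # M-step: rate = expected count
    return lam
```

With the penalties included, the M-step would instead maximize a penalized likelihood that smooths adjacent time-of-day bins and ties together days of the week that share a pattern, which is what enables the day-of-week clustering described above.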

Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation

Procedia PDF Downloads 64