Search results for: auto.arima
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 360


90 Automated Fact-Checking by Incorporating Contextual Knowledge and Multi-Faceted Search

Authors: Wenbo Wang, Yi-Fang Brook Wu

Abstract:

The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, the state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study introduces a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context using general-purpose, comprehensive, and authoritative data; 2) developing a search function to automatically select relevant, new, and credible references; 3) focusing on the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references with a knowledge base, combined with the multi-head attention technique, improves fact-checking performance, and 2) SAC with auto-selected references outperforms existing fact-checking approaches that use manually selected references. Future directions of this study include I) exploring knowledge graphs in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and II) exploring semantic relations in claims and references to further enhance fact-checking.
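The multi-head attention step named in the abstract can be illustrated generically: claim tokens attend over reference tokens via scaled dot-product attention. This is a minimal sketch with random placeholder weights and hypothetical names, not the SAC implementation (a real model learns the projection matrices):

```python
import numpy as np

def multi_head_attention(claim, reference, num_heads=4, seed=0):
    """Claim tokens attend over reference tokens.

    claim:     (Lc, d) matrix of claim token embeddings
    reference: (Lr, d) matrix of reference token embeddings
    Projection weights are random placeholders for illustration.
    """
    rng = np.random.default_rng(seed)
    Lc, d = claim.shape
    dh = d // num_heads                               # per-head width
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = claim @ Wq, reference @ Wk, reference @ Wv
    heads = []
    for h in range(num_heads):
        q = Q[:, h * dh:(h + 1) * dh]
        k = K[:, h * dh:(h + 1) * dh]
        v = V[:, h * dh:(h + 1) * dh]
        scores = q @ k.T / np.sqrt(dh)                # (Lc, Lr)
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)             # row-wise softmax
        heads.append(w @ v)                           # (Lc, dh)
    return np.concatenate(heads, axis=1)              # (Lc, d)

claim = np.ones((5, 16))        # 5 claim tokens, 16-dim embeddings
reference = np.ones((9, 16))    # 9 reference tokens
out = multi_head_attention(claim, reference)
print(out.shape)  # (5, 16)
```

Each claim token ends up as an attention-weighted mixture of reference tokens, which is the sense in which the model "focuses on the important parts" of the reference.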

Keywords: fact checking, claim verification, deep learning, natural language processing

Procedia PDF Downloads 32
89 Relevance of Brain Stem Evoked Potential in Diagnosis of Central Demyelination in Guillain Barre’ Syndrome

Authors: Geetanjali Sharma

Abstract:

Guillain Barre’ syndrome (GBS) is an auto-immune mediated demyelinating polyradiculoneuropathy. Clinical features include progressive symmetrical ascending muscle weakness of more than two limbs and areflexia, with or without sensory, autonomic, and brainstem abnormalities. The purpose of this study was to determine subclinical neurological changes of the CNS in GBS and to establish the presence of central demyelination in GBS. The study was prospective and was conducted in the Department of Physiology, Pt. B. D. Sharma Post-graduate Institute of Medical Sciences, University of Health Sciences, Rohtak, Haryana, India, to detect early central demyelination in clinically diagnosed patients of GBS. These patients were referred from the Department of Medicine of our Institute to our department for electro-diagnostic evaluation. The study group comprised 40 subjects (20 clinically diagnosed GBS patients and 20 healthy individuals as controls) aged between 6 and 65 years. Brainstem auditory evoked potentials (BAEP) were recorded in both groups using an RMS EMG EP Mark II machine. BAEP parameters included the latencies of waves I to V and the inter-peak latencies I-III, III-V, and I-V. A statistically significant increase in absolute peak and inter-peak latencies in the GBS group as compared with the control group was noted. The evoked potential results reflect impairment of auditory pathways, probably due to focal demyelination in Schwann cell derived myelin sheaths that cover the extramedullary portion of the auditory nerves. Early detection of these sub-clinical abnormalities is important, as timely intervention reduces morbidity.

Keywords: brainstem, demyelination, evoked potential, Guillain Barre’

Procedia PDF Downloads 275
88 The Role of a Novel DEAD-Box Containing Protein in NLRP3 Inflammasome Activation

Authors: Yi-Hui Lai, Chih-Hsiang Yang, Li-Chung Hsu

Abstract:

The inflammasome is a protein complex that modulates caspase-1 activity, resulting in proteolytic cleavage of proinflammatory cytokines, such as IL-1β and IL-18, into their bioactive forms. It has been shown that inflammasomes play a crucial role in the clearance of pathogenic infection and in tissue repair. However, dysregulated inflammasome activation contributes to a wide range of human diseases, such as cancers and auto-inflammatory diseases. Yet, the regulation of NLRP3 inflammasome activation remains largely unknown. We discovered a novel DEAD-box protein, whose biological function has not been reported, that not only negatively regulates NLRP3 inflammasome activation by interfering with NLRP3 inflammasome assembly and cellular localization but also mitigates pyroptosis upon pathogen invasion. It is the first DEAD-box protein shown to be involved in modulation of inflammasome activation. In our study, we found that caspase-1 activation and mature IL-1β production were largely enhanced upon LPS challenge in DEAD box-containing protein-deleted THP-1 macrophages and bone marrow-derived macrophages (BMDMs). In addition, this DEAD box-containing protein migrates from the nucleus to the cytoplasm upon LPS stimulation, which is required for its inhibitory role in NLRP3 inflammasome activation. The DEAD box-containing protein specifically interacted with the LRR motif of NLRP3 via its DEAD domain. Furthermore, given the crucial role of the NLRP3 LRR domain in the recruitment of NLRP3 to mitochondria and in binding to its adaptor ASC, we found that the interaction of NLRP3 and ASC was downregulated in the presence of the DEAD box-containing protein. In addition to the mechanistic study, we also found that this DEAD box protein protects host cells from inflammasome-triggered cell death in response to a broad range of pathogens, such as Candida albicans and Streptococcus pneumoniae, involved in nosocomial infections and severe febrile shock.
Collectively, our results suggest that this novel DEAD box molecule might be a key therapeutic target for various infectious diseases.

Keywords: inflammasome, inflammation, innate immunity, pyroptosis

Procedia PDF Downloads 256
87 Economic Growth: The Nexus of Oil Price Volatility and Renewable Energy Resources among Selected Developed and Developing Economies

Authors: Muhammad Siddique, Volodymyr Lugovskyy

Abstract:

This paper explores how nations might mitigate the unfavorable impacts of oil price volatility on economic growth by switching to renewable energy sources. The impacts of uncertain factor prices on economic activity are examined by looking at the Realized Volatility (RV) of oil prices rather than the more traditional method of looking at oil price shocks. The United States of America (USA), China, India, the United Kingdom (UK), Germany, Malaysia, and Pakistan are all included to round out the traditional literature's examination of selected nations, which focuses on oil-importing and oil-exporting economies. Granger Causality Tests (GCT), Impulse Response Functions (IRF), and Variance Decompositions (VD) demonstrate that, in a Vector Auto-Regressive (VAR) setting, the negative impacts of oil price volatility extend beyond what can be explained by oil price shocks alone for all of the nations in the sample. Nations differ in their vulnerability to changes in oil prices, and sectoral composition and the energy mix may play a role in these differences. The conventional method, which only takes into account whether a country is a net oil importer or exporter, is inadequate. The potential economic advantages of initiatives to decouple the macroeconomy from volatile commodity markets are shown through simulations of volatility shocks under alternative energy mixes (with greater proportions of renewables). It is determined that in developing countries like Pakistan, increasing the use of renewable energy sources might lessen an economy's sensitivity to changes in oil prices, although a country-specific study is required to identify particular policy actions. In sum, the research provides an innovative justification for mitigating economic growth's dependence on stable oil prices in our sample countries.
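In its simplest daily-return form, the Realized Volatility measure used above is straightforward to compute. A minimal sketch follows; the 21-day window and 252-day annualization constant are common conventions assumed here, not the paper's specification:

```python
import numpy as np

def realized_volatility(prices, window=21):
    """Annualized realized volatility (RV) over a rolling window:
    the square root of the average squared daily log return,
    scaled by 252 trading days per year."""
    prices = np.asarray(prices, dtype=float)
    r = np.diff(np.log(prices))                        # daily log returns
    return np.array([np.sqrt(252 * np.mean(r[i - window:i] ** 2))
                     for i in range(window, len(r) + 1)])

# Toy check: a flat price series has zero realized volatility.
flat_rv = realized_volatility(np.full(60, 70.0))
print(flat_rv.max())  # 0.0
```

Feeding an RV series like this into a VAR alongside output growth is what lets the analysis separate volatility effects from level (shock) effects.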

Keywords: oil price volatility, renewable energy, economic growth, developed and developing economies

Procedia PDF Downloads 55
86 Plasma Engineered Nanorough Substrates for Stem Cells in vitro Culture

Authors: Melanie Macgregor-Ramiasa, Isabel Hopp, Patricia Murray, Krasimir Vasilev

Abstract:

Stem cell based therapies are one of the greatest promises of new-age medicine due to their potential to help cure dreaded conditions such as cancer, diabetes, and even auto-immune diseases. However, establishing suitable in vitro culture materials that allow control of the fate of stem cells remains a challenge. Amongst the factors influencing stem cell behavior, substrate chemistry and nanotopography are particularly critical. In this work, we used plasma assisted surface modification methods to produce model substrates with tailored nanotopography and controlled chemistry. Three different sizes of gold nanoparticles were bound to amine rich plasma polymer layers to produce homogeneous and gradient surface nanotopographies. The outer chemistry of the substrate was kept constant across all substrates by depositing a thin layer of our patented biocompatible polyoxazoline plasma polymer on top of the nanofeatures. For the first time, protein adsorption and stem cell behaviour (mouse kidney stem cells and mesenchymal stem cells) were evaluated on nanorough plasma deposited polyoxazoline thin films. Compared to other nitrogen rich coatings, the polyoxazoline plasma polymer supports the covalent binding of proteins. Moderate surface nanoroughness, in both size and density, triggers cell proliferation. In association with the polyoxazoline coating, cell proliferation is further enhanced on nanorough substrates. Results are discussed in terms of substrate wetting properties. These findings provide valuable insights into the mechanisms governing the interactions between stem cells and their growth support.

Keywords: nanotopography, stem cells, differentiation, plasma polymer, oxazoline, gold nanoparticles

Procedia PDF Downloads 246
85 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution

Authors: Nikolay P. Brayanov, Anna V. Stoynova

Abstract:

The model-based development approach is gaining support and acceptance. Its higher abstraction level simplifies system description, allowing domain experts to do their best work without particular knowledge of programming. The different levels of simulation support rapid prototyping, verifying, and validating the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, which brings extra automation to the expensive device certification process, especially in software qualification. Using it, some companies report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication assesses the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using The MathWorks, Inc. tools. The model, created with Simulink, Stateflow, and Matlab, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of automatically generated embedded code with manually developed code. The measurements show that, in general, the code generated by the automatic approach is no worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.

Keywords: embedded code generation, embedded C code quality, embedded systems, model-based development

Procedia PDF Downloads 217
84 Sustainable Approach to Fabricate Titanium Nitride Film on Steel Substrate by Using Automotive Plastics Waste

Authors: Songyan Yin, Ravindra Rajarao, Veena Sahajwalla

Abstract:

Automotive plastics waste (widely known as auto-fluff or ASR) is a complicated mixture of various plastics incorporating a wide range of additives and fillers, such as titanium dioxide, magnesium oxide, and silicon dioxide. Automotive plastics waste is difficult to recycle, and its landfilling poses a significant threat to the environment. In this study, a sustainable technology to fabricate a protective nanoscale TiN thin film on a steel substrate surface, using automotive waste plastics as the titanium and carbon resources, is suggested. When automotive plastics waste is heated with steel at elevated temperature in a nitrogen atmosphere, the titanium dioxide contained in ASR undergoes carbothermal reduction and nitridation reactions on the surface of the steel substrate, forming a nanoscale thin film of titanium nitride on the steel surface. The synthesis of the TiN film on the steel substrate by this technology was confirmed by X-ray photoelectron spectrometry, high resolution X-ray diffraction, field emission scanning electron microscopy, high resolution transmission electron microscopy fitted with energy dispersive X-ray spectroscopy, and inductively coupled plasma mass spectrometry. This sustainably fabricated TiN film was verified to be dense and well crystallized, and it could provide good oxidation resistance to the steel substrate. This sustainable fabrication technology is practical, reproducible, and of great economic and environmental benefit. It not only reduces the fabrication cost of TiN coating on steel surfaces, but also provides a sustainable environmental solution for recycling automotive plastics waste. Moreover, high value copper droplets and char residues were also extracted from this unique fabrication process.

Keywords: automotive plastics waste, carbothermal reduction and nitridation, sustainable, TiN film

Procedia PDF Downloads 363
83 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures for the modeling of time-to-event data. For complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract the complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
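The weighted cumulative incidence function builds on the ordinary cumulative incidence function (CIF) for competing risks. A minimal sketch of the standard nonparametric (Aalen-Johansen type) CIF estimator follows; this is a textbook baseline that the paper's learned RIW weights would modify, not the CmpXRnnSurv_AE component itself:

```python
import numpy as np

def cumulative_incidence(times, events, cause, grid):
    """Nonparametric cumulative incidence for one competing cause.

    times:  event/censoring times
    events: 0 = censored, 1..K = cause of the observed event
    cause:  which competing cause to estimate the CIF for
    grid:   sorted time points at which to evaluate the CIF
    """
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    surv, cif = 1.0, 0.0        # event-free survival just before t; CIF so far
    out, gi = [], 0
    for i, (t, e) in enumerate(zip(times, events)):
        while gi < len(grid) and grid[gi] < t:
            out.append(cif); gi += 1
        at_risk = n - i
        if e == cause:
            cif += surv / at_risk          # increment by S(t-) * dN_k / Y
        if e != 0:
            surv *= (1 - 1 / at_risk)      # update overall survival
    while gi < len(grid):
        out.append(cif); gi += 1
    return np.array(out)

# With a single cause and no censoring, the CIF eventually reaches 1.
cif_one = cumulative_incidence([1, 2, 3, 4], [1, 1, 1, 1], cause=1, grid=[0.5, 5.0])
# With two competing causes, each cause's CIF captures only its share.
cif_two = cumulative_incidence([1, 2], [1, 2], cause=1, grid=[3.0])
```

The WCIF in the paper replaces the uniform increments here with attention-derived risk weights; this baseline is the quantity those weights act on.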

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)

Procedia PDF Downloads 54
82 Macroeconomic Policy Coordination and Economic Growth Uncertainty in Nigeria

Authors: Ephraim Ugwu, Christopher Ehinomen

Abstract:

Despite efforts by the Nigerian government to harmonize macroeconomic policy implementation by establishing various committees to resolve disputes between the fiscal and monetary authorities, it is still evident that the federal government has continued its expansionary policy by increasing spending, thus creating a huge budget deficit. This study evaluates the effect of macroeconomic policy coordination on economic growth uncertainty in Nigeria from 1980 to 2020. Employing the autoregressive distributed lag (ARDL) bound testing procedure, the empirical results show that the error correction term, ECM(-1), has a negative sign and is statistically significant, with a t-statistic of -5.612882. Therefore, the gap between the long run equilibrium value and the actual value of the dependent variable is corrected with a speed of adjustment equal to 77% yearly. The long run coefficient results show that, other things remaining the same (ceteris paribus), the intercept term indicates that economic growth uncertainty will continue to reduce by 7.32%. The coefficient of the fiscal policy variable, PUBEXP, is positive and statistically significant, implying that as government expenditure increases by 1%, economic growth uncertainty increases by 1.67%. The coefficient of the monetary policy variable, MS, is positive but statistically insignificant. The coefficients of the merchandise trade variable, TRADE, and the exchange rate, EXR, are negative and statistically significant, indicating that as the country’s merchandise trade and the rate of exchange increase by 1%, economic growth uncertainty reduces by 0.38% and 0.06%, respectively. This study, therefore, advocates proper coordination of monetary, fiscal, and exchange rate policies in order to actualize the goal of achieving stable economic growth.
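The reported speed of adjustment can be read mechanically: with an error-correction coefficient of about -0.77, roughly 77% of the previous year's deviation from long-run equilibrium is eliminated each year. A tiny numeric illustration (the starting gap is an arbitrary illustrative value, not the study's data):

```python
# Error-correction dynamics: each year, 77% of the remaining deviation
# from long-run equilibrium is corrected away.
speed = 0.77
gap = 1.0                       # illustrative initial deviation
path = []
for year in range(5):
    gap *= (1 - speed)          # deviation remaining after this year
    path.append(round(gap, 4))
print(path)  # [0.23, 0.0529, 0.0122, 0.0028, 0.0006]
```

The deviation decays geometrically, which is why an ECM coefficient this large implies convergence back to equilibrium within a few years.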

Keywords: macroeconomic, policy coordination, growth uncertainty, ARDL, Nigeria

Procedia PDF Downloads 57
81 LaMn₁₋ₓNiₓO₃ Perovskites as Oxygen Carriers for Chemical Looping Partial Oxidation of Methane

Authors: Xianglei Yin, Shen Wang, Baoyi Wang, Laihong Shen

Abstract:

Chemical looping partial oxidation of methane (CLPOM) is a novel technology to produce high-quality syngas with an autothermal process and low equipment investment. The development of oxygen carriers is important for improving CLPOM performance. In this work, the effect of the nickel-substitution proportion on the performance of LaMn₁₋ₓNiₓO₃₊δ perovskites for CLPOM was studied in terms of reactivity, syngas selectivity, resistance towards carbon deposition, and thermal stability in the cyclic redox process. The LaMn₁₋ₓNiₓO₃ perovskite oxides with x = 0, 0.1, and 0.2 were prepared by the sol-gel method. The performance of the LaMn₁₋ₓNiₓO₃₊δ perovskites for CLPOM was investigated through XRD, H₂-TPR, and XPS characterization and fixed-bed experiments. The characterization and test results suggest that nickel doping enhances the generation rate of syngas, leading to high syngas yield, methane conversion, and syngas selectivity. This is attributed to the fact that the introduction of nickel provides active sites that promote methane activation on the surface and adds oxygen vacancies that accelerate the migration of oxygen anions in the bulk of the oxygen carrier particles. On the other hand, the introduction of nickel causes carbon deposition to occur earlier. The best substitution proportion of nickel is x = 0.1, and LaMn₀.₉Ni₀.₁O₃₊δ could produce high-quality syngas with a yield of 3.54 mmol·g⁻¹, methane conversion of 80.7%, and CO selectivity of 84.8% at 850℃. In addition, the LaMn₀.₉Ni₀.₁O₃₊δ oxygen carrier exhibits superior and stable performance in the cyclic redox process.

Keywords: chemical looping partial oxidation of methane, LaMnO₃₊δ, Ni doping, syngas, carbon deposition

Procedia PDF Downloads 70
80 Compactness and Quality of Life: Applying Regression Analysis on American Cities

Authors: Hsi-Chuan Wang, Hongxi Yin

Abstract:

Compactness has been proposed as a type of sustainable urban form globally. However, its meanings and measurements diverge across interpretations; moreover, since compactness was proposed to counter auto culture and urban sprawl in the developed countries, voices have emerged asking to rethink the suitability of compactness in the developing countries. On this understanding, Quality of Life (QOL) has been suggested as a good way to show the overall benefit of compactness. Against this background, two subjects were targeted for discussion in this paper: (I) the meaning and feasibility of compactness in the developing versus the developed countries, and (II) the interaction between compactness and QOL. This paper argues that compactness should not be considered a universal principle for cities of all kinds, but rather an ideal concept for urban designers and planners to consider throughout local practices. It first reviewed the benefits of both compactness and sprawl to uncover the features behind these urban forms, and then addressed the meaning and difficulty of adopting compactness in both the developing and developed countries. Second, positioning compactness as a ‘process’ along the transition from developing to developed countries, this paper applied both cross-sectional and longitudinal analysis to uncover (I) the relationship between compactness and QOL across 30 American cities and (II) the impact of ‘becoming compact’ on QOL across 8 identified American Urbanized Areas (UZAs). The findings indicated that higher compactness is linked to lower QOL among the compact cities, but to higher QOL among the sprawling cities. In addition, based upon the comparison between 2000 and 2010 for the 8 UZAs, their QOL improved during the transition from sprawling areas into compact ones, but the extent of the improvement differed greatly among areas.
Given these findings, compact development should be proposed as a general guideline leading contemporary sprawling cities in transition toward sustainable urbanism; however, to prevent externalities from damaging QOL through over-compactness, compact policy should be flexible enough to adjust a long-term roadmap for sustainable development.
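The reported pattern, lower QOL with compactness among already-compact cities but higher QOL among sprawling ones, is consistent with an inverted-U relationship that a quadratic cross-sectional regression can capture. A minimal sketch with synthetic, hypothetical city data (not the paper's 30-city dataset):

```python
import numpy as np

# Hypothetical cross-sectional data: a compactness index and a QOL
# index for six made-up cities (illustrative numbers only).
compactness = np.array([30., 45., 55., 70., 85., 95.])
qol         = np.array([62., 66., 70., 71., 69., 65.])

# Fit QOL = b0 + b1*C + b2*C^2 by ordinary least squares; a negative
# b2 gives the inverted-U shape, with an interior QOL-maximizing level.
X = np.column_stack([np.ones_like(compactness), compactness, compactness**2])
beta, *_ = np.linalg.lstsq(X, qol, rcond=None)
peak = -beta[1] / (2 * beta[2])   # compactness level maximizing fitted QOL
```

The sign of the quadratic term, and where the fitted peak falls relative to a city's current compactness, is what distinguishes "more compactness helps" from "over-compactness hurts".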

Keywords: compactness, quality of life, sprawl, sustainable urbanism

Procedia PDF Downloads 132
79 Relationships of Plasma Lipids, Lipoproteins and Cardiovascular Outcomes with Climatic Variations: A Large 8-Year Period Brazilian Study

Authors: Vanessa H. S. Zago, Ana Maria H. de Avila, Paula P. Costa, Welington Corozolla, Liriam S. Teixeira, Eliana C. de Faria

Abstract:

Objectives: The outcome of cardiovascular disease is affected by environment and climate. This study evaluated possible relationships between climatic and environmental changes and the occurrence of biological rhythms in serum lipids and lipoproteins in a large population sample in the city of Campinas, State of Sao Paulo, Brazil. In addition, it determined the temporal variations of death due to atherosclerotic events in Campinas during the time window examined. Methods: A large 8-year retrospective study was carried out to evaluate the lipid profiles of individuals attended at the University of Campinas (Unicamp). The study population comprised 27,543 individuals of both sexes and of all ages. Normolipidemic and dyslipidemic individuals, classified according to the Brazilian guidelines on dyslipidemias, participated in the study. For the same period, the temperature, relative humidity, and daily brightness records were obtained from the Centro de Pesquisas Meteorologicas e Climaticas Aplicadas a Agricultura/Unicamp, and frequencies of death due to atherosclerotic events in Campinas were acquired from the Brazilian official database DATASUS, according to the International Classification of Diseases. Statistical analyses were performed using both Cosinor and ARIMA temporal analysis methods. For cross-correlation analysis between climatic and lipid parameters, cross-correlation functions were used. Results: Preliminary results indicated that rhythmicity was significant for LDL-C and HDL-C in both normolipidemic and dyslipidemic subjects (n = 11,892 and 15,651, respectively; both measures increasing in the winter and decreasing in the summer). On the other hand, in dyslipidemic subjects triglycerides increased in summer and decreased in winter, in contrast to normolipidemic ones, in whom triglycerides did not show rhythmicity.
The number of deaths due to atherosclerotic events showed significant rhythmicity, with maximum and minimum frequencies in winter and summer, respectively. Cross-correlation analyses showed that low humidity and temperature, higher thermal amplitude, and dark cycles are associated with increased levels of LDL-C and HDL-C during winter. In contrast, TG showed moderate cross-correlations with temperature and minimum humidity in the inverse direction: maximum temperature and humidity increased TG during the summer. Conclusions: This study showed a coincident rhythmicity between low temperatures and high concentrations of LDL-C and HDL-C and the number of deaths due to atherosclerotic cardiovascular events in individuals from the city of Campinas. The opposite behavior of cholesterol and TG suggests different physiological mechanisms in their metabolic modulation by changes in climate parameters. Thus, new analyses are underway to better elucidate these mechanisms, as well as variations in lipid concentrations in relation to climatic variations and their associations with atherosclerotic disease and death outcomes in Campinas.
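The Cosinor method used above fits a cosine with a known period by ordinary least squares, after linearizing the phase. A minimal single-component sketch; the synthetic monthly series and its parameter values are illustrative, not the Campinas data:

```python
import numpy as np

def cosinor_fit(t, y, period=12.0):
    """Single-component cosinor: y = M + A*cos(2*pi*t/period + phi),
    linearized as y = M + b*cos(wt) + g*sin(wt) and solved by OLS."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (M, b, g), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(b, g)          # A, half the peak-to-trough swing
    acrophase = np.arctan2(-g, b)       # phi, timing of the rhythm's peak
    return M, amplitude, acrophase

# Synthetic monthly LDL-C-like series: mesor 120, amplitude 10,
# peaking at month 6 of each 12-month cycle.
t = np.arange(96.0)                                 # 8 years, monthly
y = 120 + 10 * np.cos(2 * np.pi * (t - 6) / 12)
M, A, phi = cosinor_fit(t, y)
```

On noise-free data the fit recovers the mesor (M), amplitude (A), and acrophase exactly; with real lipid data the same regression yields the seasonal peak timing reported in the Results.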

Keywords: atherosclerosis, climatic variations, lipids and lipoproteins, associations

Procedia PDF Downloads 94
78 Assessments of Some Environmental Variables on Fisheries at Two Levels: Global and FAO Major Fishing Areas

Authors: Hyelim Park, Juan Martin Zorrilla

Abstract:

Climate change influences ocean ecosystem functioning widely and in various ways. The consequences of climate change for marine ecosystems include an increase in temperature and irregular behavior of some solute concentrations. These changes affect fisheries catches in several ways. Our aim is to assess the quantitative changes in fishery catches over time and express them through four environmental variables: Sea Surface Temperature (SST4) and the concentrations of Chlorophyll (CHL), Particulate Inorganic Carbon (PIC), and Particulate Organic Carbon (POC), at two spatial scales: Global and the nineteen FAO Major Fishing Areas divisions. Data collection was based on the FAO FishStatJ 2014 database as well as MODIS Aqua satellite observations from 2002 to 2012. Some data had to be corrected and interpolated using existing methods. As a result, a multivariable regression model for average global fisheries captures contained the temporal mean of SST4, the standard deviation of SST4, the standard deviation of CHL, and the standard deviation of PIC. A global vector auto-regressive (VAR) model showed that SST4 was a statistical (Granger) cause of global fishery capture. To accommodate varying fishery conditions and the influence of the climate change variables, a model was constructed for each FAO major fishing area. From the management perspective, some limitations of the FAO marine areas division should be recognized, which opens the possibility of discussing the subdivision of the areas into smaller units. Furthermore, the contribution changes of individual fishery species, and the possible environmental factors for specific species, should be examined at various scale levels.
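The Granger-causality step behind the VAR result reduces to comparing a restricted autoregression of catches against an unrestricted one that adds lags of the climate variable, via an F-test. A minimal hand-rolled illustration on synthetic data follows (variable names are hypothetical; real analyses should use a tested implementation such as statsmodels' `grangercausalitytests`):

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F statistic for 'x Granger-causes y': does adding lags of x
    reduce the residual sum of squares of an AR(lags) model of y?"""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    Y = y[lags:]
    # Restricted design: constant + lags of y only.
    Xr = np.column_stack([np.ones(n - lags)] +
                         [y[lags - k:n - k] for k in range(1, lags + 1)])
    # Unrestricted design: also include lags of x.
    Xu = np.column_stack([Xr] +
                         [x[lags - k:n - k] for k in range(1, lags + 1)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df1, df2 = lags, n - lags - Xu.shape[1]
    return ((rss_r - rss_u) / df1) / (rss_u / df2)

# Synthetic example: x (an SST-like driver) feeds into y (catches).
rng = np.random.default_rng(0)
n = 300
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
f_xy = granger_f(y, x)   # x -> y: should be large
f_yx = granger_f(x, y)   # y -> x: should be small
```

A large F in one direction only, as constructed here, is the pattern behind the statement that SST4 Granger-causes global fishery capture.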

Keywords: fisheries-catch, FAO FishStatJ, MODIS Aqua, sea surface temperature (SST), chlorophyll, particulate inorganic carbon (PIC), particulate organic carbon (POC), VAR, granger causality

Procedia PDF Downloads 458
77 Multiple Organ Manifestation in Neonatal Lupus Erythematosus: Report of Two Cases

Authors: A. Lubis, R. Widayanti, Z. Hikmah, A. Endaryanto, A. Harsono, A. Harianto, R. Etika, D. K. Handayani, M. Sampurna

Abstract:

Neonatal lupus erythematosus (NLE) is a rare disease marked by characteristic clinical findings and specific maternal autoantibodies. Cutaneous, cardiac, liver, and hematological manifestations can occur, affecting either a single organ or several. In these cases, both babies were premature, of low birth weight (LBW), small for gestational age (SGA), and born through caesarean section to mothers with systemic lupus erythematosus (SLE). In the first case, we found a baby girl with dyspnea and grunting. Chest X-ray showed respiratory distress syndrome (RDS) grade I, and echocardiography showed a small atrial septal defect (ASD) and ventricular septal defect (VSD). She also developed anemia, thrombocytopenia, elevated C-reactive protein, hypoalbuminemia, increased coagulation factors, hyperbilirubinemia, and a blood culture positive for Klebsiella pneumoniae. Anti-Ro/SSA and anti-nRNP/Sm were positive. Intravenous fluid, antibiotics, and transfusions of blood, thrombocyte concentrate, and fresh frozen plasma were given. The second baby, a male, presented with necrotic tissue on the left ear and skin rashes, erythematous maculae, atrophic scarring, and hyperpigmentation over all of his body in various sizes, together with facial haemorrhage. He also suffered from thrombocytopenia, mildly elevated transaminase enzymes, and hyperbilirubinemia; anti-Ro/SSA was positive. Intravenous fluid, methylprednisolone, intravenous immunoglobulin (IVIG), and blood and thrombocyte concentrate transfusions were given. Two cases of neonatal lupus erythematosus have been presented. Diagnosis was based on the clinical presentation and maternal autoantibodies in the neonate. Organ involvement in NLE can occur as a single or as multiple manifestations.

Keywords: neonatal lupus erythematosus, maternal autoantibody, clinical characteristics, multiple organ manifestation

Procedia PDF Downloads 393
76 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market: all other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate time-series forecasting. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock and bond prices. Hence, to capture these intricate sequence patterns, various deep learning methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTMs have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, thanks to a memory function that traditional neural networks lack.
In this study, a simple LSTM, a stacked LSTM, and a masked LSTM model are discussed with respect to varying input sequences (three, seven, and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which improved the accuracy of the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed its two counterparts in prediction accuracy. To benchmark the proposed model, the results were compared with traditional time-series models (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that LSTM models provide more accurate results and should be explored further within the asset management industry.
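The varying input sequences (three-, seven-, and 14-day lookbacks) imply a standard sliding-window preparation of the price series. The paper does not give its pipeline, so the sketch below is an illustrative numpy version; the function name `make_sequences` and the toy price series are hypothetical.

```python
import numpy as np

def make_sequences(prices, lookback):
    """Build (samples, lookback) input windows and next-day targets
    from a 1-D price series."""
    X = np.array([prices[i:i + lookback] for i in range(len(prices) - lookback)])
    y = prices[lookback:]
    return X, y

# Toy daily price series; the paper uses lookbacks of 3, 7, and 14 days.
prices = np.arange(20, dtype=float)
for lookback in (3, 7, 14):
    X, y = make_sequences(prices, lookback)
    print(lookback, X.shape, y.shape)
```

Each row of `X` would then be fed to the LSTM (after any EMD decomposition and technical-indicator features are appended as extra channels) to predict the corresponding entry of `y`.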

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 105
75 The Influence of Bacteriocins Producing Lactic Acid Bacteria Multiplied in an Alternative Substrate on Calves Blood Parameters

Authors: E. Bartkiene, V. Krungleviciute, J. Kucinskiene, R. Antanaitis, A. Kucinskas

Abstract:

In calves less than 10 days old, infection commonly causes severe diarrhoea and high mortality. To prevent calf diseases, a common practice is to treat calves with prophylactic antibiotics; here, the use of lactic acid bacteria (LAB) is promising. LAB strains are often incubated in commercial de Man-Rogosa-Sharpe (MRS) medium; the cultures are centrifuged, the cells are washed with sterile water, and this suspension is used as a starter culture for animal health care. Potato tuber juice is an industrial waste which may constitute a source of digestible nutrients for microorganisms. In our study, the ability of LAB to utilize potato tuber juice for cell synthesis without external nutrient supplementation was investigated, and the influence of the multiplied LAB on calves' blood parameters was evaluated. Calves were selected based on the analogy principle (treatment group (n=6), control group (n=8)). The treatment group was given, for 14 days, 50 ml of fermented potato tuber juice containing 9.6 log10 cfu/ml of LAB. Blood parameters (gas and biochemical) were assessed using auto-analyzers (Hitachi 705 and EPOC). Before the experiment, the blood pH of the treatment group calves was 7.33 and that of the control group 7.36; after 14 days, they were 7.28 and 7.36, respectively. Blood pH in the treatment group remained stable over the whole experimental period. The concentration of PCO2 in the blood of the control group increased from 63.95 to 70.93, whereas in the treatment group it decreased from 63.08 to 60.71. The concentration of lactate in the treatment group decreased from 3.20 mmol/l to 2.64 mmol/l, whereas in the control group it increased from 3.95 mmol/l to 4.29 mmol/l. The concentration of AST in the control group increased from 50.18 IU/L to 58.9 IU/L, whereas in the treatment group it decreased from 49.82 IU/L to 33.1 IU/L.
We conclude that 50 ml per day of fermented potato tuber juice containing 9.6 log10 cfu/ml of LAB, given for 14 days, reduced the risk of developing acidosis (stabilized blood pH (p < 0.05)), reduced lactate and PCO2 concentrations (p < 0.05), and reduced the risk of liver lesions (lowered AST concentration (p < 0.005)) in the blood of calves.

Keywords: alternative substrate, blood parameters, calves, lactic acid bacteria

Procedia PDF Downloads 284
74 Implications of Humanizing Pedagogy on Learning Design in a Technology-Enhanced Language Learning Environment: Critical Reflections on Student Identity and Agency

Authors: Mukhtar Raban

Abstract:

Nelson Mandela University subscribes to a humanizing pedagogy (HP), housed under broader critical pedagogy, that underpins and informs learning and teaching activities at the institution. The investigation sought to explore the implications of humanizing and critical pedagogical considerations for a technology-enhanced language learning (TELL) environment in a university course. The paper inquires into the design of a learning resource in the online learning environment of an English communication module that applied HP principles. With the objective of creating agentive spaces for foregrounding identity, student voice, critical self-reflection, and recognition of others’ humanity, a flexible and open 'My Presence' feature was added to the TELL environment. It allowed students and lecturers to share elements of their backgrounds in a ‘mutually vulnerable’ manner as a way of establishing digital identity and a more ‘human’ presence in the online language learning encounter, serving as a catalyst for the recognition of the ‘other’. Following a qualitative research design, the study adopted an auto-ethnographic approach, complementing the critical inquiry nature embedded in the activity’s practices. The study’s findings provide critical reflections and deductions on the possibilities of leveraging digital human expression within a humanizing pedagogical framework to advance HP adoption in language learning and teaching encounters. It was found that the consideration of humanizing pedagogical principles in the design of online learning was more effective when the critical outcomes were explicated to students and lecturers prior to the completion of the activities. The integration of humanizing pedagogy also led to a contextual advancement of ‘affective’ language learning.
Upon critical reflection and analysis, student identity and agency can flourish in a technology-enhanced learning environment when humanizing and critical pedagogy influence the learning design.

Keywords: critical reflection, humanizing pedagogy, student identity, technology-enhanced language learning

Procedia PDF Downloads 105
73 Study of Evaluation Model Based on Information System Success Model and Flow Theory Using Web-scale Discovery System

Authors: June-Jei Kuo, Yi-Chuan Hsieh

Abstract:

Because of the rapid growth of information technology, more and more libraries introduce new information retrieval systems to enhance the user experience, improve retrieval efficiency, and increase the applicability of library resources. Nevertheless, few have discussed usability from the users’ perspective. The aims of this study are to understand the scenarios of information retrieval system utilization and to learn why users are willing to continue using the web-scale discovery system, in order to improve the system and promote the use of university libraries. In addition to questionnaires, observations, and interviews, this study employs both the Information System Success Model introduced by DeLone and McLean in 2003 and flow theory to evaluate the system quality, information quality, service quality, use, user satisfaction, flow, and continued use of the web-scale discovery system by students of National Chung Hsing University. The results are analyzed through descriptive statistics and structural equation modeling using AMOS. They reveal that, for the web-scale discovery system, users’ evaluations of system quality and information quality are positively related to use and satisfaction, whereas service quality affects only user satisfaction. User satisfaction and flow show a significant impact on continued use, and user satisfaction has a significant impact on flow. Based on these results, academic libraries are recommended to maintain the stability of the information retrieval system, improve the quality of the information content, and strengthen the relationship between subject librarians and students.
Meanwhile, the system provider is required to improve the system user interface, minimize the number of system-level layers, strengthen data accuracy and relevance, refine the sorting criteria, and support an auto-correct function. Finally, establishing better communication with librarians is recommended for all users.

Keywords: web-scale discovery system, discovery system, information system success model, flow theory, academic library

Procedia PDF Downloads 73
72 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnicity, religion, language, caste, community, and tribe played a part in the development of constitutions and the legal systems of victim and criminal justice. South Asian nations have recently been plagued by acute problems of extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups. A massive number of crimes are committed every day, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, a bone of contention that can create societal disturbance. Old-style crime-solving practices cannot live up to the requirements of the current crime situation. Crime analysis is one of the most important activities of most intelligence and law enforcement organizations all over the world. The South Asia region lacks a regional coordination mechanism, unlike the Central Asia or Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this work are to test several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that were aggregated daily to produce a univariate dataset; a daily aggregation by incident type was also performed to produce a multivariate dataset. Each solution's forecast period lasted seven days.
The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A comparative analysis of all the models on a comparable dataset provides a detailed picture of each model's performance on the available data and its generalizability. The experiments demonstrated that Gated Recurrent Units (GRU) produced better predictions than the other models. The crime records of 2005-2019 were collected from the Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could help the police predict crime; hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
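A GRU keeps a hidden state updated through reset and update gates, which is what lets it retain long-range patterns in a daily crime-count series. The paper does not give its architecture or hyperparameters, so the single forward step below is a hedged numpy sketch with illustrative sizes and random weights; all names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU time step: update gate z, reset gate r, candidate state."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # blend old and new state

rng = np.random.default_rng(0)
n_in, n_hid = 1, 8
params = [rng.normal(scale=0.1, size=s) for s in
          [(n_hid, n_in), (n_hid, n_hid), n_hid] * 3]
h = np.zeros(n_hid)
for x_t in [0.2, 0.5, 0.1]:          # a short window of scaled daily counts
    h = gru_step(np.array([x_t]), h, params)
```

In a trained model, the final hidden state `h` would be projected to the seven-day forecast horizon used in the study.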

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 131
71 Intrastromal Donor Limbal Segments Implantation as a Surgical Treatment of Progressive Keratoconus: Clinical and Functional Results

Authors: Mikhail Panes, Sergei Pozniak, Nikolai Pozniak

Abstract:

Purpose: To evaluate the effectiveness of intrastromal donor limbal segment implantation for the treatment of progressive keratoconus, with respect to the main characteristics of corneal endothelial cells. Setting: Outpatient ophthalmic clinic. Methods: Twenty patients (20 eyes) with progressive keratoconus of Amsler grade II-III were recruited. The worse eye was treated with the transplantation of donor limbal segments into the recipient corneal stroma, while the fellow eye was left untreated as a control for functional and morphological changes. Furthermore, twenty patients (20 eyes) without progressive keratoconus were used as a control for corneal endothelial cell changes. All patients underwent a complete ocular examination, including uncorrected and corrected distance visual acuity (UDVA, CDVA), slit lamp examination, fundus examination, corneal topography and pachymetry, auto-keratometry, anterior segment optical coherence tomography, and corneal endothelial specular microscopy. Results: After two years, statistically significant improvements in UDVA and CDVA (on average, two lines for UDVA and three to four lines for CDVA) were noted. In addition, corneal astigmatism decreased from 5.82 ± 2.64 to 1.92 ± 1.4 D. There were no statistically significant changes in mean spherical equivalent, keratometry, or pachymetry indicators. Notably, after two years there were also no significant changes in the number or form of corneal endothelial cells, which can be regarded as stabilization of the process. In untreated control eyes, there was a general trend towards worsening of UDVA, CDVA, and corneal thickness, while corneal astigmatism increased. Conclusion: Intrastromal donor segment implantation is a safe technique for keratoconus treatment and an efficient procedure to stabilize and improve progressive keratoconus.

Keywords: corneal endothelial cells, intrastromal donor limbal segments, progressive keratoconus, surgical treatment of keratoconus

Procedia PDF Downloads 246
70 A Distributed Mobile Agent Based on Intrusion Detection System for MANET

Authors: Maad Kamal Al-Anni

Abstract:

This study concerns the application of an Artificial Neural Network, a Multilayer Perceptron (MLP), to the classification and clustering of Mobile Ad hoc Network vulnerabilities. A mobile ad hoc network (MANET) is a ubiquitous intelligent internetworking of devices that can sense their environment using an autonomous system of mobile nodes connected via wireless links. Security is the most important concern in a MANET because penetration scenarios occur easily in such an auto-configuring network. One of the powerful techniques used for inspecting network packets is the Intrusion Detection System (IDS); in this article, we show the effectiveness of artificial neural networks used for machine learning, together with a stochastic approach (information gain), to classify malicious behaviors in a simulated network with respect to different IDS techniques. The monitoring agent is responsible for the detection inference engine; the audit data is collected from the collecting agent by simulating node attacks, and the outputs are contrasted with the normal behaviors of the framework. Whenever there is any deviation from ordinary behavior, the monitoring agent considers the event an attack. We demonstrate a signature-based IDS approach in a MANET by implementing the backpropagation algorithm over an ensemble-based Traffic Table (TT), so that the signatures of malicious behaviors or undesirable activities can be reliably predicted and efficiently identified. By tuning the parameters of the backpropagation algorithm, the experimental results empirically showed its effectiveness, with a detection rate of up to 98.6%.
Performance metrics are also included in this article, displayed with Xgraph, for measures such as Packet Delivery Ratio (PDR), Throughput (TP), and Average Delay (AD).
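The stochastic approach mentioned above, information gain, ranks audit features by how much they reduce the entropy of the normal/attack labels. The paper does not list its features, so the sketch below uses hypothetical audit attributes (`drop_ratio`, `packet_size`) purely to illustrate the computation.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(feature, labels):
    """Reduction in label entropy after splitting on a discrete feature."""
    total = entropy(labels)
    n = len(labels)
    remainder = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

# Toy audit records labeled 'normal' or 'attack'
labels      = ['normal', 'normal', 'attack', 'attack']
drop_ratio  = ['low', 'low', 'high', 'high']        # perfectly predictive
packet_size = ['small', 'large', 'small', 'large']  # uninformative
print(information_gain(drop_ratio, labels))   # 1.0
print(information_gain(packet_size, labels))  # 0.0
```

Features with the highest gain would then be fed to the MLP classifier trained with backpropagation.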

Keywords: Intrusion Detection System (IDS), Mobile Adhoc Networks (MANET), Back Propagation Algorithm (BPA), Neural Networks (NN)

Procedia PDF Downloads 163
69 Adsorptive Membrane for Hemodialysis: Potential, Future Prospection and Limitation of MOF as Nanofillers

Authors: Musawira Iftikhar

Abstract:

The field of membrane materials is highly dynamic, as constantly evolving requirements drive the advancement of materials to address challenges such as biocompatibility, protein-bound uremic toxins, blood coagulation, auto-immune responses, oxidative stress, and poor clearance of uremic toxins. Hemodialysis is a membrane filtration process that is currently necessary for the daily living of patients with ESRD. Tens of millions of people with ESRD have benefited from hemodialysis over the past 60-70 years, both in terms of safeguarding life and of a longer lifespan. Beyond the challenges associated with the efficiency and separative properties of the membranes, ensuring hemocompatibility (the safe circulation of blood outside the body for four hours every two days) remains a persistent challenge. This review explores the evolving field of metal-organic frameworks (MOFs) and their applications in hemodialysis, offering a comprehensive examination of the various MOFs employed to address challenges inherent in traditional hemodialysis methodologies. It includes experimental work using various MOFs as fillers, such as UiO-66, HKUST-1, MIL-101, and ZIF-8, which together lead to improved adsorption capacities for a range of uremic toxins and proteins. Furthermore, this review highlights how effectively MOF-based hemodialysis membranes remove a variety of uremic toxins, including p-cresol, urea, creatinine, and indoxyl sulfate, and discusses potential filler choices for the future. Future research efforts should focus on refining synthesis techniques, enhancing toxin selectivity, and investigating the long-term durability of MOF-based membranes. With these considerations, MOFs emerge as transformative materials in the quest to develop advanced and efficient hemodialysis technologies, holding the promise to significantly enhance patient outcomes and redefine the landscape of renal therapy.

Keywords: membrane, hemodialysis, metal organic frameworks, separation, protein adsorption

Procedia PDF Downloads 10
68 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration

Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef

Abstract:

Automated enumeration of the number of hand fingers is widely used in several motion-gaming and distance-control applications and is discussed in several published papers as a starting block for hand recognition systems. An automated finger enumeration technique should not only be accurate but must also respond quickly to a moving-picture input: high-rate video in motion games or distance control will strain the program’s overall speed, since image processing software such as Matlab needs to produce results at high computation speeds. Since automated finger enumeration with minimum error and processing time is desired, a comparative study between two finger enumeration techniques is presented and analyzed in this paper. In the pre-processing stage, various image processing functions were applied to a real-time video input to obtain the final cleaned, auto-cropped image of the hand to be used by the two techniques. The first technique uses the known morphological tool of skeletonization, counting the skeleton’s endpoints as fingers. The second technique obtains a one-dimensional representation of the hand and uses a radial distance method to enumerate the fingers. For both methods, the different steps of the algorithms are explained. A comparative study then analyzes the accuracy and speed of both techniques. Through experimental testing under different background conditions, it was observed that the radial distance method was more accurate and more responsive to a real-time video input than the skeletonization method. All test results were generated in Matlab and were based on displaying a human hand in three different orientations against a plain-colored background. Finally, the limitations surrounding the enumeration techniques are presented.
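The radial distance method reduces the hand to a one-dimensional signature of distances from a palm centroid, in which each finger shows up as a lobe above the palm radius. The paper works on real images in Matlab; the sketch below is only a hedged Python/numpy illustration of the counting step, using a synthetic three-lobe signature and a hypothetical threshold.

```python
import numpy as np

def count_fingers(radial, threshold):
    """Count lobes in a radial-distance signature: contiguous runs of
    samples whose distance from the palm centroid exceeds a threshold."""
    above = radial > threshold
    # Each finger contributes exactly one rising edge into the 'above' region.
    rising = np.flatnonzero(~above[:-1] & above[1:])
    return len(rising)

# Synthetic signature: three lobes (fingers) over a 360-sample angular sweep
theta = np.linspace(0, 2 * np.pi, 360)
radial = 1.0 + 0.8 * np.clip(np.sin(3 * theta), 0, None)
print(count_fingers(radial, 1.4))  # 3
```

On real data, `radial` would be the distance from the hand's centroid to the contour point at each angle, and the threshold would be derived from the estimated palm radius.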

Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab

Procedia PDF Downloads 353
67 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures

Authors: Mariem Saied, Jens Gustedt, Gilles Muller

Abstract:

We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern, and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
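As a point of reference for the Jacobi traversals Dido generates code for, here is a minimal serial Jacobi sweep of the 5-point stencil in Python/numpy. This is not Dido output and uses an illustrative grid size and boundary condition; it only shows the computation pattern that the framework parallelizes and blocks.

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi sweep of the 5-point stencil on the interior of a 2-D grid;
    boundary values are held fixed (Dirichlet conditions)."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

# Heat the left boundary of a 32x32 plate and relax toward steady state.
u = np.zeros((32, 32))
u[:, 0] = 1.0
for _ in range(500):
    u = jacobi_step(u)
```

A Gauss-Seidel traversal would instead update `u` in place, reusing freshly computed neighbors within the same sweep, which is why the two traversals need different generated communication patterns.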

Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments

Procedia PDF Downloads 92
66 Effective Survey Designing for Conducting Opinion Survey to Follow Participatory Approach in a Study of Transport Infrastructure Projects: A Case Study of the City of Kolkata

Authors: Jayanti De

Abstract:

Users of any urban road infrastructure may be classified into various categories. The current paper examines whether opinions on different environmental and transportation criteria vary significantly among different types of road users. The paper addresses this issue using a unique survey dataset collected from Kolkata, a highly populated city in India. Multiple criteria should be taken into account while planning infrastructure development programs. Given limited resources, a welfare-maximizing government typically resorts to public opinion, designing surveys to prioritize one project over another. Designing such surveys can be challenging and costly, and deciding whom to include in a survey and how to represent each group of consumers/road-users depends crucially on how opinion on different criteria varies across consumer groups. A unique dataset has been collected from various parts of Kolkata to statistically test, using the Kolmogorov-Smirnov test, whether the weights assigned to rank transportation criteria such as congestion, air pollution, noise pollution, and morning/evening delay vary significantly across the various groups of users of such infrastructure. The consumer/user groups in the dataset include pedestrians, private car owners, para-transit (taxi/auto rickshaw) users, public transport (bus) users, and freight transporters, among others. Very little evidence has been found that the ranking of different criteria varies significantly among these groups. This also supports the hypothesis that road-users/consumers form their opinion from their long-run rather than immediate experience. As a policy prescription, this implies that under-representation or over-representation of a specific consumer group in a survey may not necessarily distort the overall opinion, since opinions across different consumer groups are highly correlated, as evident from this particular case study.
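The two-sample Kolmogorov-Smirnov test used here compares the empirical distributions of criterion weights between two user groups. The sketch below hand-rolls the KS statistic in numpy on hypothetical congestion weights (the real survey data is not reproduced in the abstract); in practice one would also compute a p-value, e.g. via `scipy.stats.ks_2samp`.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum vertical distance
    between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])          # CDFs only jump at sample points
    cdf_a = np.searchsorted(a, grid, side='right') / len(a)
    cdf_b = np.searchsorted(b, grid, side='right') / len(b)
    return np.abs(cdf_a - cdf_b).max()

# Hypothetical congestion weights assigned by two road-user groups
pedestrians = np.array([0.30, 0.25, 0.35, 0.28, 0.32])
car_owners  = np.array([0.31, 0.27, 0.33, 0.29, 0.26])
print(round(ks_statistic(pedestrians, car_owners), 2))  # 0.2
```

A small statistic (relative to the critical value for the sample sizes) is consistent with the paper's finding that rankings do not differ significantly across groups.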

Keywords: multi criteria analysis, project-prioritization, road- users, survey designing

Procedia PDF Downloads 258
65 Method for Auto-Calibrate Projector and Color-Depth Systems for Spatial Augmented Reality Applications

Authors: R. Estrada, A. Henriquez, R. Becerra, C. Laguna

Abstract:

Spatial Augmented Reality is a variation of Augmented Reality where the Head-Mounted Display is not required. This variation of Augmented Reality is useful in cases where the need for a Head-Mounted Display is itself a limitation. To achieve this, Spatial Augmented Reality techniques substitute the technological elements of Augmented Reality: the virtual world is projected onto a physical surface. To create an interactive spatial augmented experience, the application must be aware of the spatial relations that exist between its core elements, here referred to as a projection system and an input system; the process of achieving this spatial awareness is called system calibration. The Spatial Augmented Reality system is considered calibrated if the projected virtual world scale is similar to the real-world scale, meaning that a virtual object will maintain its perceived dimensions when projected into the real world. Likewise, the input system is calibrated if the application knows the relative position of a point in the projection plane with respect to the RGB-depth sensor origin. Any kind of projection technology can be used (light-based projectors, close-range projectors, or screens), as long as it complies with the defined constraints; the method was tested on different configurations. The proposed procedure does not rely on a physical marker, minimizing human intervention in the process. The tests were made using a Kinect V2 as the input sensor and several projection devices. In order to test the method, the defined constraints were applied to a variety of physical configurations; once the method was executed, several variables were obtained to measure its performance. It was demonstrated that the method can solve different arrangements, giving the user a wide range of setup possibilities.
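For a planar projection surface, relating a point in the projection plane to the sensor's coordinates commonly reduces to estimating a homography between projector pixels and observed surface coordinates. The sketch below is a generic direct linear transform (DLT) in numpy, not the paper's actual procedure, and the corner correspondences are hypothetical values.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst points via DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pt):
    """Map a 2-D point through H in homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Projector pixel corners vs. where the depth sensor sees them on the surface
proj = [(0, 0), (1280, 0), (1280, 800), (0, 800)]
wall = [(0.1, 0.2), (1.9, 0.25), (1.85, 1.3), (0.15, 1.25)]
H = fit_homography(proj, wall)
```

With `H` in hand, any interaction point detected by the depth sensor can be mapped back into projector pixels, which is the essence of the spatial awareness the calibration establishes.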

Keywords: color depth sensor, human computer interface, interactive surface, spatial augmented reality

Procedia PDF Downloads 92
64 Opportunities and Challenges for Decarbonizing Steel Production by Creating Markets for ‘Green Steel’ Products

Authors: Hasan Muslemani, Xi Liang, Kathi Kaesehage, Francisco Ascui, Jeffrey Wilson

Abstract:

The creation of a market for lower-carbon steel products, here called ‘green steel’, has been identified as an important means to support the introduction of breakthrough emission reduction technologies into the steel sector. However, the definition of what ‘green’ entails in the context of steel production, the implications for the competitiveness of green steel products in local and international markets, and the market mechanisms necessary to support their successful market penetration remain poorly explored. This paper addresses this gap through semi-structured interviews with international sustainability experts and commercial managers from leading steel trade associations, research institutes, and steelmakers. Our findings show that there is an urgent need to establish a set of standards to define what ‘greenness’ means in the steelmaking context: standards that avoid market disruptions, unintended consequences, and opportunities for greenwashing. We also highlight that the introduction of green steel products will have implications for product competitiveness on three different levels: 1) between primary and secondary steelmaking routes, 2) with traditional, less green steel, and 3) with other substitutable materials (e.g. cement and plastics). This paper emphasises the need for steelmakers to adopt a transitional approach in deploying different low-carbon technologies, based on their stage of technological maturity, applicability in certain country contexts, capacity to reduce emissions over time, and the ability of the investment community to support their deployment. We further identify market mechanisms to support green steel production, including carbon border adjustments and public procurement, highlighting the need to implement a combination of complementary policies to ensure the products’ roll-out.
The study further shows that the auto industry is a likely candidate for green steel consumption, where a market would be supported by price premiums paid by willing consumers, such as buyers of high-end luxury vehicles.

Keywords: green steel, decarbonisation, business model innovation, market analysis

Procedia PDF Downloads 106
63 Calculation of the Supersonic Air Intake with the Optimization of the Shock Wave System

Authors: Elena Vinogradova, Aleksei Pleshakov, Aleksei Yakovlev

Abstract:

During the flight of a supersonic aircraft under various conditions (altitude, Mach number, etc.), it becomes necessary to coordinate the operating modes of the air intake and the engine. On supersonic aircraft, this is done by changing various control factors (the angle of rotation of the wedge panels, etc.). This paper investigates the possibility of using modern optimization methods to determine the optimal position of the supersonic air intake wedge panels in order to maximize the total pressure recovery coefficient. Modern software allows us to conduct auto-optimization, which determines the optimal position of the control elements of the investigated product to achieve its maximum efficiency. In this work, the flow in the supersonic aircraft inlet has been investigated, and the operation of the inlet flaps optimized, in a 2-D setting using ANSYS CFX software. The supersonic aircraft inlet is a flat adjustable external compression inlet whose braking surface is made in the form of a three-stage wedge. The IOSO NM software package was chosen for the optimization. The position of the panels of the input device is changed by varying the angle between the first and second stages of the three-stage wedge; the position of the remaining panels is changed automatically. Within the framework of the presented work, the position of the moving air intake panel was optimized under fixed flight conditions of the aircraft and a given engine operating mode. As a result of the numerical modeling, the distribution of total pressure losses was obtained for various cases of engine operation, depending on the incoming flow velocity and the flight altitude of the aircraft. The results make it possible to obtain the maximum total pressure recovery coefficient under the given conditions. The initial geometry was set with a certain angle between the first and second wedge panels.
Having performed all the calculations, as well as the subsequent optimization of the aircraft input device, it can be concluded that the initial angle was set sufficiently close to the optimal one.
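The quantity being maximized, the total pressure recovery coefficient, follows from classical gas dynamics: across a shock, the ratio p02/p01 drops rapidly with the (normal) Mach number, which is why multi-wedge intakes split compression into several weaker oblique shocks. The sketch below evaluates the textbook normal-shock relation for γ = 1.4; it is a first-principles illustration, not the paper's CFD/IOSO model.

```python
def total_pressure_recovery(M, gamma=1.4):
    """Total pressure ratio p02/p01 across a normal shock at Mach M
    (for an oblique shock, M is the normal Mach component)."""
    a = ((gamma + 1) * M**2 / ((gamma - 1) * M**2 + 2)) ** (gamma / (gamma - 1))
    b = ((gamma + 1) / (2 * gamma * M**2 - (gamma - 1))) ** (1 / (gamma - 1))
    return a * b

# Losses grow quickly with shock strength: splitting compression across
# several weak shocks (the multi-stage wedge) preserves total pressure.
for M in (1.0, 1.3, 1.6, 2.0):
    print(M, round(total_pressure_recovery(M), 4))
```

The optimizer's task in the paper is essentially to pick wedge angles so that the product of the recovery coefficients of the individual shocks in the system is maximized for the given flight condition.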

Keywords: optimal angle, optimization, supersonic air intake, total pressure recovery coefficient

Procedia PDF Downloads 205
62 Data Confidentiality in Public Cloud: A Method for Inclusion of ID-PKC Schemes in OpenStack Cloud

Authors: N. Nalini, Bhanu Prakash Gopularam

Abstract:

The term data security refers to the degree of resistance or protection given to information against unintended or unauthorized access. The core principles of information security are confidentiality, integrity, and availability, also referred to as the CIA triad. Cloud computing services are classified as SaaS, IaaS, and PaaS. With cloud adoption, confidential enterprise data is moved from organization premises to an untrusted public network, and the attack surface has therefore increased manifold. Several cloud computing platforms, such as OpenStack, Eucalyptus, and Amazon EC2, allow users to build and configure public, hybrid, and private clouds. While traditional encryption based on a PKI infrastructure still works in the cloud scenario, the management of public-private keys and trust certificates is difficult. Identity-based Public Key Cryptography (ID-PKC) overcomes this problem by using publicly identifiable information to generate the keys, and it works well with decentralized systems: users can exchange information securely without having to manage any trust information. Another advantage is that access control information (role-based access control policy) can be embedded into the data, unlike in PKI, where it is handled by a separate component or system. In the OpenStack cloud platform, the Keystone service acts as the identity service for authentication and authorization and has support for public key infrastructure for auto services. In this paper, we explain the OpenStack security architecture and evaluate the PKI infrastructure with respect to data confidentiality. We provide a method to integrate ID-PKC schemes for securing data in transit and at rest, and we explain the key measures for safeguarding data against security attacks. The proposed approach uses the JPBC crypto library for key-pair generation based on the IEEE P1636.3 standard, and secures communication to other cloud services.

Keywords: data confidentiality, identity-based cryptography, secure communication, OpenStack Keystone, token scoping

Procedia PDF Downloads 348
61 A Decrease in the Anxiety Levels of Participants with Autoimmune Disease: Efficacy of a Community-Based Educational Program

Authors: Jennifer Hunter, Francisco Ramirez, Neil A. Nedley, Thania Solorio, Christian Freed, Erica Kinjo

Abstract:

People who have autoimmune disease are often at increased risk for psychological disorders such as anxiety. Untreated psychological conditions can affect the development of disease and one's general quality of life. In this study, it was hypothesized that an educational community-based intervention would be useful in decreasing the anxiety levels of participants with autoimmune disease. Two-hour programs were held weekly over a period of eight weeks. At every meeting, a 45-minute DVD presentation by a skilled physician was shown, a small-group discussion was guided by trained facilitators, and weekly practical assignments were given to each participant. The focus of the program was to educate participants about healthy lifestyle behaviors such as exercise, nutrition, sleep hygiene and helpful thought patterns, and to provide a group environment in which each participant was supported. Participants were assessed pre- and post-program for anxiety using the Depression and Anxiety Assessment Test (registration TX 7-398-022), a validated mental health test based on DSM-5 criteria, along with demographics. Anxiety scores were classified according to DSM-5 criteria into four categories: none (0-6), mild (7-10), moderate (11-19) or severe (20 or more). Among the participants who completed programs conducted in this manner (n=431), the average age was 54.9 (SD 16.6) and 81.9% were female. At baseline, the mean group anxiety level was 9.4 (SD 5.4), with anxiety levels distributed as follows: none (21.1%), mild (22.0%), moderate (27.1%) and severe (29.7%). After the program, the mean group anxiety level decreased to 4.7 (SD 4.0), distributed as follows: none (54.8%), mild (27.1%), moderate (12.5%) and severe (5.6%). The decrease in overall anxiety levels was significant, t(431)=19.3, p<.001, 95% CI [0.815, 1.041].
It was concluded that the eight-week intensive program was beneficial in decreasing participants' anxiety levels. A long-term follow-up study would help determine how lasting such improvements are, especially since autoimmune diseases are often chronic. Additionally, future studies that utilize a control group would help establish whether the observed improvements are due to this specific lifestyle-educational program.
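The score bands reported above (none 0-6, mild 7-10, moderate 11-19, severe 20 or more) can be captured in a small classification function. This is a minimal sketch of the banding only; the function name is ours, and the study's actual scoring software is not published.

```python
def anxiety_category(score: int) -> str:
    """Map a Depression and Anxiety Assessment Test score to the
    DSM-5-based severity band reported in the study."""
    if score <= 6:
        return "none"
    if score <= 10:
        return "mild"
    if score <= 19:
        return "moderate"
    return "severe"

# The reported group means fall in different bands before and after:
pre_mean, post_mean = 9.4, 4.7
print(anxiety_category(round(pre_mean)))   # baseline mean -> "mild"
print(anxiety_category(round(post_mean)))  # post-program mean -> "none"
```

Banding the group means this way is only illustrative; the study classified individual participants, not the mean, which is why the reported category percentages sum independently at each time point.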

Keywords: anxiety, autoimmune disease, community-based educational program, lifestyle

Procedia PDF Downloads 86