Search results for: partial pair distribution functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8672

902 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach

Authors: Jiaxin Chen

Abstract:

Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. It has therefore been proposed that translation could be studied within a broader framework of constrained language, and that simplification is one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been reported. To address this issue, this study adopts Shannon's entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and POS-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, neither of which is available from traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for reliable comparison among studies on different language pairs. As data for the present study, one established corpus (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora.
More specifically, word-form entropy and POS-form entropy will be calculated as indicators of lexical and syntactic complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy than non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by, and peculiar to, each variety.
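As an illustration of the measure described above, here is a minimal sketch of word-form entropy; POS-form entropy is the same computation applied to POS tags rather than word forms. The token lists are toy data, not the study's corpora.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (bits) of a token sequence.

    Captures both how many distinct choices occur and how evenly
    their frequencies are distributed."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Same vocabulary size, but a uniform distribution has higher entropy
# than a skewed one -- the property traditional type/token indices miss.
even = ["a", "b", "c", "d"]               # 4 types, uniform
skewed = ["a", "a", "a", "b", "c", "d"]   # same 4 types, uneven
```

A lower entropy for a constrained variety would indicate a smaller, more repetitive pool of lexical choices.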

Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification

901 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Through this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can transmit signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a spectrum that ranges from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical to computational ones. The study of cellular and molecular processes in cancer has likewise found valuable support in different simulation tools that, covering a spectrum as mentioned above, have allowed in silico experimentation on this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a bioinformatics and computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way.
The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB, and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. In this work we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way proposed key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells surrounding a cancerous cell from being transformed.
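Cellulat itself is not reproduced here, but the stochastic engine the abstract names, Gillespie's algorithm, can be sketched for the simplest possible system: a single irreversible reaction A → B. The rate constant and molecule counts are arbitrary toy values, not parameters from the Wnt/β-catenin model.

```python
import random

def gillespie(k, a0, t_end, rng=random.Random(0)):
    """Gillespie stochastic simulation of A -> B with rate constant k,
    starting from a0 molecules of A. Returns the (time, count) trajectory."""
    t, a = 0.0, a0
    history = [(0.0, a0)]
    while t < t_end and a > 0:
        propensity = k * a                # total reaction propensity
        t += rng.expovariate(propensity)  # exponentially distributed waiting time
        a -= 1                            # fire the single reaction channel
        history.append((t, a))
    return history

traj = gillespie(k=1.0, a0=100, t_end=10.0)
```

With several reaction channels, the algorithm additionally picks which reaction fires in proportion to its propensity; this one-channel version keeps only the waiting-time logic.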

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

900 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model

Authors: Tanu Khanuja, Harikrishnan N. Unni

Abstract:

The head is the most vulnerable part of the human body, and head impacts may cause severe, life-threatening injuries. As the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be very helpful in understanding the injury mechanism. The majority of physical damage to living tissue is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head with major segments, namely the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid), and skull, from patient-specific MRI (magnetic resonance imaging). Semi-automatic segmentation of the head is performed using the AMIRA software to extract the finer grooves of the brain. Maintaining accuracy requires a high number of mesh elements, which entails long computational times; therefore, mesh optimization has also been performed using tetrahedral elements. In addition, the model is validated against the experimental literature. Hard tissue (the skull) is modeled as elastic, whereas soft tissue (the brain) is modeled with a viscoelastic Prony series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back, and temporal sites of the head. Yield stress (based on the von Mises stress criterion for tissues) and intracranial pressure distributions due to impact on different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissues is discussed in detail. This paper finds that a back impact is more injurious to the head overall than impacts on the other sites. The present work should help in understanding the injury mechanism of traumatic brain injury more effectively.
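The yield criterion the study applies can be written out directly. A minimal sketch of the von Mises equivalent stress from the six Cauchy stress components (values in arbitrary units, purely illustrative):

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six Cauchy stress components:
    sqrt(0.5 * sum of squared normal-stress differences + 3 * sum of squared shears)."""
    return math.sqrt(
        0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
        + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2)
    )

# Uniaxial tension: the equivalent stress reduces to the applied stress.
print(von_mises(10.0, 0, 0, 0, 0, 0))  # 10.0
```

Comparing this scalar against a tissue's yield stress is how a full stress tensor at each mesh element is reduced to a single damage indicator.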

Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress

899 Research on Structural Changes in Plastic Deformation during Rolling and Crimping of Tubes

Authors: Hein Win Zaw

Abstract:

Today, advanced strategies for aircraft production technology demand higher performance while at the same time meeting considerable process requirements and reducing production costs. Thus, professionals working in these areas are attempting to develop new materials, improve the manufacturability of designs, and create new technological processes, tools, and equipment. This paper discusses research on structural changes due to plastic deformation during the rotary expansion and crimping of pipes. Pipelines experience high pressure and pulsating loads. That is why high demands are placed on the mechanical properties of the material and the quality of the external and internal surfaces, while preservation of the cross-sectional shape and the minimum thickness of the pipe wall must be taken into account. In the manufacture of pipes, various operations are used: expansion, crimping, bending, etc. The processes of rotary expansion and crimping of pipes are the most widely used for various semi-finished products and connecting elements. With the use of high-strength, less plastic materials, these conventional techniques do not allow high-quality parts to be obtained and also have low economic efficiency. Therefore, further research in this field is highly relevant. Rotary expansion and crimping of pipes are accompanied by inhomogeneous plastic deformation, which leads to structural changes in the material and causes deformation hardening, which in turn changes the operational reliability of the product. Tube parts obtained by rotary expansion and crimping differ in the multiplicity of their forms and are characterized by varying diameters in different sections, formed as a result of inhomogeneous plastic deformation.
The reliability of a coupling obtained by rotary expansion and crimping is determined by the structural arrangement of the material formed during the forming process; there is a maximum value of deformation whose excess is unacceptable. The structural state of the material in this condition is determined by the technological mode of forming during rotary expansion and crimping. Considering the above, the objective of the present study is to investigate the structural changes at different levels of plastic deformation accompanying rotary expansion and crimping, and to analyze the stress concentrators of different scale levels responsible for the formation of the primary zone of destruction.

Keywords: plastic deformation, rolling of tubes, crimping of tubes, structural changes

898 Optimization of Bills Assignment to Different Skill-Levels of Data Entry Operators in a Business Process Outsourcing Industry

Authors: M. S. Maglasang, S. O. Palacio, L. P. Ogdoc

Abstract:

Business process outsourcing has been one of the fastest growing and emerging industries in the Philippines today. Unlike most contact service centers, more popularly known as "call centers", the BPO industry's primary outsourced service here is performing audits of global clients' logistics. As a service industry, manpower is considered the most important yet most expensive resource in the company. Because of this, there is a need to maximize human resources so that people are effectively and efficiently utilized. The main purpose of the study is to optimize the current manpower resources through effective distribution and assignment of different types of bills to the different skill levels of data entry operators. The assignment model parameters include the average observed time matrix gathered through a time study, which incorporates the learning curve concept. Subsequently, a simulation model was made to duplicate the arrival rate of demand, which includes the different batches and types of bills per day. Next, a mathematical linear programming model was formulated. Its objective is to minimize the direct labor cost per bill by allocating the different types of bills to the different skill levels of operators. Finally, a hypothesis test was done to validate the model, comparing the actual and simulated results. The analysis of results revealed that there is low utilization of effective capacity because of the failure to use the product mix, skill mix, and simulated demand as model parameters. Moreover, failure to consider the effects of the learning curve leads to overestimation of labor needs. From the current 107 operators, the proposed model gives a result of 79 operators. This results in a 14.94% increase in the utilization of effective capacity. It is recommended that the excess 28 operators be reallocated to other areas of the department.
Finally, a manpower capacity planning model is also recommended to support management's decisions on what to do when the current capacity reaches its limit under the expected increase in demand.
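The cost structure behind the assignment model can be sketched in miniature. All times, wages, and bill types below are hypothetical placeholders, not the study's data; the real model also adds demand and capacity constraints, which are what make the linear program non-trivial.

```python
# Hypothetical processing times (minutes per bill) by skill level and
# bill type, and hypothetical hourly wage rates per skill level.
times = {                      # times[skill][bill_type]
    "junior": {"simple": 4.0, "complex": 12.0},
    "senior": {"simple": 3.0, "complex": 6.0},
}
wages = {"junior": 3.0, "senior": 5.0}   # currency units per hour

def cost_per_bill(skill, bill_type):
    """Direct labor cost of one bill = wage rate x processing time (hours)."""
    return wages[skill] / 60.0 * times[skill][bill_type]

# The LP objective minimizes total cost; with no capacity constraints the
# optimum degenerates to routing each bill type to its cheapest skill level.
assignment = {
    bt: min(wages, key=lambda s: cost_per_bill(s, bt))
    for bt in ["simple", "complex"]
}
```

Note how the cheapest operator differs by bill type: the cheap-but-slow skill level wins on simple bills, the expensive-but-fast one on complex bills, which is the trade-off the full model optimizes over.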

Keywords: optimization modelling, linear programming, simulation, time and motion study, capacity planning

897 Determinants of Pupils' Performance in the National Achievement Test in Public Elementary Schools of Cavite City

Authors: Florenda B. Cardinoza

Abstract:

This study was conducted to determine the determinants of Grade III and Grade VI pupils' performance in the National Achievement Test (NAT) in the Division of Cavite City, School Year 2011-2012. Specifically, the research aimed to: (1) describe the demographic profile of the respondents in terms of age, sex, birth order, family size, family income, and occupation of parents; (2) determine the level of attitude towards the NAT; and (3) describe the degree of relationship between the following variables, school support, teachers' support, and family support, and the pupils' performance in the 2012 NAT. The study used the descriptive-correlational research method to investigate the determinants of pupils' performance in the National Achievement Test in public elementary schools in the Division of Cavite City. The instrument used in data gathering was a self-structured survey. The NAT results for SY 2011-2012 provided by the NETRC and DepEd Cavite City were also utilized. The statistical tools used to process and analyze the data were frequency distribution, percentage, mean, standard deviation, Kruskal-Wallis, Mann-Whitney, t-test for independent samples, one-way ANOVA, and the Spearman rank correlation coefficient. Results revealed that there were more female pupils than male in the Division of Cavite City: out of 659 respondents, 345 were 11 years old and above; 390 were female; 283 were the first child in the family; 371 were from small families; 327 had a family income of Php 5,000 and below; 450 of the respondents' fathers were non-professionals; and 431 of the respondents' mothers had no occupation. The attitude towards the NAT, with a mean of 1.65 and SD of 0.485, shows that respondents considered the NAT important. The school support towards the NAT, with a mean of 1.89 and SD of 0.520, shows that respondents received school support. The pupils had a very high attitude towards teachers' support in the NAT, with a mean of 1.60 and SD of 0.572.
Family support, with a t-test value of 16.201 and a p-value of 0.006, is significant at the 5 percent level. Thus, pupils' performance in the NAT in terms of family support for NAT preparation does not differ significantly according to family income. Grade level, with a t-test value of 4.420 and a p-value of 0.000, is significant at the 5 percent level; therefore, pupils' performance in the NAT in terms of NAT preparation varies according to grade level. For the determinants of pupils' performance on the NAT sample test, attitude towards the NAT, school support, teachers' support, and family support were all noted to be highly significant, with p-values of 0.000.
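One of the statistical tools listed above, the Spearman rank correlation coefficient, can be sketched for tie-free samples using the classic formula rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is the rank difference per pair. The data below are toy values, not the study's survey responses.

```python
def spearman_rho(x, y):
    """Spearman rank correlation for paired samples without ties."""
    n = len(x)
    rx = {v: i + 1 for i, v in enumerate(sorted(x))}   # rank of each x value
    ry = {v: i + 1 for i, v in enumerate(sorted(y))}   # rank of each y value
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

For tied ranks (common with Likert-scale survey items), the averaged-rank or product-moment form is needed instead; this short form only illustrates the idea of correlating ranks rather than raw scores.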

Keywords: achievement, determinants, national, performance, public, pupils', test

896 The Home as Memory Palace: Three Case Studies of Artistic Representations of the Relationship between Individual and Collective Memory and the Home

Authors: Laura M. F. Bertens

Abstract:

The houses we inhabit are important containers of memory. As homes, they take on meaning for those who live inside, and memories of family life become intimately tied up with rooms, windows, and gardens. Each new family creates a new layer of meaning, resulting in a palimpsest of family memory. These houses function quite literally as memory palaces, as a walk through a childhood home will show; each room conjures up images of past events. Over time, these personal memories become woven together with the cultural memory of countries and generations. The importance of the home is a central theme in art, and several contemporary artists have a special interest in the relationship between memory and the home. This paper analyses three case studies in order to get a deeper understanding of the ways in which the home functions and feels like a memory palace, both on an individual and on a collective, cultural level. Close reading of the artworks is performed at the theoretical intersection between Art History and Cultural Memory Studies. The first case study concerns works from the exhibition Mnemosyne by the artist duo Anne and Patrick Poirier. These works combine interests in architecture, archaeology, and psychology. Models of cities and fantastical architectural designs resemble physical structures (such as the brain), architectural metaphors used in representing the concept of memory (such as the memory palace), and archaeological remains, essential to our shared cultural memories. Secondly, works by Do Ho Suh will help us understand the relationship between the home and memory on a far more personal level; outlines of rooms from his former homes, made of colourful, transparent fabric and combined into new structures, provide an insight into the way these spaces retain individual memories. The spaces have been emptied out, and only the husks remain. Although the remnants of walls, light switches, doors, electricity outlets, etc.
are standard, mass-produced elements found in many homes and devoid of inherent meaning, together they remind us of the emotional significance attached to the muscle memory of spaces we once inhabited. The third case study concerns an exhibition in a house put up for sale on the Dutch real estate website Funda. The house was built in 1933 by a Jewish family fleeing from Germany, and the father and son were later deported and killed. The artists Anne van As and CA Wertheim have used the history and memories of the house as a starting point for an exhibition called (T)huis, a combination of the Dutch words for home and house. This case study illustrates the way houses become containers of memories; each new family ‘resets’ the meaning of a house, but traces of earlier memories remain. The exhibition allows us to explore the transition of individual memories into shared cultural memory, in this case of WWII. Taken together, the analyses provide a deeper understanding of different facets of the relationship between the home and memory, both individual and collective, and the ways in which art can represent these.

Keywords: Anne and Patrick Poirier, cultural memory, Do Ho Suh, home, memory palace

895 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is finetuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. 
The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacities, are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
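The two quantities correlated with memorability above can be sketched in miniature, with short toy vectors standing in for images and latent codes. The study uses structural and perceptual losses rather than the plain MSE shown here; this only illustrates the definitions.

```python
import math

def mse(original, reconstructed):
    """Reconstruction error: mean squared difference between an
    original vector and its autoencoder reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)

def distinctiveness(z, others):
    """Distinctiveness: Euclidean distance from latent code z to its
    nearest neighbor among the other images' latent codes."""
    return min(math.dist(z, o) for o in others)

# Toy latent codes: an outlier is far from its nearest neighbor,
# i.e., more distinctive in latent space than a clustered point.
cluster = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]
outlier = [3.0, 4.0]
```

The reported finding is that higher values of both measures go with higher memorability scores: distinctive, hard-to-compress images are remembered more often.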

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

894 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be accommodated by predefined computational methods. Antenna design and optimization qualify for an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too wide-ranging to fit into a single function. So, the weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariant coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtain the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain, directivity, etc. are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to get maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulations. HFSS is chosen for simulations and results.
MATLAB is used to generate the computations and combinations and to log the data. MATLAB is also used to apply machine learning algorithms and to plot the data used in designing the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters like slotline characteristic impedance, stripline impedance, slotline width, flare aperture size, and dielectric parameters; K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach to automated antenna optimization for the Vivaldi antenna.
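The evolutionary loop described above (random initialization, fitness evaluation, selection, recombination, mutation) can be sketched generically. This toy example maximizes an arbitrary quality function over real-valued vectors; it is not the paper's MATLAB/HFSS pipeline, and all settings (population size, mutation rate) are illustrative defaults.

```python
import random

def evolve(fitness, dim, pop_size=30, gens=60, rng=random.Random(1)):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, and per-gene Gaussian mutation on real vectors."""
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():  # tournament of two: keep the fitter candidate
            a, b = rng.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p, q = pick(), pick()
            cut = rng.randrange(1, dim) if dim > 1 else 0  # one-point crossover
            child = p[:cut] + q[cut:]
            child = [g + rng.gauss(0, 0.1) if rng.random() < 0.2 else g
                     for g in child]                       # sparse mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy quality function peaking at the origin (stand-in for a simulated
# antenna figure of merit returned by an EM solver).
best = evolve(lambda v: -sum(g * g for g in v), dim=3)
```

In the antenna setting, evaluating `fitness` is the expensive step (one full-wave simulation per candidate), which is exactly why the paper parallelizes simulations and mines logged data to guide the search.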

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

893 Prenatal Care Can Reduce the Burden of Preterm Birth and Low Birthweight from Maternal Sexually Transmitted Infections: US National Data

Authors: Anthony J. Kondracki, Bonzo I. Reddick, Jennifer L. Barkin

Abstract:

We sought to examine the association of maternal Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Treponema pallidum (TP) (syphilis) infections with preterm birth (PTB) (<37 weeks gestation), low birth weight (LBW) (<2500 grams), and prenatal care (PNC) attendance. This cross-sectional study was based on data drawn from the 2020 United States National Center for Health Statistics (NCHS) Natality File. We estimated the prevalence of all births, early/late PTB, moderately/very LBW, and the distribution of sexually transmitted infections (STIs) according to maternal characteristics in the sample. In multivariable logistic regression models, we examined adjusted odds ratios (aORs) and their corresponding 95% confidence intervals (CIs) of PTB and LBW subcategories in association with maternal/infant characteristics, PNC status, and maternal CT, NG, and TP infections. In separate logistic regression models, we assessed the risk of these newborn outcomes stratified by PNC status. Adjustments were made for race/ethnicity, age, education, marital status, health insurance, live-born parity, previous preterm birth, gestational hypertension, gestational diabetes, PNC status, smoking, and infant sex. Additionally, in a sensitivity analysis, we assessed the association with early, full, and late term births and the potential impact of unmeasured confounding using the E-value. CT (1.8%) was the most prevalent STI in pregnancy, followed by NG (0.3%) and TP (0.1%). Non-Hispanic Black women, 20-24 years old, with a high school education, and on Medicaid had the highest rate of STIs. Around 96.6% of women reported receiving PNC, and about 60.0% initiated PNC early in pregnancy. PTB and LBW were strongly associated with NG infection (12.2% and 12.1%, respectively), late initiation of or no PNC (8.5% and 7.6%, respectively), and receiving ≤10 prenatal visits (13.1% and 10.3%, respectively).
The odds of PTB and LBW were 2.5- to 3-fold higher for each STI among women who received ≤10 prenatal visits than among those who received >10 visits. Adequate prenatal care utilization and timely screening and treatment of maternal STIs can substantially reduce the burden of adverse newborn outcomes.
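For readers unfamiliar with the effect measure, a crude (unadjusted) odds ratio can be computed from a 2x2 table as below; the study's aORs instead come from multivariable logistic regression with the listed covariates. The counts here are invented for illustration, not study data.

```python
def odds_ratio(exp_cases, exp_noncases, unexp_cases, unexp_noncases):
    """Crude odds ratio from a 2x2 table of exposure (e.g., an STI)
    by outcome (e.g., preterm birth):
    (exposed odds of outcome) / (unexposed odds of outcome)."""
    return (exp_cases / exp_noncases) / (unexp_cases / unexp_noncases)

# Hypothetical counts: 20 cases / 80 non-cases among exposed,
# 10 cases / 90 non-cases among unexposed.
print(odds_ratio(20, 80, 10, 90))  # roughly 2.25
```

An adjusted OR answers the same question while holding the other covariates (age, insurance, parity, etc.) fixed, which is why regression is needed rather than this single table.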

Keywords: low birthweight, prenatal care, preterm birth, sexually transmitted infections

892 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers seeking the best method for detecting normal signals versus abnormal ones. The data are from both genders, and the recording time varies from several seconds to several minutes. All data are also labeled normal or abnormal. Because of the limited positional accuracy and time span of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In this paper, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map is created and nonlinear characteristics of the HRV signal are extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features, were used to classify normal signals versus abnormal ones.
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and of the SVM was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal and patient signals gave better performance. Today, research is aimed at quantitatively analyzing the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system over the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that its temporal accuracy is limited and some of its information is hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians distinguish normal from patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
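The return-map (Poincaré) analysis of the HRV signal mentioned above is commonly summarized by the SD1/SD2 descriptors of the (RR_n, RR_n+1) scatter. The paper does not specify exactly which nonlinear features it extracts, so this is one standard example rather than the authors' feature set; the R-R intervals are invented.

```python
import math

def poincare_sd1_sd2(rr):
    """SD1/SD2 of the HRV return (Poincare) map: the standard deviations
    of the (RR_n, RR_n+1) points across and along the identity line.
    SD1 reflects short-term variability, SD2 longer-term variability."""
    x, y = rr[:-1], rr[1:]
    d1 = [(a - b) / math.sqrt(2) for a, b in zip(x, y)]  # across the line
    d2 = [(a + b) / math.sqrt(2) for a, b in zip(x, y)]  # along the line
    def sd(v):
        m = sum(v) / len(v)
        return math.sqrt(sum((u - m) ** 2 for u in v) / len(v))
    return sd(d1), sd(d2)

# Hypothetical R-R intervals in seconds.
rr = [0.80, 0.82, 0.78, 0.85, 0.79, 0.83, 0.81]
sd1, sd2 = poincare_sd1_sd2(rr)
```

Such descriptors, alongside statistical features, are the kind of inputs a classifier like the MLP or SVM above would receive.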

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 244
891 Study of Proton-9,11Li Elastic Scattering at 60~75 MeV/Nucleon

Authors: Arafa A. Alholaisi, Jamal H. Madani, M. A. Alvi

Abstract:

The radial form of the nuclear matter distribution, the charge distribution, and the shape of nuclei are essential properties of nuclei and hence attract great interest in several areas of nuclear physics research. The last three decades have witnessed a range of experiments employing leptonic probes (such as muons and electrons) to explore nuclear charge distributions, whereas hadronic probes (for example, alpha particles and protons) have been used to investigate nuclear matter distributions. In this paper, p-9,11Li elastic scattering differential cross sections in the energy range 60 to 75 MeV/nucleon have been studied by means of the Coulomb-modified Glauber scattering formalism. By applying the semi-phenomenological Bhagwat-Gambhir-Patil (BGP) nuclear density for the loosely bound, neutron-rich 11Li nucleus, the estimated matter radius is found to be 3.446 fm, which is quite large compared to the known experimental value of 3.12 fm. The results of a microscopic optical-model calculation based on the Bethe-Brueckner-Hartree-Fock (BHF) formalism have also been compared. It should be noted that in most of the phenomenological density models used to reproduce the p-11Li differential elastic scattering cross-section data, the calculated matter radius lies between 2.964 and 3.55 fm. The calculated results with the phenomenological BGP model density and with the nucleon density obtained in the relativistic mean-field (RMF) approach reproduce the p-9Li and p-11Li experimental data quite well compared to Gaussian-Gaussian or Gaussian-Oscillator densities at all energies under consideration. In the approach described here, no free or adjustable parameter has been employed to reproduce the elastic scattering data, in contrast to the well-known optical-model studies that involve at least four to six adjustable parameters to match the experimental data.
Calculated reaction cross sections σR for p-11Li at these energies are quite large compared to the estimates reported in earlier works, though so far no experimental studies have been performed to measure them.

Keywords: Bhagwat-Gambhir-Patil density, Coulomb modified Glauber model, halo nucleus, optical limit approximation

Procedia PDF Downloads 146
890 Developing a Geriatric Oral Health Network Is a Public Health Necessity for Older Adults

Authors: Maryam Tabrizi, Shahrzad Aarup

Abstract:

Objectives: To understand the close association between oral health and overall health for older adults and to deliver person-focused treatment at the right time and in the right place through Project ECHO telementoring. Methodology: Data from monthly ECHO telementoring sessions were collected over three years. Sessions included case presentations and overall health conditions, considering medications, limitations of organ function, and the level of cognition. Contributions: Providing specialist-level care to all elderly regardless of their location and other health conditions, and decreasing oral health inequity by increasing the workforce via the Project ECHO telementoring program worldwide. By 2030, the number of adults in the USA over the age of 65 will increase by more than 60% (approx. 46 million), and over 22 million (30%) of 74 million older Americans will need specialized geriatrician care. In 2025, the national shortage of medical geriatricians will be close to 27,000. Most individuals 65 and older do not receive oral health care due to lack of access, availability, or affordability. One of the main reasons is a significant shortage of oral health (OH) education and resources for the elderly, particularly in rural areas. Poor OH carries social stigma and is a threat to the quality and safety of the overall health of the elderly with physical and cognitive decline. Poor OH conditions may be costly and sometimes life-threatening. Non-traumatic dental-related emergency department use in Texas alone exceeded $250 M in 2016. Most elderly over the age of 65 present with one or more chronic diseases such as arthritis, diabetes, heart disease, and chronic obstructive pulmonary disease (COPD), are at higher risk of developing gum (periodontal) disease, and yet are less likely to get dental care. In addition, most older adults take both prescription and over-the-counter drugs; according to scientific studies, many of these medications cause dry mouth.
Reduced saliva flow due to aging and medications may increase the risk of cavities and other oral conditions. Most dental schools have already increased geriatric OH content in their curricula, but the aging population worldwide is growing faster than the number of geriatric dentists. Without the use of advanced technology and a network between specialists and primary care providers, it is impossible to increase the workforce and provide equitable oral health care to the elderly. Project ECHO is a guided-practice model that revolutionizes health education and increases the workforce to provide best-practice specialty care and reduce health disparities. Training oral health providers to utilize the Project ECHO model is a logical response to the shortage and increases the elderly's access to oral health care. Project ECHO trains general dentists and hygienists to provide specialty care services. This means more elderly can get the care they need, in the right place, at the right time, with better treatment outcomes and reduced costs.

Keywords: geriatric, oral health, Project ECHO, chronic disease

Procedia PDF Downloads 159
889 Analysis of CO2 Emissions from Thailand's Thermal Power Sector by the Divisia Decomposition Approach

Authors: Isara Muangthai, Lin Sue Jane

Abstract:

Electricity is vital to every country's economy. For Thailand, the electricity generation sector plays an important role in the economic system and is the largest source of CO2 emissions. The aim of this paper is to use decomposition analysis to investigate the key factors contributing to changes in CO2 emissions from the electricity sector. Decomposition analysis has been widely used to identify and assess the contributors to changes in emission trends. Our study adopted the Divisia index decomposition to identify the key factors affecting the evolution of CO2 emissions from Thailand's thermal power sector during 2000-2011. The change in CO2 emissions was decomposed into five factors: emission coefficient, heat rate, fuel intensity, electricity intensity, and economic growth. Results show that CO2 emissions from Thailand's thermal power sector increased by 29,173 thousand tons during 2000-2011. Economic growth was found to be the primary factor increasing CO2 emissions, while electricity intensity played the dominant role in decreasing them. The increasing effect of economic growth amounted to 55,924 thousand tons of CO2 emissions, because economic growth and development relied on a large electricity supply. On the other hand, the shift of the fuel structure towards lower carbon content resulted in a decline in CO2 emissions. Since the CO2 emissions released from Thailand's electricity generation are rapidly increasing, the Thai government will be required to implement a CO2 reduction plan in the future.
In order to cope with the impact of CO2 emissions from the power sector and to achieve sustainable development, this study suggests that Thailand's government should focus on restructuring the fuel supply for power generation towards low-carbon fuels by promoting the use of renewable energy for electricity; improving the efficiency of electricity use by reducing electricity transmission and distribution line losses; implementing energy conservation strategies by encouraging the purchase of energy-saving products; substituting new power plant technology for that of old power plants; promoting a shift of the economic structure towards less energy-intensive services; and orienting Thailand's power industry towards low-carbon electricity generation.
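The additive Divisia decomposition described above can be sketched as follows. This is a minimal log-mean Divisia index (LMDI-I) illustration: emissions are written as a product of factors, and the change between two years is split additively across those factors. For brevity the sketch uses three made-up factors rather than the study's five; the numbers are illustrative, not Thai power-sector data.

```python
import math

def logmean(a, b):
    """Logarithmic mean, the weight used in LMDI-I."""
    return (a - b) / (math.log(a) - math.log(b)) if a != b else a

def lmdi(factors0, factors1):
    """Decompose the change in prod(factors) from time 0 to time T.
    Returns the per-factor effects and the total change; the effects
    sum exactly to the total (the LMDI additivity property)."""
    c0 = math.prod(factors0)
    c1 = math.prod(factors1)
    w = logmean(c1, c0)
    effects = [w * math.log(f1 / f0) for f0, f1 in zip(factors0, factors1)]
    return effects, c1 - c0

# Illustrative identity: CO2 = (CO2/elec) * (elec/GDP) * GDP (made-up values)
effects, total = lmdi([0.5, 2.0, 100.0], [0.45, 1.8, 160.0])
print([round(e, 1) for e in effects], round(total, 1))
```

Note how an individual effect (here, the growth term) can exceed the net change in magnitude, which is why the study can report a 55,924-thousand-ton growth effect against a 29,173-thousand-ton net increase.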

Keywords: CO2 emission, decomposition analysis, electricity generation, energy consumption

Procedia PDF Downloads 463
888 Enhancing Wayfinding and User Experience in Hospital Environments: A Study of University Medical Centre Ljubljana

Authors: Nastja Utrosa, Matevz Juvancic

Abstract:

Hospital buildings are complex public environments characterized by intricate functional arrangements and architectural layouts. Effective wayfinding is essential for patients, visitors, students, and staff. However, spatial orientation planning is often overlooked until after construction. While these environments meet functional needs, they frequently neglect the psychological aspects of user experience. This study investigates wayfinding within complex urban healthcare environments, focusing on the influences of spatial design, spatial cognition, and user experience. The inherent complexity of these environments, with extensive spatial dimensions and dispersed buildings, exacerbates the problem. Gradual expansions and additions contribute to disorientation and navigational difficulties for users. Effective route guidance in urban healthcare settings has become increasingly crucial, yet research on the environmental elements that influence wayfinding in such environments remains limited. To address this gap, we conducted a study at the University Medical Centre Ljubljana (UMCL), Slovenia's largest university hospital. Using a questionnaire, we assessed individuals' perceptions and use of outdoor hospital spaces with a diverse sample (n = 179). We evaluated the area's usability by analyzing visit frequency, stops, modes of arrival, and parking patterns, and examined the visitors' age distribution. Additionally, we investigated spatial aids and the use of color as an orientation element at three specific locations within the medical center. Our study explored the impact of color on entrance selection and the effectiveness of warm versus cool colors for wayfinding. Our findings highlight the significance of graphic adjustments in shaping perceptions of hospital outdoor spaces. Most participants preferred visually organized entrances, underscoring the importance of effective visual communication.
Implementing these adaptations can substantially enhance the user experience, reducing stress and increasing satisfaction in hospital environments.

Keywords: hospital layout design, healthcare facilities, wayfinding, navigational aids, spatial orientation, color, signage

Procedia PDF Downloads 19
887 Epidemiology of Hepatitis B and Hepatitis C Viruses Among Pregnant Women at Queen Elizabeth Central Hospital, Malawi

Authors: Charles Bijjah Nkhata, Memory Nekati Mvula, Milton Masautso Kalongonda, Martha Masamba, Isaac Thom Shawa

Abstract:

Viral hepatitis is a serious public health concern globally, with an estimated 1.4 million deaths annually due to liver fibrosis, cirrhosis, and hepatocellular carcinoma. Hepatitis B and C are the most common viruses that cause liver damage, yet the majority of infected individuals are unaware of their serostatus. Viral hepatitis has contributed to maternal and neonatal morbidity and mortality, and there are no updated data on the epidemiology of hepatitis B and C among pregnant mothers in Malawi. This study aimed to assess the epidemiology of hepatitis B and C viruses among pregnant women at Queen Elizabeth Central Hospital (QECH). Specific objectives: • To determine the sero-prevalence of HBsAg and anti-HCV in pregnant women at QECH. • To investigate risk factors associated with HBV and HCV infection in pregnant women. • To determine the distribution of HBsAg and anti-HCV infection among pregnant women of different age groups. A descriptive cross-sectional study was conducted among pregnant women at QECH in the last quarter of 2021. Of the 114 pregnant women, 96 consented and were enrolled using a convenience sampling technique. Twelve participants were dropped for various reasons; therefore, 84 completed the study. A semi-structured questionnaire was used to collect socio-demographic and behavioral characteristics to assess the risk of exposure. Serum was processed from venous blood samples and tested for HBsAg and anti-HCV markers, using rapid screening assays for screening and enzyme-linked immunosorbent assay (ELISA) for confirmation. Of the 84 consenting pregnant women who participated in the study, 1.2% (n = 1/84) tested positive for HBsAg and none had detectable anti-HCV antibodies. There was no significant association between HBV or HCV infection and any of the socio-demographic characteristics or putative risk variables. The findings indicate a viral hepatitis prevalence lower than the range set by the WHO, suggesting that HBV and HCV are rare in pregnant women at QECH.
Nevertheless, accessible screening should be provided for all pregnant women. Prevention of mother-to-child transmission (MTCT) is key to reducing the global burden of chronic viral hepatitis.

Keywords: viral hepatitis, hepatitis B, hepatitis C, pregnancy, Malawi, liver disease, mother-to-child transmission

Procedia PDF Downloads 151
886 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flows and the purposes of reporting data differ and depend on business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form the dataset constructed for each time point, containing all the information required for freight-moving decisions. As a significant amount of these data is used for various purposes, an integrated methodological approach must be developed in response to this problem. The proposed methodology contains several steps: (1) collecting context data sets and validating the data; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs' outlier analysis, particularly for data cleaning and for identifying the statistical significance of data-reporting event cases. The Grubbs test is often used because it tests one extreme value at a time against the boundaries of the standard normal distribution. In the study area, the test has not been widely applied, except where the Grubbs test for outlier detection was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select those forms of genetic algorithm construction that are most likely to extract the best solution. For freight delivery management, genetic algorithm schemas are used as the more effective technique; accordingly, an adaptable genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor.
The authors suggest a methodology for the multi-objective analysis, which evaluates collected context data sets and uses this evaluation to determine a delivery corridor for a freight transfer service in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value in the management of multi-modal transportation processes.
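The Grubbs validation step above can be sketched as follows. This is a minimal single-outlier Grubbs test on a fuel-consumption-style series; the data and the tabulated critical value are illustrative assumptions, not the study's values.

```python
import statistics

# Hedged sketch of the Grubbs test for one outlier: G is the largest absolute
# deviation from the mean, in units of the sample standard deviation.

def grubbs_statistic(data):
    mean = statistics.fmean(data)
    sd = statistics.stdev(data)  # sample (n-1) standard deviation
    return max(abs(x - mean) for x in data) / sd

fuel_use = [31.2, 30.8, 31.5, 30.9, 31.1, 31.4, 30.7, 45.0]  # litres/100 km (made up)
G = grubbs_statistic(fuel_use)

# Tabulated two-sided critical value for n = 8 at the 99% confidence level
# (alpha = 0.01); this constant is an assumption taken from standard tables.
G_CRIT = 2.27
print(round(G, 2), G > G_CRIT)  # if G exceeds G_CRIT, flag the extreme value
```

The reported value (45.0) would be flagged and removed, mirroring the data-cleaning role the test plays in the methodology; testing one extreme value at a time means the procedure is re-run after each removal.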

Keywords: multi-objective analysis, data flow, freight delivery, methodology

Procedia PDF Downloads 166
885 Effects of Subsidy Reform on Consumption and Income Inequalities in Iran

Authors: Pouneh Soleimaninejadian, Chengyu Yang

Abstract:

In this paper, we use data from the Household Income and Expenditure Survey of the Statistics Centre of Iran, conducted from 2005 to 2014, to calculate several inequality measures and to estimate the effects of Iran's targeted subsidy reform act on consumption and income inequality. We first calculate Gini coefficients for income and consumption in order to study the relation between the two as well as the effects of the subsidy reform. Results show that consumption inequality has not always mirrored changes in income inequality; however, both Gini coefficients indicate that the subsidy reform improved inequality. We then calculate the Generalized Entropy index based on consumption and income for the years before and after the Subsidy Reform Act of 2010, in order to look more closely at the changes in the internal structure of inequality after the reform. We find that the improvement in income inequality is mostly caused by the decrease in inequality among lower-income individuals. At the same time, consumption inequality decreased as a result of more equal consumption in both lower- and higher-income groups. Moreover, the increase in the Engel coefficient after the subsidy reform shows that a bigger portion of income is allocated to food consumption, which is a sign of a lower living standard in general. This increase in the Engel coefficient is due to the rise in the inflation rate and the relative increase in the price of food, which is itself partly another consequence of the subsidy reform. We have conducted experiments on the effect of subsidy payments and the possible effects of changing the distribution pattern and amount of cash subsidy payments on income inequality. The results show that cash payments lead to a definite decrease in income inequality and contributed more to the improvement in rural areas than in urban households. We also examine the possible effect of constant payments on the increasing income inequality in the years after 2011.
We conclude that the reduction in the real value of payments as a result of inflation plays an important role, though there may be other reasons. We finally experiment with alternative allocations of transfers while keeping the total amount of cash transfers constant, or reducing it by eliminating the three highest deciles from the cash payment program; the results show that income equality would improve significantly.
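The two inequality measures used in the study can be sketched as follows. This is a minimal illustration of the Gini coefficient and the Generalized Entropy index GE(a) on a toy income vector; the incomes are made up, not Iranian survey data.

```python
import math

def gini(x):
    """Gini coefficient from the rank-weighted formula on sorted values."""
    x = sorted(x)
    n = len(x)
    cum = sum((i + 1) * xi for i, xi in enumerate(x))
    return (2 * cum) / (n * sum(x)) - (n + 1) / n

def gen_entropy(x, a=1.0):
    """Generalized Entropy index; GE(1) is the Theil index,
    GE(0) the mean log deviation."""
    n = len(x)
    mu = sum(x) / n
    if a == 1.0:
        return sum((xi / mu) * math.log(xi / mu) for xi in x) / n
    if a == 0.0:
        return sum(math.log(mu / xi) for xi in x) / n
    return sum((xi / mu) ** a - 1 for xi in x) / (n * a * (a - 1))

incomes = [10, 20, 30, 40, 100]  # illustrative household incomes
print(round(gini(incomes), 3), round(gen_entropy(incomes, 1.0), 3))
```

Varying the parameter a is what lets the GE family "look more closely" at structure: small a weights gaps among the poor more heavily, large a weights gaps among the rich.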

Keywords: consumption inequality, generalized entropy index, income inequality, Iran's subsidy reform

Procedia PDF Downloads 214
884 Contribution to the Hydrogeochemical Investigations on the Wajid Aquifer System, Southwestern Part of Saudi Arabia

Authors: Mohamed Ahmed, Ezat Korany, Abdelaziz Al Basam, Osama Kasem

Abstract:

The arid climate, low precipitation, and growing population make groundwater the main source of water in Saudi Arabia. The Wajid Aquifer System is a regional groundwater aquifer system along the edge of the crystalline Arabian Shield near the southwestern tip of the Arabian Peninsula. The aquifer extends across the border of Saudi Arabia and Yemen, from the Asir-Yemen Highlands to the Rub al Khali Depression and possibly to the Gulf coast. The present work is a hydrogeochemical investigation of the Wajid Aquifer System. The studied area is classified into three zones: the first zone lies west of Wadi Ad Dawasir (the northern part of the studied area), the second is the Najran-Asir zone (the southern part), and the third is the intermediate central zone (occupying the area between the other two). Groundwater samples were collected and chemically analyzed for physicochemical properties such as pH, electrical conductivity, total hardness (TH), alkalinity, total dissolved solids (TDS), major ions (Ca2+, Mg2+, Na+, K+, HCO3-, SO42-, and Cl-), and trace elements. Parameters such as the sodium adsorption ratio (SAR), soluble sodium percentage (Na%), potential salinity, residual sodium carbonate, Kelly's ratio, permeability index, Gibbs ratio, hydrochemical coefficients, hydrochemical formulas, ion dominance, salt combinations, and water types were also calculated in order to evaluate the quality of the groundwater resources in the selected areas for different purposes. The distribution of the chemical constituents and their interrelationships are illustrated by different hydrochemical graphs. Groundwater depths and the depth to water were measured to study the effect of discharge on both the water level and the salinity of the studied groundwater wells.
A detailed comparison between the three studied zones, according to the variations shown by the chemical and field investigations, is discussed in detail within the work.
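Two of the irrigation-quality indices listed above can be sketched as follows. This is a minimal illustration of the sodium adsorption ratio (SAR) and the soluble sodium percentage (Na%) from major-ion concentrations in meq/L; the concentrations are illustrative, not Wajid aquifer measurements.

```python
import math

def sar(na, ca, mg):
    """Sodium adsorption ratio: Na / sqrt((Ca + Mg) / 2), all in meq/L."""
    return na / math.sqrt((ca + mg) / 2)

def soluble_sodium_percentage(na, k, ca, mg):
    """Na% = 100 * (Na + K) / (Ca + Mg + Na + K), all in meq/L."""
    return 100 * (na + k) / (ca + mg + na + k)

# Illustrative groundwater sample (meq/L), not from the studied wells
na, k, ca, mg = 6.0, 0.2, 4.0, 4.0
print(round(sar(na, ca, mg), 2),
      round(soluble_sodium_percentage(na, k, ca, mg), 1))
```

Both indices rise with the sodium share relative to calcium and magnesium, which is what makes them standard screens for the sodium hazard of irrigation water.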

Keywords: Najran-Asir, Wadi Ad Dawasir, Wajid Aquifer System, effect of discharge

Procedia PDF Downloads 112
883 Allylation of Active Methylene Compounds with Cyclic Baylis-Hillman Alcohols: Why Is It Direct and Not Conjugate?

Authors: Karim Hrrath, Khaled Essalah, Christophe Morell, Henry Chermette, Salima Boughdiri

Abstract:

Among the types of carbon-carbon bond formation, allylation of active methylene compounds with cyclic Baylis-Hillman (BH) alcohols is a reliable and widely used method. This reaction is a very attractive tool in the organic synthesis of biological and biodiesel compounds. Thus, in view of the insistent demand for an efficient and straightforward method of synthesizing the desired product, a thorough analysis of the various aspects of the reaction process is an important task. The product afforded by the reaction of active methylene compounds with BH alcohols depends largely on the experimental conditions, notably on the catalyst properties. All experiments have reported that catalysis is needed for this reaction type because of the poor ability of the alcohol hydroxyl group to act as a suitable leaving group. Among the catalysts, several transition-metal-based ones, such as palladium in the presence of acid or base, have been used and are considered reliable. Furthermore, acid catalysts such as BF3.OEt2, BiX3 (X = Cl, Br, I, (OTf)3), InCl3, Yb(OTf)3, FeCl3, p-TsOH, and H-montmorillonite have been employed to activate C-C bond formation through the alkylation of active methylene compounds. Interestingly, a report recently appeared of a smooth process in which 4-dimethylaminopyridine (DMAP) catalyzes the allylation of active methylene compounds with a cyclic Baylis-Hillman (BH) alcohol. However, the reaction mechanism remains ambiguous, since the C-allylation process leads to an unexpected product (denoted P1), corresponding to direct rather than conjugate allylation, which would involve the most electrophilic center according to the effect of the electron-withdrawing carbonyl (CO) group. The main objective of the present theoretical study is to better understand the role of the DMAP catalytic activity as well as the process leading to the end-product (P1) in the catalytic reaction of a cyclic BH alcohol with active methylene compounds.
For that purpose, we have carried out computations on a set of active methylene compounds, varying R1 and R2, toward the same alcohol, and we have attempted to rationalize the mechanisms using the acid-base approach and conceptual DFT tools such as the chemical potential, hardness, Fukui functions, electrophilicity index, and dual descriptor, as these approaches have shown good prediction of reaction products. The present work is organized as follows: the first part gives some computational details and introduces the reactivity indices used; Section 3 is dedicated to the discussion of the prediction of selectivity and regioselectivity; the paper ends with some concluding remarks. In this work, we have shown, using DFT at the B3LYP/6-311++G(d,p) level of theory, that the allylation of active methylene compounds with a cyclic BH alcohol is governed by orbital control and that, hence, the end-product denoted P1 is generated by direct allylation.
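The global conceptual-DFT indices named above can be sketched as follows. This is a minimal illustration in the frozen-orbital (Koopmans-like) approximation, where the chemical potential and hardness are estimated from frontier-orbital energies and combined into Parr's electrophilicity index; the orbital energies are illustrative, not B3LYP/6-311++G(d,p) results for the actual reactants.

```python
def reactivity_indices(e_homo, e_lumo):
    """Global conceptual-DFT indices from frontier-orbital energies (eV):
    chemical potential mu, chemical hardness eta, electrophilicity omega."""
    mu = (e_homo + e_lumo) / 2      # mu ~ -(I + A)/2 in the Koopmans picture
    eta = (e_lumo - e_homo) / 2     # eta ~ (I - A)/2
    omega = mu ** 2 / (2 * eta)     # Parr electrophilicity index
    return mu, eta, omega

# Illustrative frontier-orbital energies (eV), not computed values
mu, eta, omega = reactivity_indices(e_homo=-6.5, e_lumo=-1.5)
print(mu, eta, omega)
```

Local selectivity questions such as direct versus conjugate allylation are then addressed by condensing the Fukui functions and the dual descriptor onto individual atoms, which requires the per-atom populations from the actual DFT calculation.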

Keywords: DFT calculation, gas phase pKa, theoretical mechanism, orbital control, charge control, Fukui function, transition state

Procedia PDF Downloads 287
882 A Preliminary Research on Constituted Rules of Settlement Housing Alterations of Chinese New Village in Malaysia: A Study of Ampang New Village, Selangor

Authors: Song Hung Chi, Lee Chun Benn

Abstract:

Following "A Research on Types of Settlement Housing Alterations of Chinese New Village in Malaysia: A Study in Ampang New Village, Selangor," which preliminarily found that the main factors for expansion and enlargement are the needs of users' lives and restoration purposes, this paper takes the analysis further. Alteration generally occurs at the rear of the main house, with different types of derivatives; the average expansion area does not exceed 100 m², and the building materials used are wood, wooden structures, and zinc, which are non-permanent building materials. A subsequent study is therefore undertaken in this paper, analyzing the drawings with a summarizing method to explore the derived forms and the constituted rules of housing alterations in Ampang New Village, as a more complete presentation of housing alterations in the New Village. First, existing housing alterations are classified into three types using the summarizing method: Type 1, addition to the prototype house; Type 2, expansion of the prototype house; and Type 3, diffusion of additions. The results show that the derivative mode of alterations can be divided by the use of a "continuous wall" or "non-continuous wall"; this affects the structural systems and roof styles of the alterations and forms different layers of interior space, in "stages" and in "continuity." In terms of spatial distribution, the sacrificial area, as a prescribed function of space, mostly remains in its original location at the center of the living area after alterations. It is an important characteristic of a New Village house, reflecting the traditional ethics of Hakka Chinese communities in the settlement. In addition, wood is the main building material in the constituted rules for the prototype house; although other building materials, such as cement, brick, glass, metal, and zinc, appear after alterations, the houses mostly retain the "wooden house" pattern.
The results show that because the village economy has not significantly improved, the additional buildings are similar in type and construction to the existing ones. The alterations did not significantly improve the quality of living but only increased the usable floor area.

Keywords: Ampang New Village, derived forms, constituted rules, alterations

Procedia PDF Downloads 309
881 Environmental Impact of a New-Build Educational Building in England: Life-Cycle Assessment as a Method to Calculate Whole Life Carbon Emissions

Authors: Monkiz Khasreen

Abstract:

In the context of the global trend towards reducing new buildings' carbon footprints, the design team is required to make early decisions that have a major influence on embodied and operational carbon. Sustainability strategies should be clear during the early stages of the building design process, as changes made later can be extremely costly. Life-Cycle Assessment (LCA) could be used as the vehicle to carry other tools and processes towards achieving the requested improvement. Although LCA is the 'gold standard' for evaluating buildings from cradle to grave, the lack of detail available at the concept design stage makes LCA very difficult, if not impossible, to use as an estimation tool at early stages. Issues related to the transparency and accessibility of information in the building industry affect the credibility of LCA studies. A verified database derived from LCA case studies needs to be accessible to researchers, design professionals, and decision makers in order to offer guidance on specific areas of significant impact. This database could be built up from data from multiple sources within a pool of research held in this context. One of the most important factors affecting the reliability of such data is the temporal factor, as building materials, components, and systems are changing rapidly with the advancement of technology, making production more efficient and less environmentally harmful. Recent LCA studies on different building functions, types, and structures are always needed to update databases derived from research and to form case bases for comparison studies. There is also a need to make these studies transparent and accessible to designers. The work in this paper sets out to address this need. This paper presents a life-cycle case study of a new-build educational building in England. The building utilised very current construction methods and technologies and is rated BREEAM Excellent.
Carbon emissions of different life-cycle stages and different building materials and components were modelled. Scenario and sensitivity analyses were used to estimate the future of new educational buildings in England. The study attempts to form an indicator during the early design stages of similar buildings. Carbon dioxide emissions of this case study building, when normalised according to floor area, lie towards the lower end of the range of worldwide data reported in the literature. Sensitivity analysis shows that life cycle assessment results are highly sensitive to future assumptions made at the design stage, such as future changes in electricity generation structure over time, refurbishment processes and recycling. The analyses also prove that large savings in carbon dioxide emissions can result from very small changes at the design stage.

Keywords: architecture, building, carbon dioxide, construction, educational buildings, England, environmental impact, life-cycle assessment

Procedia PDF Downloads 101
880 Behavioral Mapping and Post-Occupancy Evaluation of Meeting-Point Design in an International Airport

Authors: Meng-Cong Zheng, Yu-Sheng Chen

Abstract:

Meeting is a pervasive kind of interaction that often occurs between arriving passengers and those picking them up. However, the meeting point set up at Taoyuan International Airport is too far from the arrival exit, so people often stop and search near the exit instead. When the number of people waiting increases at rush hour, the result is often chaos in the waiting area. This study tried to find out the key factors that help passengers and pick-ups find each other quickly. We then implemented several design proposals, based on behavioral mapping and post-occupancy evaluation, to improve the meeting behavior of passengers and pick-ups and to enhance their meeting efficiency in unfamiliar environments. The research site is the reception hall of the second terminal of Taoyuan International Airport. Behavioral observation and mapping were carried out on the entry of inbound passengers into the welcome space, including the distribution of the crowd leaning on the separation wall in the waiting area, meeting behavior, and the interaction between inbound passengers and pick-ups. We then redesigned the space planning and signage based on post-occupancy evaluation to verify their effectiveness. This study found that passengers ignore the existing meeting-point designs, which are placed on distant pillars at both ends. The position of the screen affects the area where pick-ups linger, causing them to block the passengers' path. Pick-ups prefer to wait where it is easy to watch incoming passengers and where it is closest to the mode of transport they will take when leaving. Larger groups tend to gather next to landmarks, while smaller groups spread across the wide waiting area in the lobby. The location of the meeting point chosen by the pick-ups is related to the incoming passenger's line of sight.
Finally, this study proposes an improved design of the meeting point, setting the traffic information in it, so that most passengers can see the traffic information when they enter the country. At the same time, we also redesigned the pick-ups desk to improve the efficiency of passenger meeting.

Keywords: meeting point design, post-occupancy evaluation, behavioral mapping, international airport

Procedia PDF Downloads 121
879 Rice Area Determination Using Landsat-Based Indices and Land Surface Temperature Values

Authors: Burçin Saltık, Levent Genç

Abstract:

This study aimed to establish a procedure for identifying rice cultivation areas within the Thrace and Marmara regions of Turkey using remote sensing and GIS. Landsat 8 (OLI-TIRS) imagery acquired in the 2013 production season, Path/Row 181/32, was used. Four seasonal image sets were generated from the original bands and different transformation techniques. All images were classified individually using supervised classification, and Land Use Land Cover (LULC) maps with 8 classes were generated; the area (ha, %) of each class was calculated. In addition, district-based rice distribution maps were produced and compared with the Turkish Statistical Institute's (TurkSTAT; TSI) actual rice cultivation records. Accuracy assessments were conducted, and the most accurate map was selected based on the assessment results and coherence with the TSI figures. Additionally, rice pixels on slopes above 4° were considered misclassified and were eliminated using a slope map and GIS tools. Finally, randomized rice zones were sampled to obtain minimum-maximum value ranges from the NDVI, LSWI, and LST images of each date (May, June, July, August, and September separately), to test whether these ranges can be used for rice area determination via the raster calculator tool of ArcGIS. The most accurate classification for rice determination was obtained from the seasonal LSWI LULC map; after misclassified pixels were removed from this map in light of the TSI data and accuracy assessment results, 83,151.5 ha of rice area remained within the study area, which exceeds the TSI record by 12,702.3 ha. Use of the minimum-maximum index ranges was tested in the Meric district, where the value ranges obtained from the July imagery gave the closest result to the TSI records, with a difference of only 206.4 ha. 
This difference is expected given the relatively low resolution of the images; imagery with higher spectral, spatial, temporal, and radiometric resolution should provide more reliable results.
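The index-range step described above can be sketched in a few lines. The formulas for NDVI and LSWI are standard (Landsat 8 bands: Red = B4, NIR = B5, SWIR1 = B6); the reflectance values and the 0.5-0.9 threshold range below are invented placeholders, not values from the study.

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red)

def lswi(nir, swir1):
    # Land Surface Water Index: (NIR - SWIR1) / (NIR + SWIR1)
    return (nir - swir1) / (nir + swir1)

def rice_mask(index_img, vmin, vmax):
    # Flag pixels whose index value falls inside the min-max range
    # sampled from known rice zones (the "raster calculator" step).
    return (index_img >= vmin) & (index_img <= vmax)

# Hypothetical surface-reflectance values for a 2x2 scene
nir = np.array([[0.40, 0.35], [0.10, 0.45]])
red = np.array([[0.08, 0.10], [0.09, 0.07]])

# Hypothetical NDVI range for rice in the July image
mask = rice_mask(ndvi(nir, red), 0.5, 0.9)
```

The same masking function applies unchanged to LSWI or LST rasters; only the sampled value range differs per date and index.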

Keywords: landsat 8 (OLI-TIRS), LST, LSWI, LULC, NDVI, rice

Procedia PDF Downloads 209
878 Improving Binding Selectivity in Molecularly Imprinted Polymers from Templates of Higher Biomolecular Weight: An Application in Cancer Targeting and Drug Delivery

Authors: Ben Otange, Wolfgang Parak, Florian Schulz, Michael Alexander Rubhausen

Abstract:

This research demonstrates the feasibility of extending the molecular imprinting technique to complex biomolecules. The technique is promising for diverse applications such as drug delivery, disease diagnosis, catalysis, impurity detection, and the treatment of various complications. While molecularly imprinted polymers (MIPs) are robust for synthesizing materials with remarkable binding sites of high affinity for specific molecules of interest, extending the approach to complex biomolecules has remained elusive. Here we report the successful synthesis of MIPs from the complex proteins BSA, transferrin, and MUC1. We show that, despite the heterogeneous binding sites and higher conformational flexibility of these proteins, imprinting against their respective epitopes and motifs, rather than the whole template, produces highly sensitive and selective MIPs for specific molecular binding. Introduction: Proteins are vital to most biological processes, from cell structure and structural integrity to complex functions such as transport and immunity. Unlike other imprinting templates, proteins present heterogeneous binding sites along their complex long-chain structure, which complicates their imprinting. To address this challenge, we focus on targeted delivery: molecular imprinting on the particle surface allows nanoparticles to recognize proteins overexpressed on target cells. Our goal is thus to produce nanoparticle surfaces that bind specifically to the target cells. Results and Discussion: Using epitopes of the BSA and MUC1 proteins and motifs with conserved receptors of transferrin as the respective templates, we observed a significant improvement in MIP sensitivity toward the binding of the complex protein templates. 
Fluorescence correlation spectroscopy (FCS) measurements of the protein corona size after incubating the synthesized nanoparticles with proteins revealed a high affinity of the MIPs for their respective complex proteins. In addition, quantitative analysis of the hard corona by SDS-PAGE showed that, when incubated with similar concentrations of a protein mixture, only the specific target protein was strongly bound to the respective MIP. Conclusion: Our findings show that the merits of MIPs can be extended to complex molecules of higher biomolecular mass. As such, the unique advantages of the technique, including high sensitivity and selectivity, relative ease of synthesis, physical robustness, and stability, can be extended to templates that were previously unsuitable despite their abundance and importance within the body.

Keywords: molecularly imprinted polymers, specific binding, drug delivery, high biomolecular mass-templates

Procedia PDF Downloads 36
877 The Impact of COVID-19 Waste on Aquatic Organisms: Nano/Microplastics and Molnupiravir in Salmo trutta Embryos and Larvae

Authors: Živilė Jurgelėnė, Vitalijus Karabanovas, Augustas Morkvėnas, Reda Dzingelevičienė, Nerijus Dzingelevičius, Saulius Raugelė, Boguslaw Buszewski

Abstract:

The short- and long-term effects of the COVID-19 antiviral drug molnupiravir and of micro/nanoplastics on the early development of Salmo trutta were investigated through accumulation and exposure studies, with S. trutta serving as a standardized test organism for COVID-19 waste contaminants. Confocal fluorescence spectral imaging microscopy was used for 2D/3D imaging to assess the uptake, bioaccumulation, and distribution of molnupiravir-micro/nanoplastics complexes in live fish. Our results demonstrate that molnupiravir may interact with micro/nanoplastics and modify their spectroscopic parameters and their toxicity to S. trutta embryos and larvae. Microplastics of 0.2 µm size at a concentration of 10 mg/L were more stable in aqueous media than the 0.02 µm and 2 µm polymeric particles. The study also demonstrated that polymeric particles can adsorb molnupiravir present in mixtures and modify its accumulation in S. trutta embryos and larvae. In addition, 2D/3D confocal fluorescence imaging showed that single polymeric particles hardly accumulated and could not penetrate the outer tissues of the test organisms; however, co-exposure to micro/nanoplastics and molnupiravir significantly enhanced the particles' capability to accumulate on and penetrate the surface tissues of fish in early development. Exposure to molnupiravir at a concentration of 2 g/L, and co-exposure to micro/nanoplastics and molnupiravir, did not change survival in the early stages of S. trutta development, but we observed a reduction in heart rate and a decrease in gill ventilation. Statistical analysis confirmed that micro/nanoplastics used in combination with molnupiravir enhance the latter's toxicity to embryos and larvae. 
This research has received funding from the European Regional Development Fund (project No 13.1.1-LMT-K-718-05-0014) under a grant agreement with the Research Council of Lithuania (LMTLT), and it was funded as part of the European Union’s measure in response to the COVID-19 pandemic.

Keywords: fish, micro/nanoplastics, molnupiravir, toxicity

Procedia PDF Downloads 74
876 Probabilistic Life Cycle Assessment of the Nano Membrane Toilet

Authors: A. Anastasopoulou, A. Kolios, T. Somorin, A. Sowale, Y. Jiang, B. Fidalgo, A. Parker, L. Williams, M. Collins, E. J. McAdam, S. Tyrrel

Abstract:

Developing countries are confronted with great challenges in domestic sanitation services in view of imminent water scarcity. Contemporary sanitation technologies established in these countries are likely to pose health risks unless waste management standards are followed properly. This paper offers a route to sustainable sanitation through an innovative toilet system, the Nano Membrane Toilet (NMT), developed by Cranfield University and sponsored by the Bill & Melinda Gates Foundation. The technology converts human faeces into energy through gasification and recovers treated water from urine through membrane filtration. To evaluate the environmental profile of the NMT system, a deterministic life cycle assessment (LCA) was conducted in SimaPro software using the Ecoinvent v3.3 database, identifying the factors that contribute most to the system's environmental footprint. However, since sensitivity analysis identified certain operating parameters as critical to the robustness of the LCA results, a stochastic approach to the Life Cycle Inventory (LCI) is adopted to capture input data uncertainty comprehensively and enhance the credibility of the LCA outcome. For that purpose, Monte Carlo simulations, combined with an artificial neural network (ANN) model, were conducted for the input parameters of raw material, produced electricity, NOX emissions, amount of ash, and transportation of fertilizer. This analysis provides the distributions and confidence intervals of the selected impact categories, allowing more credible conclusions to be drawn about the Life Cycle Impact Assessment (LCIA) profile of the NMT system. 
The study also yields insights into a methodological framework that can be adopted in the environmental impact assessment of other complex engineering systems subject to high levels of input data uncertainty.
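The Monte Carlo step described above can be sketched as follows. All parameter means, standard deviations, and characterization factors here are invented placeholders, and a simple linear impact function stands in for the ANN surrogate the abstract describes; the structure, not the numbers, is the point.

```python
import random
import statistics

random.seed(42)

# Hypothetical uncertain LCI inputs (mean, std dev) mirroring the
# abstract's parameter list: raw material, electricity, NOx, ash, transport.
params = {
    "raw_material_kg": (1.0, 0.10),
    "electricity_kwh": (0.5, 0.05),
    "nox_g":           (2.0, 0.40),
    "ash_kg":          (0.3, 0.03),
    "transport_tkm":   (0.2, 0.05),
}

# Illustrative characterization factors standing in for the ANN surrogate
# that maps inventory inputs to an impact score (e.g. GWP).
factors = {"raw_material_kg": 1.2, "electricity_kwh": 0.9,
           "nox_g": 0.3, "ash_kg": 0.1, "transport_tkm": 0.6}

def impact(sample):
    return sum(factors[k] * v for k, v in sample.items())

# Propagate input uncertainty: sample inputs, evaluate impact, repeat.
n = 10_000
scores = sorted(
    impact({k: random.gauss(mu, sd) for k, (mu, sd) in params.items()})
    for _ in range(n)
)

mean = statistics.fmean(scores)
ci_low, ci_high = scores[int(0.025 * n)], scores[int(0.975 * n)]
```

The resulting distribution and 95% interval are the stochastic analogue of the single number a deterministic LCA would report for each impact category.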

Keywords: sanitation systems, nano-membrane toilet, lca, stochastic uncertainty analysis, Monte Carlo simulations, artificial neural network

Procedia PDF Downloads 210
875 The Rational Mode of Affordable Housing Based on the Special Residence Space Form of City Village in Xiamen

Authors: Pingrong Liao

Abstract:

As China undergoes rapid urbanization, a large rural population has flowed into the cities, and housing it is an urgent problem. Xiamen typifies the Chinese city characterized by high housing prices and low incomes. Because the government has failed to provide adequate affordable public housing, many immigrants dwell in informal rental housing, typified by the "city village". Comfortable housing is a prerequisite for the harmony and stability of the city. Taking the "city village" and affordable housing as its main objects of study, this paper analyzes the housing conditions, resident distribution, and mobility of Xiamen's city villages, and presents a preliminary study of residential form and basic facilities such as commerce and property-management services, set against the existing state of affordable housing in Xiamen; a summary and comparison are then made to provide references and experience for the construction and improvement of government-subsidized housing and to raise the residential quality of the urban poor. Data and results are collated and quantified objectively from the relevant literature, the latest market data, and field investigation, using the research methods of comparative study and case analysis. The informal rental housing, informal economy, and informal management of the "city village" fit the housing needs of the floating population in many ways, providing convenient and efficient conditions for population flow. 
However, the existing subsidized housing in Xiamen has several drawbacks: it is unevenly distributed, its spatial form is monotonous, the allocation standard for public service facilities is not targeted to the subsidized population, and the property-management system is imperfect and too costly. This paper therefore draws lessons from the informal model of the "city village" and puts forward improvement strategies.

Keywords: urban problem, urban village, affordable housing, living mode, Xiamen constructing

Procedia PDF Downloads 230
874 Calculation of the Thermal Stresses in an Elastoplastic Plate Heated by Local Heat Source

Authors: M. Khaing, A. V. Tkacheva

Abstract:

This work solves the problem of temperature stresses caused by point heating of a round plate. The plate is made of an elastoplastic material, so the Prandtl-Reuss model is used. The piecewise-linear Ishlinsky-Ivlev flow condition, in which the yield stress depends on temperature, is taken as the loading surface. Piecewise-linear conditions (Tresca or Ishlinsky-Ivlev), in contrast to the Mises condition, make it possible to obtain solutions of the equilibrium equation in analytical form. In the problem under consideration, however, no solution can be obtained using the Tresca conditions: the equilibrium equation ceases to be satisfied when two Tresca conditions are fulfilled at once. Using the Ishlinsky-Ivlev flow conditions makes the problem solvable, although solutions are still absent on the edges of the Ishlinsky-Ivlev hexagon in the plane-stress state; the authors therefore propose to jump from one edge of the hexagon to an adjacent one, which makes an analytical solution possible. The paper compares solutions of the plate thermal deformation problem. One solution was obtained with elastic moduli (Young's modulus, Poisson's ratio) that depend on temperature, with the yield stress assumed to be parabolically temperature dependent. The main results of the comparison are that the region of irreversible deformation is larger in the calculations with constant elastic moduli, and that no repeated plastic flow occurs in the solution with temperature-dependent moduli. 
The absolute value of the irreversible deformations is also higher in the solution with constant elastic moduli, and there are slight differences in the distribution of the residual stresses.
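A parabolic temperature dependence of the yield stress, as assumed above, is commonly written in a form like the following; this is an illustrative expression, not necessarily the authors' exact one, with $k_0$ the yield stress at the reference temperature $T_0$ and $T_m$ the temperature at which the yield stress vanishes:

```latex
k(T) = k_0\left[1 - \left(\frac{T - T_0}{T_m - T_0}\right)^{2}\right]
```

Under this form the material softens quadratically on heating, so the plastic zone around the heat source grows faster than it would under a constant yield stress.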

Keywords: temperature stresses, elasticity, plasticity, Ishlinsky-Ivlev condition, plate, annular heating, elastic moduli

Procedia PDF Downloads 131
873 Numerical Analysis of CO₂ Storage as Clathrates in Depleted Natural Gas Hydrate Formation

Authors: Sheraz Ahmad, Li Yiming, Li XiangFang, Xia Wei, Zeen Chen

Abstract:

Holding CO₂ at massive scale in enclathrated solid form, i.e., as hydrate, is regarded as one of the most reliable methods of CO₂ sequestration for greenhouse gas emission control and global warming prevention. In this study, a dynamically coupled mass and heat transfer mathematical model is developed that describes the unsteady behavior of CO₂ flowing into a porous medium and converting into hydrate. The numerical solution by an implicit finite-difference method is explained; by coupling the mass, momentum, and heat conservation relations, an integrated model is established to analyze CO₂ hydrate growth within the P-T equilibrium conditions. The CO₂ phase transition, the effect of exothermic heat release during hydrate nucleation, and the variations of thermophysical properties have been studied. The results show that the formation pressure distribution stabilizes at the early stage of the hydrate nucleation process and remains stable afterward, whereas the formation temperature varies throughout CO₂ injection and hydrate nucleation. Initially, the temperature drops owing to the injection of cold, high-pressure CO₂; once massive hydrate growth is triggered, the temperature rises under the influence of exothermic heat evolution, eventually surpassing the initial formation temperature. Increasing the injection pressure increases the hydrate growth rate in a long formation and expands the overall hydrate-covered length within the same induction period. The results also show that the injection pressure and hydrate growth rate affect other parameters, such as CO₂ velocity, permeability, and density, and the CO₂ and H₂O saturation inside the porous medium. Reducing the injection temperature in an attempt to further enhance the growth rate and expand the hydrate-covered length did not give satisfactory outcomes. 
Hence, CO₂ injected into vacated natural gas hydrate sediments can form hydrate under low-temperature, high-pressure conditions, but doing so at large scale in lengthy formations remains very challenging.
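The heat-conservation part of such a coupled model can be illustrated with a minimal 1D implicit (backward Euler) finite-difference step. This is a sketch only: the coupling to mass and momentum conservation and the hydrate-formation heat release are omitted (reduced to an optional source term), and all physical values below are invented placeholders.

```python
import numpy as np

def implicit_heat_step(T, alpha, dx, dt, source=0.0):
    # One backward-Euler step of 1D heat conduction dT/dt = alpha d2T/dx2
    # with fixed (Dirichlet) boundary temperatures. In the full coupled
    # model, `source` would carry the exothermic hydrate-formation heat.
    n = len(T)
    r = alpha * dt / dx**2
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = -r
        A[i, i] = 1 + 2 * r
        A[i, i + 1] = -r
    A[0, 0] = A[-1, -1] = 1.0      # hold boundary temperatures fixed
    b = T + dt * source
    b[0], b[-1] = T[0], T[-1]
    return np.linalg.solve(A, b)   # implicit: solve the tridiagonal system

# Hypothetical setup: formation at 2 degC, cold CO2 held at the left face.
T = np.full(50, 2.0)
T[0] = -5.0
for _ in range(100):               # march the cold front into the formation
    T = implicit_heat_step(T, alpha=1e-6, dx=0.1, dt=3600.0)
```

A production code would solve the tridiagonal system with the Thomas algorithm rather than a dense solve, and would update pressure, saturations, and the hydrate fraction between temperature steps.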

Keywords: CO₂ hydrates, CO₂ injection, CO₂ Phase transition, CO₂ sequestration

Procedia PDF Downloads 119