Search results for: correlated parallel machines
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3111

501 Understanding the Accumulation of Microplastics in Riverbeds and Soils

Authors: Gopala Krishna Darbha

Abstract:

Microplastics (MPs) are secondary fragments of large-sized plastic debris released into the environment, falling in the size range of less than 5 mm. Though reports indicate the abundance of MPs in both riverine and soil environments, their fate is still not completely understood due to the complexity of natural conditions. Mineral particles are ubiquitous in rivers and may play a vital role in accumulating MPs in the riverbed, thus affecting benthic life and posing a threat to the river's health. In addition, the solution chemistry (pH, ionic strength, humics) at the interface can be very prominent. MPs can also act as potential vectors transporting other contaminants in the environment, causing secondary water pollution. The present study focuses on understanding the interaction of MPs with a weathering sequence of minerals (feldspar, kaolinite, and gibbsite) in batch mode under environmentally relevant natural conditions. Simultaneously, we performed stability studies and transport (column) experiments to understand the mobility of MPs under varying soil solution (SS) chemistry and the influence of contaminants (CuO nanoparticles). Results showed that the charge and morphology of gibbsite played a significant role in the sorption of NPs (108.1 mg/g) compared to feldspar (7.7 mg/g) and kaolinite (11.9 mg/g). Fourier transform infrared spectroscopy data support the complexation of NPs with gibbsite particles via hydrogen bonding. In the case of feldspar and kaolinite, a weak interaction with NPs was observed, which can be attributed to electrostatic repulsion and the low surface-area-to-volume ratio of the mineral particles. The study highlights enhanced mobility in the presence of feldspar and kaolinite, while gibbsite-rich zones can trap NPs, causing accumulation in riverbeds.
In the case of soils, in the absence of MPs, very high aggregation of CuO NPs was observed in SS extracted from black, lateritic, and red soils, which can be correlated with ionic strength (IS) and the type of ionic species. The sedimentation rate (Ksed (1/h)) for CuO NPs was >0.5 h−1 in these SS. Interestingly, the stability and sedimentation behavior of CuO NPs varied significantly in the presence of MPs: the Ksed for CuO NPs decreased by half, falling below 0.25 h−1 in all SS. C/C0 values in breakthrough curves increased drastically (black < alluvial < laterite < red) in the presence of MPs. Results suggest that the release of MPs into the terrestrial ecosystem is a potential threat, leading to increased mobility of metal nanoparticles in the environment.
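The sedimentation constants quoted above (Ksed in 1/h) follow from a first-order decay model of suspended-particle concentration; a minimal sketch of how such a rate constant can be fitted by log-linear regression (the time series below is synthetic and illustrative, not the study's measurements):

```python
import numpy as np

def sedimentation_rate(t_hours, conc):
    """Estimate a first-order sedimentation rate constant Ksed (1/h)
    from a concentration time series, assuming C(t) = C0*exp(-Ksed*t).
    Fitted as a log-linear least-squares slope."""
    t = np.asarray(t_hours, dtype=float)
    c = np.asarray(conc, dtype=float)
    slope, _intercept = np.polyfit(t, np.log(c), 1)
    return -slope

# Synthetic series decaying at Ksed = 0.5 1/h, the threshold reported
# for CuO NPs in soil solutions without MPs
t = np.arange(0, 6)
c = 100.0 * np.exp(-0.5 * t)
print(round(sedimentation_rate(t, c), 3))  # → 0.5
```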

Keywords: microplastics, minerals, sorption, soils

Procedia PDF Downloads 76
500 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical-based passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. The results of the study are significant. We developed a gesture-based password application for data collection, with two modes: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by another user for a fixed duration of time. Three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. Machine learning algorithms were applied to determine whether the person entering a password is a genuine user or an imposter. Five different machine learning algorithms were deployed to compare performance in user authentication: Decision Trees, Linear Discriminant Analysis, Naive Bayes Classifier, Support Vector Machines (SVMs) with a Gaussian radial basis kernel function, and K-Nearest Neighbor. Gesture-based password features vary from one entry to the next, making it difficult to distinguish between a creator and an intruder for authentication.
For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication with a timer of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with Gaussian Radial Basis Kernel outperform other ML algorithms for gesture-based password authentication. Results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from the gesture-based passwords lead to less vulnerable user authentication.
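As a rough illustration of the five-algorithm comparison described above, the sketch below cross-validates the same classifier families on synthetic stand-ins for the four normalized features (password score, length, speed, size); the data distributions, class sizes, and hyperparameters are all assumptions, not the study's:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the four features of genuine users
# vs. imposters (slightly shifted distributions)
X_genuine = rng.normal(0.6, 0.15, size=(200, 4))
X_imposter = rng.normal(0.4, 0.15, size=(200, 4))
X = MinMaxScaler().fit_transform(np.vstack([X_genuine, X_imposter]))
y = np.array([1] * 200 + [0] * 200)

classifiers = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "NaiveBayes": GaussianNB(),
    "SVM-RBF": SVC(kernel="rbf"),   # Gaussian radial basis kernel
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```

On real gesture data the ordering reported in the abstract (SVM-RBF on top) would of course depend on the feature distributions.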

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 85
499 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups

Authors: Sakshi Bhalla

Abstract:

On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are discarded as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded using a uniform two-part questionnaire in which they were required to a) identify the meaning of 15 emoji placed in isolation, and b) interpret the meaning of the same 15 emoji placed in a context-defining posting on Twitter. Their responses were studied on the basis of deviation from the responses that identified the emoji in isolation, as well as from the originally intended meaning ascribed to the emoji. An analysis of these results showed that each of the five age categories uses, understands, and perceives emoji differently, which could be attributed to their degree of exposure. For example, the youngest category (aged < 20) was the least accurate at correctly identifying emoji in isolation (~55%), and its proclivity to change responses with respect to context was also the lowest (~31%). However, an analysis of their individual responses showed that these first-borns of social media seem to have reached a point where emoji no longer evoke their most literal meanings: the meaning and implication of these emoji have evolved to imply their context-derived meanings, even when placed in isolation. These trends carry forward meaningfully for the other four groups as well. In the case of the oldest category (aged > 35), however, the trends indicated lower accuracy and, therefore, a greater proclivity to change responses. When studied as a continuum, the responses indicate that, slowly and steadily, emoji are evolving from pictograms to ideograms.
That is to suggest that they do not just indicate a one-to-one relation between a singular form and a singular meaning; in fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which developed from simple pictograms into progressively more complex ideograms. This evolution is parallel to and contingent on the simultaneous evolution of the platforms that carry communication, and what is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one instance of language adapting to the demands of the digital world. That it has no spoken component and no ostensible grammar, and that it lacks standardization of use and meaning, may, as some suggest, seem like impediments to qualifying it as the 'language' of the digital world. However, such a declaration remains a function of time, and time alone.

Keywords: communication, emoji, language, Twitter

Procedia PDF Downloads 79
498 Storm-Runoff Simulation Approaches for External Natural Catchments of Urban Sewer Systems

Authors: Joachim F. Sartor

Abstract:

According to German guidelines, external natural catchments are larger sub-catchments without significant portions of impervious area that possess a surface drainage system and empty into a sewer network. Basically, such catchments should be disconnected from sewer networks, particularly from combined systems. If this is not possible due to local conditions, their flow hydrographs have to be considered in the design of sewer systems, because their impact may be significant. Since there is a lack of sufficient storm-runoff measurements for such catchments, and hence of verified simulation methods to analyze their design flows, German standards give only general advice and demand special consideration in such cases. Compared to urban sub-catchments, external natural catchments exhibit greatly different flow characteristics. With increasing area size, their hydrological behavior approximates that of rural catchments; e.g., sub-surface flow may prevail and lag times are comparably long. The literature for Central Europe offers few observed peak flow values and only simple (mostly empirical) approaches, but most of them are at least helpful for cross-checking results achieved by simulation without calibration. Using storm-runoff data from five monitored rural watersheds in the west of Germany with catchment areas between 0.33 and 1.07 km², the author investigated, by multiple-event simulation, three different approaches to determine the rainfall excess: the modified SCS variable runoff coefficient methods by Lutz and Zaiß, as well as the soil moisture model by Ostrowski. Selection criteria for storm events from continuous precipitation data were taken from the recommendations of M 165, and the runoff concentration method (parallel cascades of linear reservoirs) from a DWA working report to which the author had contributed. In general, the two runoff coefficient methods showed results that are of sufficient accuracy for most practical purposes.
The soil moisture model showed no significantly better results, at least not to a degree that would justify the additional data collection that its parameter determination requires. In particular, typical convective summer events after long dry periods, which are often decisive for sewer networks (less so for rivers), showed discrepancies between simulated and measured flow hydrographs.
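The Lutz and Zaiß procedures mentioned above build on the classical SCS curve-number relation for rainfall excess; a minimal sketch of that underlying relation (the curve number and storm depth used here are arbitrary examples, not values from the study):

```python
def scs_runoff(p_mm, cn):
    """Classical SCS curve-number rainfall excess (mm).
    p_mm: storm precipitation depth; cn: curve number (0-100).
    Uses the standard initial abstraction Ia = 0.2 * S."""
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
    ia = 0.2 * s                  # initial abstraction
    if p_mm <= ia:
        return 0.0                # storm fully absorbed, no excess
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: a 50 mm storm on a catchment with CN = 75
print(round(scs_runoff(50.0, 75), 1))  # → 9.3
```

The "variable runoff coefficient" refinements adjust the effective retention with antecedent moisture rather than keeping S fixed over the event.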

Keywords: external natural catchments, sewer network design, storm-runoff modelling, urban drainage

Procedia PDF Downloads 131
497 Establishing a Sustainable Construction Industry: Review of Barriers That Inhibit Adoption of Lean Construction in Lesotho

Authors: Tsepiso Mofolo, Luna Bergh

Abstract:

The Lesotho construction industry fails to embrace environmental practices, which has led to excessive consumption of resources, land degradation, air and water pollution, loss of habitats, and high energy usage. The industry is highly inefficient, and this undermines its capability to yield the optimum contribution to social, economic, and environmental development. Sustainable construction is, therefore, imperative to ensure the cultivation of benefits from all these intrinsic themes of sustainable development. The development of a sustainable construction industry requires a holistic approach that takes into consideration the interaction between Lean Construction principles, socio-economic and environmental policies, technological advancement, and the principles of construction or project management. Sustainable construction is a cutting-edge phenomenon, forming a component of a subjectively defined concept called sustainable development, which can be defined in terms of attitudes and judgments that help ensure long-term environmental, social, and economic growth in society. The key concept of sustainable construction is Lean Construction, which emanates from the principles of the Toyota Production System (TPS), namely the application and adaptation of fundamental concepts and principles focused on waste reduction, increased value to the customer, and continuous improvement. The focus is on the reduction of socio-economic waste and the prevention of environmental degradation by reducing the carbon dioxide emission footprint. Lean principles require a fundamental change in the behaviour and attitudes of the parties involved in order to overcome barriers to cooperation.
Prevalent barriers to the adoption of Lean Construction in Lesotho are mainly structural, such as unavailability of financing, corruption, operational inefficiency or wastage, lack of skills and training, inefficient construction legislation, and political interference. The consequential effects of these problems trickle down to the quality, cost, and time of the project, which then results in an escalation of operational costs due to the cost of rework or material wastage. Factor and correlation analysis of these barriers indicates that they are highly correlated, which poses a potential detriment to the country's welfare, environment, and construction safety. It is, therefore, critical for Lesotho's construction industry to develop robust governance through bureaucratic reform and stringent law enforcement.

Keywords: construction industry, sustainable development, sustainable construction industry, lean construction, barriers to sustainable construction

Procedia PDF Downloads 264
496 Pathway Linking Early Use of Electronic Device and Psychosocial Wellbeing in Early Childhood

Authors: Rosa S. Wong, Keith T.S. Tung, Winnie W. Y. Tso, King-Wa Fu, Nirmala Rao, Patrick Ip

Abstract:

Electronic devices have become an essential part of our lives. Various reports have highlighted the alarming usage of electronic devices at early ages and its long-term developmental consequences: more sedentary screen time is associated with increased adiposity, worse cognitive and motor development, and poorer psychosocial health. Apart from the problems caused by children's own screen time, parents today often pay less attention to their children because of hand-held devices, and some anecdotes suggest that such distracted parenting has a negative impact on the parent-child relationship. This study examined whether distracted parenting detrimentally affects parent-child activities, which may, in turn, impair children's psychosocial health. In 2018/19, we recruited a cohort of preschoolers from 32 local kindergartens in Tin Shui Wai and Sham Shui Po for a 5-year programme aiming to build stronger foundations for children from disadvantaged backgrounds through an integrated support model involving the medical, education, and social service sectors. A comprehensive set of questionnaires was used to survey parents on how often they were distracted while parenting and how often they engaged in learning and recreational activities with their children. Furthermore, they were asked to report their children's screen time and psychosocial problems. Mediation analyses were performed to test the direct and indirect effects of electronic device-distracted parenting on children's psychosocial problems. This study recruited 873 children (448 females and 425 males, average age: 3.42±0.35). Longer screen time was associated with more psychosocial difficulties (Adjusted B=0.37, 95%CI: 0.12 to 0.62, p=0.004). Children's screen time positively correlated with electronic device-distracted parenting (r=0.369, p < .01).
We also found that electronic device-distracted parenting was associated with more hyperactive/inattentive problems (Adjusted B=0.66, p < 0.01), fewer prosocial behaviors (Adjusted B=-0.74, p < 0.01), and more emotional symptoms (Adjusted B=0.61, p < 0.001) in children. Further analyses showed that electronic device-distracted parenting exerted its influence both directly and indirectly through parent-child interactions, but to a different extent depending on the outcome under investigation (38.8% for hyperactivity/inattention, 31.3% for prosocial behavior, and 15.6% for emotional symptoms). Parents' use of devices and children's own screen time both have negative effects on children's psychosocial health. It is important for parents to set 'device-free times' each day to ensure enough relaxed downtime for connecting with children and responding to their needs.
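The direct/indirect split reported above is the output of a mediation analysis; a toy difference-in-coefficients sketch on simulated data illustrates the idea (variable names and effect sizes are hypothetical, not the study's estimates):

```python
import numpy as np

def proportion_mediated(x, m, y):
    """Difference-in-coefficients estimate of the share of x's effect
    on y carried through mediator m (simple linear case, no covariates)."""
    # Total effect c: slope of y ~ x
    c = np.polyfit(x, y, 1)[0]
    # Direct effect c': coefficient of x in y ~ 1 + x + m
    X = np.column_stack([np.ones_like(x), x, m])
    c_prime = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return (c - c_prime) / c

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                           # distracted parenting
m = 0.6 * x + rng.normal(scale=0.5, size=n)      # parent-child activities
y = 0.4 * x + 0.5 * m + rng.normal(scale=0.5, size=n)  # outcome
print(round(proportion_mediated(x, m, y), 2))
```

With these simulated effects the true proportion mediated is 0.3/0.7 ≈ 0.43; real mediation analyses would also bootstrap confidence intervals for the indirect effect.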

Keywords: early childhood, electronic device, psychosocial wellbeing, parenting

Procedia PDF Downloads 143
495 Identification of Igneous Intrusions in South Zallah Trough-Sirt Basin

Authors: Mohamed A. Saleem

Abstract:

Using mostly seismic data, this study intends to show some examples of igneous intrusions found in parts of the Sirt Basin and to explore the period of their emplacement as well as the interrelationships between these sills. The study area is located in the south of the Zallah Trough, south-west Sirt Basin, Libya, between longitudes 18.35ᵒ E and 19.35ᵒ E and latitudes 27.8ᵒ N and 28.0ᵒ N. Based on a variety of criteria commonly used as marks of igneous intrusions, twelve igneous intrusions (sills) have been detected and analysed using 3D seismic data. One or more of the following were used as identification criteria: high-amplitude reflectors paired with abrupt reflector terminations, vertical offsets (described as a dike-like connection), the violation, the saucer form, and the roughness. Because they lie between the host layers, the majority of these intrusions are classified as sills. Another distinguishing feature is the intersection geometry linking some of these sills. Each sill has been given a name, S-1, S-2, …, S-12, simply to distinguish the sills from each other. To avoid repetition, the common characteristics and some statistics of these sills are shown in summary tables, while the specific characters that are not common are noted for each sill individually. The sills S-1, S-2, and S-3 are approximately parallel to one another, their shape governed by the syncline structure of their host layers. The faults that dominate the pre-Upper Cretaceous strata have a significant impact on the sills, causing their discontinuity, while the upper layers form anticlines. S-1 and S-10 are the group's deepest and shallowest sills, respectively, with S-1 seated near the top of the basement and S-10 extending into the Upper Cretaceous sequence.
The dramatic escalation of sill S-4 can be seen in N-S profiles. The majority of the interpreted sills are influenced by a large number of normal faults that strike in various directions and propagate vertically from the surface to the top of the basement. This indicates that the sediment sequences were deposited before the sills' intrusion and that the faults occurred more recently. The pre-Upper Cretaceous unit is the current geological host of sills S-1 to S-9, while sills S-10, S-11, and S-12 are hosted by the Cretaceous unit. Over sills S-1, S-2, and S-3, which are the deepest, the pre-Upper Cretaceous surface shows slight forced folding; such folding is also noticed above the right and left tips of sills S-8 and S-6, respectively. The absence of these marks in the overlying sequences supports the idea that the aforementioned sills were emplaced during the early Upper Cretaceous period.

Keywords: Sirt Basin, Zallah Trough, igneous intrusions, seismic data

Procedia PDF Downloads 92
494 Expression of Ki-67 in Multiple Myeloma: A Clinicopathological Study

Authors: Kangana Sengar, Sanjay Deb, Ramesh Dawar

Abstract:

Introduction: Ki-67 can be a useful marker for determining proliferative activity in patients with multiple myeloma (MM). However, using Ki-67 alone results in the erroneous inclusion of non-myeloma cells, leading to falsely high counts. We have used dual IHC (immunohistochemistry) staining with Ki-67 and CD138 to enhance specificity in assessing the proliferative activity of bone marrow plasma cells. Aims and objectives: To estimate the proportion of proliferating (Ki-67 expressing) plasma cells in patients with MM and to correlate Ki-67 with other known prognostic parameters. Materials and Methods: Fifty FFPE (formalin-fixed paraffin-embedded) blocks of trephine biopsies of cases diagnosed as MM from 2010 to 2015 were subjected to H & E staining and dual IHC staining for CD138 and Ki-67. H & E staining was done to evaluate various histological parameters such as the percentage of plasma cells, the pattern of infiltration (nodular, interstitial, mixed, and diffuse), and routine parameters of marrow cellularity and hematopoiesis. Clinical data were collected from patient records in the Medical Record Department. Each CD138-expressing cell (cytoplasmic, red) was scored as a proliferating plasma cell (containing a brown Ki-67 nucleus) or a non-proliferating plasma cell (containing a blue, counter-stained, Ki-67-negative nucleus). Ki-67 was measured as percentage positivity, with a maximum score of one hundred percent and a lowest of zero percent; the intensity of staining was not relevant. Results: Statistically significant correlations of Ki-67 with D-S stage (Durie & Salmon stage) I vs. III (p=0.026), ISS (International Staging System) stage I vs. III (p=0.019), β2m (p=0.029), and percentage of plasma cells (p < 0.001) were seen. No statistically significant correlation was seen between Ki-67 and hemoglobin, platelet count, total leukocyte count, total protein, albumin, serum calcium, serum creatinine, serum LDH, blood urea, or pattern of infiltration.
Conclusion: The Ki-67 index correlated with other known prognostic parameters. However, it is not determined routinely in patients with MM because little information is available regarding its relevance and because of the paucity of studies correlating it with other known prognostic factors in MM patients. To the best of our knowledge, this is the first study in India using dual IHC staining for Ki-67 and CD138 in MM patients. Routine determination of Ki-67 will help to identify patients who may benefit from more aggressive therapy. Recommendation: This study did not include patient follow-up, and the sample size was small; a study with a larger sample size and long follow-up is advocated to establish Ki-67 as a prognostic marker of survival in patients with multiple myeloma.

Keywords: bone marrow, dual IHC, Ki-67, multiple myeloma

Procedia PDF Downloads 128
493 Studying the Effect of Reducing Thermal Processing over the Bioactive Composition of Non-Centrifugal Cane Sugar: Towards Natural Products with High Therapeutic Value

Authors: Laura Rueda-Gensini, Jader Rodríguez, Juan C. Cruz, Carolina Munoz-Camargo

Abstract:

There is emerging interest in botanicals and plant extracts for medicinal practice due to their widely reported health benefits. A large variety of phytochemicals found in plants have been correlated with antioxidant, immunomodulatory, and analgesic properties, which makes plant-derived products promising candidates for modulating the progression and treatment of numerous diseases. Non-centrifugal cane sugar (NCS), in particular, has been known for its high antioxidant and nutritional value, but compositional variability due to changing environmental and processing conditions has considerably limited its use in the nutraceutical and biomedical fields. This work is therefore aimed at assessing the effect of thermal exposure during NCS production on its bioactive composition and, in turn, its therapeutic value. Accordingly, two modified dehydration methods are proposed that employ: (i) vacuum-aided evaporation, which reduces the temperatures necessary to dehydrate the sample, and (ii) window refractance evaporation, which reduces thermal exposure time. The biochemical composition of NCS produced under these two methods was compared to traditionally produced NCS by estimating total polyphenolic and protein content with Folin-Ciocalteu and Bradford assays, as well as by identifying the major phenolic compounds in each sample via HPLC-coupled mass spectrometry. Antioxidant activities were also compared, as measured by the scavenging potential of ABTS and DPPH radicals. Results show that the two modified production methods enhance the polyphenolic and protein yield of the resulting NCS samples compared to the traditional production method. In particular, reducing the employed temperatures with vacuum-aided evaporation proved superior at preserving polyphenolic compounds, as evidenced by both the total and the individual polyphenol concentrations. However, antioxidant activities were not significantly different between the two methods.
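The ABTS/DPPH "scavenging potential" referred to above is conventionally reported as percent inhibition of the radical's absorbance relative to a control; a one-line sketch of that calculation (the absorbance readings here are invented for illustration):

```python
def radical_scavenging(a_control, a_sample):
    """Percent inhibition of DPPH/ABTS absorbance by an antioxidant
    sample: the standard readout behind 'scavenging potential'."""
    return 100.0 * (a_control - a_sample) / a_control

# Hypothetical absorbances: radical alone vs. radical + NCS extract
print(round(radical_scavenging(0.80, 0.32), 1))  # → 60.0
```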
Although additional studies should be performed to determine if the observed compositional differences affect other therapeutic activities (e.g., anti-inflammatory, analgesic, and immunoprotective), these results suggest that reducing thermal exposure holds great promise for the production of natural products with enhanced nutritional value.

Keywords: non-centrifugal cane sugar, polyphenolic compounds, thermal processing, antioxidant activity

Procedia PDF Downloads 74
492 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People

Authors: James Boylan, Denny Meyer, Won Sun Chen

Abstract:

Background: Globally, there is an increasing trend in road traffic deaths, which reached 1.35 million in 2016 compared to 1.3 million a decade earlier; overall, road traffic injuries rank as the eighth leading cause of death across all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice that of drivers aged 25 and above, at 3.5 road traffic fatalities per annum for every 10,000 licenses held. Telematics refers to a system with the ability to capture real-time data about vehicle usage. The data collected from telematics can be used to better assess a driver's risk; it is typically used to measure acceleration, turning, braking, and speed, as well as to provide locational information. With the Australian government creating the National Telematics Framework, there has been an increased government focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour relate to improvements in road safety attitudes as measured by the Driving Behaviour Questionnaire (DBQ). Methodology: 28 participants were recruited and given a telematics device to insert into their vehicles for the duration of the study. Each participant's driving behaviour over the first month will be compared to their driving behaviour in the second month to determine whether feedback from telematics devices improves driving behaviour. Participants completed the DBQ, evaluated on a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning of the study, after the first month, and after the second month. This is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with changes in the DBQ using regression models in SAS.
Results: The DBQ has provided a reliable measure (alpha = .823) of driving behaviour based on a sample of 23 participants, with an average of 50.5, a standard deviation of 11.36, and a range of 29 to 76, with higher scores indicating worse driving behaviours. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that within the next six weeks a larger sample of around 40 will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing a comparison with baseline levels. The trends in the telematics data over the first 30 days will be compared with the changes observed in the DBQ. Conclusions: A significant relationship is expected between improvements in the DBQ and trends of reduced telematics-measured aggressive driving behaviours, supporting the hypothesis.
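The reliability figure quoted above (alpha = .823) is Cronbach's alpha; a small sketch of its computation on a toy item-score matrix (the Likert responses below are invented, not DBQ data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Toy responses (rows = respondents, columns = questionnaire items)
scores = np.array([
    [0, 1, 1, 2],
    [2, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 5, 4, 5],
    [3, 3, 2, 4],
])
print(round(cronbach_alpha(scores), 3))  # → 0.982
```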

Keywords: telematics, driving behavior, young drivers, driving behaviour questionnaire

Procedia PDF Downloads 88
491 Development of an Integrated Reaction Design for the Enzymatic Production of Lactulose

Authors: Natan C. G. Silva, Carlos A. C. Girao Neto, Marcele M. S. Vasconcelos, Luciana R. B. Goncalves, Maria Valderez P. Rocha

Abstract:

Galactooligosaccharides (GOS) are sugars with a prebiotic function that can be synthesized chemically or enzymatically, the latter promoted by the action of β-galactosidases. In addition to favoring the transgalactosylation reaction that forms GOS, these enzymes can also catalyze the hydrolysis of lactose. A highly studied type of GOS is lactulose, because it presents therapeutic properties and is a health promoter. Among the different raw materials that can be used to produce lactulose, whey stands out as the main by-product of cheese manufacturing, and its disposal is harmful to the environment due to the residual lactose present. Therefore, its use is a promising alternative for solving this environmental problem. Lactose from whey is hydrolyzed into glucose and galactose by β-galactosidases; however, in order to favor the transgalactosylation reaction, the medium must contain fructose, because this sugar reacts with galactose to produce lactulose. The glucose-isomerase enzyme can be used for this purpose, since it promotes the isomerization of glucose into fructose. In this scenario, the aim of the present work was first to develop β-galactosidase biocatalysts of Kluyveromyces lactis and to apply them in the integrated reactions of hydrolysis, isomerization (with the glucose-isomerase from Streptomyces murinus), and transgalactosylation, using whey as a substrate. The immobilization of β-galactosidase on chitosan previously functionalized with 0.8% glutaraldehyde was evaluated using different enzymatic loads (2, 5, 7, 10, and 12 mg/g). Subsequently, the hydrolysis and transgalactosylation reactions were studied, conducted at 50°C and 120 RPM for 20 minutes. In parallel, the isomerization of glucose into fructose was evaluated at 70°C and 750 RPM for 90 min. Afterwards, the integration of the three processes for the production of lactulose was investigated.
Among the evaluated loads, 7 mg/g was chosen because it gave the best derivative activity (44.3 U/g), this parameter being decisive for the reaction stages. The other immobilization parameters, yield (87.58%) and recovered activity (46.47%), were also satisfactory compared to the other conditions. Regarding the integrated process, 94.96% of the lactose was converted, yielding 37.56 g/L of glucose and 37.97 g/L of galactose. In the isomerization step, 38.40% of the glucose was converted, giving a fructose concentration of 12.47 g/L. The transgalactosylation reaction produced 13.15 g/L of lactulose after 5 min. In the integrated process, however, no lactulose was formed; instead, other GOS were produced. The high galactose concentration in the medium probably favored the synthesis of these other GOS. The integrated process therefore proved feasible for the production of prebiotics. In addition, the process can be economically viable because it uses an industrial residue as substrate, although the transgalactosylation reaction requires a more detailed investigation.
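The immobilization parameters reported above are commonly computed as ratios of enzymatic activities. A minimal sketch of the two standard formulas, using hypothetical activity values (not the study's raw data):

```python
def immobilization_yield(offered, supernatant):
    """Percentage of the offered activity that disappeared from the
    supernatant, i.e. was immobilized on the support."""
    return 100.0 * (offered - supernatant) / offered

def recovered_activity(derivative, offered, supernatant):
    """Percentage of the immobilized activity actually expressed by
    the derivative (support + enzyme)."""
    return 100.0 * derivative / (offered - supernatant)

# Hypothetical example: 100 U offered, 12 U left in the supernatant,
# derivative expresses 41 U.
iy = immobilization_yield(100.0, 12.0)      # 88.0 %
ra = recovered_activity(41.0, 100.0, 12.0)  # ~46.6 %
```

These are generic definitions, not the paper's exact protocol, but they reproduce the kind of yield/recovered-activity pair reported for the 7 mg/g load.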

Keywords: beta-galactosidase, glucose-isomerase, galactooligosaccharides, lactulose, whey

Procedia PDF Downloads 119
490 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics

Authors: Janne Engblom, Elias Oikarinen

Abstract:

A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed- and random-effects models, which form a wide range of linear models. A special class of panel data models is dynamic in nature. A complication of a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond Generalized Method of Moments (GMM) estimator, an extension of the Arellano-Bond estimator in which past values, and different transformations of past values, of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano-Bover/Blundell-Bond estimator augments Arellano-Bond with the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations, the original equation and the transformed one, and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically using the Arellano-Bover/Blundell-Bond estimation technique together with ordinary OLS. The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models.
The Arellano-Bover/Blundell-Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data for 14 Finnish cities over 1988-2012, the estimates of short-run housing price dynamics differed considerably across models and instrument sets. In particular, the use of different instrumental variables caused variation in the model estimates and in their statistical significance. This was especially clear when comparing OLS estimates with those of the different dynamic panel data models. The estimates provided by the dynamic panel data models were more in line with the theory of housing price dynamics.
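The endogeneity problem and the instrumenting idea behind these estimators can be illustrated with a minimal simulation. In the first-differenced equation Δy_it = ρΔy_i,t-1 + Δε_it, the regressor Δy_i,t-1 is correlated with Δε_it, but the lagged level y_i,t-2 is a valid instrument; this is the simplest (single-instrument, Anderson-Hsiao-type) version of the moment conditions that Arellano-Bond and system GMM generalize. The sketch below uses simulated data, not the Finnish panel:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, rho = 5000, 6, 0.5

# Dynamic panel: y_it = rho * y_i,t-1 + mu_i + eps_it, with fixed effects mu_i.
mu = rng.normal(size=N)
y = np.zeros((N, T))
y[:, 0] = mu / (1 - rho) + rng.normal(size=N)   # near-stationary start
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + mu + rng.normal(size=N)

dy = np.diff(y, axis=1)       # first-differencing removes mu_i
lhs = dy[:, 1:].ravel()       # Delta y_t    for t = 2..T-1
rhs = dy[:, :-1].ravel()      # Delta y_t-1  (correlated with Delta eps_t)
z = y[:, :-2].ravel()         # y_t-2, the lagged-level instrument

rho_ols = (rhs @ lhs) / (rhs @ rhs)  # inconsistent: plim is (rho - 1)/2 here
rho_iv = (z @ lhs) / (z @ rhs)       # consistent instrumental-variable estimate
print(rho_ols, rho_iv)               # rho_iv should be near the true 0.5
```

System GMM adds many more such instruments (deeper lags, plus lagged differences for the levels equation), which is where the efficiency gain discussed above comes from.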

Keywords: dynamic model, fixed effects, panel data, price dynamics

Procedia PDF Downloads 1456
489 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods

Authors: Sohyoung Won, Heebal Kim, Dajeong Lim

Abstract:

Genomic prediction is an effective way to measure the breeding abilities of livestock, based on genomic estimated breeding values, i.e., values statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability that a quantitative trait locus is in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways of defining haplotypes need to be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2,506 cattle. Haplotypes were defined in three different ways from the 770K SNP chip data: based on 1) the length of the haplotype (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, the haplotypes defined by all methods were set to comparable sizes; in each method, haplotypes defined to contain an average of 5, 10, 20 or 50 SNPs were tested, respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the test phenotype. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP. There were few differences in reliability between the haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes of around 20 SNPs can be optimal markers for genomic prediction.
When the numbers of alleles generated by the haplotype-defining methods were compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods performed similarly. This suggests that defining haplotypes based on LD can reduce computational costs and allow efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can thus provide improved performance and efficiency in genomic prediction.
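The GBLUP baseline used for comparison can be sketched in a few lines: with phenotypes y = μ + g + e and genomic values g ~ N(0, Gσ_g²), the BLUP of g is ĝ = G(G + λI)⁻¹(y − μ̂) with λ = σ_e²/σ_g². A minimal simulation with VanRaden's G matrix, on hypothetical marker data (not the Hanwoo dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 300, 1000                            # individuals, SNP markers

p = rng.uniform(0.1, 0.9, size=m)           # allele frequencies
X = rng.binomial(2, p, size=(n, m)).astype(float)  # 0/1/2 genotype codes

# VanRaden's genomic relationship matrix.
W = X - 2 * p                               # center each SNP by 2p
G = W @ W.T / (2 * np.sum(p * (1 - p)))

# True breeding values from 100 causal SNPs; phenotypes with h2 ~ 0.5.
beta = np.zeros(m)
beta[rng.choice(m, 100, replace=False)] = 0.1 * rng.normal(size=100)
g_true = W @ beta
y = g_true + rng.normal(scale=g_true.std(), size=n)

# GBLUP: g_hat = G (G + lambda*I)^-1 (y - mean); h2 = 0.5 gives lambda = 1.
lam = 1.0
g_hat = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())

acc = np.corrcoef(g_hat, g_true)[0, 1]
print(acc)                                  # in-sample accuracy, well above 0
```

The haplotype variants discussed above replace the SNP-based W with a haplotype-allele incidence matrix; the mixed-model algebra is otherwise the same.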

Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium

Procedia PDF Downloads 127
488 X-Ray Detector Technology Optimization in Computed Tomography

Authors: Aziz Ikhlef

Abstract:

Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices, and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the larger number of traces and connections required by front-illuminated diodes. In backlit diodes, the electronic noise is improved by the reduction of the load capacitance due to the shorter routing. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, significant concerns about the radiation dose received by the patient have been raised in both the medical and the regulatory communities. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast, to minimize the residual signal from previous samples.
In addition, this paper will present an overview of detector technologies and image-chain improvements that have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners, both in regular examinations and in energy-discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed and crosstalk, to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize crosstalk, noise and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 182
487 Factory Communication System for Customer-Based Production Execution: An Empirical Study on the Manufacturing System Entropy

Authors: Nyashadzashe Chiraga, Anthony Walker, Glen Bright

Abstract:

The manufacturing industry is currently experiencing a paradigm shift into the Fourth Industrial Revolution, in which customers are increasingly at the epicentre of production. The high degree of production customization and personalization requires a flexible manufacturing system that rapidly responds to the dynamic and volatile changes driven by the market. There is a gap in technology that would allow an optimal flow of information and optimal manufacturing operations on the shop floor regardless of rapid changes in fixture and part demands. Information is the reduction of uncertainty; it gives meaning and context to the state of each cell. The amount of information needed to describe a cellular manufacturing system is investigated through two measures: structural entropy and operational entropy. Structural entropy is the expected amount of information needed to describe the scheduled states of a manufacturing system, while operational entropy is the amount of information needed to describe the states that actually occur during the manufacturing operation. Using the AnyLogic simulator, a typical manufacturing job shop was set up with a cellular manufacturing configuration. The cellular make-up of the configuration included a material handling cell, a 3D printer cell, an assembly cell, a manufacturing cell and a quality control cell. The factory shop provides manufactured parts to a number of clients; there are substantial variations in the part configurations, and new part designs are continually being introduced to the system. Based on the normally expected production schedule, schedule adherence was calculated from the structural entropy and the operational entropy while varying the amount of information communicated in simulated runs. The structural entropy denotes a system that is in control: the necessary real-time information is readily available to the decision maker at any point in time.
For contrastive analysis, different out-of-control scenarios were run, in which changes in the manufacturing environment were not effectively communicated, resulting in deviations from the original predetermined schedule. The operational entropy was calculated from the actual operations. The results of the empirical study show that increasing the efficiency of a factory communication system increases the degree of adherence of a job to the expected schedule. The performance of the downstream production flow, fed from the parallel upstream flow of information on the factory state, was also increased.
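Both entropy measures reduce to Shannon's formula H = −Σ pᵢ log₂ pᵢ over a distribution of system states: structural entropy uses the scheduled state probabilities, operational entropy the observed ones. A minimal sketch with hypothetical state distributions (not the paper's simulation data):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete state distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical 4-state cell: the schedule concentrates probability on one
# state; poor communication spreads the observed states out.
scheduled = [0.85, 0.05, 0.05, 0.05]   # structural (planned) distribution
observed = [0.40, 0.30, 0.20, 0.10]    # operational (actual) distribution

H_struct = shannon_entropy(scheduled)
H_oper = shannon_entropy(observed)
assert H_oper > H_struct  # deviations from schedule need more bits to describe
```

The gap H_oper − H_struct is one way to quantify how far the shop floor has drifted from the schedule, which is the intuition behind using entropy as a schedule-adherence measure.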

Keywords: information entropy, communication in manufacturing, mass customisation, scheduling

Procedia PDF Downloads 229
486 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques

Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang

Abstract:

Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast Spoiled Gradient-Echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of a FIESTA image is related to T1/T2 and differs from that of SSFSE. The advantages and disadvantages of these scanning sequences for fetal imaging have not yet been clearly demonstrated. This study aimed to compare three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations, exploring their image quality and its influencing factors. A 1.5T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used. Twenty-five pregnant women were recruited to undergo fetal MRI examinations with SSFSE, FIESTA and FSPGR scanning. Multi-oriented and multi-slice images were acquired, and the MR images were then interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA provide good image quality among the three rapid imaging techniques. Vessel signals are higher on FIESTA images than on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but FIESTA is prone to banding artifacts. FSPGR-T1WI yields a lower Signal-to-Noise Ratio (SNR) because it suffers severely from the impact of maternal and fetal movements. The scan times for the three sequences were 25 sec (T2W-SSFSE), 20 sec (FIESTA) and 18 sec (FSPGR).
In conclusion, all three rapid MR scanning sequences can produce high-contrast, high-spatial-resolution images. The scan time can be shortened by incorporating parallel imaging techniques, so that the motion artifacts caused by fetal movements are reduced. A good understanding of the characteristics of these three rapid MRI techniques helps technologists obtain reproducible, high-quality fetal anatomy images for prenatal diagnosis.

Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE

Procedia PDF Downloads 507
485 Epoxomicin Affects Proliferating Neural Progenitor Cells of Rat

Authors: Bahaa Eldin A. Fouda, Khaled N. Yossef, Mohamed Elhosseny, Ahmed Lotfy, Mohamed Salama, Mohamed Sobh

Abstract:

Developmental neurotoxicity (DNT) refers to the toxic effects imparted by various chemicals on the brain during the early childhood period. Because human brains are vulnerable during this period, various chemicals exert their maximum effects on the brain in early childhood. Some toxicants, e.g. lead, have been confirmed to induce developmental toxic effects on the CNS; however, most agents cannot be identified with certainty, owing to the limitations of the predictive toxicology models used. A novel alternative method that can overcome most of the limitations of conventional techniques is the 3D neurosphere system. This in-vitro system can recapitulate most of the changes that occur during brain development, making it an ideal model for predicting neurotoxic effects. In the present study, we investigated the possible DNT of epoxomicin, a naturally occurring selective proteasome inhibitor with anti-inflammatory activity. Rat neural progenitor cells were isolated from rat embryos (E14) extracted from placental tissue. The cortices were aseptically dissected out from the brains of the fetuses, and the tissues were triturated by repeated passage through a fire-polished, constricted Pasteur pipette. The dispersed tissues were allowed to settle for 3 min. The supernatant was then transferred to a fresh tube and centrifuged at 1,000 g for 5 min. The pellet was resuspended in Hank's balanced salt solution and cultured as free-floating neurospheres in proliferation medium. Two doses of epoxomicin (1 µM and 10 µM) were applied to the cultured neurospheres for a period of 14 days. For proliferation analysis, spheres were cultured in proliferation medium, and after 0, 4, 5, 11, and 14 days sphere size was determined by software analysis. The diameter of each neurosphere was measured and exported to an Excel file for further statistical analysis.
For viability analysis, trypsin-EDTA solution was added to the neurospheres for 3 min to dissociate them into a single-cell suspension, and viability was then evaluated by the Trypan Blue exclusion test. Epoxomicin was found to affect the proliferation and viability of the neurospheres, and these effects were positively correlated with dose and with time. This study confirms the DNT effects of epoxomicin in the 3D neurosphere model. The effects on proliferation suggest possible gross morphologic changes, while the decrease in viability suggests possible focal lesions upon exposure to epoxomicin during early childhood.

Keywords: neural progenitor cells, epoxomicin, neurosphere, medical and health sciences

Procedia PDF Downloads 405
484 Agreement between Basal Metabolic Rate Measured by Bioelectrical Impedance Analysis and Estimated by Prediction Equations in Obese Groups

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Basal metabolic rate (BMR) is a widely used and accepted measure of energy expenditure. Its principal determinant is body mass, but it is also correlated with a variety of other factors. The objective of this study was to measure BMR and compare it with the values obtained from predictive equations in adults classified according to their body mass index (BMI). A total of 276 adults were included in the study, and their age, height and weight were recorded. Five groups were formed based on BMI values. The first group (n = 85) comprised individuals with BMI values between 18.5 and 24.9 kg/m2. Those with BMI values from 25.0 to 29.9 kg/m2 constituted Group 2 (n = 90). Individuals with 30.0-34.9 kg/m2, 35.0-39.9 kg/m2 and > 40.0 kg/m2 were included in Groups 3 (n = 53), 4 (n = 28) and 5 (n = 20), respectively. The most commonly used equations were selected for comparison with the measured BMR values; for this purpose, values were calculated using four prediction equations, namely those introduced by the Food and Agriculture Organization (FAO)/World Health Organization (WHO)/United Nations University (UNU), Harris and Benedict, Owen, and Mifflin. Descriptive statistics, ANOVA, post-hoc Tukey and Pearson's correlation tests were performed with a statistical program designed for Windows (SPSS, version 16.0), and p values smaller than 0.05 were accepted as statistically significant. The means ± SD of groups 1, 2, 3, 4 and 5 for measured BMR in kcal were 1440.3 ± 210.0, 1618.8 ± 268.6, 1741.1 ± 345.2, 1853.1 ± 351.2 and 2028.0 ± 412.1, respectively. Upon comparison of the group means, the differences between Group 1 and each of the remaining four groups were highly significant. The values increased from Group 2 to Group 5; however, the differences between Groups 2 and 3, Groups 3 and 4, and Groups 4 and 5 were not statistically significant.
These non-significant differences disappeared in the predictive equations of Harris and Benedict, FAO/WHO/UNU and Owen; for Mifflin, the non-significance was limited to Groups 4 and 5. Upon evaluation of the correlations between measured BMR and the estimated values computed from the prediction equations, the lowest correlations were observed among individuals within the normal BMI range, and the highest in individuals with BMI values between 30.0 and 34.9 kg/m2. The correlations between measured BMR and the values calculated by FAO/WHO/UNU and by Owen were identical and the highest. In all groups, the highest correlations were observed between the BMR values calculated from the Mifflin and the Harris and Benedict equations, both of which use age as an additional parameter. In conclusion, a close resemblance of the FAO/WHO/UNU and Owen equations was observed; however, the mean values obtained from FAO/WHO/UNU were much closer to the measured BMR values, and the highest correlations were found between BMR calculated from FAO/WHO/UNU and measured BMR. These findings suggest that FAO/WHO/UNU is the most reliable equation and may be used when measured BMR values are not available.
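Two of the prediction equations compared above are widely published in the following forms (a sketch using the commonly cited coefficients, male forms only; weight in kg, height in cm, age in years):

```python
def bmr_mifflin_male(weight_kg, height_cm, age_yr):
    """Mifflin-St Jeor equation, male form (kcal/day)."""
    return 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr + 5.0

def bmr_harris_benedict_male(weight_kg, height_cm, age_yr):
    """Original Harris-Benedict equation, male form (kcal/day)."""
    return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr

# Example: 70 kg, 175 cm, 30-year-old male.
print(bmr_mifflin_male(70, 175, 30))          # 1648.75 kcal/day
print(bmr_harris_benedict_male(70, 175, 30))  # ~1702 kcal/day
```

The female forms and the FAO/WHO/UNU and Owen equations follow the same pattern with different coefficients (FAO/WHO/UNU is additionally stratified by age band), which is why the study can compare them head-to-head against measured BMR.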

Keywords: adult, basal metabolic rate, fao/who/unu, obesity, prediction equations

Procedia PDF Downloads 111
483 Exploration of the Possible Link Between Emotional Problems and Cholesterol Levels Among Children Diagnosed with Attention-Deficit Hyperactivity Disorder

Authors: Rosa S. Wong, Keith T.S. Tung, H.W. Tsang, Frederick K. Ho, Patrick Ip

Abstract:

Attention-deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by inattention and hyperactive-impulsive behavior. Evidence shows that ADHD and mood problems such as depression and anxiety often co-occur, and yet not everyone with ADHD reports elevated emotional problems. Cholesterol is essential for healthy brain development, including in the regions governing emotion regulation, and reports have found lower cholesterol levels in patients with major depressive disorder, and in those with suicide-attempt behavior, than in healthy subjects. This study explored whether adolescents with ADHD experience more emotional problems and whether emotional problems correlate with cholesterol levels in these adolescents. The study used a portion of the data from a longitudinal cohort study designed to investigate the long-term impact of family socioeconomic status on child development. In 2018/19, the parents of 300 adolescents (average age: 12.57 ± 0.49 years) were asked to rate their children's emotional problems and to report whether their children had doctor-diagnosed psychiatric diseases. We further collected blood samples from 263 children to study their lipid profiles (total cholesterol, high-density lipoprotein (HDL) cholesterol, and low-density lipoprotein (LDL) cholesterol). Regression analyses were performed to test the relationships between the variables of interest. Among the 300 children, 27 (9%) had an ADHD diagnosis. Analysis of the overall sample found no association between ADHD and emotional problems, but when the relationship was investigated by gender, there was a significant interaction effect of ADHD and gender on emotional problems (p = 0.037), with ADHD males displaying more emotional problems than ADHD females.
Further analyses based on the 263 children (21 with an ADHD diagnosis) found significant interaction effects of ADHD and gender on total cholesterol (p = 0.038) and LDL-cholesterol levels (p = 0.013) after adjusting for the child's physical disease history. Specifically, ADHD males had significantly lower total cholesterol and LDL-cholesterol levels than ADHD females. In ADHD males, more emotional problems were associated with lower LDL-cholesterol levels (B = -4.26, 95%CI (-7.46, -1.07), p = 0.013). We thus found preliminary support for an association between more emotional problems and lower cholesterol levels in children with ADHD, especially among males. Although larger prospective studies are needed to substantiate these claims, the evidence highlights the importance of a healthy lifestyle that keeps cholesterol levels in the normal range, which can have positive effects on physical and mental health.
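The interaction effects reported above correspond to a regression with a product term, y = b0 + b1·ADHD + b2·male + b3·(ADHD × male) + e, where a significant b3 means the ADHD effect differs by gender. A minimal least-squares sketch on simulated data (the coefficients below are hypothetical, not the cohort's values):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

adhd = rng.binomial(1, 0.1, n).astype(float)
male = rng.binomial(1, 0.5, n).astype(float)

# Simulated outcome with a true interaction coefficient of 2.0.
y = 1.0 * adhd + 0.5 * male + 2.0 * adhd * male + rng.normal(size=n)

# Design matrix: intercept, main effects, product (interaction) term.
X = np.column_stack([np.ones(n), adhd, male, adhd * male])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta[3])  # estimated interaction effect, close to the true 2.0
```

Testing whether beta[3] differs from zero (with its standard error, or a Wald test in a statistics package) is what the quoted interaction p-values summarize.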

Keywords: attention-deficit hyperactivity disorder, cholesterol, emotional problems, adolescents

Procedia PDF Downloads 130
482 Internal Mercury Exposure Levels Correlated to DNA Methylation of Imprinting Gene H19 in Human Sperm of Reproductive-Aged Man

Authors: Zhaoxu Lu, Yufeng Ma, Linying Gao, Li Wang, Mei Qiang

Abstract:

Mercury (Hg) is a well-recognized environmental pollutant known for its developmental toxicity and neurotoxicity, which may result in adverse health outcomes. However, the mechanisms underlying the teratogenic effects of Hg are not well understood. Imprinted genes are emerging regulators of fetal development that are susceptible to the impacts of environmental pollutants. In this study, we examined the association between paternal preconception Hg exposure and alterations of DNA methylation of imprinted genes in human sperm DNA. A total of 618 men aged 22 to 59 were recruited from the Reproductive Medicine Clinic of the Maternal and Child Care Service Center and the Urologic Surgery Clinic of the Shanxi Academy of Medical Sciences between April 2015 and March 2016. Demographic information was collected using questionnaires. Urinary Hg concentrations were measured using a fully automatic double-channel hydride-generation atomic fluorescence spectrometer, and the methylation status of the DMRs of the imprinted genes H19, Meg3 and Peg3 in sperm DNA was examined by bisulfite pyrosequencing in 243 participants. Spearman's rank correlation and multivariate regression analysis were used to analyze the relation between the sperm DNA methylation status of the imprinted genes and urinary Hg levels. The median urinary Hg concentration for participants overall was 9.09 μg/l (IQR: 5.54-12.52 μg/l; range: 0-71.35 μg/l); no significant difference was found in the median concentrations of Hg among the various demographic groups (p > 0.05). The proportion of samples beyond the intoxication criterion (10 μg/l) for urinary Hg was 42.6%. Spearman's rank correlation analysis indicated a negative correlation between urinary Hg concentrations and average DNA methylation levels in the DMR of the imprinted gene H19 (rs = -0.330, p = 0.000), whereas no such correlation was found for Peg3 or Meg3.
We further analyzed the correlation between the methylation level at each CpG site of H19 and the Hg level; three of the 7 CpG sites in the H19 DMR, namely CpG2 (rs = -0.138, p = 0.031), CpG4 (rs = -0.369, p = 0.000) and CpG6 (rs = -0.228, p = 0.000), showed a significant negative correlation between methylation level and urinary Hg level. After adjusting for age, smoking, drinking, intake of aquatic products and education by multivariate regression analysis, the results showed a similar correlation. In summary, non-occupational environmental mercury exposure in men of reproductive age is associated with altered DNA methylation at the DMR of the imprinted gene H19 in sperm, implicating the susceptibility of the developing sperm to environmental insults.
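Spearman's rank correlation, as used above, is the Pearson correlation computed on ranks, which makes it robust to the skewed exposure distributions typical of urinary biomarkers. A minimal numpy sketch on hypothetical (tie-free) data, not the study's measurements:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation for tie-free data:
    the Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical example: methylation decreasing monotonically with exposure.
hg = np.array([1.2, 3.5, 5.1, 8.8, 12.4, 20.0, 40.7])     # urinary Hg, ug/l
meth = np.array([62.0, 60.5, 58.9, 55.0, 54.1, 50.3, 47.8])  # methylation, %

print(spearman_rho(hg, meth))  # -1.0 for a perfectly monotone decrease
```

Real data with tied values would need average ranks (as in scipy.stats.spearmanr), but the principle, and the sign convention behind the negative rs values above, is the same.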

Keywords: epigenetics, genomic imprinting gene, DNA methylation, mercury, transgenerational effects, sperm

Procedia PDF Downloads 238
481 System Devices to Reduce Particulate Matter Concentrations in Railway Metro Systems

Authors: Armando Cartenì

Abstract:

Within sustainable transportation engineering, the problem of reducing particulate matter (PM) concentrations in railway metro systems has not been much discussed. It is well known that PM levels in railway metro systems are produced mainly by mechanical friction at the rail-wheel-brake interactions and by the PM re-suspension caused by the turbulence generated by train passage, which poses dangerous problems for passenger health. Starting from these considerations, the aim of this research was twofold: i) to investigate the particulate matter concentrations in a 'traditional' railway metro system; ii) to investigate the particulate matter concentrations in a 'high-quality' metro system equipped with design devices useful for reducing PM concentrations: platform screen doors, rubber-tyred rolling stock and an advanced ventilation system. Two measurement surveys were performed: one in the 'traditional' metro system of Naples (Italy) and another in the 'high-quality' rubber-tyred metro system of Turin (Italy). The experimental results for the 'traditional' metro system of Naples show that the average PM10 concentrations measured on the underground station platforms are very high, ranging between 172 and 262 µg/m3, while the average PM2.5 concentrations range between 45 and 60 µg/m3, with dangerous implications for passenger health.
By contrast, the measurement results for the 'high-quality' metro system of Turin show that: i) the average PM10 (PM2.5) concentration measured on the underground station platform is 22.7 µg/m3 (16.0 µg/m3), with a standard deviation of 9.6 µg/m3 (7.6 µg/m3); ii) the indoor concentrations (both PM10 and PM2.5) are statistically lower than those measured outdoors (with a ratio of 0.9-0.8), meaning that the indoor air quality is better than that of the urban ambient air; iii) the PM concentrations in underground stations are correlated with train passage; iv) the concentrations inside trains (both PM10 and PM2.5) are statistically lower than those measured at the station platform (with a ratio of 0.7-0.8), suggesting that inside the trains the air conditioning system promotes a greater circulation that cleans the air. The comparison between the two case studies shows that a metro system designed with PM-reduction devices can reduce PM concentrations by up to a factor of 11 with respect to a 'traditional' one. From these results, it is possible to conclude that the PM concentrations measured in a 'high-quality' metro system are significantly lower than those measured in a 'traditional' railway metro system. This result provides a basis for the design of useful devices for retrofitting metro systems all around the world.
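The headline reduction factor follows directly from the reported platform averages; a quick sketch of the arithmetic, using only the concentrations quoted above:

```python
# Platform PM10 averages reported above (ug/m3).
naples_pm10_range = (172.0, 262.0)   # 'traditional' system (Naples)
turin_pm10_mean = 22.7               # 'high-quality' system (Turin)

low = naples_pm10_range[0] / turin_pm10_mean
high = naples_pm10_range[1] / turin_pm10_mean
print(f"reduction factor: {low:.1f}x to {high:.1f}x")  # roughly 7.6x to 11.5x
```

The upper end of this range is the "up to a factor of 11" reduction cited in the comparison.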

Keywords: air quality, pollutant emission, quality in public transport, underground railway, external cost reduction, transportation planning

Procedia PDF Downloads 192
480 The Geometrical Cosmology: The Projective Cast of the Collective Subjectivity of the Chinese Traditional Architectural Drawings

Authors: Lina Sun

Abstract:

Chinese traditional drawings related to buildings and construction apply a unique geometry, different from western Euclidean geometry, and embrace a collection of special terminologies under the category of tu (the Chinese character for drawing). This paper will, on the one hand, etymologically analyze the terminology of Chinese traditional architectural drawing and, on the other, geometrically deconstruct the composition of tu and locate the visual narrative language of tu within the pictorial tradition. The geometrical analysis centers on a selected series of Yang-shi-lei tu from the construction of the emperors' mausoleums in the Qing Dynasty (1636-1912), and also draws on earlier architectural drawings and architectural paintings, such as jiehua and paintings on religious and tomb frescoes, for comparison. In doing so, this research reveals that the terminologies corresponding to different geometrical forms indicate associations between architectural drawing and the philosophy of Chinese cosmology, and that the arrangement of the geometrical forms in the picture plane facilitates the expression of the concepts of space and position in this geometrical cosmology. These associations and expressions are the collective intentions of architectural drawing, evolving through thousands of years of unbroken tradition and irrelevant to individual authorship. Moreover, the architectural tu itself, as an entity, not only functions as a representation of buildings but also expresses and strengthens intentions by using the unique Chinese geometrical language flexibly and intentionally. These collective cosmological spatial intentions and the corresponding geometrical words and languages reveal that Chinese traditional architectural drawing functions as a unique architectural site with a subjectivity that exists in parallel with buildings and expresses intentions and meanings by itself.
The methodology and the findings of this research will therefore challenge previous research that treats architectural drawings merely as representations of buildings, and will understand the drawings as more than evidence for reconstructing information about buildings. Furthermore, this research situates architectural drawing between the study of Chinese technological tu and that of artistic painting, bridging two academic areas that have usually treated the partial features of architectural drawing separately. Beyond this research, the collective subjectivity of Chinese traditional drawings will help reveal the transition from tradition to drawing modernity, in which the individual subjective identities and intentions of architects arise. This research thus supports an understanding of both the ambivalence and the affinity of drawing modernity in its encounter with tradition.

Keywords: Chinese traditional architectural drawing (tu), etymology of tu, collective subjectivity of tu, geometrical cosmology in tu, geometry and composition of tu, Yang-shi-lei tu

Procedia PDF Downloads 104
479 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables

Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner

Abstract:

High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks – networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behavior, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl’s law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT). To achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl’s law, and the level-spacing distribution agrees well with the RMT predictions. 
The results we achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
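As an illustration of the level-spacing analysis described above (a sketch, not the authors' code), the following Python snippet unfolds a spectrum to unit mean spacing and evaluates the GOE Wigner surmise, the closed-system baseline that the open-system generalisation reduces to; a toy GOE matrix stands in here for actual cable-network resonances.

```python
import numpy as np

def wigner_surmise(s):
    """GOE Wigner surmise for nearest-neighbour level spacings."""
    return (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)

def spacing_distribution(eigenvalues):
    """Return nearest-neighbour spacings, crudely unfolded to mean spacing 1."""
    levels = np.sort(np.asarray(eigenvalues, dtype=float))
    spacings = np.diff(levels)
    return spacings / spacings.mean()

# Toy GOE ensemble standing in for the graph resonances.
rng = np.random.default_rng(0)
n = 400
a = rng.normal(size=(n, n))
goe = (a + a.T) / np.sqrt(2 * n)
s = spacing_distribution(np.linalg.eigvalsh(goe))
```

A histogram of `s` compared against `wigner_surmise` would reproduce the kind of RMT comparison the abstract refers to.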

Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line

Procedia PDF Downloads 152
478 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness (SA)) for multiple agents conducting a searching activity using Bayesian inferential reasoning and learning. A Bayesian Belief Network was used to monitor the agents' knowledge about their environment, and cases were recorded for network training using the expectation-maximisation or gradient descent algorithm. The trained network is then used for decision making and environmental situation prediction. Forest-fire searching by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire so that fire wardens can take urgent action. The paper focuses on two problems: (i) an effective agent path-planning strategy and (ii) knowledge understanding and prediction (SA). The path-planning strategy, inspired by animal modes of foraging and using the Lévy distribution augmented with Bayesian reasoning, is fully described in this paper. Results show that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous waypoints assessment. It improves the performance of the ordinary Lévy flight by saving agents' resources and mission time through avoidance of redundant search. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results demonstrate effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. Prediction accuracy was measured using the learning error rate, logarithmic loss, and Brier score, and the results show that even limited agent mission data can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions. 
While this paper demonstrates the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work will focus on simplifying the architecture.
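A minimal sketch of the two ideas above — heavy-tailed Lévy steps and the k-previous waypoints redundancy check — is given below in Python. The parameter values, function names, and the rejection radius are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def levy_flight(n_steps, mu=1.5, step_min=1.0, rng=None):
    """2-D Levy flight: power-law step lengths P(l) ~ l**(-mu) with
    uniform headings; mu in (1, 3) gives the heavy tail typical of foraging."""
    rng = rng or np.random.default_rng()
    # Inverse-transform sampling of a Pareto-type step-length law.
    u = rng.random(n_steps)
    lengths = step_min * u ** (-1.0 / (mu - 1.0))
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    steps = np.stack([lengths * np.cos(angles), lengths * np.sin(angles)], axis=1)
    return np.cumsum(steps, axis=0)  # sequence of waypoint positions

def is_redundant(candidate, visited, k=5, radius=2.0):
    """k-previous waypoints assessment (sketch): reject a candidate waypoint
    that falls within `radius` of any of the last k visited waypoints."""
    recent = np.asarray(visited[-k:])
    return bool(len(recent) and np.min(np.linalg.norm(recent - candidate, axis=1)) < radius)

path = levy_flight(500, mu=1.5, rng=np.random.default_rng(42))
```

Skipping waypoints flagged by `is_redundant` is one plausible way to realise the redundant-search avoidance the abstract credits for the saved resources and mission time.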

Keywords: Lévy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence

Procedia PDF Downloads 126
477 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison

Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: The increased number of 'Computed Tomography (CT)' examinations performed raises public concern regarding the associated stochastic risk to patients. In its Publication 102, the ‘International Commission on Radiological Protection (ICRP)’ emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that gives multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software package named "VirtualDose". Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing computed tomography examinations. The general calculation principle consists of simulating: (1) the scanner machine with all its technical specifications and associated irradiation settings (kVp, field collimation, mAs, pitch, etc.); (2) detailed geometric and compositional information for dozens of well-identified organs of computational hybrid phantoms that contain the necessary anatomical data. The mass as well as the elemental composition of the tissues and organs that constitute our phantoms correspond to the recommendations of the international organizations (namely the ICRP and the ICRU). Their body dimensions correspond to reference data developed in the United States. Simulated data were verified by clinical measurement. To perform the comparison, 270 adult patients and 150 pediatric patients were used, whose data correspond to exams carried out in French hospital centers. The comparison dataset of adult patients includes adult males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis). 
The comparison sample of pediatric patients includes the exams of thirty patients for each of the following age groups: newborn, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The comparison for pediatric patients was performed on the “Head” protocol. The percentage dose difference was calculated for organs receiving a significant dose according to the acquisition protocol (at least 80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage dose difference between the two software packages is 27%. However, three organs situated at the edges of the scan range show a slightly higher dose difference. Pediatric patients: the percentage dose difference between the two software packages does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two software packages. Conclusion: This study shows that our software provides reliable dosimetric information for patients undergoing Computed Tomography exams.
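The per-organ comparison described above can be sketched as follows; the organ names and dose values are hypothetical, and only the selection rule (organs receiving at least 80% of the maximal dose) and the percentage-difference formula follow the text.

```python
def dose_differences(doses_a, doses_b, threshold=0.8):
    """Percent dose difference per organ between two dose calculators,
    restricted to organs receiving a significant dose
    (>= threshold * max dose in calculator A's results)."""
    cutoff = threshold * max(doses_a.values())
    return {
        organ: 100.0 * abs(doses_a[organ] - doses_b[organ]) / doses_a[organ]
        for organ, d in doses_a.items()
        if d >= cutoff and organ in doses_b
    }

# Hypothetical organ doses (mGy) for a chest protocol.
ours = {"lungs": 12.0, "breast": 11.0, "thyroid": 2.0}
virtualdose = {"lungs": 14.5, "breast": 9.5, "thyroid": 2.5}
diffs = dose_differences(ours, virtualdose)
```

Here the thyroid is excluded by the 80% rule, so only in-scan, significantly exposed organs enter the comparison, mirroring the paper's treatment of edge-of-scan organs as a separate case.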

Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison

Procedia PDF Downloads 142
476 Experimental Study on Heat and Mass Transfer of Humidifier for Fuel Cell

Authors: You-Kai Jhang, Yang-Cheng Lu

Abstract:

The major contributions of this study are threefold: the design of a new planar-membrane humidifier for the Proton Exchange Membrane Fuel Cell (PEMFC), an index to measure the effectiveness (εT) of that humidifier, and an air compressor system to replicate related planar-membrane humidifier experiments. The PEMFC, as a renewable energy technology, has become more and more important in recent years due to its reliability and durability. To maintain the efficiency of the fuel cell, the membrane of the PEMFC needs to be kept in a good hydration condition; maintaining proper membrane humidity is one of the key issues in optimizing a PEMFC. We developed a new humidifier to recycle water vapor from the cathode air outlet so as to maintain the moisture content of the cathode air inlet in a PEMFC. By measuring parameters such as the dry-side air outlet dew point temperature, the dry-side air inlet temperature and humidity, the wet-side air inlet temperature and humidity, and the differential pressure between the dry and wet sides, we calculated the following indices: dew point approach temperature (DPAT), water flux (J), water recovery ratio (WRR), effectiveness (εT), and differential pressure (ΔP). Using these indices, we discuss six topics: the sealing effect, flow rate effect, flow direction effect, channel effect, temperature effect, and humidity effect. Gas cylinders are used as the air supply in many studies of humidifiers; however, a gas cylinder depletes quickly at a 1 kW air flow rate, which makes replication difficult. To ensure highly stable air quality and better replication of experimental data, this study designs an air supply system to overcome this difficulty. The experimental results show that the best rate of pressure loss of the humidifier is 0.133×10³ Pa(g)/min at a torque of 25 N·m. The best humidifier performance occurs at air flow rates of 30-40 LPM. 
The counter-flow configuration humidifies the dry-side inlet air more effectively than the parallel-flow configuration. Performance measurements of channel plates with various rib widths show that the narrower the rib width, the more the humidifier performance improves. Increasing the channel width at the same hydraulic diameter (Dh) yields a higher εT and a lower ΔP. Moreover, increasing the dry-side air inlet temperature or humidity leads to a lower εT; when the dry-side air inlet temperature exceeds 50°C, the effect becomes even more pronounced.
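Two of the indices named above can be sketched under common textbook definitions (the paper's exact formulas are not given here, so both definitions are assumptions): DPAT as the gap between the wet-side inlet and dry-side outlet dew points, and WRR as the vapour gained by the dry stream over the vapour supplied by the wet stream.

```python
def dew_point_approach_temperature(t_dew_wet_in, t_dew_dry_out):
    """DPAT (assumed definition): how closely the dry-side outlet dew point
    approaches the wet-side inlet dew point; 0 would mean ideal humidification."""
    return t_dew_wet_in - t_dew_dry_out

def water_recovery_ratio(w_dry_out, w_dry_in, w_wet_in):
    """WRR (assumed definition): water vapour gained by the dry stream over
    water vapour supplied by the wet stream; humidity ratios w in kg/kg dry
    air, with equal mass flow assumed on both sides."""
    return (w_dry_out - w_dry_in) / w_wet_in
```

Under these definitions a smaller DPAT and a WRR closer to 1 both indicate more effective vapour transfer across the membrane.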

Keywords: PEM fuel cell, water management, membrane humidifier, heat and mass transfer, humidifier performance

Procedia PDF Downloads 153
475 CFD Modeling of Stripper Ash Cooler of Circulating Fluidized Bed

Authors: Ravi Inder Singh

Abstract:

Due to their high heat transfer rate, high carbon utilization efficiency, fuel flexibility, and other advantages, numerous circulating fluidized bed boilers have been built in India in the last decade. Many companies, such as BHEL, ISGEC, Thermax, Cethar Limited, and Enmas GB Power Systems Projects Limited, are making CFBC boilers and installing units throughout India. Due to their complexity, many problems exist in CFBC units, and only a few have been reported. Agglomeration, i.e., clinker formation in the riser, loop seal leg, and stripper ash coolers, is one problem the industry is facing, and proper documentation is rarely found in the literature. Circulating fluidized bed (CFB) boiler bottom ash contains a large amount of physical heat. When the boiler burns low-calorie fuel, the ash content is normally more than 40%, and the physical heat loss is approximately 3% if the bottom ash is discharged without cooling. In addition, red-hot bottom ash is unsuitable for mechanized handling and transportation, as the upper temperature limit of ash handling machinery is 200 °C. Therefore, a bottom ash cooler (BAC) is often used to treat the high-temperature bottom ash, to reclaim heat, and to make the ash easier to handle and transport. As a key auxiliary device of CFB boilers, the BAC has a direct influence on the secure and economic operation of the boiler. With the continuous development and improvement of the CFB boiler, many kinds of BACs have been fitted to large-scale CFB boilers; these include the water-cooled ash-cooling screw, the rolling-cylinder ash cooler (RAC), and the fluidized bed ash cooler (FBAC). In this study, a prototype of a novel stripper ash cooler is studied. The Circulating Fluidized Bed Ash Cooler (CFBAC) combines the major technical features of the spouted bed and the bubbling bed and can achieve selective discharge of the bottom ash. The novel stripper ash cooler is a bubbling bed, realized as a visible cold test rig. 
The reason for choosing a cold test is that high temperatures are difficult to create and maintain at the laboratory scale. The aim of the study is to determine the flow pattern inside the stripper ash cooler. The cold rig prototype is similar to the stripper ash cooler used in industry and was made by scaling down certain parameters. The performance of a fluidized bed ash cooler is studied using this cold experimental bench. The air flow rate, the particle size of the solids, and the air distributor type, considered the key operating parameters of a fluidized bed ash cooler (FBAC), are studied here.

Keywords: CFD, Eulerian-Eulerian, Eulerian-Lagrangian model, parallel simulations

Procedia PDF Downloads 498
474 Ways to Prevent Increased Wear of the Drive Box Parts and the Central Drive of the Civil Aviation Turbo Engine Based on Tribology

Authors: Liudmila Shabalinskaya, Victor Golovanov, Liudmila Milinis, Sergey Loponos, Alexander Maslov, D. O. Frolov

Abstract:

The work is devoted to the rapid laboratory diagnosis of the condition of aircraft friction units, based on a nondestructive testing method that analyzes the parameters of wear particles, or tribodiagnostics. The most important task of tribodiagnostics is to develop recommendations for the selection of more advanced designs, materials, and lubricants, based on data about wear processes, in order to increase the life and ensure the operational safety of machines and mechanisms. The objects of tribodiagnostics in this work are the tooth gears of the central drive and the gearboxes of the PS-90A civil aviation gas turbine engine, in which rolling friction and sliding friction with slip occur. The main criterion for evaluating the technical state of lubricated friction units of a gas turbine engine is the intensity and rate of wear of the friction surfaces of the friction unit parts. While the engine is running, oil samples are taken, and the state of the friction surfaces is evaluated from the parameters of the wear particles contained in the oil sample, which carry important and detailed information about the wear processes in the engine transmission units. The parameters carrying this information include the concentration of wear particles and metals in the oil, the dispersion composition, the shape, size ratio, and number of particles, the state of their surfaces, and the presence in the oil of various mechanical impurities of non-metallic origin. 
Such morphological analysis of wear particles has been introduced into the procedure for monitoring the status and diagnostics of various aircraft engines, including gas turbine engines, since the type of wear characteristic of the central drive and the drive box is surface fatigue wear; the beginning of its development, accompanied by the formation of microcracks, leads first to the formation of spherical particles up to 10 μm in size and subsequently to flocculent particles measuring 20-200 μm. Tribodiagnostics using the morphological analysis of wear particles includes the following techniques: ferrography, filtering, and computer-aided classification and counting of wear particles. Based on the analysis of several series of oil samples taken from the drive box of the engine over its operating time, a study of wear kinetics was carried out. Based on the results of the study, and by comparing the series of tribodiagnostic criteria, wear state ratings, and statistics of the morphological analysis results, norms for the normal operating regime were developed. The study made it possible to develop wear state levels for the friction surfaces of the gearing and a 10-point rating system for estimating the likelihood of an increased-wear mode and, accordingly, for preventing engine failures in flight.
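The size-based reading of wear particles described above can be sketched as a simple classifier; the class labels and the handling of the 10-20 μm gap are illustrative assumptions, with only the size thresholds taken from the text.

```python
def classify_wear_particle(diameter_um):
    """Rough wear-severity classes from the particle sizes described above:
    spherical particles up to ~10 um signal early surface-fatigue wear;
    flocculent particles of ~20-200 um signal a developed increased-wear mode."""
    if diameter_um <= 10:
        return "early fatigue (spherical)"
    if 20 <= diameter_um <= 200:
        return "increased wear (flocculent)"
    return "unclassified"
```

Counting particles per class across a series of oil samples would give the kind of kinetics statistics the rating system is built on.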

Keywords: aviation, box of drives, morphological analysis, tribodiagnostics, tribology, ferrography, filtering, wear particle

Procedia PDF Downloads 242
473 Detection, Isolation, and Raman Spectroscopic Characterization of Acute and Chronic Staphylococcus aureus Infection in an Endothelial Cell Culture Model

Authors: Astrid Tannert, Anuradha Ramoji, Christina Ebert, Frederike Gladigau, Lorena Tuchscherr, Jürgen Popp, Ute Neugebauer

Abstract:

Staphylococcus aureus is a facultative intracellular pathogen which, by entering host cells, may evade the immunologic host response as well as antimicrobial treatment. In that way, S. aureus can cause persistent intracellular infections that are difficult to treat. Depending on the strain, S. aureus may persist at different intracellular locations, such as the phagolysosome. The first barrier that pathogens invading from the bloodstream have to cross is the layer of endothelial cells lining the inner surface of blood and lymphatic vessels. Upon proceeding from an acute to a chronic infection, intracellular pathogens undergo certain biochemical and structural changes, including a deceleration of metabolic processes to adapt for long-term intracellular survival and the development of a special phenotype designated the small colony variant. In this study, the endothelial cell line Ea.hy 926 was used as a model for acute and chronic S. aureus infection. To this end, Ea.hy 926 cells were cultured on QIAscout™ Microraft Arrays, a special graded cell culture substrate that contains around 12,000 microrafts of 200 µm edge length. After attachment to the substrate, the endothelial cells were infected with GFP-expressing S. aureus for 3 weeks. The acute infection and the development of persistent bacteria were followed by confocal laser scanning microscopy, scanning the whole Microraft Array every second day for the presence of fluorescent intracellular bacteria and for detailed determination of their intracellular location. After three weeks of infection, representative microrafts containing infected cells, cells with protruded infections, and cells that never showed any infection were isolated and fixed for Raman micro-spectroscopic investigation. For comparison, microrafts with acute infection were also isolated. 
The acquired Raman spectra are correlated with the fluorescence microscopy images to give insight into (a) the molecular alterations in endothelial cells during acute and chronic infection compared to non-infected cells, and (b) the metabolic and structural changes within the pathogen when it enters a mode of persistence within host cells. We thank Dr. Ruth Kläver from QIAGEN GmbH for her support regarding QIAscout technology. Financial support from the BMBF via the CSCC (FKZ 01EO1502) and from the DFG via the Jena Biophotonic and Imaging Laboratory (JBIL, FKZ PO 633/29-1, BA 1601/10-1) is gratefully acknowledged.

Keywords: correlative image analysis, intracellular infection, pathogen-host adaption, Raman micro-spectroscopy

Procedia PDF Downloads 165
472 Factors Influencing Capital Structure: Evidence from the Oil and Gas Industry of Pakistan

Authors: Muhammad Tahir, Mushtaq Muhammad

Abstract:

Capital structure is one of the key decisions taken by financial managers. This study aims to investigate the factors influencing the capital structure decision in the Oil and Gas industry of Pakistan, using secondary data from the published annual reports of listed Oil and Gas companies of Pakistan. The study covers the period 2008-2014. Capital structure can be affected by profitability, firm size, growth opportunities, dividend payout, liquidity, business risk, and ownership structure. A panel data technique with an Ordinary Least Squares (OLS) regression model was used in Stata to estimate the impact of the set of explanatory variables on capital structure. The OLS regression results suggest that dividend payout, firm size, and government ownership have the most significant impact on financial leverage. Dividend payout and government ownership are found to have a significant negative association with financial leverage, whereas firm size shows a positive relationship with financial leverage. Other variables having a significant link with financial leverage include growth opportunities, liquidity, and business risk: the results reveal a significant positive association between growth opportunities and financial leverage, whereas liquidity and business risk are negatively correlated with financial leverage. Profitability and managerial ownership exhibited an insignificant relationship with financial leverage. This study contributes to the existing Managerial Finance literature with certain managerial implications. Academically, this research describes the factors affecting the capital structure decisions of Oil and Gas companies in Pakistan and adds the latest empirical evidence to the existing financial literature in Pakistan. Researchers have studied capital structure in Pakistan in general and at the industry level in particular, but the literature on this issue is still limited; this study is an attempt to fill that gap in the academic literature. 
This study has practical implications at both the firm level and the individual investor/lender level. The results can be useful for investors and lenders in making investment and lending decisions. Further, the results can help financial managers frame an optimal capital structure, taking into consideration the factors that can affect the capital structure decision as revealed by this study. These results will help financial managers decide whether to issue stock or debt for future investment projects.
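A minimal pooled-OLS sketch of the kind of leverage regression described above is given below; it uses synthetic firm-year data (not the study's dataset) and a plain least-squares fit rather than the full panel-data estimation performed in Stata.

```python
import numpy as np

def ols(y, X):
    """Pooled OLS: coefficients (intercept first), fitted values, and R^2."""
    X1 = np.column_stack([np.ones(len(X)), X])  # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    fitted = X1 @ beta
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return beta, fitted, 1.0 - ss_res / ss_tot

# Hypothetical firm-year data: leverage regressed on dividend payout and
# firm size, with signs matching the study's findings (payout negative,
# size positive).
rng = np.random.default_rng(1)
payout = rng.random(100)
size = rng.normal(10.0, 1.0, 100)
leverage = 0.6 - 0.3 * payout + 0.02 * size + rng.normal(0.0, 0.05, 100)
beta, fitted, r2 = ols(leverage, np.column_stack([payout, size]))
```

The recovered coefficients approximate the true -0.3 and 0.02, illustrating how the sign and significance of each factor would be read off the fitted model.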

Keywords: capital structure, multicollinearity, ordinary least square (OLS), panel data

Procedia PDF Downloads 277