Search results for: five factor model of personality
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20767

4717 A Natural Killer T Cell Subset That Protects against Airway Hyperreactivity

Authors: Ya-Ting Chuang, Krystle Leung, Ya-Jen Chang, Rosemarie H. DeKruyff, Paul B. Savage, Richard Cruse, Christophe Benoit, Dirk Elewaut, Nicole Baumgarth, Dale T. Umetsu

Abstract:

We examined characteristics of a Natural Killer T (NKT) cell subpopulation that developed during influenza infection in neonatal mice, and that suppressed the subsequent development of allergic asthma in a mouse model. This NKT cell subset expressed CD38 but not CD4, produced IFN-γ, but not IL-17, IL-4 or IL-13, and inhibited the development of airway hyperreactivity (AHR) through contact-dependent suppressive activity against helper CD4 T cells. The NKT subset expanded in the lungs of neonatal mice after infection with influenza, but also after treatment of neonatal mice with a Th1-biasing α-GalCer glycolipid analogue, Nu-α-GalCer. These results suggest that early/neonatal exposure to infection or to antigenic challenge can affect subsequent lung immunity by altering the profile of cells residing in the lung and that some subsets of NKT cells can have direct inhibitory activity against CD4+ T cells in allergic asthma. Importantly, our results also suggest a potential therapy for young children that might provide protection against the development of asthma.

Keywords: NKT subset, asthma, airway hyperreactivity, hygiene hypothesis, influenza

Procedia PDF Downloads 222
4716 Investigation of Processing Conditions on Rheological Features of Emulsion Gels and Oleogels Stabilized by Biopolymers

Authors: M. Sarraf, J. E. Moros, M. C. Sánchez

Abstract:

Oleogels are self-standing systems that trap an edible liquid oil in a three-dimensional network and thereby help to reduce the fat content of foods. There are different routes to oleogelation and oil structuring, including direct dispersion, structured biphasic systems, oil sorption, and the indirect (emulsion-template) method. The selection of processing conditions, as well as the composition of the oleogel, is essential to obtain a stable oleogel with characteristics suited to its purpose. Polysaccharides are among the ingredients most widely used in food products to produce oleogels and emulsions. Basil seed gum (BSG), extracted from the seeds of Ocimum basilicum, is a relatively new native polysaccharide whose high molecular weight gives it high viscosity and pseudoplastic behaviour. Proteins can also stabilize oil in water owing to the amino and carboxyl moieties that confer surface activity. Whey proteins are widely used in the food industry because they are readily available, inexpensive ingredients with valuable nutritional and functional characteristics, acting as emulsifiers, gelling agents, thickeners, and water binders. In general, protein-polysaccharide interactions have a significant effect on food structures and their stability, such as the texture of dairy products, by controlling the interactions in macromolecular systems. Edible oleogels used for oil structuring also enable the targeted delivery of components trapped in the structural network, so the development of efficient oleogels is important to the food industry. A thorough understanding of the key factors that affect emulsion formation and stability, such as the oil-phase ratio, the processing conditions, and the biopolymer concentrations, therefore provides crucial information for producing a suitable oleogel.
In this research, the effects of the oil concentration and of the pressure used to manufacture the emulsion prior to obtaining the oleogel were evaluated through analysis of the droplet size and the rheological properties of the resulting emulsions and oleogels. The results show that emulsions prepared in the high-pressure homogenizer (HPH) at higher pressures have smaller droplet sizes and a more uniform size distribution. Regarding the rheological characteristics of the emulsions and oleogels, the predominantly elastic character of the systems must be noted: they exhibit storage modulus values higher than the loss modulus values and show an important plateau zone, typical of structured systems. Likewise, steady-state viscous flow tests on both emulsions and oleogels confirm that the homogenization pressure is an important factor for obtaining emulsions with an adequate droplet size and, subsequently, the oleogel. Thus, various routes for trapping oil inside a biopolymer matrix with adjustable mechanical properties could be applied to create the three-dimensional network needed to absorb the oil and form an oleogel.

Keywords: basil seed gum, particle size, viscoelastic properties, whey protein

Procedia PDF Downloads 53
4715 Variation of Warp and Binder Yarn Tension across the 3D Weaving Process and its Impact on Tow Tensile Strength

Authors: Reuben Newell, Edward Archer, Alistair McIlhagger, Calvin Ralph

Abstract:

Modern industry has developed a need for innovative 3D composite materials due to their attractive material properties. Composite materials are composed of a fibre reinforcement encased in a polymer matrix. The fibre reinforcement consists of warp, weft, and binder yarns or tows woven together into a preform, and the mechanical performance of the composite is largely controlled by the properties of this preform. As a result, the bulk of recent textile research has focused on the design of high-strength preform architectures, while studies of the weaving process itself have largely been neglected. It has been reported that yarns experience varying levels of damage during weaving, resulting in filament breakage and ultimately compromised composite mechanical performance. The weaving parameters that cause this yarn damage are not fully understood, although recent studies indicate that poor yarn tension control may be an influencing factor: as tension is increased, the yarn-to-yarn and yarn-to-weaving-equipment interactions are heightened, maximising damage. The correlation between yarn tension variation and weaving damage severity has never been adequately researched or quantified, so a study is needed which assesses the influence of tension variation on the mechanical properties of woven yarns. This study quantified the variation of yarn tension throughout weaving and sought to link tension to weaving damage. Multiple yarns were randomly selected, and their tension was measured across the creel and shedding stages of weaving using a hand-held tension meter. Sections of the same yarns were subsequently cut from the loom and tensile tested, and the tensile strengths of pristine and tensioned yarns were compared to determine the induced weaving damage.
Yarns from bobbins at the rear of the creel were under the least amount of tension (0.5-2.0N) compared to yarns positioned at the front of the creel (1.5-3.5N). This increase in tension has been linked to the sharp turn in the yarn path between bobbins at the front of the creel and creel I-board. Creel yarns under the lower tension suffered a 3% loss of tensile strength, compared to 7% for the greater tensioned yarns. During shedding, the tension on the yarns was higher than in the creel. The upper shed yarns were exposed to a decreased tension (3.0-4.5N) compared to the lower shed yarns (4.0-5.5N). Shed yarns under the lower tension suffered a 10% loss of tensile strength, compared to 14% for the greater tensioned yarns. Interestingly, the most severely damaged yarn was exposed to both the largest creel and shedding tensions. This study confirms for the first time that yarns under a greater level of tension suffer an increased amount of weaving damage. Significant variation of yarn tension has been identified across the creel and shedding stages of weaving. This leads to a variance of mechanical properties across the woven preform and ultimately the final composite part. The outcome from this study highlights the need for optimised yarn tension control during preform manufacture to minimize yarn-induced weaving damage.

Keywords: optimisation of preform manufacture, tensile testing of damaged tows, variation of yarn weaving tension, weaving damage

Procedia PDF Downloads 216
4714 Analysis of Moment Rotation Curve for Steel Beam Column Joint

Authors: A. J. Shah, G. R. Vesmawala

Abstract:

Connections play a fundamental role in the global behaviour of steel structures. Many experimental tests and analyses have been carried out to evaluate the real influence of the physical and geometrical parameters that control connection behaviour, but a definitive answer to the problem is still lacking. Here, various configurations of bolts were tried, and the resulting moment-rotation (M-θ) curves were plotted. The connection configuration is such that two bolts are located above each of the flanges and beside each of the webs. The model considers the combined effects of prying action, the formation of yield lines, and failures due to punching shear and beam section failure. For many types of connections, the stiffness at the service load level falls somewhere between the fully restrained and simple limits, and designers need to account for this behaviour. M-θ curves are generally regarded as the best characterization of connection behaviour. They are usually derived from experiments on cantilever-type specimens: the moments are calculated directly from the statics of the specimen, while the rotations are measured over a distance that typically extends to the point of loading. Thus, this paper establishes the M-θ behaviour of the different types of connections tested and presents the relative strength of the various possible bolt arrangements.
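
The statics described above can be sketched numerically: for a cantilever-type specimen with tip load P at lever arm L, the connection moment is M = P·L, while the rotation is the relative displacement divided by the gauge length. All numbers below are illustrative placeholders, not data from the tests in this paper.

```python
import numpy as np

# Hypothetical cantilever-type test: tip load P applied at lever arm L_arm
# from the connection; rotation measured over a gauge length g.
P = np.array([0.0, 5.0, 10.0, 15.0, 20.0])    # applied load, kN
delta = np.array([0.0, 1.2, 2.6, 4.5, 7.9])   # relative displacement, mm
L_arm = 1000.0   # lever arm, mm
g = 1000.0       # gauge length, mm

M = P * L_arm / 1000.0   # connection moment, kNm (statics: M = P * L)
theta = delta / g        # rotation, rad (small-angle assumption)

# Initial rotational stiffness from the first loading step, kNm/rad
k_init = (M[1] - M[0]) / (theta[1] - theta[0])
```

Plotting M against theta for each bolt configuration gives the M-θ curves compared in the paper; the initial slope is the rotational stiffness used to classify a connection between the simple and fully restrained limits.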

Keywords: bolt, moment, rotation, stiffness, connections

Procedia PDF Downloads 381
4713 Glocalization of Journalism and Mass Communication Education: Best Practices from an International Collaboration on Curriculum Development

Authors: Bellarmine Ezumah, Michael Mawa

Abstract:

Glocalization is often defined as the practice of conducting business according to both local and global considerations. This epitomizes the curriculum co-development collaboration between a journalism and mass communications professor from a university in the United States and Uganda Martyrs University in Uganda, where a brand-new journalism and mass communications program was recently co-developed. This paper presents the experiences and research results of this initiative, which was funded through the Institute of International Education (IIE) under the umbrella of the Carnegie African Diaspora Fellowship Program (CADFP). Vital international and national concerns were addressed. On the global level, scholars have questioned and criticized the Western model ingrained in journalism and mass communication curricula and proposed a decolonization of journalism curricula. Another major criticism is the practice of Western-based educators transplanting their curricula verbatim to other regions of the world without paying attention to local needs. To address these two global concerns, an extensive assessment of local needs was conducted prior to the conceptualization of the new program. The needs assessment adopted a participatory action model and captured the knowledge and narratives of both internal and external stakeholders. This involved reviewing pertinent documents, including the nation's constitution and governmental briefs and promulgations; interviewing governmental officials, media and journalism educators, media practitioners, and students; and benchmarking the curricula of other tertiary institutions in the nation. Information gathered through this process served as a blueprint and frame of reference for all design decisions. In the area of local needs, four key factors were addressed. First, the realization that most media personnel in Uganda are both academically and professionally unqualified.
Second, the practitioners with academic training were found to lack experience. Third, the current curricula offered at several tertiary institutions are not comprehensive and lack local relevance. The project addressed these problems as follows. First, the program was designed to cater to both traditional and non-traditional students, offering opportunities for unqualified media practitioners to get their formal training through evening and weekender programs. Second, the challenge of inexperienced graduates was mitigated by designing the program around an experiential learning approach, which many refer to as the 'Teaching Hospital Model'. This entails integrating practice with theory, similar to the way medical students engage in hands-on practice under the supervision of a mentor. The university drew up a Memorandum of Understanding (MoU) with reputable media houses for students and faculty to use their studios for hands-on experience and for seasoned media practitioners to guest-teach some courses. Finally, given the convergence of functions in today's media industry, graduates should be trained to have adequate knowledge of other disciplines; the curriculum therefore integrated cognate courses to make graduates versatile. Ultimately, this research serves as a template for African colleges and universities to follow in their quest to glocalize their curricula. While the general concept of journalism may remain Western, journalism curriculum developers in Africa can, through extensive needs assessment and by focusing on those needs and other societal particularities, adapt the Western model to fit their local context.

Keywords: curriculum co-development, glocalization of journalism education, international journalism, needs assessment

Procedia PDF Downloads 119
4712 Gasification of Trans-4-Hydroxycinnamic Acid with Ethanol at Elevated Temperatures

Authors: Shyh-Ming Chern, Wei-Ling Lin

Abstract:

Lignin is a major constituent of woody biomass and exists abundantly in nature. It is a major byproduct of the paper industry and of bioethanol production processes, and these byproducts are mainly used in low-value applications. Instead, lignin can be converted into higher-value gaseous fuel, thereby helping to curtail the ever-growing price of oil and to slow the trend of global warming. Although biochemical treatment is capable of converting cellulose into liquid ethanol fuel, it cannot be applied to the conversion of lignin. Alternatively, lignin can be converted into gaseous fuel thermochemically. In the present work, trans-4-hydroxycinnamic acid, a model compound that closely resembles the basic building blocks of lignin, is gasified in an autoclave with ethanol at elevated temperatures and pressures above the critical point of ethanol. Ethanol, instead of water, is chosen because it dissolves trans-4-hydroxycinnamic acid easily and helps convert it into lighter gaseous species relatively well. The major operating parameters for the gasification reaction are temperature (673-873 K), reaction pressure (5-25 MPa), and feed concentration (0.05-0.3 M). Generally, more than 80% of the reactants, including trans-4-hydroxycinnamic acid and ethanol, were converted into gaseous products at 873 K and 5 MPa.

Keywords: ethanol, gasification, lignin, supercritical

Procedia PDF Downloads 225
4711 A Daily Diary Study on Technology-Assisted Supplemental Work, Psychological Detachment, and Well-Being – The Mediating Role of Cognitive Coping

Authors: Clara Eichberger, Daantje Derks, Hannes Zacher

Abstract:

Technology-assisted supplemental work (TASW) involves performing job-related tasks after regular working hours with the help of technological devices. Due to emerging information and communication technologies, such behavior is becoming increasingly common. Since previous findings on the relationships among TASW, psychological detachment, and well-being are mixed, this study aimed to examine the moderating roles of appraisal and cognitive coping. A moderated mediation model was tested with daily diary data from 100 employees. As hypothesized, TASW was positively related to negative affect at bedtime, and psychological detachment mediated this relationship. Results did not confirm appraisal and cognitive coping as moderators. However, additional analyses revealed cognitive coping as a mediator of the positive relationship between TASW and positive affect at bedtime. These results suggest that, on the one hand, engaging in TASW can be harmful to employee well-being (i.e., more negative affect), while on the other hand, it can also be associated with higher well-being (i.e., more positive affect) when it is accompanied by cognitive coping.
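
The mediation logic can be illustrated with a minimal simple-mediation sketch on simulated data. The study itself uses daily diary data and a moderated mediation model at the within-person level; the ordinary-least-squares paths below are only a schematic stand-in, with made-up effect sizes: the indirect effect is the product of the X→M path and the M→Y path controlling for X.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data (illustrative only): more TASW -> less detachment,
# and less detachment -> more negative affect at bedtime.
tasw = rng.normal(size=n)
detach = -0.5 * tasw + rng.normal(size=n)                    # mediator
neg_affect = -0.6 * detach + 0.1 * tasw + rng.normal(size=n)

def ols(X, y):
    # least-squares coefficients of y ~ X (X includes an intercept column)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

ones = np.ones(n)
a = ols(np.column_stack([ones, tasw]), detach)[1]              # X -> M path
b = ols(np.column_stack([ones, detach, tasw]), neg_affect)[1]  # M -> Y | X
indirect = a * b   # indirect (mediated) effect of TASW on negative affect
```

Here both paths are negative, so the indirect effect is positive: more TASW predicts more negative affect via reduced detachment, mirroring the mediation reported above.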

Keywords: cognitive coping, psychological detachment, technology-assisted supplemental work, well-being

Procedia PDF Downloads 173
4710 Detection of Keypoint in Press-Fit Curve Based on Convolutional Neural Network

Authors: Shoujia Fang, Guoqing Ding, Xin Chen

Abstract:

The quality of press-fit assembly is closely related to the reliability and safety of the product. This paper proposes a keypoint detection method based on a convolutional neural network (CNN) to improve the accuracy of keypoint detection in press-fit curves, providing an auxiliary basis for judging the quality of press-fit assembly. A press-fit curve plots press-fit force against displacement; both the force and the displacement data are time series, so a one-dimensional CNN is used to process the curve. After the acquired press-fit data are filtered, a multi-layer one-dimensional CNN automatically learns the curve features, which are then passed to a multi-layer perceptron that outputs the keypoint of the curve. We trained the CNN model with data from press-fit assembly equipment in an actual production process and evaluated detection performance with different data from the same equipment. Compared with existing results, detection performance was significantly improved. This method can provide a reliable basis for judging press-fit quality.
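
The core operation, a one-dimensional convolution sliding over the force signal, can be sketched as follows. A trained CNN learns many filters from labelled curves; here a single fixed second-derivative kernel applied to a synthetic press-fit curve merely illustrates how a convolution response localizes a keypoint (an abrupt slope change).

```python
import numpy as np

# Synthetic press-fit curve: force vs. displacement samples with an
# abrupt slope change (the keypoint) at sample index 60.
x = np.arange(100, dtype=float)
force = np.where(x < 60, 0.2 * x, 0.2 * 60 + 1.5 * (x - 60))

# One fixed 1D convolution filter (a discrete second derivative).
# A real 1D CNN learns many such filters from labelled curves; this
# fixed kernel only illustrates the convolution step itself.
kernel = np.array([1.0, -2.0, 1.0])
response = np.convolve(force, kernel, mode="same")

# The filter responds most strongly where the slope changes.
keypoint = int(np.argmax(np.abs(response[1:-1]))) + 1  # skip edge samples
```

In the paper's method the filter bank is learned and the final keypoint position is regressed by a multi-layer perceptron rather than taken as an argmax, but the sliding-filter principle is the same.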

Keywords: keypoint detection, curve feature, convolutional neural network, press-fit assembly

Procedia PDF Downloads 211
4709 A Phytochemical and Biological Study of Viscum schimperi Engl. Growing in Saudi Arabia

Authors: Manea A. I. Alqrad, Alaa Sirwi, Sabrin R. M. Ibrahim, Hossam M. Abdallah, Gamal A. Mohamed

Abstract:

Phytochemical study of the methanolic extract of the air-dried powdered parts of Viscum schimperi Engl. (family: Viscaceae) using different chromatographic techniques led to the isolation of five compounds: -amyrenone (1), betulinic acid (2), (3β)-olean-12-ene-3,23-diol (3), -oleanolic acid (4), and α-oleanolic acid (5). Their structures were established on the basis of physical, chemical, and spectral data. The anti-inflammatory and anti-apoptotic activities of oleanolic acid were assessed in a mouse model of acute hepatorenal damage. The study showed the efficacy of oleanolic acid in counteracting thioacetamide-induced hepatic and kidney injury in mice through the reduction of hepatocyte oxidative damage and the suppression of inflammation and apoptosis. More importantly, oleanolic acid suppressed thioacetamide-induced hepatic and kidney injury by inhibiting NF-κB/TNF-α-mediated inflammation and apoptosis and by enhancing the SIRT1/Nrf2/heme oxygenase signalling pathway. These promising pharmacological activities suggest the potential use of oleanolic acid against hepatorenal damage.

Keywords: oleanolic acid, viscum schimperi, thioacetamide, SIRT1/Nrf2/NF-κB, hepatorenal damage

Procedia PDF Downloads 81
4708 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design

Authors: Sebastian Kehne, Alexander Epple, Werner Herfs

Abstract:

A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes, and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behaviour of all axes, because even advanced controllers (such as H∞ controllers) can only control a small part of the mechanical modes, namely those of observable and controllable states whose values can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gearbox shafts. A further problem is the unknown process forces, such as cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from a single-axis design to a multi-axes design. It is capable of simulating the mechanical, electrical, and thermal behaviour of permanent magnet synchronous machines with inverters, different gear boxes, and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, heat dissipation, and the mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and its mechanical transfer behaviour is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built and then reduced via substructure coupling to a mass-damper system that models the most important modes of the axes. The model is implemented with the Modelica Feed Drive Library and validated by further relative measurements between the machine table and the spindle holder using a piezo actuator and acceleration sensors.
In a next step, the choice of possible components from motor catalogues is limited by derived analytical formulas based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes, and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based, and evolutionary) are tested on the case. The chosen objective is to minimize the integral of the deviations when a step is applied to the position controls of the different axes; small values indicate highly dynamic axes. In each iteration (the evaluation of one set of components), the control variables are adjusted automatically to keep the overshoot below 1%. It turns out that the order of the components in the optimization problem has a strong impact on the speed of the black-box optimization. An approach for efficient black-box optimization of multi-axes designs is presented in the last part. The authors would like to thank the German Research Foundation (DFG) for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).
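
The discrete component-selection problem can be sketched with the simplest black-box method, random search, over toy catalogues. The component names and ratings below are hypothetical, and the analytic objective is only a stand-in: in the actual workflow each candidate set is scored by simulating the axes' step responses in Modelica.

```python
import itertools
import random

# Toy component catalogues with hypothetical "deviation ratings".
motors = {"M1": 1.0, "M2": 0.7, "M3": 0.5}
gearboxes = {"G1": 1.2, "G2": 0.9}
screws = {"S1": 1.1, "S2": 0.8, "S3": 0.6}

def objective(m, g, s):
    # Stand-in for the integral of step-response deviations:
    # lower is better, i.e. a more dynamic axis.
    return motors[m] * gearboxes[g] * screws[s]

def random_search(budget, seed=0):
    # Evaluate `budget` random component sets; keep the best seen.
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(budget):
        cand = (rng.choice(list(motors)),
                rng.choice(list(gearboxes)),
                rng.choice(list(screws)))
        val = objective(*cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

# Exhaustive optimum for reference (feasible only for tiny catalogues).
exact = min(itertools.product(motors, gearboxes, screws),
            key=lambda c: objective(*c))
found, found_val = random_search(budget=30)
```

Surrogate-based or evolutionary optimizers replace the blind sampling in `random_search` with guided proposals, which matters when each evaluation is an expensive simulation rather than a cheap product of ratings.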

Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design

Procedia PDF Downloads 271
4707 On the Network Packet Loss Tolerance of SVM Based Activity Recognition

Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir

Abstract:

In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model, and its multi-activity classification performance when data are received over a lossy wireless sensor network, are examined. First, the classification algorithm is evaluated for resilience to random data loss using 3D acceleration sensor data for the actions sitting, lying, walking, and standing. The results show that the proposed classification method recognizes these activities successfully despite high data loss. Second, the effect of differentiated quality-of-service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
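
A minimal sketch of the experiment's logic: synthetic two-class acceleration features, a linear SVM trained by Pegasos-style sub-gradient descent, and random feature loss imputed with the training mean. All details (class separation, loss model, imputation) are assumptions for illustration; the study's actual features, kernel, and network loss process are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 3-axis acceleration features for two activities, centered at
# -1.5 and +1.5 so the (imputed) global mean is ~0. Purely illustrative.
X = np.vstack([rng.normal(-1.5, 0.5, (200, 3)),
               rng.normal(+1.5, 0.5, (200, 3))])
y = np.array([-1] * 200 + [1] * 200)

def train_linear_svm(X, y, lam=0.01, epochs=20, seed=0):
    # Pegasos-style sub-gradient descent on the hinge loss (linear SVM).
    r = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in r.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

w = train_linear_svm(X, y)

def accuracy_under_loss(p, seed=1):
    # Each feature value is lost independently with probability p and
    # imputed with the (zero) training mean before classification.
    r = np.random.default_rng(seed)
    Xl = X.copy()
    Xl[r.random(X.shape) < p] = 0.0
    return float((np.sign(Xl @ w) == y).mean())

acc_clean = accuracy_under_loss(0.0)   # no loss
acc_lossy = accuracy_under_loss(0.4)   # 40% of values lost
```

With well-separated classes, accuracy degrades only mildly even at 40% loss, since a sample is truly ambiguous only when every one of its features is lost, which mirrors the robustness the abstract reports.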

Keywords: activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss

Procedia PDF Downloads 460
4706 Inflammatory and Cardio Hypertrophic Remodeling Biomarkers in Patients with Fabry Disease

Authors: Margarita Ivanova, Julia Dao, Andrew Friedman, Neil Kasaci, Rekha Gopal, Ozlem Goker-Alpan

Abstract:

In Fabry disease (FD), α-galactosidase A (α-Gal A) deficiency leads to the accumulation of globotriaosylceramide (Lyso-Gb3 and Gb3), triggering a pathologic cascade that determines the severity of organ damage. The heart is one of several organs with high sensitivity to α-Gal A deficiency. A subgroup of patients with significant residual α-Gal A activity and primary cardiac involvement is occasionally referred to as the “cardiac variant.” Cardiovascular complications are the most frequently encountered, contribute substantially to morbidity, and are the leading cause of premature death in male and female patients with FD. The deposition of Lyso-Gb3 and Gb3 within the myocardium affects cardiac function, with resultant progressive cardiovascular pathology. At the cellular level, Gb3 and Lyso-Gb3 accumulation triggers a cascade of events leading to end-stage fibrosis. In cardiac tissue, Lyso-Gb3 deposition is associated with increased release of inflammatory factors and transforming growth factors. Infiltration of lymphocytes and macrophages into endomyocardial tissue indicates that inflammation plays a significant role in cardiac damage. Moreover, accumulated data suggest that chronic inflammation leads to multisystemic FD pathology even under enzyme replacement therapy (ERT). NF-κB activation plays a subsequent role in the inflammatory response in cardiac dysfunction and advanced heart failure in the general population. TNF-α/NF-κB signalling protects the myocardium during ischemic preconditioning; however, this protective effect depends on the concentration of TNF-α. Thus, we hypothesize that TNF-α is a critical factor in determining the grade of cardio-pathology. Cardiac hypertrophy requires expansion of the coronary vasculature to maintain a sufficient supply of nutrients and oxygen, and coronary activation of angiogenesis and fibrosis plays a vital role in cardiac vascularization, hypertrophy, and tissue remodeling.
We suggest that the interaction between the inflammatory pathways and cardiac vascularization is a bi-directional process controlled by secreted cytokines and growth factors. The coordination of these two processes has never been explored in FD. In a cohort of 40 patients with FD, biomarkers associated with inflammation and cardio hypertrophic remodeling were studied. FD patients were categorized into three groups based on LV mass/DSA, LVEF, and ECG abnormalities: FD with no cardiac complications, FD with moderate cardiac complications, and FD with severe cardiac complications. Serum levels of NF-κB, TNF-α, IL-6, IL-2, MCP-1, IFN-γ, VEGF, IGF-1, TGF-β, and FGF2 were quantified by enzyme-linked immunosorbent assays (ELISA). Among the biomarkers, MCP-1, IFN-γ, VEGF, TNF-α, and TGF-β were elevated in FD patients. Some of these biomarkers also have the potential to correlate with cardiac pathology in FD. Conclusion: This study provides information about the role of inflammatory pathways and biomarkers of cardio hypertrophic remodeling in FD patients. It will also help reveal the mechanisms that link the intracellular accumulation of Lyso-Gb3 and Gb3 to the development of cardiomyopathy with myocardial thickening and resultant fibrosis.

Keywords: biomarkers, Fabry disease, inflammation, growth factors

Procedia PDF Downloads 69
4705 Surfactant-Free O/W-Emulsion as Drug Delivery System

Authors: M. Kumpugdee-Vollrath, J.-P. Krause, S. Bürk

Abstract:

Many of the drugs used for pharmaceutical purposes are poorly water-soluble. About 40% of all newly discovered drugs are lipophilic, and the number of lipophilic drugs appears to be increasing. Drug delivery systems such as nanoparticles, micelles, or liposomes are applied to improve their solubility and thus their bioavailability. Besides various solubilization techniques, oil-in-water emulsions are often used to incorporate lipophilic drugs into the oil phase. Surface-active substances (surfactants) are generally used to stabilize emulsions. An alternative method that avoids surfactants was therefore of great interest; one possibility is to develop an O/W emulsion without any added surface-active agents, a so-called surfactant-free emulsion (SFE). The aim of this study was to develop and characterize SFEs as drug carriers by varying the production conditions. Lidocaine base was used as a model drug, and an injection method was developed. The effects of ultrasound as well as of temperature on the properties of the emulsion were studied, particle sizes and drug release were determined, and long-term stability was followed for up to 30 days. The results showed that surfactant-free O/W emulsions with a pharmaceutical oil as drug carrier can be produced.

Keywords: emulsion, lidocaine, Miglyol, size, surfactant, light scattering, release, injection, ultrasound, stability

Procedia PDF Downloads 471
4704 Developing an Interpretive Plan for Qubbet El-Hawa North Archaeological Site in Aswan, Egypt

Authors: Osama Amer Mohyeldin Mohamed

Abstract:

Qubbet el-Hawa North (QHN) is an archaeological site in west Aswan. It has not yet been opened to the public and has been under excavation since its discovery in 2013, which followed the illegal digging that occurred at many sites in Egypt during the unstable situation and the absence of security. The site has the potential to be one of the most attractive sites in Aswan; moreover, it deserves to be introduced to visitors in a manner appropriate to its great significance. Interpretation and presentation are crucial, inseparable tools that communicate an archaeological site's significance to the public and raise their awareness, helping people to understand the past and appreciate archaeological assets. People will learn little from ancient remains unless the remains are explained; otherwise, they will only see them as ancient and charming. Visitors expect a story and, more than knowledge, authenticity, or even supporting preservation actions, they want to enjoy themselves and be entertained. On the other hand, many archaeologists believe that planning an archaeological site for visitor entertainment deteriorates it and affects its authenticity. Thus, it is a challenge to design a model for the visitor experience that meets expectations and needs while safeguarding the site's integrity. This article presents a proposal for an interpretation plan for the site of Qubbet el-Hawa North.

Keywords: heritage interpretation and presentation, archaeological site management, Qubbet el-Hawa North, local community engagement, accessibility

Procedia PDF Downloads 13
4703 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We consider a database that records average traffic speeds, measured at five-minute intervals, for all the links in the traffic network of a metropolitan city. Models learned from this data that can predict future traffic speed would benefit applications such as car navigation systems, but building a predictive model for every link becomes a nontrivial job if the number of links in the network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. On the other hand, k-NN takes a long time to make a prediction because it must search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale is that we do better by looking only at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several k-NN models employing different feature sets consisting of the current and past traffic speeds of the target link and of the neighboring links up- and downstream. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
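The core prediction step described above can be sketched in a few lines (a minimal illustration, not the authors' implementation; the feature vectors, speeds, and choice of k below are made up):

```python
import math

def knn_predict(history, query, k=3):
    """Predict the next speed of a link as the average outcome of the
    k most similar past feature vectors (plain Euclidean distance)."""
    # history: list of (feature_vector, observed_next_speed) pairs
    ranked = sorted(history, key=lambda rec: math.dist(rec[0], query))
    neighbours = ranked[:k]
    return sum(speed for _, speed in neighbours) / len(neighbours)

# Hypothetical features: (current speed, speed 5 min ago, upstream-link speed), km/h.
history = [
    ((42.0, 45.0, 40.0), 41.0),
    ((43.0, 44.0, 41.0), 42.0),
    ((20.0, 22.0, 19.0), 18.0),   # a congested pattern
    ((41.0, 46.0, 39.0), 40.0),
]
print(knn_predict(history, (42.5, 45.0, 40.5), k=3))   # → 41.0
```

Restricting `history` to a recent window (e.g. `history[-n:]`) before the search is the data-reduction idea the abstract investigates: a smaller search set trades prediction latency against accuracy.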

Keywords: big data, k-NN, machine learning, traffic speed prediction

Procedia PDF Downloads 343
4702 Use of Multistage Transition Regression Models for Credit Card Income Prediction

Authors: Denys Osipenko, Jonathan Crook

Abstract:

Because card holders differ in behaviour type and income source, each consumer account can move among a variety of states: an account can be inactive, transactor, revolver, delinquent, or defaulted, and each state requires an individual model for income prediction. Estimating transition probabilities between states at the account level helps to avoid the memorylessness of the Markov chain approach. This paper investigates transition-probability estimation approaches for credit card income prediction at the account level. The key empirical question is which approach gives more accurate results: multinomial logistic regression or multistage conditional logistic regression with a binary target. Both models show moderate predictive power. Prediction accuracy for the conditional approach depends on the order of stages in the conditional binary logistic regressions. On the other hand, multinomial logistic regression is easier to use and gives integrated estimates for all states without prioritization. Further investigations can therefore concentrate on alternative modeling approaches such as discrete choice models.
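For illustration, the multinomial-logit form of the account-level transition probabilities can be sketched as follows (the state list follows the abstract; the coefficients and feature values are invented purely for demonstration):

```python
import math

STATES = ["inactive", "transactor", "revolver", "delinquent", "defaulted"]

def transition_probs(features, coefs):
    """Multinomial-logit transition probabilities: one linear score per
    destination state, mapped through a softmax so the row sums to 1."""
    scores = [sum(w * x for w, x in zip(row, features)) for row in coefs]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return {state: e / z for state, e in zip(STATES, exps)}

# Invented coefficients; features = (intercept, utilisation, months on book).
coefs = [
    ( 0.2, -1.0,  0.00),   # inactive
    ( 0.5, -0.5,  0.01),   # transactor
    ( 0.1,  1.2,  0.00),   # revolver
    (-1.0,  2.0, -0.02),   # delinquent
    (-3.0,  2.5, -0.05),   # defaulted
]
probs = transition_probs((1.0, 0.8, 24.0), coefs)
print(max(probs, key=probs.get))   # most likely destination state
```

The multistage conditional variant the abstract compares against would instead chain binary logits (e.g. active vs. not, then transactor vs. revolver, ...), which is why the ordering of stages matters there.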

Keywords: multinomial regression, conditional logistic regression, credit account state, transition probability

Procedia PDF Downloads 469
4701 Automatic Assignment of Geminate and Epenthetic Vowel for Amharic Text-to-Speech System

Authors: Tadesse Anberbir, Felix Bankole, Tomio Takara, Girma Mamo

Abstract:

In the development of a text-to-speech synthesizer, automatically deriving the correct pronunciation from the grapheme form of a text is a central problem; deriving phonological features that are not shown in the orthography is particularly challenging. In the Amharic language, geminates and epenthetic vowels are crucial for proper pronunciation, but neither is shown in the orthography. In this paper, we propose and integrate a morphological analyzer into an Amharic text-to-speech system, mainly to predict geminate and epenthetic vowel positions, and we prepare a duration modeling method. The Amharic Text-to-Speech system (AmhTTS) is a parametric, rule-based system that adopts a cepstral method, using a source-filter model for speech production and a Log Magnitude Approximation (LMA) filter as the vocal tract filter. The naturalness of the system after employing the duration modeling was evaluated by a sentence listening test, achieving an average Mean Opinion Score (MOS) of 3.4 (68%), which is moderate. By modeling the duration of geminates and controlling the locations of epenthetic vowels, we are able to synthesize good-quality speech. Our system is particularly suitable for customization to other Ethiopian languages with limited resources.

Keywords: Amharic, gemination, speech synthesis, morphology, epenthesis

Procedia PDF Downloads 68
4700 The Challenges of Business Incubations: A Case of Malaysian Incubators

Authors: Logaiswari Indiran, Zainab Khalifah, Kamariah Ismail

Abstract:

Business incubators are now recognized as effective tools for providing business assistance to start-up firms, and in both developed and developing countries their number is growing tremendously. As the birth rate of incubators increases, so do their challenges. Malaysia, as one of the developing countries of Asia, has also established a number of business incubators to breed and foster the growth and survival of start-up firms. This study discusses the incubation model applied in Malaysia and the challenges these incubators face, using secondary data including policies, previous literature, and reports related to Malaysian incubators. The findings call on the government to rethink the key role of incubator managers and staff, the internal structure of the incubator concept and process, intellectual property management, strategic university-industry alliances, and funding support in enhancing what Malaysian business incubators provide. The key challenges highlighted in this study signal important policy lessons for other developing countries that aim to create and map an effective business incubator ecosystem.

Keywords: business incubators, incubation challenges, funding support, incubator managers, internal structure, start-up firms

Procedia PDF Downloads 260
4699 Improvement of Greenhouse Gases Bio-Fixation by Microalgae Using a “Plasmon-Enhanced Photobioreactor”

Authors: Francisco Pereira, António Augusto Vicente, Filipe Vaz, Joel Borges, Pedro Geada

Abstract:

Light is a growth-limiting factor in microalgae cultivation: spectral composition, intensity, and duration are well reported to have a substantial impact on cell growth rates and, consequently, on photosynthetic performance and the mitigation of CO2, one of the most significant greenhouse gases (GHGs). Photobioreactors (PBRs) are commonly used to grow microalgae under controlled conditions, but they often fail to provide an even light distribution to the cultures, so there is a pressing need for innovations that enhance the efficient utilization of light. One potential approach is to implement plasmonic films exploiting localized surface plasmon resonance (LSPR), an optical phenomenon arising from the interaction of light with metallic nanostructures. LSPR excitation is characterized by the oscillation of the unbound conduction electrons of the nanoparticles coupled with the electromagnetic field of the incident light; this generates highly energetic electrons and a strong local electromagnetic field, amplifying the scattering, absorption, and extinction of specific wavelengths depending on the nanoparticle employed. Microalgae might benefit from this technology because it enables selective filtration of inhibitory wavelengths and harnesses the electromagnetic fields produced, potentially enhancing both biomass and metabolite productivity. This study aimed to implement and evaluate a "plasmon-enhanced PBR", using LSPR thin films to enhance the growth and CO2 bio-fixation rate of Chlorella vulgaris. The internal or external walls of the PBRs were coated with a TiO2 matrix containing different nanoparticles (Au, Ag, and Au-Ag) in order to evaluate the impact of this approach on the microalgae's performance.
Plasmonic films of different compositions resulted in different Chlorella vulgaris growth, ranging from 4.85 to 6.13 g·L⁻¹. The highest cell concentrations were obtained with the metallic Ag films, a 14% increase over the control condition, and there appeared to be no difference in growth between PBRs with inner and outer wall coatings. In terms of CO2 bio-fixation, rates ranged from 0.42 to 0.53 g CO2·L⁻¹·d⁻¹ depending on the coating applied, with the Ag coating proving the most effective for carbon fixation by C. vulgaris. The impact of the LSPR films on the biochemical characteristics of the biomass (e.g., proteins, lipids, pigments) was analysed as well. Interestingly, the Au coating yielded the most significant enhancements in protein content and total pigments, with increments of 15% and 173%, respectively, compared to the uncoated control PBR. Overall, incorporating plasmonic films in PBRs has the potential to improve the performance and efficiency of microalgae cultivation, representing an interesting approach to increasing both biomass production and GHG bio-mitigation.

Keywords: CO₂ bio-fixation, plasmonic effect, photobioreactor, photosynthetic microalgae

Procedia PDF Downloads 63
4698 Status Report of the GERDA Phase II Startup

Authors: Valerio D’Andrea

Abstract:

The GERmanium Detector Array (GERDA) experiment, located at the Laboratori Nazionali del Gran Sasso (LNGS) of INFN, searches for the neutrinoless double beta (0νββ) decay of 76Ge. Germanium diodes enriched to about 86% in the double beta emitter 76Ge (enrGe) are deployed as both source and detector of the 0νββ decay. Neutrinoless double beta decay is considered a powerful probe of issues still open in the neutrino sector of the Standard Model of particle physics and beyond. Since 2013, just after completing the first part of its experimental program (Phase I), the GERDA setup has been upgraded for the next step in the 0νββ search (Phase II). Phase II aims to reach a sensitivity to the 0νββ decay half-life larger than 10^26 yr in about 3 years of physics data taking, exposing a detector mass of about 35 kg of enrGe with a background index of about 10^-3 cts/(keV·kg·yr). One of the main new implementations is a liquid argon scintillation-light read-out, used to veto events that deposit their energy only partially in the germanium and partly in the surrounding LAr. In this paper, the expected goals of GERDA Phase II, the upgrade work, and a few selected features from the 2015 commissioning and 2016 calibration runs are presented, and the main Phase I achievements are also reviewed.

Keywords: GERDA, double beta decay, LNGS, germanium

Procedia PDF Downloads 357
4697 Application of Machine Learning Models to Predict Couchsurfers on Free Homestay Platform Couchsurfing

Authors: Yuanxiang Miao

Abstract:

Couchsurfing is a free homestay and social networking service accessible via a website and mobile app. Couchsurfers can request free accommodation directly from one another and receive offers in return. However, it is typically difficult to decide whether to accept or decline a request from a Couchsurfer, because host and guest do not know each other at all: one hopes to meet Couchsurfers who are kind, generous, and interesting, yet meeting someone unfriendly is unavoidable. This paper uses machine learning classification algorithms to help distinguish good from not-so-good Couchsurfers on the Couchsurfing website. By drawing on prior experience, such as Couchsurfers' profiles, their latest references, and other factors, it becomes possible to recognize what kind of Couchsurfer is asking and, furthermore, to decide whether or not to host them. The value of this research lies in a case study in Kyoto, Japan, where the author hosted 54 Couchsurfers, collected relevant data on them, and finally built a model based on classification algorithms with which people can screen Couchsurfers. Lastly, the author offers some feasible suggestions for future research.

Keywords: Couchsurfing, Couchsurfers prediction, classification algorithm, hospitality tourism platform, hospitality sciences, machine learning

Procedia PDF Downloads 108
4696 Determination of Safety Distance Around Gas Pipelines Using Numerical Methods

Authors: Omid Adibi, Nategheh Najafpour, Bijan Farhanieh, Hossein Afshin

Abstract:

Energy transmission pipelines are among the most vital assets of any country, and several strict regulations have been enacted to enhance the safety of these lines and their vicinity. One of them concerns the safety distance around high pressure gas pipelines: the minimum distance from the pipeline at which people and equipment are not exposed to serious damage. In the present study, the safety distance around high pressure gas transmission pipelines was determined using numerical methods. Gas leakage from a cracked pipeline and the resulting jet fire were simulated as a continuously ignited, three-dimensional, unsteady, turbulent case. The numerical simulations were based on the finite volume method, flow turbulence was modeled with the k-ω SST model, and the combustion of the natural gas and air mixture was treated with the eddy dissipation method. The results show that, due to the high pressure difference between the pipeline and the environment, the flow chokes at the crack and the velocity of the escaping gas reaches the speed of sound. Analysis of the incident radiation shows that the safety distances around a 42-inch high pressure natural gas pipeline, based on the 5 and 15 kW/m² criteria, are 205 and 272 meters, respectively.
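For a rough back-of-the-envelope check of such safety distances, one can invert the classical point-source radiation model. This is a deliberate simplification: the study itself uses a full 3D CFD simulation, and the release rate, heat of combustion, and radiant fraction below are assumed illustrative values, not numbers from the paper.

```python
import math

def point_source_distance(m_dot, h_c, eta, flux):
    """Distance at which a jet fire's incident radiation falls to `flux` (W/m^2),
    using the point-source model: flux = eta * m_dot * h_c / (4 * pi * d^2)."""
    return math.sqrt(eta * m_dot * h_c / (4 * math.pi * flux))

m_dot = 100.0   # assumed gas release rate, kg/s (illustrative)
h_c = 50e6      # heat of combustion of natural gas, J/kg (typical value)
eta = 0.2       # assumed radiant fraction (illustrative)
for flux in (15e3, 5e3):   # the 15 and 5 kW/m^2 criteria from the abstract
    d = point_source_distance(m_dot, h_c, eta, flux)
    print(f"{flux / 1e3:.0f} kW/m2 criterion -> about {d:.0f} m")
```

With these assumed inputs the model gives distances of the same order of magnitude as the CFD results; the stricter (lower) flux criterion always yields the larger safety distance.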

Keywords: gas pipelines, incident radiation, numerical simulation, safety distance

Procedia PDF Downloads 321
4695 Packet Fragmentation Caused by Encryption and Using It as a Security Method

Authors: Said Rabah Azzam, Andrew Graham

Abstract:

We study fragmentation of packets caused by encryption applied at the network layer of the OSI model in Internet Protocol version 4 (IPv4) networks, as well as the possibility of using fragmentation together with Access Control Lists (ACLs) as a method of restricting network access to certain hosts or areas of a network. Using default settings, fragmentation is expected to occur, with each fragment reassembled at the other end. If reassembly does not occur, a high number of ICMP messages should be generated back towards the source host, indicating that the packet is too large and needs to be made smaller; the same result is expected when the MTU is changed on certain links between devices. When using ACLs and packet fragments to restrict access to hosts or network segments, it may turn out that ACLs cannot be configured this way; if ACLs cannot be set up to allow only fragments, that is a limitation of the hardware's firmware holding back this particular method. If the ACL on the restricted switch can be set up to allow only fragments, then a connection that forces packets to fragment should pass through the ACL, establishing a network connection to the destination machine and allowing data to be sent to and from it. ICMP messages from the restricted-access switch and host should also be blocked from being sent back across the link, which will be shown in an SSH session into the switch.
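The fragment sizes such an ACL would have to match can be computed directly from RFC 791's rule that fragment offsets are counted in 8-byte units (a small sketch; the 20-byte header and 1500-byte MTU are typical defaults, not measurements from the study):

```python
def ipv4_fragments(payload_len, mtu, header_len=20):
    """Split an IPv4 payload into (offset, length, more_fragments) tuples.
    Every fragment's payload except the last must be a multiple of 8 bytes,
    because the fragment-offset field counts 8-byte units (RFC 791)."""
    per_frag = ((mtu - header_len) // 8) * 8   # largest 8-byte-aligned payload
    frags, offset = [], 0
    while offset < payload_len:
        length = min(per_frag, payload_len - offset)
        more = offset + length < payload_len    # more-fragments flag
        frags.append((offset, length, more))
        offset += length
    return frags

# A 4000-byte payload over a 1500-byte MTU link:
print(ipv4_fragments(4000, 1500))
# → [(0, 1480, True), (1480, 1480, True), (2960, 1040, False)]
```

Only the first fragment (offset 0) carries the transport-layer header, which is why fragment-matching ACL rules behave differently for the first and subsequent fragments.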

Keywords: fragmentation, encryption, security, switch

Procedia PDF Downloads 313
4694 Influence of the Compression Force and Powder Particle Size on Some Physical Properties of Date (Phoenix dactylifera) Tablets

Authors: Djemaa Megdoud, Messaoud Boudaa, Fatima Ouamrane, Salem Benamara

Abstract:

In recent years, the compression of date (Phoenix dactylifera L.) fruit powders (DP) to obtain date tablets (DT) has been suggested as a promising way to valorize non-commercial but valuable date fruit (DF) varieties. To further improve and characterize DT, the present study investigates the influence of DP particle size and compression force on some physical properties of DT. The results show that, independently of particle size, the hardness y of the tablets increases with the compression force x following a logarithmic law, y = a ln(bx), where a and b are model constants. Further, a two-level full factorial design (FFD) applied to the erosion percentage reveals that the effects of time and particle size are equal in absolute value and both exceed the effect of compression. Regarding disintegration time, results obtained likewise by FFD show that the effect of compression force exceeds that of DP particle size by a factor of four. Finally, the color parameters of DT in the CIELab system, measured immediately after production, are influenced differently by the particle size of the initial powder.
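Since y = a ln(bx) = a ln(x) + a ln(b), the constants a and b can be recovered by ordinary linear regression of hardness on ln(force). A sketch on synthetic data follows (the values of a and b below are invented for the self-check, not taken from the study):

```python
import math

def fit_log_law(forces, hardness):
    """Least-squares fit of y = a*ln(b*x) via y = a*ln(x) + a*ln(b):
    regress y on ln(x); the slope is a, the intercept is a*ln(b)."""
    xs = [math.log(f) for f in forces]
    n = len(xs)
    mx, my = sum(xs) / n, sum(hardness) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, hardness))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    a = slope
    b = math.exp(intercept / a)
    return a, b

# Synthetic check: data generated with a = 12, b = 0.5 should be recovered.
forces = [5, 10, 15, 20, 25]            # hypothetical compression forces
ys = [12 * math.log(0.5 * f) for f in forces]
a, b = fit_log_law(forces, ys)
print(round(a, 3), round(b, 3))          # → 12.0 0.5
```

The same linearization trick works for any y = a ln(bx) relation, so noisy experimental hardness data can be fitted with the identical two-parameter regression.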

Keywords: powder, tablets, date (Phoenix dactylifera L.), hardness, erosion, disintegration time, color

Procedia PDF Downloads 413
4693 Determining the Width and Depths of Cut in Milling on the Basis of a Multi-Dexel Model

Authors: Jens Friedrich, Matthias A. Gebele, Armin Lechler, Alexander Verl

Abstract:

Chatter vibrations and process instabilities are the most important factors limiting the productivity of the milling process. Chatter can lead to damage of the tool, the part, or the machine tool, so estimating and predicting process stability is very important. Process stability depends on the spindle speed, the depth of cut, and the width of cut. In milling, the process conditions are defined in the NC program: while the spindle speed is coded directly in the NC program, the depth and width of cut are unknown. This paper presents a new simulation-based approach for predicting the depth and width of cut of a milling process. The prediction is based on a material removal simulation with an analytically represented tool shape and a multi-dexel representation of the workpiece. The new calculation method allows direct estimation of the depth and width of cut, which are the parameters influencing process stability, instead of the removed volume as in existing approaches. This knowledge can be used to predict the stability of new, unknown parts. Moreover, with an additional vibration sensor, the stability lobe diagram of a milling process can be estimated and improved based on the estimated depth and width of cut.

Keywords: dexel, process stability, material removal, milling

Procedia PDF Downloads 508
4692 Domain Adaptation Save Lives - Drowning Detection in Swimming Pool Scene Based on YOLOV8 Improved by Gaussian Poisson Generative Adversarial Network Augmentation

Authors: Simiao Ren, En Wei

Abstract:

Drowning is a significant safety issue worldwide, and a robust computer-vision-based alert system could easily prevent such tragedies in swimming pools. However, due to the domain shift caused by the visual gap between the training pool and the test pool (potentially due to lighting, indoor scene change, pool floor color, etc.), the robustness of such algorithms has been questionable, and the annotation cost of labeling each new swimming pool is too high for mass adoption of the technique. To address this issue, we propose a domain-aware data augmentation pipeline based on the Gaussian Poisson Generative Adversarial Network (GP-GAN). Combined with YOLOv8, we demonstrate that this domain adaptation technique can significantly improve model performance (from 0.24 mAP to 0.82 mAP) on new test scenes. As the augmentation method only requires background imagery from the new domain (no annotation needed), we believe this is a promising, practical route to preventing swimming pool drownings.

Keywords: computer vision, deep learning, YOLOv8, detection, swimming pool, drowning, domain adaptation, generative adversarial network, GAN, GP-GAN

Procedia PDF Downloads 74
4691 Earnings Management and Firm’s Creditworthiness

Authors: Maria A. Murtiati, Ancella A. Hermawan

Abstract:

The objective of this study is to examine whether a firm's eligibility for a bank loan is influenced by earnings management, distinguishing between accruals-based and real earnings management. Hypotheses are tested with a logistic regression model using a sample of 285 companies listed on the Indonesian Stock Exchange in 2010. The results provide evidence that a greater magnitude of accruals earnings management increases a firm's probability of being eligible for a bank loan. In contrast, real earnings management through abnormal cash flow and abnormal discretionary expenses decreases that probability, while real earnings management through abnormal production cost increases it. If earnings management is assumed to be opportunistic, these results suggest that accruals-based earnings management can distort banks' credit analysis of financial statements. Real earnings management has more impact on cash flows, and banks are very concerned with a firm's cash flow ability; this study therefore indicates that banks are better able to detect real earnings management, except for abnormal production cost.

Keywords: discretionary accruals, real earnings management, bank loan, creditworthiness

Procedia PDF Downloads 334
4690 The Development of Monk’s Food Bowl Production on Occupational Health Safety and Environment at Work for the Strength of Rattanakosin Local Wisdom

Authors: Thammarak Srimarut, Witthaya Mekhum

Abstract:

This study analysed and developed a model for monk's food bowl production with respect to occupational health, safety, and the work environment, in support of Rattanakosin local wisdom in the Banbart Community. Blowpipe welding, which is necessary to produce the bowl, was found to be very dangerous, with a risk level of 93.59%. After a new sitting posture was adopted, the work risk dropped to 48.41%, a moderate level. In detail: 1) the traditional sitting posture carried a work risk of 88.89%, versus 58.86% with the new posture; 2) regarding environmental pollution, workers using the traditional posture were exposed to polluted welding fumes at 61.11%, versus 40.47% with the new posture; and 3) regarding accident risk, workers using the traditional posture were exposed to welding accidents at 94.44%, versus 62.54% with the new posture.

Keywords: occupational health safety, environment at work, Monk’s food bowl, machine intelligence

Procedia PDF Downloads 424
4689 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea

Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal

Abstract:

Tsunamis, landslides, ground shaking leading to liquefaction, infrastructure collapse, and conflagration are common tectonically induced earthquake hazards experienced worldwide; beyond human casualties, damage to built-up infrastructure such as roads, bridges, and buildings is a collateral consequence. Appropriate planning must therefore be preceded by proper evaluation and assessment of the potential level of earthquake hazard at a site, with a view to safeguarding people's welfare, infrastructure, and other property. The resulting information can serve as a tool to minimize earthquake risk and to foster appropriate construction design and the formulation of building codes for a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world; for the present study, the potential of GIS and remote sensing was utilized to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were assessed and integrated in a GIS environment with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude, and earthquake depth to prepare liquefaction potential zones (LPZ), culminating in an earthquake hazard zonation of the study sites. Liquefaction can eventuate in the aftermath of severe ground shaking where site soil conditions, geology, and geomorphology are amenable, so these wave propagation media were assessed to identify the potential zones. The underlying precept is that during any earthquake event a seismic wave is generated at the focus and propagates to the surface.
As the wave propagates, it passes through geological, geomorphological, and soil features which, according to their strength, stiffness, and moisture content, aggravate or attenuate the shaking that reaches the surface; the resulting intensity may or may not culminate in the collapse of built-up infrastructure. For the earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-criteria evaluation (MCE) with Saaty's Analytical Hierarchy Process (AHP) was adopted for this study: a GIS technique that integrates several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. The factors are weighted and ranked in order of their contribution to earthquake-induced liquefaction, and the weights and ranks assigned to each factor are normalized with the AHP technique. The spatial analysis tools of ArcGIS 10 (raster calculator, reclassify, overlay analysis) were mainly employed in the study. The final LPZ and earthquake hazard outputs were reclassified as 'Very high', 'High', 'Moderate', 'Low', and 'Very low' to indicate hazard levels within the study region.
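Saaty's column-normalization approximation of the AHP priority vector can be sketched as follows (the 3-factor pairwise comparison matrix below is purely illustrative, not the weighting actually used in the study):

```python
def ahp_weights(pairwise):
    """Approximate AHP priority vector: normalise each column of the
    pairwise-comparison matrix, then average each row (Saaty's method)."""
    n = len(pairwise)
    col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
    return [sum(pairwise[r][c] / col_sums[c] for c in range(n)) / n
            for r in range(n)]

# Hypothetical 1-9 Saaty-scale judgements for three liquefaction factors,
# say geology vs. geomorphology vs. PGA (reciprocals below the diagonal):
pairwise = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])   # weights sum to 1, ordered by importance
```

In a GIS workflow these normalized weights would multiply the reclassified thematic raster layers before the overlay (weighted-sum) step.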

Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism

Procedia PDF Downloads 256
4688 A Study of the Adaptive Reuse for School Land Use Strategy: An Application of the Analytic Network Process and Big Data

Authors: Wann-Ming Wey

Abstract:

Given the popularity and progress of information technology today, assembling and analysing big data sets is no longer a major hurdle. We can now not only use relevant big data to analyse and simulate the possible course of urban development in the near future, but also, through those results, provide government units and decision-makers with a more comprehensive and reasonable basis for policy implementation. In this research, we take Taipei City as the study area and use relevant big data variables (e.g., population, facility utilization, and ratings of related social policies) together with the Analytic Network Process (ANP) approach to investigate in depth the possible reduction of primary and secondary school land use in Taipei City. Besides supporting prosperous urban activities through better utilization of public facilities, the final results of this research can help improve the efficiency of urban land use in the future. Furthermore, the assessment model and research framework established in this research provide a good reference for future land use and adaptive reuse strategies for schools and other public facilities.

Keywords: adaptive reuse, analytic network process, big data, land use strategy

Procedia PDF Downloads 191