Search results for: base flow index
649 Sensory Ethnography and Interaction Design in Immersive Higher Education
Authors: Anna-Kaisa Sjolund
Abstract:
The doctoral thesis examines interaction design and sensory ethnography as tools for creating immersive educational environments. In recent years, there has been increasing interest and discussion among researchers and educators on immersive education, such as augmented reality tools and virtual glasses, and on the possibilities of utilizing them in education at all levels. Using virtual devices as learning environments makes it possible to create multisensory learning environments. Sensory ethnography in this study refers to a way of considering how the senses shape information dynamics in immersive learning environments. The past decade has seen rapid development in virtual world research and virtual ethnography. Christine Hine's Virtual Ethnography offers an anthropological account of online behavior and changing communication. Despite her groundbreaking work, time has changed users' communication styles and brought new ways of doing ethnographic research. Virtual reality, with all its new potential, has come to the fore, engaging all the senses. Film and image have played an important role in cultural research for centuries; only the focus has shifted across periods and fields of research. According to Karin Becker, the role of the image in our society is information flow, and she identified two meanings of what the study of visual culture is. Images and pictures are the artifacts of visual culture. Images can be viewed as a symbolic language that enables digital storytelling. By combining the sense of sight with the other senses, such as hearing, touch, taste, smell, and balance, the use of a virtual learning environment offers students a way to absorb large amounts of information more easily. It also offers teachers different ways to produce study material. In this article, the core question is approached using sensory ethnography as a research tool. Sensory ethnography is used to describe information dynamics in an immersive environment through interaction design. An immersive educational environment is understood as a three-dimensional, interactive learning environment in which the audiovisual aspects are central but all senses can be taken into consideration. When designing learning environments, or any digital service, interaction design is always needed. The question of what interaction design is remains justified, because there is no simple or consistent idea of what interaction design is, how it can be used as a research method, or whether it is only a description of practical actions. When discussing immersive learning environments or their construction, consideration should be given to interaction design and sensory ethnography.
Keywords: immersive education, sensory ethnography, interaction design, information dynamics
Procedia PDF Downloads 138
648 Antiangiogenic and Pro-Apoptotic Properties of Shemamruthaa: An Herbal Preparation in Experimental Mammary Carcinoma-Bearing Rats and Breast Cancer Cell Line In vitro
Authors: Nandhakumar Elumalai, Purushothaman Ayyakannu, Sachidanandam T. Panchanatham
Abstract:
Background: Understanding the basic mechanisms and factors underlying tumor growth and invasion has gained attention in recent times. The processes of angiogenesis and apoptosis are known to play a vital role in various stages of cancer. Vascular endothelial growth factor (VEGF) is well established as one of the key regulators of tumor angiogenesis, while MMPs are known for their exclusive ability to degrade the ECM. Objective: The present study was designed to evaluate the pro-apoptotic and antiangiogenic activity of the herbal formulation Shemamruthaa. The anticancer activity of Shemamruthaa was tested in the breast cancer cell line MCF-7. Results of MTT, trypan blue, and flow cytometric analyses of apoptosis suggested that Shemamruthaa can induce cytotoxicity in cancer cells in a concentration- and time-dependent manner and induce apoptosis. With these results, we further evaluated the antiangiogenic and pro-apoptotic activities of Shemamruthaa in DMBA-induced mammary carcinoma in Sprague-Dawley rats. Mammary tumours were induced in 8-week-old Sprague-Dawley rats by gastric intubation of 25 mg DMBA in 1 ml olive oil. After a 90-day induction period, the rats were orally administered Shemamruthaa (400 mg/kg body wt) for 45 days. Treatment with the drug SM significantly modulated the expression of p53, MMP-2, MMP-3, MMP-9, and VEGF by means of its antiangiogenic and protease-inhibiting activity. Conclusion: Based on these results, it may be concluded that the formulation Shemamruthaa, constituted of dried flowers of Hibiscus rosa-sinensis, fruits of Emblica officinalis, and honey, exhibits pronounced antiproliferative and apoptotic effects. This enhanced anticancer effect of Shemamruthaa might be attributed to the synergistic action of polyphenols such as flavonoids, tannins, alkaloids, glycosides, saponins, steroids, terpenoids, vitamin C, niacin, pyrogallol, hydroxymethylfurfural, trilinolein, and other compounds present in the formulation. Collectively, these results demonstrate that Shemamruthaa holds potential to be developed as a potent chemotherapeutic agent against mammary carcinoma.
Keywords: Shemamruthaa, flavonoids, MCF-7 cell line, mammary cancer
Procedia PDF Downloads 252
647 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees
Authors: Alexandru-Ion Marinescu
Abstract:
There exists a plethora of methods in the scientific literature that tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e., “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of eight, of which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic, respectively. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution
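A minimal sketch of the evolved-expression idea (written in Python rather than the authors' C# LINQ expression trees; all names and the scoring rule are illustrative, not the study's code), showing operator/leaf nodes, pre-order flattening, and subtree crossover:

    import random
    import operator

    # Operator nodes mirror the arithmetic set above; leaves hold either a
    # constant or an index into the applicant's property vector.
    OPS = {"add": operator.add, "sub": operator.sub,
           "mul": operator.mul,
           "div": lambda a, b: a / b if b else 1.0}  # protected division

    def random_tree(n_vars, depth=3):
        if depth == 0 or random.random() < 0.3:
            if random.random() < 0.5:
                return ("var", random.randrange(n_vars))
            return ("const", random.uniform(-1.0, 1.0))
        op = random.choice(list(OPS))
        return (op, random_tree(n_vars, depth - 1),
                    random_tree(n_vars, depth - 1))

    def evaluate(tree, x):
        tag = tree[0]
        if tag == "var":
            return x[tree[1]]
        if tag == "const":
            return tree[1]
        return OPS[tag](evaluate(tree[1], x), evaluate(tree[2], x))

    def flatten(tree):
        # Pre-order traversal: the flattened list that crossover picks from.
        out = [tree]
        if tree[0] in OPS:
            out += flatten(tree[1]) + flatten(tree[2])
        return out

    def crossover(a, b):
        # Swap a random subtree of b into a random position of a.
        donor = random.choice(flatten(b))
        target = random.choice(flatten(a))
        def rebuild(t):
            if t is target:
                return donor
            if t[0] in OPS:
                return (t[0], rebuild(t[1]), rebuild(t[2]))
            return t
        return rebuild(a)

    # Classify one applicant, e.g. score > 0 -> "GOOD", else "BAD".
    tree = random_tree(n_vars=6)
    print(evaluate(tree, [35, 1, 1, 24, 0.4, 2]))

Variables never referenced by a surviving tree simply drop out of the formula, which is the dimensionality-reduction effect the abstract describes.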
Procedia PDF Downloads 120
646 Study of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans Dispersion in the Environment of a Municipal Solid Waste Incinerator
Authors: Gómez R. Marta, Martín M. Jesús María
Abstract:
The general aim of this paper is to identify the areas of highest concentration of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) around the incinerator through the use of dispersion models. Atmospheric dispersion models are useful tools for estimating and preventing the impact of emissions from a particular source on air quality. These models allow consideration of the different factors that influence air pollution: source characteristics, the topography of the receiving environment, and weather conditions, in order to predict pollutant concentrations. PCDD/Fs, after their emission into the atmosphere, are deposited on water or land, near or far from the emission source, depending on the size of the associated particles and the climatology. In this way, they are transferred and mobilized through environmental compartments. The modelling of PCDD/Fs was carried out with the following tools: Atmospheric Dispersion Model Software (ADMS) and Surfer. ADMS is a Gaussian plume dispersion model used to model the air quality impact of industrial facilities, and Surfer is a surface-mapping program used to represent the dispersion of pollutants on a map. For the modelling of emissions, ADMS mainly requires the following input parameters: characterization of emission sources (source type, height, diameter, temperature of the release, flow rate, etc.) and meteorological and topographical data (coordinate system). The study area was set at 5 km around the incinerator, and the first population centre nearest to the PCDD/F emission focus is approximately 2.5 km away. Data were collected during one year (2013), both on the PCDD/F emissions of the incinerator and on the meteorology of the study area. The study was carried out over the averaging periods that legislation establishes; that is to say, the output parameters take into account the current legislation. Once all the data required by ADMS, described previously, were entered, the modelling was carried out in order to represent the spatial distribution of PCDD/F concentrations and the areas they affect. In general, the dispersion plume follows the direction of the predominant winds (southwest and northeast). Total levels of PCDD/Fs usually found in air samples range from <2 pg/m³ for remote rural areas, through 2-15 pg/m³ in urban areas, to 15-200 pg/m³ for areas near important sources, such as an incinerator. The results of the dispersion maps show that maximum concentrations are of the order of 10⁻⁸ ng/m³, well below the values considered for areas close to an incinerator, as in this case.
Keywords: atmospheric dispersion, dioxin, furan, incinerator
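For orientation, the core of a Gaussian plume model such as ADMS is the reflected-plume concentration formula; a bare-bones Python sketch (with crude illustrative dispersion coefficients, not ADMS's stability-dependent parameterisation) is:

    import numpy as np

    def gaussian_plume(q, u, h, x, y, z):
        # q: emission rate (g/s), u: wind speed (m/s), h: effective stack
        # height (m); x downwind, y crosswind, z vertical (m), with x > 0.
        sy = 0.08 * x**0.90   # illustrative sigma_y growth with distance
        sz = 0.06 * x**0.85   # illustrative sigma_z growth with distance
        return (q / (2 * np.pi * u * sy * sz)
                * np.exp(-y**2 / (2 * sy**2))
                * (np.exp(-(z - h)**2 / (2 * sz**2))      # direct plume
                   + np.exp(-(z + h)**2 / (2 * sz**2))))  # ground reflection

    # ground-level concentration 2.5 km downwind on the plume centreline
    print(gaussian_plume(q=1e-9, u=4.0, h=30.0, x=2500.0, y=0.0, z=0.0))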
Procedia PDF Downloads 217
645 A Method Intensive Top-down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India
Authors: Rituparna Pal, Faiz Ahmed
Abstract:
Neighbourhood energy efficiency is a newly emerged term addressing the quality of the urban stratum of the built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged policymakers developing urban-scale cities to envision plans under the aegis of urban-scale sustainability. The importance of neighbourhood energy efficiency has been realized only lately, just as the cities, towns, and other areas comprising this massive global urban stratum have started facing strong blows from climate change, the energy crisis, cost hikes, and an alarming shortfall in the justice which urban areas require. This step towards urban sustainability can therefore more readily be described as a 'retrofit action', covering up an already affected urban structure. So even if we pursue energy efficiency for existing cities and urban areas, the initial layer remains, for which a complete model of urban sustainability still lacks definition. Urban sustainability is a broadly discussed term with an endless number of parameters and policies through which the loop can be closed, of which neighbourhood energy efficiency can be an integral part: the concepts and indices of neighbourhood-scale indicators, block-level indicators, and building physics parameters can be understood, analyzed, and synthesized to help derive guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy efficiency but also in important parameters such as quality of life, access to green space, access to daylight, outdoor comfort, natural ventilation, etc. So apart from designing less energy-hungry buildings, it is necessary to create a built environment that puts less stress on buildings to consume more energy. Much literature analysis has been done in Western countries, prominently Spain and Paris, and also in Hong Kong, leaving a distinct gap in the Indian scenario in exploring sustainability at the urban stratum. The site for the study has been selected in the upcoming capital city of Amaravati, and the approach can be replicated for similar neighbourhood typologies in the area. The paper proposes a methodical approach to quantify energy and sustainability indices in detail by involving several macro-, meso-, and micro-level covariates and parameters. Several iterations have been made at both macro and micro levels and have been subjected to simulation, computation, and mathematical models, and finally to comparative analysis. Parameters at all levels are analyzed to identify the best-case scenarios, which in turn are extrapolated to the macro level, finally yielding a proposed model for an energy-efficient neighbourhood and worked-out guidelines with the derived significances and correlations.
Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters
Procedia PDF Downloads 177
644 Shared Versus Pooled Automated Vehicles: Exploring Behavioral Intentions Towards On-Demand Automated Vehicles
Authors: Samira Hamiditehrani
Abstract:
Automated vehicles (AVs) are emerging technologies that could potentially offer a wide range of opportunities and challenges for the transportation sector. The advent of AV technology has also resulted in new business models in shared mobility services, where many ride-hailing and car-sharing companies are developing on-demand AVs, including shared automated vehicles (SAVs) and pooled automated vehicles (Pooled AVs). SAVs and Pooled AVs could provide alternative shared mobility services which encourage sustainable transport systems, mitigate traffic congestion, and reduce automobile dependency. However, the success of on-demand AVs in addressing major transportation policy issues depends on whether and how the public adopts them as regular travel modes. To identify conditions under which individuals may adopt on-demand AVs, previous studies have applied human behavior and technology acceptance theories, among which the Theory of Planned Behavior (TPB) has been validated and is among the most tested in on-demand AV research. In this respect, this study has three objectives: (a) to propose and validate a theoretical model for behavioral intention to use SAVs and Pooled AVs by extending the original TPB model; (b) to identify the characteristics of early adopters of SAVs, who prefer a shorter and private ride, versus prospective users of Pooled AVs, who choose more affordable but longer and shared trips; and (c) to investigate Canadians' intentions to adopt on-demand AVs for regular trips. Toward this end, this study uses data from an online survey (n = 3,622) of workers or adult students (18 to 75 years old) conducted in October and November 2021 in six major Canadian metropolitan areas: Toronto, Vancouver, Ottawa, Montreal, Calgary, and Hamilton. To accomplish the goals of this study, a base bivariate ordered probit model, in which both SAV and Pooled AV adoption are estimated as ordered dependent variables, and a full structural equation modeling (SEM) system are estimated. The findings of this study indicate that affective motivations, such as attitude towards AV technology, perceived privacy, and subjective norms, matter more than sociodemographic and travel behavior characteristics in adopting on-demand AVs. The results for the second objective provide evidence that although a few affective motivations, such as subjective norms and having ample knowledge, are common between early adopters of SAVs and Pooled AVs, many of the examined motivations differ between SAV and Pooled AV adoption factors. In other words, the motivations influencing intention to use on-demand AVs differ between the service types. Likewise, depending on the type of on-demand AV, the sociodemographic characteristics of early adopters differ significantly. In general, the findings paint a complex picture with respect to the application of constructs from common technology adoption models to the study of on-demand AVs. Findings from the final objective suggest that policymakers, planners, the vehicle and technology industries, and the public at large should moderate their expectations that on-demand AVs may suddenly transform the entire transportation sector. Instead, this study suggests that SAVs and Pooled AVs (when they enter the Canadian market) are likely to be adopted as supplementary mobility tools rather than substitutes for current travel modes.
Keywords: automated vehicles, Canadian perception, theory of planned behavior, on-demand AVs
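As a pointer to the estimation machinery, a univariate ordered probit log-likelihood can be written in a few lines of Python. The study's base model is a bivariate ordered probit (two correlated equations, one each for SAV and Pooled AV adoption), for which this single-equation sketch is only the building block; all names are illustrative:

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    def ordered_probit_nll(params, X, y, n_cats):
        # params = [beta (k entries), first cut, log-increments (n_cats - 2)]
        k = X.shape[1]
        beta = params[:k]
        # cumulative sum of positive increments keeps thresholds increasing
        cuts = np.cumsum(np.concatenate(([params[k]],
                                         np.exp(params[k + 1:]))))
        xb = X @ beta
        upper = np.where(y == n_cats - 1, np.inf,
                         cuts[np.minimum(y, n_cats - 2)])
        lower = np.where(y == 0, -np.inf, cuts[np.maximum(y - 1, 0)])
        p = norm.cdf(upper - xb) - norm.cdf(lower - xb)
        return -np.sum(np.log(np.clip(p, 1e-12, None)))

    # fit with, e.g.:
    # res = minimize(ordered_probit_nll, x0, args=(X, y, 5), method="BFGS")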
Procedia PDF Downloads 74
643 Hydrodynamics and Hydro-acoustics of Fish Schools: Insights from Computational Models
Authors: Ji Zhou, Jung Hee Seo, Rajat Mittal
Abstract:
Fish move in groups for foraging, reproduction, predator protection, and hydrodynamic efficiency. The predator protection afforded by schooling involves the "many eyes" theory: the probability of detecting a predator increases in a group. The reduced visual signature of an individual in a group scales with school size, offering per-capita protection, and the 'confusion effect' makes it hard for predators to target prey in a group. These benefits, however, all concern vision-based sensing, overlooking sound-based detection. Fish, including predators, possess sophisticated sensory systems for pressure waves and underwater sound: the lateral line system detects acoustic waves, the otolith organs sense infrasound, and sharks use an auditory system tuned to low-frequency sounds. Among the sound generation mechanisms of fish, dipole sound arises from the hydrodynamic pressure forces on the body surface of the fish, and this pressure would be affected by group swimming. Thus, swimming within a group could affect the hydrodynamic noise signature of a fish and possibly serve as an additional protection afforded by schooling, but no studies to date have explored this effect. In addition, BAUVs with fin-like propulsors could reduce acoustic noise without compromising performance, addressing anthropogenic noise pollution in marine environments. Therefore, in this study, we used our in-house immersed-boundary flow and acoustic solver, ViCar3D, to simulate fish schools consisting of four swimmers in the classic 'diamond' configuration, and we discuss the feasibility of achieving higher swimming efficiency and controlling the far-field sound signature of the school. We examine the effects of the relative phase of fin flapping of the swimmers, and the simulation results indicate that the phase of fin flapping is a dominant factor in both thrust enhancement and the total sound radiated into the far field by a group of swimmers. For fish in the "diamond" configuration, a suitable combination of relative phase differences between the pair of leading fish and the trailing fish can result in better swimming performance with significantly lower hydroacoustic noise.
Keywords: fish schooling, biopropulsion, hydrodynamics, hydroacoustics
Procedia PDF Downloads 64
642 Triple Case Phantom Tumor of Lungs
Authors: Angelis P. Barlampas
Abstract:
Introduction: The term phantom lung mass describes an ovoid collection of fluid within an interlobular fissure, which initially creates the impression of a mass. The problem of correct differential diagnosis is considerable, especially in plain radiography. A case is presented with three nodular pulmonary foci whose shape, location, and density, together with the presence of chronic loculated pleural effusions, suggest multiple phantom tumors of the lung. Purpose: The aim of this paper is to draw the attention of non-experienced and non-specialized physicians to the existence of benign findings that mimic pathological conditions and vice versa. Careful study of a radiological examination and comparison with previous exams or further workup protect against hasty wrong conclusions. Methods: A hospitalized patient underwent a non-contrast CT scan of the chest as part of the general assessment of her condition. Results: Computed tomography revealed pleural effusions, some of them loculated, an increased cardiothoracic index, and the presence of three nodular foci, one in the left lung and two in the right, with a maximum density of up to 18 Hounsfield units and a mean diameter of approximately five centimeters. Two of them are located in the characteristic anatomical position of the major interlobular fissure. The third is located in the posterior basal part of the right lower lobe; it presents the same characteristics as the previous ones and is likely a loculated fluid collection within an accessory interlobular fissure or a cyst, in the context of the patient's more general pleural entrapments and loculations. The differential diagnosis of nodular foci based on their imaging characteristics includes the following: a) rare metastatic foci with low density (liposarcoma, mucinous tumors of the digestive or genital system, necrotic metastatic foci, metastatic renal cancer, etc.), b) necrotic multiple primary lung tumor locations (squamous cell carcinoma, etc.), c) hamartomas of the lung, d) fibrous tumors of the interlobular fissures, e) lipoid pneumonia, f) fluid collections within the interlobular fissures, g) lipoma of the lung, h) myelolipomas of the lung. Conclusions: A collection of fluid within an interlobular fissure of the lung can give the false impression of a lung mass, particularly on plain chest radiography. With computed tomography, the ability to measure the density of a lesion, combined with the high anatomical detail it provides on the location and characteristics of the lesion, can lead relatively easily to the correct diagnosis. In cases of doubt or image artifacts, comparison with previous or subsequent examinations can resolve any disagreement, while in rare cases, intravenous contrast may be necessary.
Keywords: phantom mass, chest CT, pleural effusion, cancer
Procedia PDF Downloads 55
641 Basics of Gamma Ray Burst and Its Afterglow
Authors: Swapnil Kumar Singh
Abstract:
Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ-rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The detection of delayed emission in X-ray, optical, and radio wavelengths, or "afterglow," following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs do appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After the initial flash of gamma rays, a longer-lived "afterglow" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio): a slowly fading emission created by collisions between the burst ejecta and interstellar gas. In X-ray wavelengths, the GRB afterglow fades quickly at first, then transitions to a less steep drop-off (further behavior follows, which we ignore here). During these early phases, the X-ray afterglow has a spectrum that looks like a power law: flux F ∝ E^β, where E is energy and β is a number called the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta. In many ways, "reverse" can be misleading: this shock is still moving outward from the rest frame of the star at relativistic velocity but is ploughing backward through the ejecta in their frame and is slowing the expansion. This reverse shock can be dynamically important, as it can carry energy comparable to the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations support this broad picture for afterglows in the spectral energy distribution of the afterglow of very bright GRBs. The bluer light (optical and X-ray) appears to follow the typical synchrotron forward-shock expectation (the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). We need more research in GRB and particle physics in order to unfold the mysteries of the afterglow.
Keywords: GRB, synchrotron, X-ray, isotropic energy
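The power-law spectrum mentioned above is commonly characterised by fitting a straight line in log-log space; a small Python sketch with synthetic numbers (real afterglow fits also correct for absorption and dust, which are ignored here):

    import numpy as np

    def spectral_index(energies, fluxes):
        # F ∝ E^beta  =>  log F = beta * log E + const
        beta, _ = np.polyfit(np.log(energies), np.log(fluxes), 1)
        return beta

    E = np.array([1.0, 2.0, 5.0, 10.0])   # keV, synthetic energy grid
    F = 3.0 * E ** -1.2                   # fake afterglow fluxes
    print(spectral_index(E, F))           # recovers beta ≈ -1.2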
Procedia PDF Downloads 89
640 Exploring the Relationship between Mediolateral Center of Pressure and Galvanic Skin Response during Balance Tasks
Authors: Karlee J. Hall, Mark Laylor, Jessy Varghese, Paula Polastri, Karen Van Ooteghem, William McIlroy
Abstract:
Balance training is a common part of physiotherapy treatment and often involves a set of proprioceptive exercises which the patient carries out in the clinic and as part of their exercise program. Understanding all the factors contributing to altered balance is of utmost importance to the clinical success of treating balance dysfunctions. A critical role for the autonomic nervous system (ANS) in the control of balance reactions has been proposed previously, with evidence for potential involvement inferred from the observation of phasic galvanic skin responses (GSR) evoked by external balance perturbations. The current study explored whether the coupling between ANS reactivity and balance reactions would be observed during spontaneously occurring instability while standing, including standard positions typical of physiotherapy balance assessments. It was hypothesized that time-varying changes in GSR (ANS reactivity) would be associated with time-varying changes in the mediolateral center of pressure (ML-COP) (somatomotor reactivity). Nine individuals (5 females, 4 males, aged 19-37 years) were recruited. To induce varying balance demands during standing, the study compared ML-COP and GSR data across task conditions varying the availability of vision and the width of the base of support. Subjects completed three 30-second trials for each of the following stance conditions: standard, narrow, and tandem eyes closed; tandem eyes open; tandem eyes open with a dome to shield visual input; and restricted peripheral visual field. ANS activity was evaluated by measures of GSR recorded from Ag-AgCl electrodes on the middle phalanges of digits 2 and 4 of the left hand; balance measures included ML-COP excursion frequency and amplitude recorded from two force plates embedded in the floor underneath each foot. Subjects were instructed to stand as still as possible with arms crossed in front of their chest. When comparing mean task differences across subjects, there was the expected increase in postural sway from tasks with a wide stance and no sensory restrictions (least challenging) to those with a narrow stance and no vision (most challenging). The correlation analysis revealed a significant positive relationship between ML-COP variability and GSR variability across tasks (r=0.94, df=5, p < 0.05). In addition, correlations computed within each subject revealed a significant positive correlation in 7 participants (r=0.47, 0.57, 0.62, 0.62, 0.81, 0.64, and 0.69, respectively; df=19, p < 0.05) and no significant relationship in 2 participants (r=0.36 and 0.29, respectively; df=19, p > 0.05). The current study revealed a significant relationship between ML-COP and GSR during balance tasks, revealing ANS reactivity associated with naturally occurring instability during quiet standing that is proportional to the degree of instability. Understanding the link between ANS activity and control of the COP is an important step forward in enhancing the assessment of factors contributing to poor balance and the treatment of balance dysfunctions. The next steps will explore the temporal association between the time-varying changes in COP and GSR to establish whether the ANS reactivity leads or lags the evoked motor reactions, as well as exploring potential biomarkers for use in clinical screening of ANS activity as a contributing factor to altered balance control.
Keywords: autonomic nervous system, balance control, center of pressure, somatic nervous system
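The across-task correlation reported above (r = 0.94, df = 5) corresponds to seven paired observations, one per stance condition; a short Python sketch with hypothetical values (not the study's data) shows the computation:

    import numpy as np
    from scipy.stats import pearsonr

    # one (ML-COP variability, GSR variability) pair per stance condition
    ml_cop_var = np.array([1.2, 1.6, 2.1, 2.9, 3.4, 4.6, 5.3])   # hypothetical
    gsr_var    = np.array([0.31, 0.40, 0.52, 0.63, 0.71, 0.96, 1.08])

    r, p = pearsonr(ml_cop_var, gsr_var)   # df = n - 2 = 5, as in the abstract
    print(f"r = {r:.2f}, p = {p:.4f}")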
Procedia PDF Downloads 168
639 Multi-Scale Damage Modelling for Microstructure Dependent Short Fiber Reinforced Composite Structure Design
Authors: Joseph Fitoussi, Mohammadali Shirinbayan, Abbas Tcharkhtchi
Abstract:
Due to material flow during processing, short fiber reinforced composite (SFRC) structures obtained by injection or compression molding generally present strong spatial microstructure variation. On the other hand, the quasi-static, dynamic, and fatigue behavior of these materials is highly dependent on microstructure parameters such as the fiber orientation distribution. Indeed, because of complex damage mechanisms, SFRC structure design is a key challenge for safety and reliability. In this paper, we propose a micromechanical model allowing prediction of the damage behavior of real structures as a function of the spatial microstructure distribution. To this aim, a statistical damage criterion including strain rate and fatigue effects at the local scale is introduced into a Mori-Tanaka model. A critical local damage state is identified, allowing fatigue life prediction. Moreover, the multi-scale model is coupled with an experimentally established intrinsic link between damage under monotonic loading and fatigue life in order to build a design chart (abacus) giving Tsai-Wu failure criterion parameters as a function of microstructure and targeted fatigue life. On the other hand, the micromechanical damage model gives access to the evolution of the anisotropic stiffness tensor of SFRCs subjected to complex thermomechanical loading, including quasi-static, dynamic, and cyclic loading with temperature and amplitude variations. The latter is then used to fill out microstructure-dependent material cards in finite element analysis for design optimization in the case of complex loading histories. The proposed methodology is illustrated in the case of a real automotive component made of sheet molding compound (PSA 3008 tailgate). The obtained results emphasize how the proposed micromechanical methodology opens a new path for the automotive industry to lighten vehicle bodies and thereby save energy and reduce gas emissions.
Keywords: short fiber reinforced composite, structural design, damage, micromechanical modelling, fatigue, strain rate effect
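For reference, the plane-stress Tsai-Wu criterion that the proposed design chart parameterises can be evaluated as below (a standard textbook form with the common default interaction term; the strength values are placeholders, not the study's identified parameters):

    def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
        # Xt/Xc, Yt/Yc: tensile/compressive strengths (positive magnitudes),
        # S: in-plane shear strength; index >= 1 means predicted failure.
        F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
        F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S**2
        F12 = -0.5 * (F11 * F22) ** 0.5   # common default interaction term
        return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
                + F66 * t12**2 + 2 * F12 * s1 * s2)

    # placeholder strengths (MPa) and a trial in-plane stress state
    print(tsai_wu_index(s1=80, s2=20, t12=30,
                        Xt=150, Xc=130, Yt=60, Yc=110, S=70))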
Procedia PDF Downloads 109
638 A Differential Scanning Calorimetric Study of Frozen Liquid Egg Yolk Thawed by Different Thawing Methods
Authors: Karina I. Hidas, Csaba Németh, Anna Visy, Judit Csonka, László Friedrich, Ildikó Cs. Nyulas-Zeke
Abstract:
Egg yolk is a popular ingredient in the food industry due to its gelling, emulsifying, colouring, and coagulating properties. Because of the heat sensitivity of its proteins, egg yolk can only be heat treated at low temperatures, so its shelf life, even with the addition of a preservative, is only a few weeks. Freezing can increase the shelf life of liquid egg yolk up to 1 year, but yolk undergoes gelling below -6°C, which is an irreversible phenomenon. The degree of gelation depends on the time and temperature of freezing and is influenced by the thawing process. Therefore, in our experiment, we examined egg yolks thawed in different ways. In this study, unpasteurized, industrially broken, separated, and homogenized liquid egg yolk was used. Freshly produced samples were frozen in plastic containers at -18°C in a laboratory freezer. Frozen storage was carried out for 90 days. Samples were analysed at day zero (unfrozen) and after frozen storage for 1, 7, 14, 30, 60, and 90 days. Samples were thawed in two ways (at 5°C for 24 hours and at 30°C for 3 hours) before testing. Calorimetric properties were examined by differential scanning calorimetry, where heat flow curves were recorded. Denaturation enthalpy values were calculated by fitting a linear baseline, and denaturation temperature values were evaluated. In addition, the dry matter content of the samples was measured by the oven method, with drying at 105°C to constant weight. For statistical analysis, two-way ANOVA (α = 0.05) was employed, with thawing mode and freezing time as the fixed factors. Denaturation enthalpy values decreased from 1.1 to 0.47 by the end of the storage experiment, a reduction of about 60%. The effect of freezing time on these values was significant; the enthalpy of samples stored frozen for only 1 day was already significantly reduced. However, the mode of thawing did not significantly affect the denaturation enthalpy of the samples, and no interaction was seen between the two factors. The denaturation temperature and dry matter content did not change significantly, either over the freezing period or between thawing modes. The results of our study show that slow freezing and frozen storage at -18°C greatly reduce the amount of protein that can be denatured in egg yolk, indicating that the proteins were subjected to aggregation, denaturation, or other protein conversions regardless of how they were thawed.
Keywords: denaturation enthalpy, differential scanning calorimetry, liquid egg yolk, slow freezing
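The two-way fixed-factor design described above can be reproduced with a few lines of Python; the enthalpy values below are fabricated placeholders that only mirror the design (two thawing modes x storage days, with replicates), not the measured data:

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rows = []
    for days, base in [(0, 1.10), (1, 0.85), (30, 0.60), (90, 0.48)]:
        for thaw in ["5C_24h", "30C_3h"]:
            for rep in range(3):                     # replicate readings
                rows.append({"days": days, "thaw": thaw,
                             "enthalpy": base + 0.01 * rep})
    df = pd.DataFrame(rows)

    model = ols("enthalpy ~ C(thaw) * C(days)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # both factors plus interaction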
Procedia PDF Downloads 130
637 Integrating Circular Economy Framework into Life Cycle Analysis: An Exploratory Study Applied to Geothermal Power Generation Technologies
Authors: Jingyi Li, Laurence Stamford, Alejandro Gallego-Schmid
Abstract:
Renewable electricity has become an indispensable contributor to achieving net-zero by mid-century to tackle climate change. Unlike solar, wind, or hydro, geothermal electricity production was stagnant in its development for decades. However, with the significant breakthroughs made in recent years, especially the implementation of enhanced geothermal systems (EGS) in various regions globally, geothermal electricity could play a pivotal role in alleviating greenhouse gas emissions. Life cycle assessment has been applied to analyze specific geothermal power generation technologies, and such studies have proposed suggestions to optimize their environmental performance. For instance, selecting a region with a high heat gradient enables a higher flow rate from the production well and extends the technical lifespan. Although such process-level improvements have been made, the competitiveness of geothermal power generation technologies has so far not been explicitly demonstrated on a broader horizon. Therefore, this review-based study integrates a circular economy framework into life cycle assessment, clarifying the underlying added values for geothermal power plants to complete the sustainability profile. The derived results provide an enlarged platform on which to discuss geothermal power generation technologies: (i) recover the heat and electricity from the process to reduce fossil fuel requirements; (ii) recycle construction materials, such as copper, steel, and aluminum, for future projects; (iii) extract lithium ions from geothermal brine, making the geothermal reservoir a potential supplier to the lithium battery industry; (iv) repurpose abandoned oil and gas wells to build geothermal power plants; (v) integrate geothermal energy with other available renewable energies (e.g., solar and wind) to provide heat and electricity as a hybrid system under different weather conditions; (vi) rethink the fluids used in the stimulation process (EGS only), replacing water with CO2 to achieve negative emissions from the system. These results provide a new perspective for researchers, investors, and policymakers to rethink the role of geothermal in the energy supply network.
Keywords: climate, renewable energy, R strategies, sustainability
Procedia PDF Downloads 137
636 Marketing in the Fashion Industry and Its Critical Success Factors: The Case of Fashion Dealers in Ghana
Authors: Kumalbeo Paul Kamani
Abstract:
Marketing plays a very important role in the success of any firm, since it represents the means through which a firm can reach its customers and promote its products and services. In fact, marketing aids the firm in identifying customers whom the business can competitively serve and in tailoring product offerings, prices, distribution, promotional efforts, and services towards those customers. Unfortunately, in many firms, marketing has been reduced to mere advertisement. For effective marketing, firms must go beyond this often-limited function of advertisement. In the fashion industry in particular, marketing faces challenges due to the industry's peculiar characteristics. Previous research, for instance, affirms the idiosyncrasies and peculiarities that differentiate the fashion industry from other industrial areas. It has been documented that the fashion industry is characterized by seasonal intensity, short product life cycles, the difficulty of competitive differentiation, and the long time companies need to reach financial stability. These factors are noted to pose obstacles to fashion entrepreneurs' endeavours and can explain their low survival rates. In recent times, the fashion industry has been described as an accessible market with low entry barriers, both in terms of needed capital and skills, which have all accounted for the burgeoning number of startups. Yet, as already stated, marketing is particularly challenging in the industry. In particular, areas such as marketing, branding, growth, project planning, and financial and relationship management may represent challenges for the fashion entrepreneur that have not been properly addressed by previous research. It is therefore important to assess the marketing strategies of fashion firms and the factors influencing their success. This study generally sought to examine the marketing strategies of fashion dealers in Ghana and their critical success factors. The study employed the quantitative survey research approach. A total of 120 fashion dealers were sampled. Questionnaires were used as the instrument of data collection. Data collected were analysed using quantitative techniques, including descriptive statistics and the Relative Importance Index. The study revealed that the marketing strategies used by fashion dealers are text messages using mobile phones, referrals, social media marketing, and direct marketing. Results again show that the factors influencing fashion marketing effectiveness are strategic management, the marketing mix (product, price, promotion, etc.), branding, and business development. Policy implications are finally outlined. The study recommends, among others, that top management executives craft and adopt marketing strategies that are compatible with fashion trends and the needs of customers. This will improve customer satisfaction and hence boost market penetration. The study further recommends that the fashion industry in Ghana should seek to ensure that fashion apparel accommodates the diversity and cultural settings of different customers to meet their unique needs.
Keywords: marketing, fashion, industry, success factors
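The Relative Importance Index used in the analysis has a simple closed form, RII = ΣW / (A × N), where W is each respondent's rating, A the top of the rating scale, and N the number of respondents; a minimal Python sketch with made-up ratings:

    def relative_importance_index(ratings, scale_max=5):
        # ratings: one Likert score (1..scale_max) per respondent;
        # values near 1 mark the most important factors.
        return sum(ratings) / (scale_max * len(ratings))

    # hypothetical scores from a handful of the 120 surveyed dealers
    print(relative_importance_index([5, 4, 4, 5, 3, 4]))  # ≈ 0.83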
Procedia PDF Downloads 45
635 Technology Management for Early Stage Technologies
Authors: Ming Zhou, Taeho Park
Abstract:
Early stage technologies have been particularly challenging to manage due to their numerous, high-degree uncertainties. Most research results coming directly out of a research lab tend to be at an early, if not infant, stage. A long and uncertain commercialization process awaits these lab results. The majority of such lab technologies go nowhere and never get commercialized for various reasons, and any efforts or financial resources put into managing them turn fruitless. High stakes naturally call for better results, which makes a patenting decision harder to make. A good and well-protected patent goes a long way towards commercialization of a technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation. Most studies to date have been theoretical and overly comprehensive, with practical suggestions non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to get things moving to the later steps of managing early stage technologies. We provide a procedure for thrifty valuation and the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to be used in the patenting decision using survey ratings. The ratings then assisted us in generating good relative weights for the subsequent scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts. Their inputs produced a general yet highly practical cut schedule; such a schedule of realistic practices has yet to be seen in current research. Although a technology office may choose to deviate from our cuts, what we offer here at least provides a simple and meaningful starting point. This procedure was welcomed by practitioners in our expert panel and by university officers in our interview group. This research contributes to the current understanding and practice of managing early stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings generated the top decision factors, decision processes, and decision thresholds of key parameters. This research offers a more practical perspective which further completes the extant knowledge. Our results could be affected by our sample size and even biased somewhat by our focus on the Silicon Valley area. Future research, blessed with a bigger data size and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and to analyze our decision factors for different industries.
Keywords: technology management, early stage technology, patent, decision
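A weighted-average patenting index of the kind described might look as follows in Python; the factor names, weights, and cut threshold are all invented for illustration, not the study's survey-derived values:

    def patenting_index(scores, weights):
        # weighted average of expert scores using survey-derived weights
        assert abs(sum(weights.values()) - 1.0) < 1e-9
        return sum(weights[f] * scores[f] for f in weights)

    weights = {"novelty": 0.30, "market_size": 0.25,      # illustrative
               "enforceability": 0.25, "maturity": 0.20}  # weights only
    scores = {"novelty": 8, "market_size": 6,             # 0-10 expert scores
              "enforceability": 7, "maturity": 4}

    index = patenting_index(scores, weights)
    print("file" if index >= 6.5 else "defer")            # assumed cut value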
Procedia PDF Downloads 343
634 Adaptability in Older People: A Mixed Methods Approach
Authors: V. Moser-Siegmeth, M. C. Gambal, M. Jelovcak, B. Prytek, I. Swietalsky, D. Würzl, C. Fida, V. Mühlegger
Abstract:
Adaptability is the capacity to adjust without great difficulty to changing circumstances. Within our project, we aimed to detect whether older people living in a long-term care hospital lose the ability to adapt. Theoretical concepts are contradictory in their statements, and there is also a lack of evidence in the literature on how the adaptability of older people changes over time. The following research questions were generated: Are older residents of a long-term care facility able to adapt to changes in their daily routine? How long does it take for older people to adapt? The study was designed as a convergent parallel mixed methods intervention study, carried out over a four-month period in seven wards of a long-term care hospital. As the planned intervention, a change of meal times was established. The residents were surveyed with qualitative interviews, quantitative questionnaires, and diaries before, during, and after the intervention. In addition, a survey of the nursing staff was carried out in order to detect changes in the people they care for and how long it took them to adapt. Quantitative data were analysed with SPSS, qualitative data with a summarizing content analysis. The average age of the participating residents was 82 years; the average length of stay was 45 months. Adaptation to new situations does not cause problems for older residents. 47% of the residents state that their everyday life has not changed with the change of meal times, 24% indicate 'neither nor', and only 18% respond that their daily life has changed considerably due to the changeover. The diaries of the residents, which were kept over the entire period of investigation, showed no changes with regard to increased or reduced activity. With regard to sleep quality, assessed with the Pittsburgh Sleep Quality Index, the cross-tabulation shows little change in sleep behaviour between the two survey periods (pre-phase to follow-up phase); the subjective sleep quality of the residents was not affected. The nursing staff points out that, with good information in advance, changes are not a problem. The ability to adapt to changes does not deteriorate with age or with moving into a long-term care facility; it takes only a few days to get used to new situations, which the nursing staff confirms, although there are determinants, such as health status, that might make adjustment to new situations more difficult. In connection with the limitations, the small sample size of the quantitative data collection must be emphasized, as must the question of the extent to which the quantitative and qualitative samples represent the total population, since only residents of selected units without cognitive impairments participated, while the majority of the residents have cognitive impairments. It is also important to discuss whether and how well the diary method is suitable for older people to examine their daily structure.
Keywords: adaptability, intervention study, mixed methods, nursing home residents
Procedia PDF Downloads 149
633 Utilising Indigenous Knowledge to Design Dykes in Malawi
Authors: Martin Kleynhans, Margot Soler, Gavin Quibell
Abstract:
Malawi is one of the world's poorest nations, and consequently the design of flood risk management infrastructure comes with a different set of challenges. There is a lack of hydromet data, both in spatial coverage and in quality, and the challenge of designing flood risk management infrastructure is compounded by the fact that maintenance is almost completely non-existent and that solutions have to be simple to be effective. Solutions should not require any further resources to remain functional after completion, they should be resilient, and they have to be cost effective. The Lower Shire Valley of Malawi suffers from frequent flood events. Various flood risk management interventions were designed across the valley during the course of the Shire River Basin Management Project – Phase I, and due to the data-poor environment, indigenous knowledge was relied upon to a great extent for hydrological and hydraulic model calibration and verification. However, indigenous knowledge comes with the caveat that it is 'fuzzy' and that it can be manipulated for political reasons. The experience in the Lower Shire Valley suggests that indigenous knowledge is unlikely to invent a problem where none exists, but that flood depths and extents may be exaggerated to secure prioritization of the intervention. Indigenous knowledge relies on the memory of a community and cannot foresee events that exceed past experience, that could occur differently from those in the past, or where flood management interventions change the flow regime. This complicates communication of planned interventions to local inhabitants. Indigenous knowledge is, for the most part, intuitive, but flooding can sometimes be counter-intuitive, and the rural poor may have lower trust in technology. Due to the near-complete lack of maintenance of infrastructure, infrastructure has to be designed with no moving parts and no requirement for energy inputs. This precludes pumps, valves, flap gates, and sophisticated warning systems. Dyke designs in this project therefore included 'flood warning spillways', which double as pedestrian and animal crossing points and warn residents behind the dykes of impending dangerous water levels before levels that could cause a dyke failure are reached. Locally available materials and erosion protection using vegetation were used wherever possible to keep costs down.
Keywords: design of dykes in low-income countries, flood warning spillways, indigenous knowledge, Malawi
Procedia PDF Downloads 283
632 Development of Three-Dimensional Groundwater Model for Al-Corridor Well Field, Amman–Zarqa Basin
Authors: Moayyad Shawaqfah, Ibtehal Alqdah, Amjad Adaileh
Abstract:
The Corridor area (400 km²) lies about 60 km to the north-east of Amman, between 285-305 E longitude and 165-185 N latitude (according to the Palestine Grid). It has been subjected to groundwater exploitation from eleven new wells since 1999, with a total discharge of 11 MCM in addition to the previous discharge rate from the well field of 14.7 MCM. Consequently, the aquifer balance is disturbed, and water levels have declined markedly. Therefore, suitable groundwater resources management is required to overcome the problems of over-pumping and its effect on groundwater quality. The three-dimensional groundwater flow model Processing Modflow for Windows Pro (PMWIN PRO, 2003) has been used in order to calculate the groundwater budget and aquifer characteristics, and to predict the aquifer response under different stresses for the next 20 years (to 2035). The model was calibrated for steady-state conditions by trial-and-error calibration, performed by matching observed and calculated initial heads for the year 2001. Drawdown data for the period 2001-2010 were used to calibrate the transient model by matching calculated with observed values; after that, the transient model was validated using the drawdown data for the period 2011-2014. The hydraulic conductivities of the Basalt-A7/B2 aquifer system range between 1.0 and 8.0 m/day. Low conductivity values were found in the north-western and south-western parts of the study area, high conductivity values in the north-western corner, and the average storage coefficient is about 0.025. The water balance for the Basalt and B2/A7 formations at steady state closes with a discrepancy of 0.003%. The major inflows come from Jebal Al Arab through the basalt and the limestone (B2/A7) aquifer, about 12.28 MCM/yr, and from excess rainfall, about 0.68 MCM/yr, while the major outflows from the Basalt-B2/A7 aquifer system are toward the Azraq basin, about 5.03 MCM/yr, and leakage to the A1/6 aquitard, about 7.89 MCM/yr. Four scenarios were run to predict the aquifer system responses under different conditions. Scenario no. 2 was found to be the best one; it indicates that reducing abstraction by 50% of the current withdrawal rate (25.08 MCM/yr) to 12.54 MCM/yr would decrease the maximum drawdowns to about 7.67 and 8.38 m in the years 2025 and 2035, respectively.
Keywords: Amman/Zarqa Basin, Jordan, groundwater management, groundwater modeling, modflow
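As a quick sanity check, the quoted steady-state fluxes can be summed directly (a full model such as MODFLOW balances these cell by cell; the tiny residual here merely reflects rounding of the reported figures):

    inflow = {"lateral_and_B2_A7": 12.28, "rainfall_recharge": 0.68}  # MCM/yr
    outflow = {"to_Azraq_basin": 5.03, "leakage_to_A1_6": 7.89}       # MCM/yr

    residual = sum(inflow.values()) - sum(outflow.values())
    print(f"storage change = {residual:+.2f} MCM/yr")  # ~0 at steady state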
Procedia PDF Downloads 216
631 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus
Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert
Abstract:
Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase through the planning and execution phases to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model, and creating a compatible BIM for them is very challenging: it requires special equipment for data capture and effort to convert these data into a BIM model. The main difficulties in such projects are defining the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic, so integrating the existing terrain that surrounds buildings into the digital model is essential in order to run various simulations, such as flood or energy simulations. Making a replica of the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the type of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used in order to provide a 3D BIM model for the site and the existing buildings, based on the case study of the École Spéciale des Travaux Publics (ESTP Paris) engineering school campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and a height between 50 and 68 meters. In this work, the precise levelling grid of the campus is computed according to the NGF-IGN69 altimetric system, and the grid control points according to the French Réseau Géodésique Français (RGF93) – Lambert 93 system, with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, and (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the input data are identified, the digital model of each building is produced, and the DTM is also modeled. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce the digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC), and ReViT (RVT) are generated. Checking the interoperability between BIM models is very important; in this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
Keywords: building information modeling, digital terrain model, existing buildings, interoperability
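Coordinate conversions of the kind described (into RGF93 / Lambert-93, EPSG:2154) are commonly scripted; a short Python sketch with a placeholder point, not actual ESTP survey data:

    from pyproj import Transformer

    # WGS84 geographic coordinates -> RGF93 / Lambert-93 plane coordinates
    to_l93 = Transformer.from_crs("EPSG:4326", "EPSG:2154", always_xy=True)

    lon, lat = 2.33, 48.79            # hypothetical point near the campus
    e, n = to_l93.transform(lon, lat)
    print(f"E = {e:.2f} m, N = {n:.2f} m")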
Procedia PDF Downloads 114
630 Characterization of WNK2 Role on Glioma Cells Vesicular Traffic
Authors: Viviane A. O. Silva, Angela M. Costa, Glaucia N. M. Hajj, Ana Preto, Aline Tansini, Martin Roffé, Peter Jordan, Rui M. Reis
Abstract:
Autophagy is a recycling and degradative system suggested to be a major cell death pathway in cancer cells. The autophagy pathway is interconnected with the endocytosis pathways, sharing the same ultimate lysosomal destination. Lysosomes are crucial regulators of cell homeostasis, responsible for downregulating receptor signalling and turnover. It seems highly likely that derailed endocytosis can make major contributions to several hallmarks of cancer. WNK2, a member of the WNK (with-no-lysine [K]) subfamily of protein kinases, has been found downregulated by promoter hypermethylation and has been proposed to act as a specific tumour-suppressor gene in brain tumours. Although some contradictory studies have indicated WNK2 as an autophagy modulator, its role in cancer cell death is largely unknown. There is also growing evidence for additional roles of WNK kinases in vesicular traffic. Aim: To evaluate the role of WNK2 in autophagy and endocytosis in the glioma context. Methods: Wild-type (wt) A172 cells (WNK2 promoter-methylated), and A172 cells transfected either with an empty vector (Ev) or with a WNK2 expression vector, were used to assess basal capacities to promote autophagy through western blot and flow-cytometry analysis. Additionally, we evaluated the effect of WNK2 on general endocytic trafficking routes by immunofluorescence. Results: The re-expression of ectopic WNK2 did not interfere with expression levels of the autophagy-related protein light chain 3 (LC3-II), nor did it alter mTOR signaling, when compared with Ev or wt A172 cells. However, the restoration of WNK2 resulted in a marked increase (from 8% to 92.4%) in the formation of acidic vesicular organelles (AVOs). Moreover, our results suggest that WNK2-expressing cells show delayed uptake and internalization of cholera toxin B and transferrin ligands. Conclusions: The restoration of WNK2 interferes with vesicular traffic along the endocytosis pathway and increases AVO formation. These results also suggest a role for WNK2 in growth factor receptor turnover, related to cell growth and homeostasis, and further associate WNK2 silencing with the genesis of gliomas.Keywords: autophagy, endocytosis, glioma, WNK2
Procedia PDF Downloads 370629 Living in the Edge: Crisis in Indian Tea Industry and Social Deprivation of Tea Garden Workers in Dooars Region of India
Authors: Saraswati Kerketta
Abstract:
The tea industry is one of the oldest organised sectors of India, directly employing roughly 1.5 million people. Over the last decade, the Indian tea industry, especially in the northern region, has been experiencing its worst crisis of the post-independence period. For many reasons, tea prices have shown a steady decline. The workers are paid one of the lowest wages in the world's tea industry ($1.5 a day), below the UN's $2-a-day threshold for extreme poverty. The workers rely on additional benefits from the plantation, which include food, housing and medical facilities. These have been effective means of enslavement of generations of labourers by the owners. There is hardly any change in the tea estates, where the owners determine the fate of workers. When a tea garden is abandoned or closed, all these facilities disappear immediately. The workers are descendants of tribes from central India, also known as 'tea tribes'. Alienated from their native place, geographical and social isolation compounded the vulnerability of these people. The economy of the region being totally dependent on tea has resulted in absolute unemployment for the workers of these tea gardens. With no other livelihood and no land to grow food, thousands of workers have faced hunger and starvation. The Plantation Labour Act, which is meant to ensure decent working and living conditions, is violated continuously. The labourers are forced to migrate and are also exposed to the risk of human trafficking. Those who are left behind suffer from starvation, malnutrition and disease. The condition in the sick tea plantations is no better: wages are not paid regularly, and subsidised food and fuel are not supplied properly. Health care facilities are in very bad shape. Objectives: • To study the socio-cultural and demographic characteristics of the tea garden labourers in the study area. • To examine the social situation of workers in sick estates in the Dooars region. • To assess the magnitude of deprivation and the impact of the economic crisis on abandoned and closed tea estates in the region. Database: The study is based on data collected from a field survey. Methods: Quantitative: cross-tabulation, regression analysis. Qualitative: household survey, focused group discussions, in-depth interviews of key informants. Findings: Purchasing power has declined over the last three decades. There has been a manifold increase in migration. Males migrate long distances towards central, west and south India; females and children migrate both long and short distances. No one was reported to migrate back to the place of origin of their ancestors. Migrant males work mostly as construction labourers and factory workers, whereas females and children work as domestic help and construction labourers. In about 37 cases, either the migrants had not contacted their families in the last six months or were not traceable. Families with a single earning member are more likely to migrate. The burden of disease and the duration of sickness, abandonment and closure of plantations are closely related. Death tolls are likely to be 1.5 times higher in sick tea gardens and three times higher in closed tea estates. Sixty percent of the people are malnourished in the sick tea gardens, and more than eighty-five per cent in abandoned and closed tea gardens.Keywords: migration, trafficking, starvation death, tea garden workers
Procedia PDF Downloads 387628 Challenges of Carbon Trading Schemes in Africa
Authors: Bengan Simbarashe Manwere
Abstract:
The entire African continent, comprising 55 countries, holds a 2% share of the global carbon market. The World Bank attributes the continent's insignificant share of and participation in the carbon market to limited access to electricity: approximately 800 million people spread across 47 African countries generate as much power as Spain, a country of 45 million. Only South Africa and North Africa have carbon-reduction investment opportunities on the continent, and they dominate the continent's 2% share of the global carbon market. On the back of the 2015 Paris Agreement, South Africa signed into law the Carbon Tax Act 15 of 2019 and the Customs and Excise Amendment Act 13 of 2019 (Gazette No. 4280) on 1 June 2019. With these laws, South Africa was ushered into the league of active global carbon market players. By increasing the cost of production at a rate of R120/tCO2e, the tax deliberately compels the internalization of pollution as a cost of production and, relatedly, stimulates investment in clean technologies. The first phase covered the period 1 June 2019 – 31 December 2022, during which the tax was meant to escalate at CPI + 2% for Scope 1 emitters. In the second phase, which stretches from 2023 to 2030, the tax will escalate at the inflation rate only, as measured by the consumer price index (CPI). The Carbon Tax Act provides for carbon allowances as mitigation strategies that limit an agent's carbon tax liability by up to 95% for fugitive and process emissions. Although the June 2019 Carbon Tax Act explicitly makes provision for a carbon trading scheme (CTS), the associated carbon trading regulations were only finalised in December 2020, pointing to a delay in the establishment of the CTS. Relatedly, emitters in South Africa are not yet able to benefit from the 95% reduction in the effective carbon tax rate, from R120/tCO2e to R6/tCO2e, as the Johannesburg Stock Exchange (JSE) has not yet finalized the establishment of the market for trading carbon credits. Whereas most carbon trading schemes have been designed and constructed from the beginning as new, tailor-made systems, in countries such as France, Australia and Romania, which treat carbon as a financial product, South Africa intends, on the contrary, to leverage the existing trading infrastructure of the JSE and the clearing and settlement platforms of Strate, among others, in the interest of the Paris Agreement timelines. The carbon trading scheme will therefore not be constructed from scratch. At the same time, carbon will be treated as a commodity in order to align with existing institutional and infrastructural capacity, which explains why the Carbon Tax Act is silent about the involvement of the Financial Sector Conduct Authority (FSCA). For South Africa, there is a need to establish the equilibrium stability of the CTS. This is important because South Africa is an innovator in carbon trading, and successful trading of carbon credits on the JSE will lead to imitation, by early adopters first, followed by the middle majority thereafter.Keywords: carbon trading scheme (CTS), Johannesburg stock exchange (JSE), carbon tax act 15 of 2019, South Africa
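To make the quoted figures concrete, a minimal sketch of the tax arithmetic implied above (a headline rate of R120/tCO2e, relief of up to 95% yielding an effective R6/tCO2e) follows; the emission volume is hypothetical.

```python
HEADLINE_RATE = 120.0   # rand per tonne CO2e, Carbon Tax Act 15 of 2019
MAX_ALLOWANCE = 0.95    # up to 95% relief for fugitive and process emissions

def carbon_tax_liability(emissions_tco2e: float, allowance: float) -> float:
    """Tax due after applying the allowance, capped at the 95% maximum."""
    allowance = min(allowance, MAX_ALLOWANCE)
    return emissions_tco2e * HEADLINE_RATE * (1.0 - allowance)

# Hypothetical emitter with 100,000 tCO2e of covered emissions.
print(carbon_tax_liability(100_000, 0.95))  # effective R6/tCO2e  -> R600,000
print(carbon_tax_liability(100_000, 0.0))   # full R120/tCO2e     -> R12,000,000
```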
Procedia PDF Downloads 72627 Water Dumpflood into Multiple Low-Pressure Gas Reservoirs
Authors: S. Lertsakulpasuk, S. Athichanagorn
Abstract:
As depletion-drive gas reservoirs are abandoned when the production rate becomes insufficient due to pressure depletion, waterflooding has been proposed to increase the reservoir pressure in order to prolong gas production. Due to high cost, water injection may not be economically feasible. Water dumpflood into gas reservoirs is a promising new approach to increase gas recovery by maintaining reservoir pressure at a much lower cost than conventional waterflooding. Thus, a simulation study of water dumpflood into multiple nearly abandoned or already abandoned thin-bedded gas reservoirs, commonly found in the Gulf of Thailand, was conducted to demonstrate the advantage of the proposed method and to determine the most suitable operational parameters for reservoirs having different system parameters. A reservoir simulation model consisting of several thin-layered depletion-drive gas reservoirs and an overlying aquifer was constructed in order to investigate the performance of the proposed method. Two producers were initially used to produce gas from the reservoirs. One of them was later converted to a dumpflood well after the gas production rate started to decline due to the continuous reduction in reservoir pressure. The dumpflood well was used to flow water from the aquifer into the gas reservoirs, increasing their pressure in order to drive gas towards the producer. Two main operational parameters, the wellhead pressure of the producer and the time to start water dumpflood, were investigated to optimize gas recovery for systems having different gas reservoir dip angles, well spacings, aquifer sizes, and aquifer depths. The simulation study found that water dumpflood can increase gas recovery by up to 12% of OGIP, depending on operational conditions and system parameters. For systems having a large aquifer and a large distance between wells, it is best to start water dumpflood while the gas rate is still high, since the long distance between the gas producer and the dumpflood well helps delay water breakthrough at the producer. As long as there is no early water breakthrough, the earlier the energy is supplied to the gas reservoirs, the better the gas recovery. On the other hand, for systems having a small or moderate aquifer and a short distance between the two wells, performing water dumpflood when the rate is close to the economic rate is better, because water is more likely to break through early when the distance is short. Water dumpflood into multiple nearly depleted or depleted gas reservoirs is a novel topic: the idea of using water dumpflood to increase gas recovery has been mentioned in the literature but never investigated in detail. This study will help practicing engineers understand the benefits of the method and implement it with minimum cost and risk.Keywords: dumpflood, increase gas recovery, low-pressure gas reservoir, multiple gas reservoirs
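The reasoning behind pressure maintenance can be illustrated with the standard p/Z material balance for a volumetric depletion-drive gas reservoir; the sketch below uses hypothetical pressures and Z-factors, not the study's simulation results.

```python
def recovery_fraction(p_i, z_i, p_ab, z_ab):
    """Gas recovery factor at abandonment from the p/Z material balance:
    p/Z = (p_i/Z_i) * (1 - Gp/G) for a volumetric depletion-drive reservoir."""
    return 1.0 - (p_ab / z_ab) / (p_i / z_i)

# Hypothetical reservoir: initial 3000 psia (Z=0.90), abandoned at 500 psia (Z=0.95).
rf = recovery_fraction(3000.0, 0.90, 500.0, 0.95)
print(f"recovery factor ~ {rf:.1%}")  # pressure support (dumpflood) defers this limit
```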
Procedia PDF Downloads 445626 Perception of Tactile Stimuli in Children with Autism Spectrum Disorder
Authors: Kseniya Gladun
Abstract:
Tactile stimulation of the dorsal side of the wrist can have a strong impact on our attitude toward physical objects, producing pleasant or unpleasant impressions. This study explored different aspects of tactile perception in order to investigate atypical touch sensitivity in children with autism spectrum disorder (ASD). The study included 40 children with ASD and 40 healthy children aged 5 to 9 years. We recorded rsEEG (sampling rate of 250 Hz) for 20 minutes using an “Encephalan” EEG amplifier (Medicom MTD, Taganrog, Russian Federation) with 19 AgCl electrodes placed according to the International 10–20 System. Electrodes placed on the left and right mastoids served as joint references under unipolar montage. EEG was registered from 19 sites: frontal (Fp1-Fp2; F3-F4), anterior temporal (T3-T4), posterior temporal (T5-T6), parietal (P3-P4), and occipital (O1-O2). Subjects were passively touched on the left wrist with 4 types of tactile stimuli, presented at a velocity of about 3–5 cm per second. The stimulus materials and procedure were chosen to be the most "pleasant," "rough," "prickly" and "recognizable": a soft cosmetic brush ("pleasant"), a rough shoe brush ("rough"), a Wartenberg pinwheel roller ("prickly"), and, for the cognitive tactile stimulation, letters (mostly from the patient's name) traced with a finger ("recognizable"). To designate stimulus onset and offset, we marked the moments when each touch began and ended; the stimulation was manual, so synchronization was not precise enough for event-related measures. EEG epochs were cleaned of eye-movement artifacts by an ICA-based algorithm in the EEGLAB plugin for MatLab 7.11.0 (Mathworks Inc.), and muscle artifacts were cut out by manual data inspection. The response to tactile stimuli differed significantly between the ASD group and healthy children, depending also on the type of stimulus and the severity of ASD. The amplitude of the alpha rhythm in the parietal region increased in response to the pleasant stimulus only; no such amplitude difference was observed for the other stimulus types ("rough," "prickly", "recognizable"). The correlation dimension D2 was higher in healthy children than in children with ASD (main effect, ANOVA). In the ASD group, D2 was lower for pleasant and unpleasant stimuli compared to the background in the right parietal area. Hilbert-transform changes in the frequency of the theta rhythm were found only for rough tactile stimulation, compared with healthy participants, and only in the right parietal area. Thus, children with autism spectrum disorders and healthy children responded to tactile stimulation differently, with specific frequency distributions of the alpha and theta bands in the right parietal area. Our data support the hypothesis that rsEEG may serve as a sensitive index of the altered neural activity caused by ASD: children with autism have difficulty distinguishing the emotional stimuli ("pleasant," "rough," "prickly" and "recognizable").Keywords: autism, tactile stimulation, Hilbert transform, pediatric electroencephalography
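A minimal sketch of the Hilbert-transform step described above (band-pass to theta, then instantaneous amplitude and frequency from the analytic signal) is given below, assuming the 250 Hz sampling rate; the channel data and the 4–8 Hz band edges are placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250.0  # Hz, sampling rate quoted in the abstract

def theta_instantaneous(signal, low=4.0, high=8.0):
    """Band-pass to the theta band, then derive instantaneous amplitude
    and frequency from the analytic signal (Hilbert transform)."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    theta = filtfilt(b, a, signal)
    analytic = hilbert(theta)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * FS / (2 * np.pi)
    return amplitude, inst_freq

# Placeholder for one right-parietal channel (e.g., P4), 20 s of data.
eeg = np.random.randn(int(20 * FS))
amp, freq = theta_instantaneous(eeg)
print(amp.mean(), freq.mean())
```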
Procedia PDF Downloads 253625 Frailty and Quality of Life among Older Adults: A Study of Six LMICs Using SAGE Data
Authors: Mamta Jat
Abstract:
Background: Increased longevity has resulted in an increase in the percentage of the global population aged 60 years or over. With this 'demographic transition' towards ageing, an 'epidemiologic transition' is also taking place, characterised by a growing share of non-communicable diseases in the overall disease burden. Many older adults are therefore ageing with chronic disease and high levels of frailty, which often result in lower quality of life. Since frailty is increasingly common in older adults, preventing, or at least delaying, the onset of late-life adverse health outcomes and disability is necessary to maintain the health and functional status of the ageing population. This study uses SAGE data to assess levels of frailty, its socio-demographic correlates, and its relation to quality of life in the LMICs of India, China, Ghana, Mexico, Russia and South Africa in a comparative perspective. Methods: The data come from the multi-country Study on Global AGEing and Adult Health (SAGE), which consists of nationally representative samples of older adults in six low- and middle-income countries (LMICs): China, Ghana, India, Mexico, the Russian Federation and South Africa. For our study purpose, we consider only respondents aged 50+ years. A logistic regression model was used to assess the correlates of frailty, and multinomial logistic regression to study the effect of frailty on quality of life (QOL), controlling for socio-economic and demographic correlates. Results: Among all the countries, India has the highest mean frailty in males (0.22) and females (0.26), and China the lowest in males (0.12) and females (0.14). The odds of being frail increase with age across all the countries. In India, China and Russia, the chances of frailty are higher among rural older adults, whereas in Ghana, South Africa and Mexico, rural residence protects against frailty. Among all countries, China has the highest percentage (71.46%) of frail people with low QOL, whereas Mexico has the lowest (36.13%). The risk of having low or middle QOL is significantly (p<0.001) higher among frail elderly than among non-frail elderly across all countries, after controlling for socio-demographic correlates. Conclusion: Women and older age groups have higher frailty levels than men and younger adults in LMICs. The mean frailty scores demonstrate a strong inverse relationship with education and income gradients: lower levels of education and wealth are associated with higher levels of frailty. These patterns are consistent across all LMICs. These data support a significant role of frailty, with all other influences controlled, in low QOL as measured by the WHOQOL index. Future research needs to build on this evolving concept of frailty in an effort to improve quality of life for the frail elderly population in LMIC settings.Keywords: ageing, elderly, frailty, quality of life
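A multinomial logistic regression of a three-level QOL outcome on frailty and covariates, as described above, could be fitted along these lines with statsmodels; the data frame is synthetic and the variable names are stand-ins for the SAGE items.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for SAGE variables: frailty flag, age, sex, education.
df = pd.DataFrame({
    "frail": rng.integers(0, 2, n),
    "age": rng.uniform(50, 90, n),
    "sex": rng.integers(0, 2, n),
    "education": rng.integers(0, 5, n),
})

# QOL coded 0=low, 1=middle, 2=high; frailty pushes towards low QOL here.
score = (-1.5 * df["frail"] - 0.03 * (df["age"] - 50)
         + 0.2 * df["education"] + rng.normal(scale=1.0, size=n))
df["qol"] = pd.cut(score, bins=[-np.inf, -1.0, 0.5, np.inf], labels=False)

X = sm.add_constant(df[["frail", "age", "sex", "education"]])
result = sm.MNLogit(df["qol"], X).fit(disp=False)
print(result.summary())  # log-odds of middle/high vs. low QOL
```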
Procedia PDF Downloads 289624 Syngas From Polypropylene Gasification in a Fluidized Bed
Authors: Sergio Rapagnà, Alessandro Antonio Papa, Armando Vitale, Andre Di Carlo
Abstract:
In recent years, the world population has enormously increased its use of plastic products for daily living needs, in particular for transporting and storing consumer goods such as food and beverages. Plastics are widely used in the automotive industry, in the construction of electronic equipment, and in clothing and home furnishings. Over the last 70 years, the annual production of plastic products has increased from 2 million tons to 460 million tons, and about 20% of the latter quantity is mismanaged as waste. The consequence of this mismanagement is the release of plastic waste into terrestrial and marine environments, which represents a danger to human health and the ecosystem. Recycling all plastics is difficult because they are often made from mixtures of mutually incompatible polymers and contain different additives. The products obtained are always of lower quality, and after two or three recycling cycles they must be eliminated, either by thermal treatment to produce heat or by disposal in landfill. An alternative to these current solutions is to obtain a mixture of gases rich in H₂, CO and CO₂ suitable for the production of chemicals, with consequent savings in fossil resources. A hydrogen-rich syngas can be obtained by a gasification process in a fluidized bed reactor, in the presence of steam as the fluidization medium. The fluidized bed reactor allows the gasification of plastics to be carried out at a constant temperature and allows the use of different plastics with different compositions and grain sizes. Furthermore, during gasification the use of steam increases the conversion of the char produced by the initial pyrolysis/devolatilization of the plastic particles. The bed inventory can be made of particles having catalytic properties, such as olivine, capable of catalysing the steam reforming of heavy hydrocarbons, normally called tars, with a consequent increase in the quantity of gases produced. The plant is composed of a fluidized bed reactor made of AISI 310 steel, having an internal diameter of 0.1 m and containing 3 kg of olivine particles as bed inventory. The reactor is externally heated by an oven up to 1000 °C. The hot producer gases that exit the reactor, after being cooled, are quantified using a mass flow meter, and gas analyzers measure the instantaneous volumetric composition of H₂, CO, CO₂, CH₄ and NH₃. At the conference, the results obtained from the continuous gasification of polypropylene (PP) particles in a steam atmosphere at temperatures of 840-860 °C will be presented.Keywords: gasification, fluidized bed, hydrogen, olivine, polypropylene
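From the mass flow meter and gas analyzer readings described above, specific gas and hydrogen yields per kilogram of polypropylene can be computed as in the sketch below; all numbers are placeholders rather than the authors' results.

```python
# Placeholder measurements: dry producer-gas flow and feed rate.
gas_flow_nm3_h = 4.0      # Nm3/h from the mass flow meter (dry basis)
pp_feed_kg_h = 1.5        # kg/h polypropylene fed to the bed
composition = {"H2": 0.45, "CO": 0.25, "CO2": 0.15, "CH4": 0.15}  # vol. fractions

gas_yield = gas_flow_nm3_h / pp_feed_kg_h   # Nm3 of producer gas per kg PP
h2_yield = gas_yield * composition["H2"]    # Nm3 of H2 per kg PP

print(f"gas yield: {gas_yield:.2f} Nm3/kg, H2 yield: {h2_yield:.2f} Nm3/kg")
```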
Procedia PDF Downloads 30623 Achieving Household Electricity Saving Potential Through Behavioral Change
Authors: Lusi Susanti, Prima Fithri
Abstract:
The rapid growth of Indonesia's population is directly proportional to the energy needs of the country, but not all of the Indonesian population enjoys access to electricity. Indonesia's electrification ratio is still around 80.1%, which means that approximately 19.9% of households in Indonesia have no electricity supply. Household electricity consumption in Indonesia is generally still dominated by urban communities. In the city of Padang, West Sumatra, Indonesia, about 94.10% of households are customers of the state electricity utility (PLN). At the core of the issue is the efficiency with which people use energy: user behavior in utilizing electricity is significant, and any lasting solution must address users' habits. This study attempts to identify the user behaviors and lifestyles that affect household electricity consumption and to evaluate the potential for energy saving. The behavioral component is frequently underestimated or ignored in analyses of household electrical energy end use, partly because of its complexity: it is influenced by socio-demographic factors, culture, attitudes, aesthetic norms and comfort, as well as social and economic variables. An intensive questionnaire survey, in-depth interviews and statistical analysis were carried out to collect scientific evidence for behavior-based instruments to reduce electricity consumption in the household sector. The questionnaire was developed around five factors assumed to affect the electricity consumption pattern in the household sector: attitude, energy price, household income, knowledge and other determinants. The survey was carried out in Padang, West Sumatra Province, Indonesia. About 210 questionnaires were proportionally distributed to households in 11 districts of Padang, with stratified sampling used to select respondents. The results show that household size, income, payment method and size of house are factors affecting electricity-saving behavior in the residential sector. Household expenses on electricity are strongly influenced by gender, type of job, level of education, size of house, income, payment method and level of installed power. These results provide scientific evidence for stakeholders on the potential for controlling electricity consumption and for the design of energy policy by government in the residential sector.Keywords: electricity, energy saving, household, behavior, policy
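The proportional distribution of the 210 questionnaires across the 11 districts under stratified sampling can be sketched as follows; the district household counts are hypothetical.

```python
# Hypothetical household counts for Padang's 11 districts.
households = {f"district_{i}": n for i, n in enumerate(
    [9000, 15000, 7000, 12000, 20000, 11000, 8000, 14000, 10000, 13000, 6000], 1)}

TOTAL_SAMPLE = 210
total_households = sum(households.values())

# Proportional allocation: each stratum's share of the sample matches
# its share of the population (rounded; remainder handling omitted).
allocation = {d: round(TOTAL_SAMPLE * n / total_households)
              for d, n in households.items()}
print(allocation, sum(allocation.values()))
```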
Procedia PDF Downloads 440622 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the data-based prediction of product quality and states can exploit a great potential for saving otherwise necessary quality control. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets and are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance; competitive leaders claim to have mastered their processes, and as a result, much of the real data has relatively low variance. Training prediction models, on the other hand, requires the highest possible generalisability, which this data situation makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science, and as in any process, the cost of eliminating errors increases significantly with each advancing phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase of whether a regression or a classification is more suitable. In this work, the initial phase of CRISP-DM, business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
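The regression-versus-classification question posed in the business-understanding phase can be prototyped as below with scikit-learn; the features, the leakage model, and the pass/fail threshold are synthetic stand-ins, not Bosch Rexroth data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                 # stand-in process features
leakage = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=1000)
ok = (leakage < 1.0).astype(int)               # hypothetical pass/fail threshold

X_tr, X_te, y_tr, y_te, ok_tr, ok_te = train_test_split(
    X, leakage, ok, random_state=0)

# Option A: regress the leakage volume flow, then threshold afterwards.
reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, reg.predict(X_te)))

# Option B: classify the inspection decision directly.
clf = RandomForestClassifier(random_state=0).fit(X_tr, ok_tr)
print("accuracy:", accuracy_score(ok_te, clf.predict(X_te)))
```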
Procedia PDF Downloads 145621 The Anesthesia Considerations in Robotic Mastectomies
Authors: Amrit Vasdev, Edwin Rho, Gurinder Vasdev
Abstract:
Robotic surgery has enabled a new spectrum of minimally invasive breast reconstruction by improving visualization and surgeon posture and by improving patient outcomes. The DaVinci robot system can be utilized in nipple-sparing mastectomies and reconstructions. The process involves insufflation of the subglandular space and dissection of the mammary gland with a combination of cautery and blunt dissection. This case outlines a 35-year-old woman with a long-standing family history of breast cancer and a diagnosis of a deleterious BRCA2 genetic mutation. She decided to proceed with bilateral nipple-sparing mastectomies with implants. Her perioperative mammogram and MRI were negative for masses; however, her left internal mammary lymph node was enlarged. She had taken oral contraceptive pills for 3-5 years and denied DES exposure, radiation therapy, hormone replacement therapy, or prior breast surgery. She does not smoke and rarely consumes alcohol. During the procedure, the patient received a standardized anesthetic for outpatient surgery: a propofol infusion, succinylcholine, sevoflurane, and fentanyl. Aprepitant was given as an antiemetic, and preoperative Tylenol and gabapentin for pain management. A concern for the patient during the procedure was CO2 insufflation into the subcutaneous space: with CO2 insufflation, there is a potential for rapid uptake leading to severe acidosis, embolism, and subcutaneous emphysema. To mitigate this, it is important to hyperventilate the patient and to reduce both the insufflation pressure and the CO2 flow rate to the minimum acceptable to the surgeon. For intraoperative monitoring during this 6-9-hour procedure, it has been suggested to utilize an arterial line alongside end-tidal CO2 monitoring. However, in this case it was not necessary, as the patient had excellent cardiovascular reserve and end-tidal CO2 remained within normal limits for the duration of the procedure. A BIS monitor was also utilized to reduce anesthetic burden and to facilitate a prompt discharge from the PACU. Minimally invasive robotic surgery will continue to evolve, and anesthesiologists need to be prepared for the new challenges ahead. Based on our limited number of patients, robotic mastectomy appears to be a safe alternative to open surgery, with the promise of clearer tissue demarcation and better cosmetic results.Keywords: anesthesia, mastectomies, robotic, hypercarbia
Procedia PDF Downloads 113620 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis
Authors: Coriolano Salvini, Ambra Giovannelli
Abstract:
The use of renewable energy sources for electric power production reduces CO2 emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose relevant problems for safely and cost-efficiently meeting load demand over time. Significant benefits in terms of grid-system applications, end-use applications and renewable applications can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small- to medium-size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage systems. The present paper addresses the design and off-design analysis of the compression system of small-size CAES plants suited to absorb electric power in the range of hundreds of kilowatts. The system of interest consists of an intercooled (and, where needed, aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large-diameter steel pipe sections. A specific methodology for the preliminary sizing and off-design modeling of the system has been developed. Since, during the charging phase, the electric power absorbed has to change over time according to the peculiar CAES requirements, and the pressure ratio increases continuously as the reservoir fills, the compressor has to work at variable mass flow rate. In order to ensure an appropriately wide range of operations, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid. The developed tool gives useful information for appropriately sizing the compression system and managing it in the most effective way. Various cases characterized by different system requirements are analysed, and results are given and widely discussed.Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management
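For preliminary sizing, the shaft power of an intercooled multi-stage reciprocating compressor is commonly estimated from the isentropic stage relation; the sketch below assumes an ideal gas, equal stage pressure ratios, intercooling back to inlet temperature, and hypothetical efficiencies and operating points.

```python
def compressor_power(m_dot, p_in, p_out, t_in=293.15, stages=3,
                     eta_s=0.80, cp=1005.0, gamma=1.4):
    """Shaft power (W) of an intercooled multi-stage compressor, assuming
    equal stage pressure ratios, ideal gas, and intercooling back to t_in."""
    r_stage = (p_out / p_in) ** (1.0 / stages)
    exp = (gamma - 1.0) / gamma
    # Isentropic temperature rise per stage, corrected by stage efficiency.
    dt = t_in * (r_stage ** exp - 1.0) / eta_s
    return stages * m_dot * cp * dt

# Hypothetical charge point: 0.5 kg/s of air compressed from 1 bar to 60 bar.
print(f"{compressor_power(0.5, 1e5, 60e5) / 1e3:.0f} kW")
```

As the reservoir fills, p_out rises, so re-evaluating this relation instant by instant gives the variable power absorption the abstract describes.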
Procedia PDF Downloads 230