Search results for: autoregressive integrated moving average model selection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22988

10628 Approaching In vivo Dosimetry for Kilovoltage X-Ray Radiotherapy

Authors: Rodolfo Alfonso, David Alonso, Albin Garcia, Jose Luis Alonso

Abstract:

Recently, a new kilovoltage radiotherapy unit, model Xstrahl 200, donated to INOR's Department of Radiotherapy (DR-INOR) in the framework of an IAEA technical cooperation project, has been commissioned. This unit can treat shallow and moderately deep-lying lesions, as it provides 8 discrete beam qualities from 40 to 200 kV. As part of the patient-specific quality assurance program established at DR-INOR for external beam radiotherapy, it has been recommended to implement in vivo dose measurements (IVD), as they allow errors or failures in the radiotherapy process to be detected effectively. For that purpose, a radio-photoluminescence (RPL) dosimetry system, model XXX, also donated to DR-INOR by the same IAEA project, has been studied and commissioned. The main dosimetric parameters of the RPL system, such as reproducibility, linearity, and field size influence, were assessed. In a similar way, the response of radiochromic EBT3-type film was investigated for purposes of IVD. Both systems were calibrated in terms of entrance surface dose. Results of the dosimetric commissioning of RPL and EBT3 for IVD, and their pre-clinical implementation through end-to-end test cases, are presented. RPL dosimetry appears better suited for hyper-fractionated schemes with larger fields and curved patient contours, as in chest wall irradiations, where the use of more than one dosimeter may be required. The radiochromic system involves smaller corrections with field size, but its sensitivity is lower; hence it is more adequate for hypo-fractionated treatments with smaller fields.
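The linearity and calibration steps mentioned above can be illustrated with a simple least-squares fit of dosimeter readings against delivered entrance surface dose. This is only a generic sketch: the readings below are invented for illustration and do not come from the commissioning data.

```python
import numpy as np

# Hypothetical calibration data: dosimeter readings (a.u.) at known
# delivered entrance surface doses (Gy); values are illustrative only.
dose_gy = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
reading = np.array([51.0, 102.5, 203.0, 409.0, 818.0])

# Linearity check: fit reading = a * dose + b and report the calibration
# factor together with the coefficient of determination.
a, b = np.polyfit(dose_gy, reading, 1)
pred = a * dose_gy + b
r2 = 1 - np.sum((reading - pred) ** 2) / np.sum((reading - reading.mean()) ** 2)

def reading_to_dose(r):
    """Convert a raw dosimeter reading to entrance surface dose (Gy)."""
    return (r - b) / a

print(f"calibration factor: {a:.2f} a.u./Gy, R^2 = {r2:.4f}")
```

In practice, a separate calibration of this form would be established for each beam quality and dosimetry system.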

Keywords: glass dosimetry, in vivo dosimetry, kilovoltage radiotherapy, radiochromic dosimetry

Procedia PDF Downloads 382
10627 Co-Alignment of Comfort and Energy Saving Objectives for U.S. Office Buildings and Restaurants

Authors: Lourdes Gutierrez, Eric Williams

Abstract:

Post-occupancy research shows that only 11% of commercial buildings meet the ASHRAE thermal comfort standard. Many buildings are too warm in winter and/or too cool in summer, wasting energy while failing to provide comfort. In this paper, potential energy savings in U.S. offices and restaurants are calculated for thermostat settings based on the updated ASHRAE 55-2013 comfort model, which accounts for outdoor temperature and clothing choice in different climate zones. eQUEST building models are calibrated to reproduce aggregate energy consumption as reported in the U.S. Commercial Building Energy Consumption Survey. Changes in energy consumption due to the new settings are analyzed for 14 cities in different climate zones, and the results are extrapolated to estimate potential national savings. It is found that, depending on the climate zone, each degree increase in the summer setting saves 0.6% to 1.0% of total building electricity consumption, and each degree decrease in the winter setting saves 1.2% to 8.7% of total building natural gas consumption. With the new thermostat settings, national savings are 2.5% of the total consumed in all office buildings and restaurants, amounting to 69.6 million GJ annually, comparable to total U.S. solar PV generation in 2015. The goals of improved comfort and energy/economic savings are thus co-aligned, raising the importance of thermostat management as an energy efficiency strategy.
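The per-degree savings figures reported above lend themselves to a back-of-envelope check. The sketch below applies assumed per-degree percentages (within the reported 0.6-1.0% and 1.2-8.7% ranges) to a hypothetical building's consumption; the baseline figures are illustrative, not CBECS values.

```python
# Illustrative sketch of the per-degree savings logic; all baseline
# figures are hypothetical, not taken from the survey data.
summer_pct_per_degC = 0.008   # assumed: 0.8% electricity per degree raised
winter_pct_per_degC = 0.03    # assumed: 3.0% gas per degree lowered

building_elec_GJ = 5000.0     # hypothetical annual electricity use
building_gas_GJ = 3000.0      # hypothetical annual natural gas use

degrees_raised_summer = 2.0
degrees_lowered_winter = 2.0

elec_saved = building_elec_GJ * summer_pct_per_degC * degrees_raised_summer
gas_saved = building_gas_GJ * winter_pct_per_degC * degrees_lowered_winter
total_saved = elec_saved + gas_saved
share = total_saved / (building_elec_GJ + building_gas_GJ)
print(f"saved {total_saved:.0f} GJ ({share:.1%} of total consumption)")
```

National estimates then follow by weighting such building-level results by climate zone and floor-area statistics.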

Keywords: energy savings quantifications, commercial building stocks, dynamic clothing insulation model, operation-focused interventions, energy management, thermal comfort, thermostat settings

Procedia PDF Downloads 297
10626 Comparison of Developed Statokinesigram and Marker Data Signals by Model Approach

Authors: Boris Barbolyas, Kristina Buckova, Tomas Volensky, Cyril Belavy, Ladislav Dedik

Abstract:

Background: Human balance control is often studied on the basis of the statokinesigram. In this study, the approach to human postural reaction analysis combines the stabilometry output signal with the processing, analysis, and interpretation of retroreflective marker data signals. The study also presents another original application of the Method of Developed Statokinesigram Trajectory (MDST). Methods: Participants maintained quiet bipedal standing on a stabilometry platform for 10 s. Subsequently, bilateral vibration stimuli were applied to the Achilles tendons for a 20 s interval, causing the human postural system to settle into a new pseudo-steady state. Vibration frequencies were 20, 60 and 80 Hz. The participants' body segments (head, shoulders, hips, knees, ankles and little fingers) were marked with 12 retroreflective markers, whose positions were scanned by a six-camera BTS SMART DX system. Registration of the postural reaction lasted 60 s at a sampling frequency of 100 Hz. The measured data were processed with the Method of Developed Statokinesigram Trajectory. Regression analysis between developed statokinesigram trajectory (DST) data and retroreflective marker developed trajectory (DMT) data was used to find out which marker trajectories correlate most with the stabilometry platform output signals. Scaling coefficients (λ) between DST and DMT were also evaluated by linear regression analysis. Results: Scaling coefficients were identified for the marker trajectories of all body segments. Head marker trajectories reached the maximal value and ankle marker trajectories the minimal value of the scaling coefficient. Hip, knee and ankle markers were approximately symmetrical in terms of the scaling coefficient, whereas notable asymmetries of the scaling coefficient were detected for head and shoulder marker trajectories. The model of postural system behavior was identified by MDST.
Conclusion: The value of the scaling coefficient identifies which body segment is predisposed to postural instability. Hypothetically, if the statokinesigram represents the overall response of the human postural system to vibration stimuli, then the marker data represent particular postural responses. It can be assumed that the cumulative sum of the particular marker postural responses equals the statokinesigram.
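The scaling coefficient λ between a developed statokinesigram trajectory (DST) and a marker's developed trajectory (DMT) is a single linear-regression slope, which can be sketched as follows. The signals here are synthetic stand-ins with an assumed true coefficient, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic developed trajectories sampled at 100 Hz for 60 s: a developed
# trajectory is cumulative, hence monotonically increasing.
t = np.arange(0, 60, 0.01)
dst = np.cumsum(np.abs(rng.normal(size=t.size)))           # platform DST
lam_true = 1.8                                             # assumed marker scaling
dmt = lam_true * dst + rng.normal(scale=0.5, size=t.size)  # marker DMT

# Scaling coefficient lambda from the no-intercept least-squares fit
# DMT = lambda * DST.
lam = float(np.dot(dst, dmt) / np.dot(dst, dst))
print(f"estimated scaling coefficient: {lam:.3f}")
```

Repeating this fit per marker yields one λ per body segment, which is what the study compares across head, shoulder, hip, knee and ankle markers.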

Keywords: center of pressure (CoP), method of developed statokinesigram trajectory (MDST), model of postural system behavior, retroreflective marker data

Procedia PDF Downloads 333
10625 Belonging in South Africa: Networks among African Immigrants and South African Natives

Authors: Efe Mary Isike

Abstract:

The variety of relationships between migrants and host communities is an enduring theme of migration studies. At one extreme, there are numerous examples of hostility towards ‘strangers’, who are either ejected from society or denied access to jobs, housing, education, healthcare and other aspects of normal life. More moderate treatments of those identified as different include expectations of assimilation, in which host communities expect socially marginalized groups to conform to norms that they define. Both exclusion and assimilation attempt to manage the problem of difference by removing it. South Africa experienced a great influx of African immigrants who worked in mines and farms under harsh and exploitative conditions before and after the institutionalization of apartheid. Although these labour migrants contributed a great deal to the economic development of South Africa, they were not given citizenship status. The formal democratization of 1994 came with dreams and expectations of a more inclusive South Africa, where black South Africans hoped to maximize their potential in a freer, fairer and more equal society. In the same vein, it also opened spaces for an influx of especially African immigrants into the country, which set the stage for a new form of contest for belonging between South African citizens and African migrant settlers. One major manifestation of this contest has been the violent xenophobic attacks against African immigrants, which predate those of May 2008 and have continued with lower intensity across the country since then.
While it is doubtless possible to find abundant evidence of antagonism in the relations between South Africans and African immigrants, the purpose of this study is to investigate the everyday realities of migrants in ordinary places who interact with a variety of people through their livelihood activities, marriages and social relationships, moving around towns and cities, in their residential areas, in faith-based organizations and in other elements of everyday life. Rather than assuming all relations are hostile, this study looks at the breadth of everyday relationships within a specific context. Based on the foregoing, the main task of this study is to holistically examine and explain the nature of interactions between African migrants and South African citizens by analysing the social network ties that connect them in the specific case of Umhlathuze municipality. It also investigates the variety of networks that exist between African migrants and South Africans and examines the nature of the linkages in the various networks identified between these two groups in Umhlathuze Municipality. Apart from a review of relevant literature, policies and other official documents, this paper employs a purposive sample survey and in-depth interviews of African immigrants and South Africans within their networks in selected suburbs in KwaZulu-Natal.

Keywords: migration, networks, development, host communities

Procedia PDF Downloads 265
10624 Managing Early Stakeholder Involvement at the Early Stages of a Building Project Life Cycle

Authors: Theophilus O. Odunlami, Hasan Haroglu, Nader Saleh-Matter

Abstract:

The challenges facing the construction industry are often worsened by the compounded nature of projects, coupled with the complexity of the key stakeholders involved at different stages of a project. Projects are planned to achieve outlined benefits in line with the business case; however, a lack of effective management of key stakeholders can result in unrealistic delivery aspirations, unnecessary re-works, and overruns. The aim of this study is to examine the early stages of the project lifecycle and to investigate the stakeholder management and involvement processes and their impact on successful project delivery. The research engaged with conventional construction organisations, project personnel, and stakeholders on diverse projects, analysing existing project case studies, narrative enquiries, interviews, and surveys through a combination of qualitative, quantitative, and mixed methods of analysis. The findings show that the involvement of stakeholders at different levels during the early stages has pronounced effects on project delivery: it helps to forge synergy and promotes a clear understanding of individual responsibilities, strengths, and weaknesses, and it has often fostered a positive sense of productive collaboration right through the early stages of the project. These findings are intended to contribute to the development of a process framework for stakeholder and project team involvement in the early stages of a project. This framework will align with the selection criteria for stakeholders, contractors, and resources, ultimately contributing to the successful completion of projects. The primary question addressed in this study is how stakeholder involvement and management at the early stages of a building project life cycle impact project delivery. The findings showed that early-stage stakeholder involvement and collaboration between project teams and contractors contribute significantly to project success.
However, a strong and healthy communication strategy would be required to maintain the flow of value-added ideas among stakeholders at the early stages to benefit the project at the execution stage.

Keywords: early stages, project lifecycle, stakeholders, decision-making strategy, project framework

Procedia PDF Downloads 87
10623 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components

Authors: Najeh Lakhoua

Abstract:

Introduction: Scientific developments and techniques of the systemic approach have generated several names for it: systems analysis, systemic analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach that organizes knowledge, creates a universal design language, and controls complex systems. In fact, system analysis is structured sequentially in steps: observation of the system by various observers and in various aspects; analysis of interactions and regulatory chains; modeling that takes into account the evolution of the system; and simulation and real tests in order to reach consensus. The systemic approach thus allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis to Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods have been proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted in this paper to contribute to the system analysis of an Unmanned Aerial Vehicle is based on the use of SADT. In fact, we present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls, and communications). Results: In this part, we present the application of the SADT method to the functional analysis of the UAV components. This SADT model is composed exclusively of actigrams. It starts with the main function, ‘To analyse the UAV components’. This function is then broken into sub-functions, and the process is repeated until the last decomposition level has been reached (levels A1, A2, A3 and A4).
Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certainty which model is good or, at least, best. In fact, this kind of model allows users sufficient freedom in its construction, so the subjective factor introduces a supplementary dimension for its validation. That is why the validation step as a whole requires the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis to Unmanned Aerial Vehicle components based on the SADT method (Structured Analysis and Design Technique). This functional analysis demonstrated the usefulness of the SADT method and its ability to describe complex dynamic systems.
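The top-down decomposition of the main actigram into sub-functions can be represented as a simple tree. The sketch below is a hypothetical rendering of such a hierarchy; the sub-function labels are illustrative placeholders, not the paper's actual A1-A4 decomposition.

```python
# A minimal sketch of an SADT actigram hierarchy as a nested structure;
# the sub-function names are illustrative placeholders only.
sadt = {
    "A0: To analyse the UAV components": {
        "A1: Body, power supply and platform": {},
        "A2: Computing, sensors and actuators": {},
        "A3: Software and loop principles": {},
        "A4: Flight controls and communications": {},
    }
}

def list_functions(node, depth=0):
    """Walk the decomposition and return each function indented by its level."""
    names = []
    for name, children in node.items():
        names.append(("  " * depth) + name)
        names.extend(list_functions(children, depth + 1))
    return names

for line in list_functions(sadt):
    print(line)
```

Each entry would, in a full SADT model, also carry the inputs, outputs, controls and mechanisms of its actigram box.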

Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture

Procedia PDF Downloads 182
10622 Erosion Susceptibility Zoning and Prioritization of Micro-Watersheds: A Remote Sensing-GIS Based Study of Asan River Basin, Western Doon Valley, India

Authors: Pijush Roy, Vinay Kumar Rai

Abstract:

The present study highlights that the estimation of soil loss and the identification of critical areas for the implementation of best management practices are central to the success of a soil conservation programme. Morphometric and Universal Soil Loss Equation (USLE) factors were quantified using remote sensing and GIS for the prioritization of micro-watersheds in the Asan River catchment, western Doon valley, at the foothills of the Siwalik ranges in the Dehradun district of Uttarakhand, India. The watershed is classified as a dendritic pattern with a sixth-order stream. The area is divided into very high, high, moderately high, medium and low susceptibility zones; high to very high erosion zones occur in urban areas and agricultural land. An average annual soil loss of 64 tons/ha/year has been estimated for the watershed. The optimum management practices proposed for the micro-watersheds of the Asan River basin are afforestation, contour bunding, suitable sites for water harvesting structures such as check dams, soil conservation, agronomical measures, and bench terracing.
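The USLE estimate underlying the soil-loss figure is the product A = R × K × LS × C × P. A minimal sketch, with factor values chosen purely for illustration (not those derived for the Asan catchment):

```python
# Universal Soil Loss Equation: A = R * K * LS * C * P, where R is rainfall
# erosivity, K soil erodibility, LS slope length-steepness, C cover management
# and P support practice. All factor values below are illustrative assumptions.
def usle_soil_loss(R, K, LS, C, P):
    """Annual soil loss (t/ha/yr) from the Universal Soil Loss Equation."""
    return R * K * LS * C * P

A = usle_soil_loss(R=650.0, K=0.32, LS=1.2, C=0.45, P=0.55)
print(f"estimated soil loss: {A:.1f} t/ha/yr")
```

In the GIS workflow described above, each factor would be a raster layer and the product computed cell by cell before averaging over each micro-watershed.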

Keywords: erosion susceptibility zones, morphometric characteristics, prioritization, remote sensing and GIS, universal soil loss equation

Procedia PDF Downloads 290
10621 Investigations into the in situ Enterococcus faecalis Biofilm Removal Efficacies of Passive and Active Sodium Hypochlorite Irrigant Delivered into Lateral Canal of a Simulated Root Canal Model

Authors: Saifalarab A. Mohmmed, Morgana E. Vianna, Jonathan C. Knowles

Abstract:

The issue of apical periodontitis has received considerable critical attention. Bacteria integrate into communities attached to surfaces and consequently form biofilms. The biofilm structure provides bacteria with a series of protective mechanisms against antimicrobial agents and enhances pathogenicity (e.g., apical periodontitis). Sodium hypochlorite (NaOCl) has become the irrigant of choice for the elimination of bacteria from the root canal system based on its antimicrobial properties. The aim of the study was to investigate the effect of different agitation techniques on the efficacy of 2.5% NaOCl in eliminating biofilm from the surface of the lateral canal, using residual biofilm and the removal rate of biofilm as outcome measures. The effect of canal complexity (lateral canal) on the efficacy of the irrigation procedure was also assessed. Forty root canal models (n = 10 per group) were manufactured using 3D printing and resin materials. Each model consisted of two halves of an 18 mm long root canal with apical size 30 and taper 0.06, and a lateral canal 3 mm long and 0.3 mm in diameter, located 3 mm from the apical terminus. E. faecalis biofilms were grown on the apical 3 mm and the lateral canal of the models for 10 days in Brain Heart Infusion broth. Biofilms were stained with crystal violet for visualisation. The model halves were reassembled, attached to an apparatus, and tested under a fluorescence microscope. A syringe-and-needle irrigation protocol was performed using 9 mL of 2.5% NaOCl for 60 seconds. The irrigant was either left stagnant in the canal or activated for 30 seconds using manual (gutta-percha), sonic, or ultrasonic methods. Images were then captured every second using an external camera. The percentages of residual biofilm were measured using image analysis software, and the data were analysed using generalised linear mixed models.
The greatest removal was associated with the ultrasonic group (66.76%), followed by the sonic (45.49%), manual (43.97%), and passive irrigation (control) (38.67%) groups. No marked difference in the efficiency of NaOCl at removing biofilm was found between the simple and complex anatomy models (p = 0.098). The removal efficacy of NaOCl on the biofilm was limited to the 1 mm level of the lateral canal. Agitation of NaOCl results in better penetration of the irrigant into the lateral canals, and ultrasonic agitation improved the removal of bacterial biofilm.
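The residual-biofilm percentages above come from image analysis of stained frames. A minimal sketch of that step, using synthetic thresholded masks in place of real microscope images, might look like this:

```python
import numpy as np

# Residual biofilm as the fraction of stained pixels remaining after
# irrigation, relative to the baseline frame. The boolean masks below are
# synthetic stand-ins for thresholded microscope images, not study data.
rng = np.random.default_rng(1)
baseline = rng.random((200, 200)) < 0.9               # stained pixels before irrigation
after = baseline & (rng.random((200, 200)) < 0.4)     # pixels still stained afterwards

residual_pct = 100.0 * after.sum() / baseline.sum()
removal_pct = 100.0 - residual_pct
print(f"residual biofilm: {residual_pct:.1f}%, removed: {removal_pct:.1f}%")
```

Computed per frame across the 60 s of imaging, such percentages give the removal-rate curves that the mixed models then compare across agitation groups.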

Keywords: 3D printing, biofilm, root canal irrigation, sodium hypochlorite

Procedia PDF Downloads 214
10620 Liesegang Phenomena: Experimental and Simulation Studies

Authors: Vemula Amalakrishna, S. Pushpavanam

Abstract:

Change and motion characterize and persistently reshape the world around us, on scales from molecular to global. The subtle interplay between change (reaction) and motion (diffusion) gives rise to astonishingly intricate spatial and temporal patterns. Such pattern formation in nature has been intellectually appealing to many scientists since antiquity. Periodic precipitation patterns, also known as Liesegang patterns (LPs), are one of the most stimulating examples of such self-assembling reaction-diffusion (RD) systems. LP formation has great potential in micro- and nanotechnology. So far, research on LPs has concentrated mostly on how these patterns form, retrieving information to build a universal mathematical model for them. Researchers have developed various theoretical models to comprehensively reconstruct the geometrical diversity of LPs. To the best of our knowledge, simulation studies of LPs assume arbitrary values of the RD parameters to explain experimental observations qualitatively. In this work, existing models were studied to understand the mechanism behind this phenomenon, and the challenges pertaining to these models were identified and explained. These models are not computationally efficient due to the presence of a discontinuous precipitation rate in the RD equations. To overcome this computational challenge, smoothened Heaviside functions have been introduced, which also reduces the computational time. Experiments were performed using a conventional LP system (AgNO₃-K₂Cr₂O₇) to understand the effects of different gels and temperatures on the patterns formed. The model is extended to real parameter values to compare the simulated results with experimental data in both 1-D (Cartesian test tubes) and 2-D (cylindrical tubes and Petri dishes).
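The smoothened Heaviside function mentioned above replaces the discontinuous precipitation switch with a steep but differentiable ramp. The sketch below uses a tanh-based form with an assumed width parameter; the authors' exact regularisation may differ.

```python
import numpy as np

# A generic smoothed Heaviside step: as eps -> 0 this approaches the sharp
# precipitation switch H(c - c_star), but stays differentiable, which eases
# stiff integration of the reaction-diffusion equations. The eps value and
# threshold below are assumptions for illustration.
def smooth_heaviside(x, eps=1e-2):
    """Smooth approximation of the Heaviside step H(x)."""
    return 0.5 * (1.0 + np.tanh(x / eps))

c = np.linspace(-0.1, 0.1, 5)   # local supersaturation minus threshold
c_star = 0.0
rate_switch = smooth_heaviside(c - c_star)
print(rate_switch)
```

Multiplying the precipitation rate term by this switch turns nucleation on only where the concentration exceeds the supersaturation threshold.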

Keywords: reaction-diffusion, spatio-temporal patterns, nucleation and growth, supersaturation

Procedia PDF Downloads 142
10619 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint

Authors: Juliane Spaak

Abstract:

A life-cycle assessment was performed, looking at seven retrofit projects in New Zealand using LCAQuickV3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it is only a 20% carbon investment to transform and extend a building’s life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings and resilience using FEMA P58 and PACT software. With the increasing interest in Zero Carbon targets, significant changes in the building and construction sector are required. Emissions for buildings arise from both embodied carbon and operations. Based on the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of our cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building’s embodied carbon. New Zealand’s building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity to move away from constructing code-minimum structures that are designed for life safety but are frequently ‘disposable’ after a moderate earthquake event, especially in relation to a structure’s ability to minimize damage. This means weaker buildings sit as ‘carbon liabilities’, with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the buildings sector, as breathing new life into a building’s structure is vastly more sustainable than the highest quality ‘green’ new builds, which are inherently more carbon-intensive. 
The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society’s desire for a lower carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming the asset’s revenue generation. Across the global real estate market, tenants are increasingly demanding that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and ‘sustainable design’ has yet to fully mature but is, in a wider context, of profound consequence. A whole-of-life carbon perspective on a building means designing for the likely natural hazards within the asset’s expected lifespan, be that earthquakes, storms, bushfires, fires, and so on, with financial mitigation (e.g., insurance) part, but not all, of the picture.

Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient

Procedia PDF Downloads 59
10618 Modeling Battery Degradation for Electric Buses: Assessment of Lifespan Reduction from In-Depot Charging

Authors: Anaissia Franca, Julian Fernandez, Curran Crawford, Ned Djilali

Abstract:

A methodology to estimate the state-of-charge (SOC) of battery electric buses, including degradation effects, for a given driving cycle is presented to support long-term techno-economic analysis integrating electric buses and charging infrastructure. The degradation mechanisms, characterized by both capacity and power fade over time, have been modeled using an electrochemical model for Li-ion batteries. Iterative changes in the negative electrode film resistance and the decrease in available lithium as a function of utilization are simulated for every cycle. The cycles are formulated to follow typical transit bus driving patterns. The power and capacity decay resulting from the degradation model are introduced as inputs to a longitudinal chassis dynamic analysis that calculates the power consumption of the bus for a given driving cycle, yielding the state-of-charge of the battery as a function of time. The method is applied to an in-depot charging scenario, in which the bus is charged exclusively at the depot, overnight and to its full capacity. This scenario is run both with and without degradation effects over time to illustrate the significant impact of degradation mechanisms on bus performance when conducting feasibility studies for a fleet of electric buses. The impact of battery degradation on battery lifetime is also assessed. The modeling tool can further be used to optimize component sizing and charging locations for electric bus deployment projects.
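The cycle-by-cycle bookkeeping of capacity and power fade can be sketched with a toy loop in which each driving day grows the negative-electrode film resistance and removes a small fraction of cyclable lithium. All parameter values below are assumptions for illustration, not fitted electrochemical constants.

```python
# Toy degradation loop: one in-depot-charged driving day per iteration.
# Each cycle grows the SEI film resistance (power fade) and removes a small
# fraction of cyclable lithium (capacity fade). Values are illustrative only.
capacity_kwh = 300.0        # assumed nominal pack capacity
film_resistance = 0.010     # ohms, negative-electrode film (assumed)
daily_energy_kwh = 180.0    # assumed energy drawn per driving day

fade_per_cycle = 0.00008    # fractional lithium loss per full cycle (assumed)
resistance_growth = 1.00005 # film-resistance growth factor per cycle (assumed)

for day in range(365 * 5):  # five years of overnight-charged service
    film_resistance *= resistance_growth
    capacity_kwh *= (1.0 - fade_per_cycle)

end_soc = 1.0 - daily_energy_kwh / capacity_kwh  # SOC left after a day's route
print(f"capacity after 5 yr: {capacity_kwh:.1f} kWh, end-of-day SOC: {end_soc:.2f}")
```

The full methodology replaces these constant per-cycle factors with an electrochemical model whose fade depends on the actual utilization of each simulated driving cycle.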

Keywords: battery electric bus, E-bus, in-depot charging, lithium-ion battery, battery degradation, capacity fade, power fade, electric vehicle, SEI, electrochemical models

Procedia PDF Downloads 309
10617 The Foundation Binary-Signals Mechanics and Actual-Information Model of Universe

Authors: Elsadig Naseraddeen Ahmed Mohamed

Abstract:

In contrast to the uncertainty and complementarity principles, it is shown in the present paper that the probability of the simultaneous occupation of any definite values of coordinates by any definite values of momentum and energy, at any definite instant of time, can be described by a binary definite function. This function is equivalent to the difference between the numbers of occupation and evacuation epochs up to that time, and also to the number of exchanges between those occupation and evacuation epochs up to that time, modulo two. These binary definite quantities can be defined at every point on the real line of time, so they form a binary signal that represents a complete mechanical description of physical reality. The times of these exchanges mark the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated, using the fact that the universe's events actually extend along the positive and negative real line of time in one direction of extension as the number of exchanges increases. There therefore exists a noninvertible transformation matrix, definable as the product of an invertible rotation matrix and a noninvertible scaling matrix, which change the direction and magnitude of the exchange-event vector, respectively. These noninvertible transformations will be called actual transformations, in contrast to information transformations, by which the universe's events transformed by actual transformations can be navigated backward and forward on the real line of time; these information transformations are derived as elements of a group and associated with their corresponding actual transformations.
The actual and information model of the universe is derived by assuming the existence of a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy; after that time, the universe begins expanding in spacetime. This assumption makes Laplace's demon, who at one moment could measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict the universe's entire future and past, superfluous. We only need to establish analog-to-digital converters to sense the binary signals that determine the boundaries of the occupation and evacuation epochs of the definite values of coordinates, relative to their origin, by the definite values of momentum and energy, as present events of the universe; from these, its past and future events can be predicted approximately, with high precision.

Keywords: binary-signal mechanics, actual-information model of the universe, actual-transformation, information-transformation, uncertainty principle, Laplace's demon

Procedia PDF Downloads 166
10616 Spatial Mapping of Variations in Groundwater of Taluka Islamkot Thar Using GIS and Field Data

Authors: Imran Aziz Tunio

Abstract:

Islamkot is an underdeveloped sub-district (taluka) in the Tharparkar district of Sindh province, Pakistan, located between latitudes 24°25'19.79"N and 24°47'59.92"N and longitudes 70°1'13.95"E and 70°32'15.11"E. Islamkot has an arid desert climate, and the region is generally devoid of perennial rivers, canals, and streams. It is highly dependent on rainfall, which is not considered a reliable surface water source, and groundwater has been the only key source of water for many centuries. To assess groundwater potential, an electrical resistivity survey (ERS) was conducted in Islamkot Taluka. Groundwater investigations comprising 128 vertical electrical soundings (VES) were collected to determine the groundwater potential and to obtain qualitative and quantitative layered resistivity parameters. A PASI Model 16 GL-N resistivity meter was used with a Schlumberger electrode configuration, with half current electrode spacing (AB/2) ranging from 1.5 to 100 m and potential electrode spacing (MN/2) from 0.5 to 10 m; the data were acquired with a maximum current electrode spacing of 200 m. Data processing for the delineation of dune sand aquifers involved data inversion, and the interpretation of the inversion results was aided by forward modeling. The measured geo-electrical parameters were examined with Interpex IX1D software, and the apparent resistivity curves and synthetic layered model parameters were mapped in the ArcGIS environment using the Inverse Distance Weighting (IDW) interpolation technique. Qualitative interpretation of the VES data shows that the number of geo-electrical layers in the area varies from three to four, with different resistivity values detected: of the 128 VES model curves, 42 are three-layered and 86 are four-layered. The resistivity of the first subsurface layer (loose surface sand) varied from 16.13 Ωm to 3353.3 Ωm and its thickness from 0.046 m to 17.52 m.
The resistivity of the second subsurface layer (semi-consolidated sand) varied from 1.10 Ωm to 7442.8 Ωm and its thickness from 0.30 m to 56.27 m. The resistivity of the third subsurface layer (consolidated sand) varied from 0.00001 Ωm to 3190.8 Ωm and its thickness from 3.26 m to 86.66 m. The resistivity of the fourth subsurface layer (silt and clay) varied from 0.0013 Ωm to 16264 Ωm and its thickness from 13.50 m to 87.68 m. The Dar Zarrouk parameters range as follows: longitudinal unit conductance S from 0.00024 to 19.91 mho; transverse unit resistance T from 7.34 to 40080.63 Ωm²; longitudinal resistivity RS from 1.22 to 3137.10 Ωm; and transverse resistivity RT from 5.84 to 3138.54 Ωm. The ERS data and Dar Zarrouk parameters were mapped, revealing that the study area has groundwater potential in the subsurface.
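The Dar Zarrouk parameters quoted above follow directly from the layered model: S = Σ hᵢ/ρᵢ, T = Σ hᵢρᵢ, RS = H/S and RT = T/H, with H the total thickness. A sketch with an illustrative four-layer column (values chosen within the reported ranges, not an actual sounding):

```python
# Dar Zarrouk parameters for a layered resistivity model. Layer thicknesses
# h_i (m) and resistivities rho_i (ohm-m) below are illustrative assumptions.
thickness = [5.0, 20.0, 40.0, 50.0]        # h_i, metres
resistivity = [300.0, 80.0, 25.0, 10.0]    # rho_i, ohm-m

H = sum(thickness)                                        # total thickness
S = sum(h / r for h, r in zip(thickness, resistivity))    # longitudinal conductance (mho)
T = sum(h * r for h, r in zip(thickness, resistivity))    # transverse resistance (ohm-m^2)
RS = H / S                                                # longitudinal resistivity (ohm-m)
RT = T / H                                                # transverse resistivity (ohm-m)

print(f"H={H} m, S={S:.3f} mho, T={T:.0f} ohm-m^2, RS={RS:.1f}, RT={RT:.1f} ohm-m")
```

Computing these per VES station and interpolating with IDW yields the mapped parameter surfaces described in the abstract.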

Keywords: electrical resistivity survey, GIS & RS, groundwater potential, environmental assessment, VES

Procedia PDF Downloads 79
10615 Foreign Investment, Technological Diffusion and Competitiveness of Exports: A Case for Textile Industry in Pakistan

Authors: Syed Toqueer Akhter, Muhammad Awais

Abstract:

Pakistan is a country gifted with naturally abundant resources, which could be a pathway toward a prosperous and developed country. Pakistan is the fourth largest exporter of textiles in the world, yet with the passage of time the competitiveness of these exports has been subject to decline. With many international players in the textile world, such as China, Bangladesh, India, and Sri Lanka, Pakistan needs to put up a lot of effort to compete with these countries. This research paper determines the impact of Foreign Direct Investment upon technological diffusion and how significantly it may affect the export performance of the country. It also demonstrates that with an increase in Foreign Direct Investment, technological diffusion, strong property rights, and the use of different policy tools, the export competitiveness of the country could be improved. The research has been carried out using time series data from 1995 to 2013, and the results have been estimated using competing econometric models, such as robust regression and generalized least squares, so as to consolidate comprehensively the impact of foreign investment and technological diffusion upon export competitiveness. A distributed lag model has also been used to encompass the lagged effect of the policy tool variables used by the government. Model estimates entail that FDI and technological diffusion do have a significant impact on the competitiveness of the exports of Pakistan. It may also be inferred that the competitiveness of the textile sector requires an integrated policy framework, primarily including the reduction of interest rates, the provision of subsidies, and the manufacturing of value added products.
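A distributed lag specification of the kind described can be sketched as an ordinary least-squares fit with a lagged regressor. The series below are synthetic stand-ins (the study's actual 1995-2013 data are not reproduced here), and the variable names are purely illustrative:

```python
import numpy as np

# Finite distributed-lag regression on synthetic data:
#   exports_t = b0 + b1 * fdi_t + b2 * fdi_{t-1} + e_t
rng = np.random.default_rng(0)
n = 40
fdi = rng.normal(10.0, 2.0, n)                     # synthetic FDI series
exports = 5.0 + 0.8 * fdi[1:] + 0.3 * fdi[:-1] + rng.normal(0.0, 0.1, n - 1)

# Design matrix: intercept, contemporaneous FDI, one-period lag of FDI
X = np.column_stack([np.ones(n - 1), fdi[1:], fdi[:-1]])
beta, *_ = np.linalg.lstsq(X, exports, rcond=None)  # OLS estimates
```

Robust regression and GLS differ from this only in how observations are weighted; the lag structure is the same.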

Keywords: high technology export, robust regression, patents, technological diffusion, export competitiveness

Procedia PDF Downloads 486
10614 The Impact of Online Advertising on Generation Y’s Purchase Decision in Malaysia

Authors: Mui Joo Tang, Eang Teng Chan

Abstract:

Advertising is commonly used to foster the sales and reputation of an institution. It was initially the growth of print advertising that increased the number of newspapers and periodicals and their circulation. The rise of the Internet and online media has somewhat blurred the role of media and advertising, though the intention is still to reach out to audiences and to increase sales. The relationship between advertising and audience in a product purchase through persuasion has been developing from print media to online media. Given the changing media environment and audience, the concern of this research is to study the impact of online advertising on such a relationship cycle. The content of online advertisements comprises text, multimedia, photos, audio, and video. The messages in such content formats may indeed have an impact on the audience and on perceived credibility. This study therefore reflects the effectiveness of online advertisements and their influence on generation Y's purchasing behavior. This study uses Media Dependency Theory to analyze the relationship between the impact of online advertisements and the media usage patterns of generation Y. The Hierarchy of Effects Model is used as a marketing communication model to study the effectiveness of advertising and further to determine the impact of online advertisements on generation Y's purchase decision making. This research uses an online survey to reach a sample of generation Y. The results show that online advertisements do not affect purchase decision making much, even though generation Y relies greatly on media content, including online advertisements, for information and believes in its credibility. A few other external factors may interrupt the effectiveness of online advertising. The most obvious influence on purchasing behavior is actually derived from peers.

Keywords: generation Y, purchase decision, print media, online advertising, persuasion

Procedia PDF Downloads 512
10613 The Influence of Addition of Asparagus Bean Powder (Psophocarpus tetragonolobus) on Gonad Maturity of Nilem Carp (Osteochilus hasselti) at the Floating Net Cage of Cirata Reservoir

Authors: Rita Rostika, Junianto, Zulfiqar W. Ibrahim, Iskandar, Lantun P. Dewanti

Abstract:

The purpose of this research is to determine the influence of asparagus bean powder and its most effective administration dose for improving the gonad maturity of nilem carp (Osteochilus hasselti). The research was conducted from October to July 2017 at Cirata Reservoir and the Aquaculture Laboratory, Faculty of Fisheries and Marine Sciences, Padjadjaran University, Jatinangor. The research employs an experimental method using a Completely Randomized Design (RAL) with six treatments and three repetitions. The treatments include the addition of asparagus bean powder at 0% (control), 4% per kg of feed, 5% per kg of feed, 6% per kg of feed, and 7% per kg of feed, as well as the addition of vitamin E as a comparison treatment. The results show that the addition of asparagus bean powder to the feed influences the gonad maturity of nilem carp, as shown by the Gonadosomatic Index (GSI), fecundity, egg diameter, and the fraction of eggs reaching the maturity phase, or GVBD (Germinal Vesicle Breakdown). The most effective dose is the addition of asparagus bean powder at 7% per kg of feed, with an average GSI of 15.02%, relative fecundity of 137 eggs/g of parent fish weight, egg diameter of 1.263 mm, and eggs reaching the maturity phase (GVBD) of 78.15%.
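The GSI and relative fecundity figures above follow standard definitions; a small sketch with hypothetical weights and egg counts (not the study's raw data):

```python
def gonadosomatic_index(gonad_weight_g, body_weight_g):
    """GSI (%) = gonad weight / body weight * 100 (standard definition)."""
    return gonad_weight_g / body_weight_g * 100.0

def relative_fecundity(total_eggs, parent_weight_g):
    """Eggs per gram of parent body weight."""
    return total_eggs / parent_weight_g

# Hypothetical parent fish: 12 g gonads in an 80 g fish carrying 10960 eggs
gsi = gonadosomatic_index(12.0, 80.0)   # 15.0 %
rf = relative_fecundity(10960, 80.0)    # 137.0 eggs/g
```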

Keywords: asparagus bean powder, nilem carp, gonad maturity, Cirata reservoir

Procedia PDF Downloads 144
10612 Evaluation of the Need for Seismic Retrofitting of the Foundation of a Five Story Steel Building Because of Adding of a New Story

Authors: Mohammadreza Baradaran, F. Hamzezarghani

Abstract:

Earthquakes occur every year in different parts of the world with different strengths, and thousands of people lose their lives because of this natural phenomenon. One of the reasons for the destruction of buildings by earthquakes, in addition to the passage of time, the effect of environmental conditions, and the wearing-out of a building, is changing the use of the building and altering its structure and skeleton. A large number of structures located in earthquake-prone areas were designed according to old seismic design regulations that are now outdated. In addition, many of the major earthquakes of recent years emphasize retrofitting to decrease the dangers of earthquakes. Seismic retrofitting of existing structures is one of the most effective methods for reducing the hazard and compensating for the lack of resistance caused by existing weaknesses. In this article, the foundation of a five-story steel building with a moment frame system has been evaluated seismically, and the effect of adding a story to this five-story steel building has been evaluated and analyzed. The considered building has a steel skeleton with a joist and clay block roof; after the addition of a story, it becomes a six-story building with a foundation of 1416 square meters, and the height of the sixth story reaches 18.95 meters above ground level. After analysis of the foundation model, the behavior of the soil under the foundation and the behavior of the foundation elements have been evaluated; the deformation of the foundation and the stress in the soil under the foundation have been determined for several load combinations in the SAFE software, and finally the need for retrofitting of the building's foundation has been determined.

Keywords: seismic, rehabilitation, steel building, foundation

Procedia PDF Downloads 264
10611 Physical and Chemical Alternative Methods of Fresh Produce Disinfection

Authors: Tuji Jemal Ahmed

Abstract:

Fresh produce is an essential component of a healthy diet. However, it can also be a potential source of pathogenic microorganisms that cause foodborne illnesses. Traditional disinfection methods, such as washing with water and chlorine, have limitations and may not effectively remove or inactivate all microorganisms. This has led to the development of alternative methods of fresh produce disinfection, including physical and chemical methods. In this paper, we explore these new physical and chemical methods of fresh produce disinfection, their advantages and disadvantages, and their suitability for different types of produce. Physical methods of disinfection, such as ultraviolet (UV) radiation and high-pressure processing (HPP), are crucial in ensuring the microbiological safety of fresh produce. UV radiation uses short-wavelength UV-C light to damage the DNA and RNA of microorganisms, and HPP applies high levels of pressure to fresh produce to reduce the microbial load. These physical methods are highly effective in killing a wide range of microorganisms, including bacteria, viruses, and fungi. However, they may not penetrate deep enough into the produce to kill all microorganisms and can alter its sensory characteristics. Chemical methods of disinfection, such as acidic electrolyzed water (AEW), ozone, and peroxyacetic acid (PAA), are also important in ensuring the microbiological safety of fresh produce. AEW uses a low concentration of hypochlorous acid and a high concentration of hydrogen ions to inactivate microorganisms, ozone gas damages the cell membranes and DNA of microorganisms, and PAA uses a combination of hydrogen peroxide and acetic acid to inactivate microorganisms. These chemical methods are highly effective in killing a wide range of microorganisms, but they may cause discoloration or changes in the texture and flavor of some produce and may require specialized equipment and trained personnel to generate and apply. In conclusion, the selection of the most suitable method of fresh produce disinfection should take into consideration the type of produce, the level of microbial contamination, the effectiveness of the method in reducing the microbial load, and any potential negative impacts on the sensory characteristics, nutritional composition, and safety of the produce.

Keywords: fresh produce, pathogenic microorganisms, foodborne illnesses, disinfection methods

Procedia PDF Downloads 58
10610 Deep Learning Approach for Chronic Kidney Disease Complications

Authors: Mario Isaza-Ruget, Claudia C. Colmenares-Mejia, Nancy Yomayusa, Camilo A. González, Andres Cely, Jossie Murcia

Abstract:

Quantification of the risks associated with the development of complications from chronic kidney disease (CKD) through accurate survival models can help with patient management. A retrospective cohort study was carried out that included patients diagnosed with CKD in a primary care program and followed up between 2013 and 2018. Time-dependent and static covariates associated with demographic, clinical, and laboratory factors were included. Deep learning (DL) survival analyses were developed for three CKD outcomes: CKD stage progression, a >25% decrease in estimated glomerular filtration rate (eGFR), and renal replacement therapy (RRT). Models were evaluated and compared with Random Survival Forest (RSF) based on the concordance index (C-index) metric. 2,143 patients were included. Two models were developed for each outcome. The Deep Neural Network (DNN) model reported C-index = 0.9867 for CKD stage progression, C-index = 0.9905 for reduction in eGFR, and C-index = 0.9867 for RRT. For the RSF model, C-index = 0.6650 was reached for CKD stage progression, C-index = 0.6759 for decreased eGFR, and C-index = 0.8926 for RRT. DNN models applied in a survival analysis context, with consideration of longitudinal covariates at the start of follow-up, can predict renal stage progression, a significant decrease in eGFR, and RRT. The success of these survival models lies in the appropriate definition of survival times and the analysis of covariates, especially those that vary over time.
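The C-index used to compare the DNN and RSF models measures the fraction of comparable patient pairs ranked correctly by the model's risk score. A plain-Python sketch of Harrell's concordance index on toy data (not the cohort's data):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C: fraction of comparable pairs in which the higher-risk
    subject fails earlier. events: 1 = observed event, 0 = censored."""
    concordant = permissible = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i has an observed event
            # strictly before subject j's (event or censoring) time.
            if events[i] == 1 and times[i] < times[j]:
                permissible += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5     # ties in risk count half
    return concordant / permissible

# Toy example: perfectly ranked risks give C = 1.0
c = concordance_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.5, 0.2])
```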

Keywords: artificial intelligence, chronic kidney disease, deep neural networks, survival analysis

Procedia PDF Downloads 120
10609 A Comparative Study on Behavior Among Different Types of Shear Connectors using Finite Element Analysis

Authors: Mohd Tahseen Islam Talukder, Sheikh Adnan Enam, Latifa Akter Lithi, Soebur Rahman

Abstract:

Composite structures have made significant advances in construction applications during the last few decades. Composite structures are composed of structural steel shapes and reinforced concrete joined with shear connectors, which exploit each material's unique properties. Significant research has been conducted on the behavior and shear capacity of different types of connectors. Moreover, AISC 360-16, the "Specification for Structural Steel Buildings," provides a formula for the shear capacity of channel shear connectors. This research compares the behavior of C-type and L-type shear connectors using finite element analysis. Experimental results from the published literature are used to validate the finite element models. A 3-D finite element model (FEM) was built in ABAQUS 2017 to investigate the non-linear behavior and ultimate load-carrying capacity of the connectors using push-out tests. The effects of changes in connector dimensions were analyzed using this non-linear model in parametric investigations. The parametric study shows that increasing the length of the shear connector by 10 mm increases its shear strength by 21%. Shear capacity increased by 13% as the height was increased by 10 mm. Increasing the thickness of the specimen by 1 mm resulted in a 2% increase in shear capacity. However, the shear capacity of channel connectors was reduced by 21% due to an increase in thickness of 2 mm.
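For reference, the channel-connector capacity formula in AISC 360-16 (Eq. I8-2, to the best of our reading of the specification) can be evaluated as below; the channel dimensions and material properties are hypothetical, not those of the tested specimens:

```python
import math

def channel_connector_strength(tf, tw, la, fc, Ec):
    """Nominal strength of one channel shear connector, AISC 360-16
    Eq. I8-2: Qn = 0.3 * (tf + 0.5*tw) * la * sqrt(f'c * Ec).
    tf, tw, la in mm; f'c, Ec in MPa -> Qn in N (consistent SI units)."""
    return 0.3 * (tf + 0.5 * tw) * la * math.sqrt(fc * Ec)

# Hypothetical channel: flange 8 mm, web 5 mm, length 100 mm,
# concrete f'c = 30 MPa with Ec = 25000 MPa
qn = channel_connector_strength(8.0, 5.0, 100.0, 30.0, 25000.0)  # ~273 kN
```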

Keywords: finite element method, channel shear connector, angle shear connector, ABAQUS, composite structure, shear connector, parametric study, ultimate shear capacity, push-out test

Procedia PDF Downloads 103
10608 Designing Agile Product Development Processes by Transferring Mechanisms of Action Used in Agile Software Development

Authors: Guenther Schuh, Michael Riesener, Jan Kantelberg

Abstract:

Due to the volatility of markets and the shortening of product lifecycles, manufacturing companies from high-wage countries nowadays face the challenge of placing more innovative products on the market within ever shorter development times. At the same time, volatile customer requirements have to be satisfied in order to differentiate successfully from market competitors. One potential approach to address these challenges is provided by agile values and principles. These agile values and principles have already proved their success within software development projects in the form of management frameworks like Scrum or concrete procedure models such as Extreme Programming or Crystal Clear. Those models lead to significant improvements regarding quality, costs, and development time and are therefore used within most software development projects. Motivated by this success within the software industry, manufacturing companies have tried to transfer agile mechanisms of action to the development of hardware products. Though first empirical studies show similar effects in the agile development of hardware products, no comprehensive procedure model for the design of development iterations has yet been developed for hardware development, due to the different constraints of the two domains. For this reason, this paper focuses on the design of agile product development processes by transferring mechanisms of action used in agile software development to product development. This is conducted by decomposing the individual systems 'product development' and 'agile software development' into relevant elements and then symbiotically composing the elements of both systems with respect to the design of agile product development processes. In a first step, existing product development processes are described following existing approaches of systems theory.
By analyzing existing case studies from industrial companies as well as academic approaches, characteristic objectives, activities, and artefacts are identified within a target, action, and object system. In partial model two, mechanisms of action are derived from existing procedure models of agile software development. These mechanisms of action are classified into a superior strategy level; a system level comprising characteristic, domain-independent activities and their cause-effect relationships; and an activity-based element level. Within partial model three, the influence of the identified agile mechanisms of action on the characteristic system elements of product development processes is analyzed. For this purpose, the target, action, and object system of product development is compared with the strategy, system, and element levels of the agile mechanisms of action by using graph theory. Furthermore, the necessity of the existence of activities within an iteration can be determined by defining activity-specific degrees of freedom. Based on this analysis, agile product development processes are designed in the form of different types of iterations in a last step. By defining iteration-differentiating characteristics and their interdependencies, a logic is developed for the configuration of activities, their form of execution, and the relevant artefacts for a specific iteration. Furthermore, characteristic types of iteration for agile product development are identified.

Keywords: activity-based process model, agile mechanisms of action, agile product development, degrees of freedom

Procedia PDF Downloads 189
10607 Basics of Gamma Ray Burst and Its Afterglow

Authors: Swapnil Kumar Singh

Abstract:

Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The detection of delayed emission at X-ray, optical, and radio wavelengths, or "afterglow," following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs do appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived "afterglow" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio). It is a slowly fading emission at longer wavelengths created by collisions between the burst ejecta and interstellar gas. At X-ray wavelengths, the GRB afterglow fades quickly at first and then transitions to a less steep drop-off (further phases follow, which we ignore here). During these early phases, the X-ray afterglow has a spectrum that looks like a power law: flux F ∝ E^β, where E is the energy and β is a number called the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta.
In many ways, the term "reverse" shock can be misleading; this shock is still moving outward from the rest frame of the star at relativistic velocity, but it is ploughing backward through the ejecta in their frame and is slowing the expansion. This reverse shock can be dynamically important, as it can carry energy comparable to the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations support this broad picture for afterglows in the spectral energy distribution of the afterglow of very bright GRBs. The bluer light (optical and X-ray) appears to follow a typical synchrotron forward shock expectation (note that the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). More research in GRBs and particle physics is needed in order to unfold the mysteries of the afterglow.
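The power-law spectrum F ∝ E^β described above is commonly characterized by fitting a straight line in log-log space, since log F = log F0 + β log E. A sketch on synthetic fluxes, in which the injected index is recovered by the fit:

```python
import numpy as np

# Synthetic power-law spectrum with spectral index beta = -1.1
E = np.array([1.0, 2.0, 5.0, 10.0])   # photon energy, arbitrary units
F = 3.0 * E ** -1.1                    # flux; F0 = 3.0 is arbitrary

# Straight-line fit in log-log space: slope = beta, intercept = ln(F0)
beta, lnF0 = np.polyfit(np.log(E), np.log(F), 1)
```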

Keywords: GRB, synchrotron, X-ray, isotropic energy

Procedia PDF Downloads 78
10606 The Big Bang Was Not the Beginning, but a Repeating Pattern of Expansion and Contraction of the Spacetime

Authors: Amrit Ladhani

Abstract:

The cyclic universe theory is a model of cosmic evolution according to which the universe undergoes endless cycles of expansion and cooling, each beginning with a "big bang" and ending in a "big crunch". In this paper, we propose a unique property of spacetime. This particular and marvelous nature of space shows us that space can stretch, expand, and shrink. This property of space manifests as the size of the universe changing over time, growing or shrinking. The observed accelerated expansion, which in the new theory relates to the stretching of shrunk space, is derived. This theory is based on three underlying notions. First, the Big Bang is not the beginning of spacetime; rather, in the very first fraction of a second, there was an infinite force of infinitely shrunk space in the cosmic singularity; that force gave rise to the big bang, caused the rapid growth of space, transformed all other forms of energy into new matter and radiation, and began a new period of expansion and cooling. Second, there was a previous phase leading up to it, with multiple cycles of contraction and expansion that repeat indefinitely. Third, the two principal long-range forces are the gravitational force and the repulsive force generated by shrunk space. They are the two most fundamental quantities in the universe that govern cosmic evolution, and they may provide the clockwork mechanism that operates our eternal cyclic universe. The universe will not continue to expand forever; there is, however, no need for dark energy and dark matter. This new model of spacetime and its unique properties enables us to describe a sequence of events from the Big Bang to the Big Crunch.

Keywords: dark matter, dark energy, cosmology, big bang and big crunch

Procedia PDF Downloads 61
10605 Modified Model-Based Systems Engineering Driven Approach for Defining Complex Energy Systems

Authors: Akshay S. Dalvi, Hazim El-Mounayri

Abstract:

The internal and external interactions between the complex structural and behavioral characteristics of a complex energy system result in unpredictable emergent behaviors. These emergent behaviors are not well understood, especially when modeled using the traditional top-down systems engineering approach. The intrinsic nature of current complex energy systems calls for an elegant solution that provides an integrated framework in Model-Based Systems Engineering (MBSE). This paper presents an MBSE-driven approach to define and handle the complexity that arises due to emergent behaviors. The approach provides guidelines for developing a system architecture that aids in predicting the complexity index of the system at different levels of abstraction. A framework that integrates indefinite and definite modeling aspects is developed to determine the complexity that arises during the development phase of the system. This framework provides a workflow for modeling complex systems using the Systems Modeling Language (SysML) that captures the system's requirements, behavior, structure, and analytical aspects at both the problem definition and solution levels. A system architecture for a district cooling plant is presented, which demonstrates the ability to predict the complexity index. The result suggests that complex energy systems like district cooling plants can be defined in an elegant manner using the unconventional modified MBSE-driven approach, which helps in estimating development time and cost.

Keywords: district cooling plant, energy systems, framework, MBSE

Procedia PDF Downloads 120
10604 VHL, PBRM1, and SETD2 Genes in Kidney Cancer: A Molecular Investigation

Authors: Rozhgar A. Khailany, Mehri Igci, Emine Bayraktar, Sakip Erturhan, Metin Karakok, Ahmet Arslan

Abstract:

Kidney cancer is the most lethal urological cancer, accounting for 3% of adult malignancies. VHL, a tumor-suppressor gene, is best known to be associated with renal cell carcinoma (RCC). VHL functions as a negative regulator of hypoxia-inducible factors. Recent sequencing efforts have identified several novel frequent mutations of histone-modifying and chromatin-remodeling genes in ccRCC (clear cell RCC), including PBRM1 and SETD2. The PBRM1 gene encodes the BAF180 protein, which is involved in the transcriptional activation and repression of selected genes. SETD2 encodes a histone methyltransferase, which may play a role in suppressing tumor development. In this study, the RNAs of 30 paired tumor and normal samples, grouped according to the type of kidney cancer and the clinical characteristics of the patients, including gender and average age, were examined by RT-PCR, SSCP, and sequencing techniques. VHL, PBRM1, and SETD2 expression was relatively down-regulated; however, this was not statistically significant (Wilcoxon signed-rank test, p > 0.05). Interestingly, contrary to previous studies, no mutation was observed. Understanding the molecular mechanisms involved in the pathogenesis of RCC has aided the development of molecular-targeted drugs for kidney cancer. Further analysis is required to identify the responsible genes beyond VHL, PBRM1, and SETD2 in kidney cancer.
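The Wilcoxon signed-rank test used for the paired tumor/normal comparison reduces to ranking absolute paired differences. A plain-Python sketch of the W statistic on hypothetical expression values (integer units are used so that rank ties are exact; p-value lookup is omitted):

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired samples.
    Zero differences are dropped; tied |differences| get average ranks."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1                       # scan the block of tied |d|
        avg_rank = (i + 1 + j) / 2.0     # average of 1-based ranks i+1..j
        for k in range(i, j):
            ranks[order[k]] = avg_rank
        i = j
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical paired tumor vs. normal expression levels (arbitrary units)
w = wilcoxon_w([12, 8, 15, 9, 11], [10, 10, 10, 10, 10])
```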

Keywords: kidney cancer, molecular biomarker, expression analysis, mutation screening

Procedia PDF Downloads 442
10603 Studying the Effectiveness of Using Narrative Animation on Students’ Understanding of Complex Scientific Concepts

Authors: Atoum Abdullah

Abstract:

The purpose of this research is to determine the extent to which computer animation and narration affect students' understanding of complex scientific concepts and improve their exam performance, compared with traditional lectures that include PowerPoint slides with text and static images. A mixed-method design was used for data collection, including quantitative and qualitative data. Quantitative data was collected using a pre- and post-test method and a close-ended questionnaire. Qualitative data was collected through an open-ended questionnaire. A pre- and post-test strategy was used to measure the level of students' understanding with and without the use of animation. The test included multiple-choice questions to test factual knowledge, open-ended questions to test conceptual knowledge, and diagram-labeling questions to test application knowledge. The results showed that students, on average, performed significantly higher on the post-test than on the pre-test in all areas of acquired knowledge. However, the increase in the post-test score with respect to the acquisition of conceptual and application knowledge was higher than the increase with respect to the acquisition of factual knowledge. This result demonstrates that animation is more beneficial for acquiring deeper, conceptual, and cognitive knowledge than for acquiring factual knowledge alone.
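A pre/post comparison of this kind is typically summarized by a paired t statistic on the score differences, t = mean(d) / (sd(d) / sqrt(n)). A sketch with hypothetical scores (not the study's data); the p-value step against the t distribution is omitted:

```python
import math

# Hypothetical pre- and post-test scores for six students (out of 100)
pre = [55, 60, 48, 62, 50, 58]
post = [70, 72, 65, 75, 66, 71]

d = [b - a for a, b in zip(pre, post)]   # per-student score gains
n = len(d)
mean_d = sum(d) / n
sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))  # sample sd
t_stat = mean_d / (sd_d / math.sqrt(n))  # paired t statistic, df = n - 1
```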

Keywords: animation, narration, science, teaching

Procedia PDF Downloads 158
10602 The Role of Information and Communication Technology to Enhance Transparency in Public Funds Management in the DR Congo

Authors: Itulelo Matiyabu Imaja, Manoj Maharaj, Patrick Ndayizigamiye

Abstract:

A lack of transparency in public funds management is observed in many African countries. The DR Congo is among the most corrupt countries in Africa, mainly due to a lack of transparency and accountability in public funds management. Corruption has a negative effect on the welfare of the country's citizens and on national economic growth. Public funds collection and allocation are the major areas in which malpractices such as bribery, extortion, embezzlement, nepotism, and other practices related to corruption are prevalent. Hence, there is a need to implement strong mechanisms to enforce transparency in public funds management. Many researchers have suggested control mechanisms for curbing corruption in public funds management, focusing mainly on law enforcement and administrative reforms, with little or no insight into the role that ICT can play in preventing and curbing corrupt behavior. In the Democratic Republic of Congo (DRC), there are slight indications that the government is integrating ICT to fight corruption in public funds collection and allocation. However, such government initiatives are at an infancy stage, with no tangible evidence on how ICT could be used effectively to address the issue of corruption in the context of the country. Hence, this research assesses the role that ICT can play in transparency in public funds management and suggests a framework for its adoption in the Democratic Republic of Congo. This research uses the revised Capability model (Capability, Empowerment, Sustainability model) as the guiding theoretical framework. The study uses an exploratory design methodology coupled with a qualitative approach to data collection and purposive sampling as the sampling strategy.

Keywords: corruption, DR congo, ICT, management, public funds, transparency

Procedia PDF Downloads 323
10601 Magnetic Investigation and 2½D Gravity Profile Modelling across the Beattie Magnetic Anomaly in the Southeastern Karoo Basin, South Africa

Authors: Christopher Baiyegunhi, Oswald Gwavava

Abstract:

The location and source of the Beattie magnetic anomaly (BMA) and the interconnectivity of geologic structures at depth have been a topic of investigation for over 30 years. Up to now, no relationship between geological structures (the interconnectivity of dolerite intrusions) at depth has been established. Therefore, the environmental impact of fracking the Karoo for shale gas could not be assessed, despite the fact that dolerite dykes are groundwater localizers in the Karoo. In this paper, we shed more light on the unanswered questions concerning the possible location of the source of the BMA and the connectivity at depth of geologic structures such as dolerite dykes and sills; this relationship needs to be established before the tectonic evolution of the Karoo basin can be fully understood and related to fracking of the Karoo for shale gas. The results of the magnetic investigation and the modelling of four gravity profiles crossing the BMA in the study area reveal the following: the anomaly, which is part of the Beattie magnetic anomaly, tends to divide into two anomalies and continues to trend in a NE-SW direction; the dominant gravity signature is of long wavelength, due to a deep source/interface that lies inland and shallows towards the coast; and the average depths to the tops of the shallow and deep magnetic sources were estimated to be approximately 0.6 km and 15 km, respectively. The BMA becomes stronger with depth, which could be an indication that the source is deep, possibly a buried body in the basement. The bean-shaped anomaly also behaves in a manner similar to the BMA; thus, it could possibly share the same source(s) with the BMA.
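Depth-to-source estimates of the kind quoted above (0.6 km and 15 km) are often obtained with spectral methods, where the log of the radially averaged power spectrum of the magnetic data falls off linearly with wavenumber at a rate set by the source depth (a Spector-Grant type analysis, ln P(k) ≈ const − 2hk for k in rad/km). An idealized, noise-free sketch on a synthetic spectrum, not the profile data itself:

```python
import numpy as np

# Synthetic radially averaged power spectrum for sources at depth h_true:
# ln P(k) = const - 2 * h * k, so a line fit gives depth = -slope / 2.
k = np.linspace(0.05, 1.0, 20)   # wavenumber, rad/km
h_true = 15.0                     # assumed source depth, km (illustrative)
lnP = 4.0 - 2.0 * h_true * k      # idealized, noise-free log spectrum

slope, _ = np.polyfit(k, lnP, 1)
depth = -slope / 2.0              # recovers h_true for this clean spectrum
```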

Keywords: Beattie magnetic anomaly, magnetic sources, modelling, Karoo Basin

Procedia PDF Downloads 539
10600 Clean Sky 2 – Project PALACE: Aeration’s Experimental Sound Velocity Investigations for High-Speed Gerotor Simulations

Authors: Benoît Mary, Thibaut Gras, Gaëtan Fagot, Yvon Goth, Ilyes Mnassri-Cetim

Abstract:

A Gerotor pump is composed of an external and an internal gear with conjugate cycloidal profiles. From the suction to the delivery port, the fluid is transported inside cavities formed by the teeth and driven by the shaft. From a geometric standpoint, it is worth noting that the internal gear has one tooth less than the external one. Simcenter Amesim v.16 includes a new submodel (THCDGP0) for modelling the behavior of hydraulic Gerotor pumps. This submodel accounts for leakage between teeth tips using Poiseuille and Couette flow contributions. From the 3D CAD model of the studied pump, the “CAD import” tool extracts the main geometrical characteristics, and the THCDGP0 submodel computes the evolution of each cavity volume and its position relative to the suction or delivery areas. This module, based on international publications, gives robust results up to 6 000 rpm for pressures above atmospheric level. At higher rotational speeds or lower pressures, oil aeration and cavitation effects become significant and sharply degrade the pump’s performance. The liquid used in hydraulic systems always contains some gas, which is dissolved in the liquid at high pressure and tends to be released in free form (i.e., undissolved, as bubbles) when the pressure drops. In addition to gas release and dissolution, the liquid itself may vaporize due to cavitation. To model the relative density of the equivalent fluid, a modified Henry’s law is applied in Simcenter Amesim v.16 to predict the fraction of undissolved gas or vapor. Three parietal pressure sensors were set up upstream of the pump to estimate the sound speed in the oil, and analytical models were compared with the experimental sound speed to estimate the occluded gas content. Supplying the Simcenter Amesim v.16 model with these experimental reference points successfully improved the simulation results up to 14 000 rpm.
This work provides a sound foundation for designing the next generation of Gerotor pumps, reaching rotational speeds above 25 000 rpm. The results of the improved module will be compared to tests on this new pump demonstrator.
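The link between free-gas content and measured sound speed can be sketched with Wood’s equation for a homogeneous liquid-gas mixture. This is a minimal illustration, not the analytical models actually used in the project; the oil and air property values below are generic assumptions:

```python
import math

def mixture_sound_speed(alpha, rho_l=850.0, c_l=1400.0, rho_g=1.2, c_g=340.0):
    """Wood's equation: sound speed of a liquid carrying a free-gas
    volume fraction alpha (oil/air property values are assumptions)."""
    rho_m = alpha * rho_g + (1.0 - alpha) * rho_l          # mixture density
    # Mixture compressibility is the volume-weighted sum of phase compressibilities
    compressibility = alpha / (rho_g * c_g**2) + (1.0 - alpha) / (rho_l * c_l**2)
    return 1.0 / math.sqrt(rho_m * compressibility)

print(round(mixture_sound_speed(0.0)))   # → 1400 (pure oil)
print(round(mixture_sound_speed(0.01)))  # → 128 (1% free gas collapses the sound speed)
```

The steep sensitivity of sound speed to even a small undissolved-gas fraction is what makes the upstream pressure measurements an effective probe of occluded gas content.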

Keywords: gerotor pump, high speed, numerical simulations, aeronautic, aeration, cavitation

Procedia PDF Downloads 121
10599 Testing Nature Based Solutions for Air Quality Improvement: Aveiro Case Study

Authors: A. Ascenso, C. Silveira, B. Augusto, S. Rafael, S. Coelho, J. Ferreira, A. Monteiro, P. Roebeling, A. I. Miranda

Abstract:

Innovative nature-based solutions (NBSs) can provide answers to the challenges that urban areas currently face due to urban densification and extreme weather conditions. The recognized effects of NBSs include, among others, improved quality of life, better mental and physical health, and improved air quality. Part of the work developed within the UNaLab project, which aims to guide cities in developing and implementing their own co-creative NBSs, is to assess the impacts of NBSs on air quality, using the city of Eindhoven as a case study. The state-of-the-art online air quality modelling system WRF-Chem was applied to simulate meteorological and concentration fields over the study area with a spatial resolution of 1 km² for the year 2015. The baseline simulation (without NBSs) was validated by comparing the model results with monitored data retrieved from the Eindhoven air quality database, showing adequate model performance. In addition, land use changes were applied in a set of simulations to assess the effects of different types of NBSs. Finally, these simulations were compared with the baseline scenario and the impacts of the NBSs were assessed. Reductions in pollutant concentrations, namely of NOx and PM, were found after the application of the NBSs in the Eindhoven study area. This work is particularly important to support public planners and decision makers in understanding the effects of their actions and in planning more sustainable cities for the future.
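At the post-processing stage, a scenario comparison of this kind reduces to cell-by-cell relative differences between the NBS run and the baseline run. A minimal sketch with hypothetical concentration grids (the values and function name are illustrative, not UNaLab results):

```python
import numpy as np

def relative_change(baseline, scenario):
    """Percent change in concentration per grid cell between a baseline
    run and a scenario run (negative values indicate improvement)."""
    return 100.0 * (scenario - baseline) / baseline

# Hypothetical 2x2 grids of annual-mean NO2 concentrations, ug/m3
base = np.array([[30.0, 25.0], [40.0, 35.0]])
nbs  = np.array([[27.0, 24.0], [36.0, 33.0]])

delta = relative_change(base, nbs)
print(round(delta.mean(), 2))  # → -7.43 (domain-average reduction, %)
```

The same per-cell field can then be mapped to show where in the urban area each NBS type is most effective.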

Keywords: air quality, modelling approach, nature based solutions, urban area

Procedia PDF Downloads 229