Search results for: computer case simulation
3318 Design and Analysis of a Combined Cooling, Heating and Power Plant for Maximum Operational Flexibility
Authors: Salah Hosseini, Hadi Ramezani, Bagher Shahbazi, Hossein Rabiei, Jafar Hooshmand, Hiwa Khaldi
Abstract:
Diversity of the energy portfolio and fluctuation of urban energy demand establish the need for greater operational flexibility in combined cooling, heating and power (CCHP) plants. Currently, the most common ways to achieve this flexibility are heat storage devices or wet operation of gas turbines. The current work addresses the use of a variable-extraction steam turbine in conjunction with a gas turbine inlet air cooling (TIAC) system as an alternative way to enhance the operating range of a CCHP cycle. A thermodynamic model is developed, and a typical apartment building in PARDIS Technology Park (located in Tehran Province) is chosen as a case study. Owing to the variable heat demand and the use of excess chiller capacity for turbine inlet cooling, the variable-extraction steam turbine and TIAC system provide an opportunity for flexible operation of the cycle and increase the independence of power and heat generation in the CCHP plant. It was found that the power-to-heat ratio of the CCHP cycle varies from 12.6 to 2.4 depending on the city heating and cooling demands and ambient conditions, indicating good independence between power and heat generation. Furthermore, the TIAC design temperature was selected based on the ratio of power gain to TIAC coil surface area; for the current cycle arrangement, a TIAC design temperature of 15 °C was found to be the most economical. All analyses are based on real data gathered from the local weather station at the PARDIS site.
Keywords: CCHP plant, GTG, HRSG, STG, TIAC, operational flexibility, power to heat ratio
Procedia PDF Downloads 281
3317 I Post Therefore I Am! Construction of Gendered Identities in Facebook Communication of Pakistani Male and Female Users
Authors: Rauha Salam
Abstract:
In Pakistan, over the past decade, the notion of what counts as truly 'masculine' or 'feminine' behaviour has become more complicated with the advent of social media. Given the country's strong religious and socio-cultural norms, patriarchal values are entrenched in the local and cultural traditions of Pakistani society and regulate the social value of gender. However, the increasing use of the internet among Pakistani men and women, especially the use of social media by the youth, is increasingly disrupting and challenging the strict modes of behavioural monitoring and control at both the familial and state level. Facebook, the prime social media communication platform in Pakistan, provides its users a relatively 'safe' place to shape how they want to be perceived by their audience. Moreover, the availability of an array of semiotic resources (e.g., videos, audios, visuals and GIFs) on Facebook makes it possible for users to create a virtual identity that allows them to describe themselves in detail. Using Multimodal Discourse Analysis, I investigated how men and women in Pakistan construct their gendered identities multimodally (visually and linguistically) through their Facebook posts and how these semiotic modes are interconnected to communicate specific meanings. In the case of the female data, the analysis showed an ambivalence: females were found to be conforming to the existing socio-cultural norms of the society while simultaneously employing social media platforms to deviate from traditional gendered patterns and voice their opinions. Similarly, the male data highlighted the reproduction of prevalent cultural models of masculinity. However, there were instances in the data that showed a digression from the standard norms and a (re)negotiation of traditional patriarchal representations.
Keywords: Facebook, Gendered Identities, Multimodal Discourse Analysis, Pakistan
Procedia PDF Downloads 117
3316 Effects of Corruption and Logistics Performance Inefficiencies on Container Throughput: The Latin America Case
Authors: Fernando Seabra, Giulia P. Flores, Karolina C. Gomes
Abstract:
Trade liberalization measures, such as import tariff cuts, are not a sufficient trigger for trade growth. Given that price margins are narrow, traders and cargo operators tend to opt out of markets where the process of goods clearance is slow and costly. Excess paperwork and slow customs dispatch not only lead to institutional breakdowns and corruption but also to increasing transaction costs and trade constraints. The objective of this paper is therefore two-fold: first, to evaluate the relationship between institutional and infrastructural performance indexes and growth in container throughput; and, second, to investigate the causes of differences in container demurrage and detention fees across Latin American countries (using other emerging countries as a benchmark). The analysis focuses on manufactured goods, typically transported in containers. Institutional and infrastructure bottlenecks, and therefore a country's logistics efficiency as measured by the Logistics Performance Index (LPI, World Bank), are compared with other indexes, such as the Doing Business index (World Bank) and the Corruption Perception Index (Transparency International). The main results, based on the comparison between Latin American countries and other emerging countries, indicate that growth in container trade is directly related to LPI performance. It was also found that the main hypothesis is valid: aspects that more specifically identify trade facilitation and corruption are significant drivers of logistics performance. The examination of port efficiency (demurrage and detention fees) demonstrated that a higher level of efficiency is not necessarily related to lower charges; however, reductions in fees have been more significant among non-Latin American emerging countries.
Keywords: corruption, logistics performance index, container throughput, Latin America
Procedia PDF Downloads 250
3315 Microstructure Dependent Fatigue Crack Growth in Aluminum Alloy
Authors: M. S. Nandana, K. Udaya Bhat, C. M. Manjunatha
Abstract:
In this study, aluminum alloy 7010 was subjected to three different ageing treatments, i.e., peak ageing (T6), over-ageing (T7451), and retrogression and re-ageing (RRA), to study the influence of precipitate microstructure on fatigue crack growth rate behavior. The microstructural modification was studied using transmission electron microscopy (TEM) to examine the change in the size and morphology of precipitates in the matrix and on the grain boundaries. Standard compact tension (CT) specimens were fabricated and tested under constant amplitude fatigue crack growth tests to evaluate the influence of heat treatment on fatigue crack growth properties. The tests were performed in a computer-controlled servo-hydraulic test machine applying a load ratio R = 0.1 at a loading frequency of 10 Hz as per ASTM E647. The fatigue crack growth was measured by the compliance technique using a CMOD gauge attached to the CT specimen. The average size of the matrix precipitates was found to be 16-20 nm in T7451, 5-6 nm in RRA and 2-3 nm in T6 conditions, respectively. The grain boundary precipitate, which was continuous in T6, was disintegrated in the RRA and T7451 conditions. The PFZ width was lower in RRA than in the T7451 condition. The crack growth rate was highest in T7451 and lowest in the RRA-treated alloy. The RRA-treated alloy also exhibits an increase in the threshold stress intensity factor range (∆Kₜₕ). The measured ∆Kₜₕ was 11.1, 10.3 and 5.7 MPa·m¹/² in the RRA, T6 and T7451 alloys, respectively. The fatigue crack growth rate in the RRA-treated alloy was nearly 2-3 times lower than in T6 and one order of magnitude lower than in the T7451 condition. The surface roughness of the RRA-treated alloy was more pronounced than in the other conditions. The reduction in fatigue crack growth rate in the RRA alloy was mainly due to the increase in roughness and partially due to the increase in spacing between the matrix precipitates. The reduction in crack growth rate and increase in threshold stress intensity range are expected to benefit the damage tolerant capability of aircraft structural components under service loads.
Keywords: damage tolerance, fatigue, heat treatment, PFZ, RRA
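Threshold values like those above feed directly into crack growth predictions. As a minimal sketch (not the authors' data analysis), the Paris law da/dN = C(∆K)^m with a threshold below which growth is taken as negligible can be coded as follows; the constants C and m are assumed illustrative values, and only the T7451 threshold of 5.7 MPa·m¹/² is taken from the abstract.

```python
import numpy as np

def paris_growth_rate(delta_K, C=1e-11, m=3.0, delta_K_th=5.7):
    """Crack growth per cycle, da/dN = C * (dK)^m, in m/cycle, for a stress
    intensity range dK in MPa*sqrt(m); growth is treated as zero below the
    threshold dK_th (here the T7451 value from the study)."""
    delta_K = np.asarray(delta_K, dtype=float)
    return np.where(delta_K > delta_K_th, C * delta_K ** m, 0.0)

rates = paris_growth_rate([4.0, 10.0, 20.0])
print(rates)  # no growth below threshold; doubling dK raises da/dN ~8x for m = 3
```

A lower crack growth rate at a given ∆K, as reported for the RRA condition, would correspond to a smaller C or larger ∆Kₜₕ in such a fit.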
Procedia PDF Downloads 154
3314 Discrimination and Classification of Vestibular Neuritis Using Combined Fisher and Support Vector Machine Model
Authors: Amine Ben Slama, Aymen Mouelhi, Sondes Manoubi, Chiraz Mbarek, Hedi Trabelsi, Mounir Sayadi, Farhat Fnaiech
Abstract:
Vertigo is a sensation of feeling off balance; the cause of this symptom is very difficult to interpret and requires complementary examination. Generally, vertigo is caused by an ear problem; some of the most common causes include benign paroxysmal positional vertigo (BPPV), Meniere's disease and vestibular neuritis (VN). In clinical practice, different tests of the videonystagmographic (VNG) technique are used to detect the presence of VN. The topographical diagnosis of this disease presents a large diversity of characteristics, which poses a mixture of problems for the usual etiological analysis methods. In this study, a vestibular neuritis analysis method is proposed for VNG applications, using an estimation of pupil movements in the case of uncontrolled motion to obtain efficient and reliable diagnostic results. First, an estimation of the pupil displacement vectors using the Hough Transform (HT) is performed to approximate the location of the pupil region. Then, temporal and frequency features are computed from the rotation angle variation of the pupil motion. Finally, optimized features are selected using the Fisher criterion for discrimination and classification of the VN disease. Experimental results are analyzed using two categories: normal and pathologic. By classifying the reduced features using a Support Vector Machine (SVM), a classification accuracy of 94% is achieved. Compared to recent studies, the proposed expert system is extremely helpful and highly effective in resolving the problem of VNG analysis and provides an accurate diagnosis for medical devices.
Keywords: nystagmus, vestibular neuritis, videonystagmographic system, VNG, Fisher criterion, support vector machine, SVM
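The Fisher-criterion selection step can be illustrated with a small sketch (not the authors' implementation; the synthetic data and two-feature layout are assumptions). For a two-class problem, each feature is scored by the ratio of between-class separation to within-class scatter, and the highest-scoring features are kept for the classifier:

```python
import numpy as np

def fisher_score(X, y):
    """Fisher criterion per feature for two classes (labels 0/1):
    (mean0 - mean1)^2 / (var0 + var1). Higher = more discriminative."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12  # guard against zero variance
    return num / den

# synthetic stand-in for the temporal/frequency features: feature 0 separates
# "normal" from "pathologic" subjects, feature 1 is pure noise
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal([3.0, 0.0], 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

scores = fisher_score(X, y)
ranking = np.argsort(scores)[::-1]
print(ranking[0])  # the discriminative feature (index 0) is ranked first
```

The retained features would then be passed to an SVM, as in the paper.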
Procedia PDF Downloads 136
3313 Monitoring the Effect of Doxorubicin Liposomal in VX2 Tumor Using Magnetic Resonance Imaging
Authors: Ren-Jy Ben, Jo-Chi Jao, Chiu-Ya Liao, Ya-Ru Tsai, Lain-Chyr Hwang, Po-Chou Chen
Abstract:
Cancer is still one of the serious diseases threatening human lives. How to achieve early diagnosis and effective treatment of tumors is a very important issue. Animal carcinoma models provide a simulation tool for the study of pathogenesis, biological characteristics and therapeutic effects. Recently, drug delivery systems have been rapidly developed to effectively improve therapeutic effects. Liposomes play an increasingly important role in clinical diagnosis and therapy by delivering a pharmaceutical or contrast agent to targeted sites. Liposomes can be absorbed and excreted by the human body and are known to be harmless to it. This study aimed to compare the therapeutic effects of encapsulated (doxorubicin liposomal, LipoDox) and un-encapsulated (doxorubicin, Dox) anti-tumor drugs using magnetic resonance imaging (MRI). Twenty-four New Zealand rabbits implanted with VX2 carcinoma in the left thigh were classified into three groups of 8 rabbits each: a control group (untreated), a Dox-treated group and a LipoDox-treated group. MRI scans were performed three days after tumor implantation. A 1.5 T GE Signa HDxt whole-body MRI scanner with a high-resolution knee coil was used in this study. After a 3-plane localizer scan, three-dimensional (3D) fast spin echo (FSE) T2-weighted imaging (T2WI) was used for tumor volumetric quantification, and two-dimensional (2D) spoiled gradient recalled echo (SPGR) dynamic contrast-enhanced (DCE) MRI was used for tumor perfusion evaluation. DCE-MRI was designed to acquire four baseline images, followed by injection of the contrast agent Gd-DOTA through the ear vein of the rabbits. Afterwards, a series of 32 images was acquired to observe the signal change over time in the tumor and muscle. MRI scanning was scheduled on a weekly basis for a period of four weeks to observe tumor progression longitudinally. The Dox and LipoDox treatments were administered 3 times in the first week immediately after VX2 tumor implantation. ImageJ was used to quantify tumor volume and the time-course signal enhancement on the DCE images. The changes in tumor size showed that the growth of VX2 tumors was effectively inhibited in both the LipoDox-treated and Dox-treated groups. Furthermore, the tumor volume of the LipoDox-treated group was significantly lower than that of the Dox-treated group, which implies that LipoDox has a better therapeutic effect than Dox. The signal intensity of the LipoDox-treated group was significantly lower than that of the other two groups, which implies that the targeted therapeutic drug remained in the tumor tissue. This study provides a radiation-free and non-invasive MRI method for therapeutic monitoring of targeted liposomes in an animal tumor model.
Keywords: doxorubicin, dynamic contrast-enhanced MRI, lipodox, magnetic resonance imaging, VX2 tumor model
Procedia PDF Downloads 457
3312 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study draws on a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes were considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and Support Vector Machines (SVM). For each method, different parameters were analyzed so that the best configuration of each technique could be compared. Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The methods were then evaluated for each parameter setting, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method in terms of accuracy was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for temporarily defaulters). However, the best accuracy does not always indicate the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false positive classifications. All these details are discussed in light of the results found, and an overview of what was presented is given in the conclusion of this study.
Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines
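The thermometer coding mentioned above turns a numerical attribute into ordered binary inputs for the networks. A minimal sketch, with hypothetical cut points (the paper does not give its thresholds):

```python
import numpy as np

def thermometer_encode(values, thresholds):
    """Thermometer code: each value becomes a binary vector whose k-th bit
    is 1 iff the value exceeds the k-th threshold, so the 1s fill up like
    mercury in a thermometer."""
    values = np.asarray(values, dtype=float).reshape(-1, 1)
    return (values > np.asarray(thresholds, dtype=float)).astype(int)

# hypothetical "days overdue" attribute with cut points at 0/30/90/180 days
encoded = thermometer_encode([0, 45, 200], [0, 30, 90, 180])
print(encoded)
# [[0 0 0 0]    0 days  -> below every cut point
#  [1 1 0 0]   45 days  -> past 0 and 30, not 90 or 180
#  [1 1 1 1]]  200 days -> past all cut points
```

Unlike plain dummy coding, the ordering of the bits preserves the ordinal structure of the numerical attribute.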
Procedia PDF Downloads 103
3311 Writing a Parametric Design Algorithm Based on Recreation and Structural Analysis of Patkane Model: The Case Study of Oshtorjan Mosque
Authors: Behnoush Moghiminia, Jesus Anaya Diaz
Abstract:
The current study presents the relationship between structural development and Patkaneh, one of the Iranian geometric patterns, and parametric algorithms by introducing two practical methods. While having a structural function, Patkaneh is also used as an ornamental element, which makes a scientific and practical review of it worthwhile. The current study aims to use Patkaneh as a parametric form generator based on an algorithm. This paper shows how a more complete algorithm of this covering can be obtained from the parametric study and analysis of a sample Patkaneh, and investigates the relationship between the development of the geometrical pattern of Patkaneh as a structural-decorative element of Iranian architecture and digital design. To achieve the research purposes, the researchers investigated one of the oldest types of Patkaneh in the architectural history of Iran, the northern entrance Patkaneh of Oshtorjan Jame' Mosque. A thorough investigation of the historical background was conducted to answer the research questions. Then, by investigating the structural behavior of Patkaneh, its decorative or structural-decorative role was examined to eliminate ambiguity. The geometrical structure of Patkaneh was then analyzed by introducing two practical methods. The first method is based on the constituent units of Patkaneh (square and diamond) and investigates the interactive relationships between them in 2D and 3D; it is appropriate for cases where the geometrical relationships are rational and regular. The second method is based on the separation of the floors and the investigation of their interrelation; it is practical when the constituent units are not geometrically regular and are highly diverse. Finally, the parametric form algorithm of these methods was codified.
Keywords: geometric properties, parametric design, Patkaneh, structural analysis
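As a toy illustration of the first method's constituent-unit view (a sketch only; the actual Patkaneh geometry and the authors' algorithm are far richer), a regular field of square units with diamond units at their interior intersections can be generated parametrically:

```python
def square_diamond_units(n, size=1.0):
    """Centers of an n x n grid of square units plus the diamond units
    (45-degree rotated squares) sitting at the interior cell intersections.
    A hypothetical, minimal stand-in for the square/diamond module of a
    Patkaneh tier."""
    squares = [(i * size, j * size) for i in range(n) for j in range(n)]
    diamonds = [((i + 0.5) * size, (j + 0.5) * size)
                for i in range(n - 1) for j in range(n - 1)]
    return squares, diamonds

squares, diamonds = square_diamond_units(3)
print(len(squares), len(diamonds))  # 9 squares, 4 diamonds
```

In a fuller implementation each tier of such units would be offset in height, which is where the second, floor-by-floor method of the paper comes in.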
Procedia PDF Downloads 151
3310 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays
Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner
Abstract:
Early immobility affects motor, cognitive, and social development. Current pediatric rehabilitation lacks the technology to provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots to early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early intervention paradigm in which non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment was developed in which both the child and the robot interact and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate, and ability-matched for the child to perform, 2) automatic functions that perceive the child's actions through novel activity recognition algorithms and decide appropriate actions for the robot, and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real time. The environment was tested by enrolling a two-year-old boy with Down syndrome for eight sessions. The child presented delays throughout his motor development, currently at the stage of acquiring walking. During the sessions, the child performed physical activities that required complex motor actions (e.g., climbing an inclined platform and/or staircase). During these activities, a (wheeled or humanoid) robot was either performing the action or was at its end point 'signaling' for interaction. From these sessions, information was gathered to develop algorithms that automate the perception of the activities on which the robot bases its actions. A Markov Decision Process (MDP) is used to model the intentions of the child, and a 'smoothing' technique helps identify the model's parameters, a critical step when dealing with small data sets such as in this paradigm. The child engaged in all activities and socially interacted with the robot across sessions. Over time, the child's mobility increased, and the frequency and duration of complex and independent motor actions also increased (e.g., taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction: smoothing facilitates learning MDP parameters from small data sets. This paradigm is feasible and provides insight into how social interaction may elicit mobility actions, suggesting a new early intervention paradigm for very young children with motor disabilities. Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.
Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation
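The role of smoothing in estimating MDP parameters from few observations can be sketched as follows (an illustrative add-alpha scheme on invented counts; the paper does not specify which smoothing technique was used):

```python
import numpy as np

def smoothed_transitions(counts, alpha=1.0):
    """Add-alpha (Laplace) smoothing of MDP transition counts: each
    (state, action, next_state) count gets a pseudo-count alpha, so
    transitions never observed in a small data set still receive
    nonzero probability."""
    counts = np.asarray(counts, dtype=float) + alpha
    return counts / counts.sum(axis=-1, keepdims=True)

# tiny synthetic log: 2 states x 2 actions x 2 next states, few observations
counts = [[[3, 0], [0, 1]],
          [[1, 1], [0, 0]]]
P = smoothed_transitions(counts)
print(P[0, 0])  # raw frequency would be [1.0, 0.0]; smoothed: [0.8, 0.2]
```

Without the pseudo-counts, a transition unseen in eight short sessions would be assigned probability zero, which the smoothed estimate avoids.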
Procedia PDF Downloads 292
3309 Using Large Databases and Interviews to Explore the Temporal Phases of Technology-Based Entrepreneurial Ecosystems
Authors: Elsie L. Echeverri-Carroll
Abstract:
Entrepreneurial ecosystems have become an important concept for explaining the birth and sustainability of technology-based entrepreneurship within regions. However, as a theoretical concept, the temporal evolution of entrepreneurial ecosystems remains underdeveloped, making it difficult to understand their dynamic contributions to entrepreneurs. This paper argues that successful technology-based ecosystems pass through three cumulative spawning stages: corporate spawning, entrepreneurial spawning, and community spawning. The importance of corporate incubation in vibrant entrepreneurial ecosystems is well documented in the entrepreneurial literature. Similarly, entrepreneurial spawning processes for venture capital-backed startups are well documented in the financial literature. In contrast, there is little understanding of both the third stage of entrepreneurial spawning (when a community of entrepreneurs becomes a source of firm spawning) and the temporal sequence in which spawning effects occur in a region. We test this three-stage model of entrepreneurial spawning using data from two large databases on firm births, the Secretary of State (160,000 observations) and the National Establishment Time Series (NETS, 150,000 observations), together with information collected from 60 interviews of 1½ hours each with startup founders and representatives of key entrepreneurial organizations. This temporal model is illustrated with a case study of Austin, Texas, ranked by the Kauffman Foundation as the number one entrepreneurial city in the United States in 2015 and 2016. The 1½-year study, funded by the Kauffman Foundation, demonstrates the importance of taking into consideration the temporal contributions of both large and entrepreneurial firms in understanding the factors that contribute to the birth and growth of technology-based entrepreneurial regions. More importantly, these findings offer a road map for regions that seek to advance their entrepreneurial ecosystems.
Keywords: entrepreneurial ecosystems, entrepreneurial industrial clusters, high-technology, temporal changes
Procedia PDF Downloads 272
3308 Amblyopia and Eccentric Fixation
Authors: Kristine Kalnica-Dorosenko, Aiga Svede
Abstract:
Amblyopia, or 'lazy eye', is impaired or dim vision without an obvious defect or change in the eye. It is often associated with abnormal visual experience, most commonly strabismus, anisometropia or both, and form deprivation. The main task of amblyopia treatment is to ameliorate etiological factors so as to create a clear retinal image and to ensure the participation of the amblyopic eye in the visual process. The treatment of amblyopia with eccentric fixation is usually problematic. Eccentric fixation is present in around 44% of all patients with amblyopia and in 30% of patients with strabismic amblyopia. In Latvia, amblyopia is carefully treated in various clinics, but eccentricity is diagnosed relatively rarely. The conflict that has developed concerning the relationship between the visual disorder and the degree of eccentric fixation in amblyopia should be rethought, because it has an important bearing on the cause and treatment of amblyopia and on the role of eccentric fixation in this case. Visuoscopy is the most frequently used method for determining eccentric fixation. In traditional visuoscopy, a fixation target is projected onto the patient's retina, and the examiner asks the patient to look directly at the center of the target. An optometrist then observes the point on the macula used for fixation. This objective test provides clinicians with direct observation of the fixation point of the eye. It requires patients to voluntarily fixate the target and assumes that the foveal reflex accurately demarcates the center of the foveal pit. Since eccentric fixation is always associated with reduced visual acuity, this very simple method of evaluating fixation makes it possible to evaluate treatment improvement indirectly. One may therefore expect that if eccentric fixation is found in an amblyopic eye with visuoscopy, then visual acuity should be less than 1.0 (in decimal units). With occlusion or another amblyopia therapy, one would expect both visual acuity and fixation to improve simultaneously, that is, fixation would become more central. Consequently, improvement in the fixation pattern with treatment is an indirect measure of improvement in visual acuity. Evaluation of eccentric fixation may be helpful in identifying amblyopia in children prior to measurement of visual acuity. This is very important because the earlier amblyopia is diagnosed, the better the chance of improving visual acuity.
Keywords: amblyopia, eccentric fixation, visual acuity, visuoscopy
Procedia PDF Downloads 158
3307 Dispersion Rate of Spilled Oil in Water Column under Non-Breaking Water Waves
Authors: Hanifeh Imanian, Morteza Kolahdoozan
Abstract:
The purpose of this study is to present a mathematical expression for calculating the dispersion rate of spilled oil in the water column under non-breaking waves. To this end, a multiphase numerical model, in which the waves and the oil phase are computed concurrently and whose hydraulic calculations have been verified, is applied. More than 200 scenarios of oil spilling in wavy waters were simulated with the multiphase numerical model, and the outcomes were collected in a database. The recorded results were examined to identify the major parameters affecting vertical oil dispersion, and 6 parameters were identified as the main independent factors. Furthermore, statistical tests were conducted to identify any relationship between the dependent variable (dispersed oil mass in the water column) and the independent variables (water wave specifications, including height, length and period, and spilled oil characteristics, including density, viscosity and spilled oil mass). Finally, a mathematical-statistical relationship is proposed to predict dispersed oil in marine waters. To verify the proposed relationship, a laboratory case available in the literature was selected; the oil mass rate penetrating the water body computed by the suggested regression showed good agreement with the experimental data. The validated mathematical-statistical expression is a useful tool for predicting oil dispersion in oil spill events in marine areas.
Keywords: dispersion, marine environment, mathematical-statistical relationship, oil spill
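The final statistical step, regressing dispersed oil mass on the wave and oil parameters, can be sketched with least squares on synthetic data (the coefficients and value ranges below are invented for the demo and are not the relationship derived in the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# three of the independent variables named in the study (synthetic values)
wave_height = rng.uniform(0.5, 3.0, n)       # m
wave_period = rng.uniform(2.0, 8.0, n)       # s
oil_viscosity = rng.uniform(10.0, 100.0, n)  # assumed units

# assumed ground-truth linear relationship, used only to generate the demo data
dispersed_mass = 2.0 * wave_height - 0.5 * wave_period - 0.01 * oil_viscosity + 1.0

# least-squares fit of dispersed mass on the predictors (plus an intercept)
X = np.column_stack([wave_height, wave_period, oil_viscosity, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, dispersed_mass, rcond=None)
print(np.round(coef, 2))  # recovers the assumed coefficients [2.0, -0.5, -0.01, 1.0]
```

In the study itself, the fitted coefficients would come from the database of 200+ simulated spill scenarios rather than synthetic data.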
Procedia PDF Downloads 233
3306 Combustion Characteristics and Pollutant Emissions in Gasoline/Ethanol Mixed Fuels
Authors: Shin Woo Kim, Eui Ju Lee
Abstract:
The recent development of biofuel production technology facilitates the use of bioethanol and biodiesel in automobiles. Bioethanol, in particular, can be used as a fuel for gasoline vehicles because the addition of ethanol is known to increase the octane number and reduce soot emissions. However, the wide application of biofuel is still limited by a lack of detailed combustion properties, such as auto-ignition temperature, and of pollutant emission data, such as NOx and soot, which mainly concern vehicle fire safety and environmental safety. In this study, the combustion characteristics of gasoline/ethanol fuel were investigated both numerically and experimentally. For auto-ignition temperature and NOx emission, numerical simulations were performed on a well-stirred reactor (WSR) to represent a homogeneous gasoline engine and to clarify the effect of ethanol addition to the gasoline fuel. The response surface method (RSM) was also introduced as a design of experiments (DOE) approach, which enables the various combustion properties to be predicted and optimized systematically with respect to three independent variables, i.e., ethanol mole fraction, equivalence ratio and residence time. The results for a stoichiometric gasoline surrogate show that the auto-ignition temperature increases but NOx yields decrease with increasing ethanol mole fraction. This implies that bioethanol-added gasoline is an eco-friendly fuel under engine running conditions. However, unburned hydrocarbons increase dramatically with increasing ethanol content, which results from incomplete combustion and hence calls for adjusting the combustion itself rather than relying on an after-treatment system. The RSM analysis with the three independent variables predicts the auto-ignition temperature accurately. However, there was a large difference between the calculated NOx emissions and the values predicted by conventional RSM, because NOx emission varies very steeply and the fitted second-order polynomial cannot follow such rates. To relax the steep variation of the dependent variable, the common logarithm of the NOx emission was taken and the RSM fit repeated. The NOx emission predicted through this logarithm transformation is in fairly good agreement with the experimental results. For a more tangible understanding of the pollutant emissions of gasoline/ethanol fuel, experimental measurements of combustion products were performed in gasoline/ethanol pool fires, which are widely used as fire sources in laboratory-scale experiments. Three measurement methods were employed to characterize the pollutant emissions: measurement of various gas concentrations including NOx, gravimetric soot filter sampling for elemental analysis and pyrolysis, and thermophoretic soot sampling with transmission electron microscopy (TEM). The soot yield by gravimetric sampling decreased dramatically as ethanol was added, but the NOx emission was almost unchanged regardless of ethanol mole fraction. The morphology of the soot particles was investigated to assess the degree of soot maturation. Incipient soot, such as liquid-like PAHs, was clearly observed in the soot of gasoline with higher ethanol content, whereas the soot was more mature for the undiluted gasoline fuel.
Keywords: gasoline/ethanol fuel, NOx, pool fire, soot, well-stirred reactor (WSR)
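The logarithm transformation used for the NOx response can be illustrated with a one-variable sketch (synthetic data with an assumed exponential dependence on equivalence ratio; the actual RSM had three variables and real WSR results):

```python
import numpy as np

phi = np.linspace(0.6, 1.2, 13)   # equivalence ratio (assumed range)
nox = 10.0 ** (3.0 * phi - 1.0)   # steeply varying synthetic NOx response

# second-order polynomial fit directly on NOx vs. on log10(NOx)
direct = np.polyfit(phi, nox, 2)
logfit = np.polyfit(phi, np.log10(nox), 2)

pred_direct = np.polyval(direct, phi)
pred_log = 10.0 ** np.polyval(logfit, phi)

# maximum relative error of each fit over the sampled points
err_direct = np.max(np.abs(pred_direct - nox) / nox)
err_log = np.max(np.abs(pred_log - nox) / nox)
print(err_log < err_direct)  # the log-space polynomial tracks the steep response better
```

The same idea carries over to the full three-variable RSM: fitting the polynomial to log10(NOx) keeps the residuals comparable across the orders of magnitude the response spans.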
Procedia PDF Downloads 212
3305 Stronger Together – Micro-Entrepreneurs’ Resilience Development in a Communal Training Space
Authors: Halonen
Abstract:
Covid-19 pandemic and the succeeding crises have profoundly shaken the accustomed ways of interaction and thereby challenged the customary engagement patterns among entrepreneurs Consequently, this has led to the experience of lack of collegial interaction for some. Networks and relationships are a crucial factor to strengthening resilience, being especially significant in non-ordinary times. This study aims to shed light on entrepreneurs’ resilience development in and through entrepreneurs’ communal and training space. The context for research is a communal training space in a municipality in Finland of which goal is to help entrepreneurs to experience of peer support and community as part of the "tribe" is strengthened, the entrepreneurs' well-being at work, resilience, ability to change, innovativeness and general life management is strengthened. This communal space is regarded as an example of a physical community of practice (CoP) of entrepreneurs. The research aims to highlight the importance of rediscovering the “new normal” communality as itself but as a key building block of resilience. The initial research questions of the study are: RQ1: What is the role of entrepreneurs’ CoP and communal space in nurturing resilience development among them? RQ2: What positive entrepreneurial outcomes can be achieved through established CoP. The data will be gathered starting from the launch of the communality space in September 2023 onwards. It includes participatory observations of training gatherings, interviews with entrepreneurs and utilizes action research as the method. The author has an active role in participating and facilitating the development. The full paper will be finalized by the fall 2024. The idea of the new normal communality in a CoP among entrepreneurs is to be rediscovered due to its positive impact on entrepreneur’s resilience and business success. 
Other implications of the study can extend to the wider entrepreneurial ecosystem and other key stakeholders, especially by emphasizing the potential of communality in a CoP for fostering entrepreneurs’ resilience and well-being, with ensuing business growth, community-driven entrepreneurship development, and vitality of the case municipality.
Keywords: resilience, resilience development, communal space, community of practice (CoP)
Procedia PDF Downloads 74
3304 Autoimmune Diseases Associated with Primary Biliary Cirrhosis: A Retrospective Study of 51 Patients
Authors: Soumaya Mrabet, Imen Akkari, Amira Atig, Elhem Ben Jazia
Abstract:
Introduction: Primary biliary cirrhosis (PBC) is a cholestatic cholangitis of unknown etiology. It is frequently associated with autoimmune diseases, which justifies their systematic screening. The aim of our study was to determine the prevalence and type of autoimmune disorders associated with PBC and to assess their impact on the prognosis of the disease. Material and methods: This is a retrospective study over a period of 16 years (2000-2015) including all patients followed for PBC. In all these patients, we systematically screened for: dysthyroidism (thyroid panel, antithyroid autoantibodies), type 1 diabetes, dry syndrome (ophthalmologic examination, Schirmer test, and lip biopsy in case of presence of suggestive clinical signs), celiac disease (celiac disease serology and duodenal biopsies), and dermatological involvement (clinical examination). Results: Fifty-one patients (50 women and one man) followed for PBC were collected. The mean age was 54 years (37-77 years). Among these patients, 30 (58.8%) had at least one autoimmune disease associated with PBC. The discovery of these autoimmune diseases preceded the diagnosis of PBC in 8 cases (26.6%) and was concomitant, through systematic screening, in the remaining cases. Autoimmune hepatitis was found in 12 patients (40%), thus defining an overlap syndrome. Other diseases were Hashimoto's thyroiditis (n=10), dry syndrome (n=7), Gougerot-Sjogren syndrome (n=6), celiac disease (n=3), insulin-dependent diabetes (n=1), scleroderma (n=1), rheumatoid arthritis (n=1), Biermer anemia (n=1), and systemic lupus erythematosus (n=1). The two groups of patients with PBC, with or without associated autoimmune disorders, were comparable for bilirubin levels, Child-Pugh score, and response to treatment. Conclusion: In our series, the prevalence of autoimmune diseases in PBC was 58.8%. These diseases were dominated by autoimmune hepatitis and Hashimoto's thyroiditis.
Even if their association does not seem to alter the prognosis, screening should be systematic in order to institute early and adequate management.
Keywords: autoimmune diseases, autoimmune hepatitis, primary biliary cirrhosis, prognosis
Procedia PDF Downloads 276
3303 Urban Seismic Risk Reduction in Algeria: Adaptation and Application of the RADIUS Methodology
Authors: Mehdi Boukri, Mohammed Naboussi Farsi, Mounir Naili, Omar Amellal, Mohamed Belazougui, Ahmed Mebarki, Nabila Guessoum, Brahim Mezazigh, Mounir Ait-Belkacem, Nacim Yousfi, Mohamed Bouaoud, Ikram Boukal, Aboubakr Fettar, Asma Souki
Abstract:
The seismic risk to which urban centres are increasingly exposed has become a worldwide concern. International cooperation is necessary for the exchange of information and experience, for prevention, and for the establishment of action plans in countries prone to this phenomenon. To that end, the 1990s were designated the 'International Decade for Natural Disaster Reduction (IDNDR)' by the United Nations, with the aim of promoting the capacity to resist various natural, industrial, and environmental disasters. Within this framework, the RADIUS project (Risk Assessment Tools for Diagnosis of Urban Areas Against Seismic Disaster) was launched in 1996. Its main objective is to mitigate seismic risk in developing countries through the development of a simple and fast methodological and operational approach, allowing evaluation of the vulnerability as well as the socio-economic losses from probable earthquake scenarios in exposed urban areas. In this paper, we present the adaptation and application of this methodology to the Algerian context for seismic risk evaluation in urban areas potentially exposed to earthquakes. This application consists of performing an earthquake scenario in the urban centre of Constantine, a city located in the north-east of Algeria, which allows estimation of the seismic damage to its buildings. For that purpose, an inventory of 30,706 building units was carried out by the National Earthquake Engineering Research Centre (CGS). These buildings were digitized into a database comprising their technical information using a Geographical Information System (GIS), and then classified according to the RADIUS methodology. The study area was subdivided into 228 meshes of 500 m on a side and ten (10) sectors, each containing a group of meshes. The results of this earthquake scenario highlight that the ratio of likely damage is about 23%.
This severe damage results from the high concentration of old buildings and unfavourable soil conditions. This simulation of the probable seismic damage to buildings and the GIS damage maps generated provide a predictive evaluation of the damage which could occur in a potential earthquake near Constantine. These theoretical forecasts are important for decision makers in order to take adequate preventive measures and to develop suitable strategies, prevention plans, and emergency management plans to reduce these losses. They can also help in taking adequate emergency measures in the most affected areas in the early hours and days after an earthquake.
Keywords: seismic risk, mitigation, RADIUS, urban areas, Algeria, earthquake scenario, Constantine
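The mesh-level damage aggregation described above can be sketched in a few lines. This is a hedged illustration only: the building classes, counts, and per-class damage probabilities below are hypothetical placeholders, not CGS inventory data, and the actual RADIUS method uses calibrated vulnerability functions and scenario ground motions per building class.

```python
# Sketch of aggregating per-mesh building damage into a citywide ratio.
# All class names and damage probabilities are illustrative assumptions.

def mesh_damage_ratio(buildings):
    """Expected fraction of damaged buildings in one mesh.

    `buildings` is a list of (building_class, count) tuples; the damage
    probability per class is a placeholder, not a calibrated value.
    """
    damage_prob = {"adobe": 0.45, "masonry": 0.30, "rc_frame": 0.10}
    total = sum(count for _, count in buildings)
    damaged = sum(damage_prob[cls] * count for cls, count in buildings)
    return damaged / total if total else 0.0

def citywide_ratio(meshes):
    """Damage ratio over all meshes, weighted by building count."""
    total = sum(sum(c for _, c in m) for m in meshes)
    damaged = sum(mesh_damage_ratio(m) * sum(c for _, c in m) for m in meshes)
    return damaged / total

meshes = [
    [("adobe", 120), ("rc_frame", 300)],
    [("masonry", 200), ("rc_frame", 150)],
]
print(round(citywide_ratio(meshes), 3))  # 0.206
```

The GIS layer in the study plays the role of the `meshes` list here, carrying the inventoried counts per 500 m cell.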
Procedia PDF Downloads 262
3302 Association of Brain-Derived Neurotrophic Factor (BDNF) Gene with Obesity and Metabolic Traits in Malaysian Adults
Authors: Yamunah Devi Apalasamy, Sanjay Rampal, Tin Tin Su, Foong Ming Moy, Hazreen Abdul Majid, Awang Bulgiba, Zahurin Mohamed
Abstract:
Obesity is a growing global health issue that results from a combination of environmental and genetic factors. The brain-derived neurotrophic factor (BDNF) gene encodes the BDNF protein and has been linked to the regulation of body weight and appetite. Genome-wide association studies have found BDNF variants related to obesity among Caucasians, East Asians, and Filipinos. However, the role of BDNF in other ethnic groups remains inconclusive. This case-control study aims to investigate the associations of BDNF gene polymorphisms with obesity and metabolic parameters in Malaysian Malays. BDNF rs4074134, BDNF rs10501087, and BDNF rs6265 were genotyped using Sequenom MassARRAY. Anthropometric measures, body fat, fasting lipids, and glucose levels were measured. A total of 663 subjects (194 obese and 469 non-obese) were included in this study. There was no significant association between the BDNF SNPs and obesity. The allelic and genotype frequencies of the BDNF SNPs were similar in the obese and non-obese groups. After adjustment for age and sex, the BDNF variants were not associated with obesity, body fat, fasting lipids, or glucose levels. Haplotypes at the BDNF gene region were not significantly associated with obesity. BDNF rs4074134 was in strong LD with BDNF rs10501087 (D'=0.98) and BDNF rs6265 (D'=0.87). BDNF rs10501087 was also in strong LD with BDNF rs6265 (D'=0.91). Our findings suggest that the BDNF variants and haplotypes of the BDNF gene were not associated with obesity or metabolic traits in this study population. Further research with a larger sample size is needed to explore other BDNF variants and gene-environment interactions in the multi-ethnic Malaysian population.
Keywords: genomics of obesity, SNP, BMI, haplotypes
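The D' values quoted above are Lewontin's normalized measure of linkage disequilibrium. As a hedged sketch of how such a value is computed from haplotype and allele frequencies (the frequencies below are illustrative, not taken from the study data):

```python
def d_prime(p_ab, p_a, p_b):
    """Lewontin's normalized linkage disequilibrium D'.

    p_ab: frequency of the A-B haplotype; p_a, p_b: allele frequencies
    of A and B at the two loci.
    """
    d = p_ab - p_a * p_b  # raw disequilibrium coefficient D
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max else 0.0

# Illustrative frequencies only:
print(round(d_prime(0.25, 0.3, 0.4), 3))  # 0.722
```

A D' near 1, as reported for the three BDNF SNPs, means the observed haplotype frequency is close to the maximum possible given the allele frequencies.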
Procedia PDF Downloads 430
3301 Space Telemetry Anomaly Detection Based On Statistical PCA Algorithm
Authors: Bassem Nassar, Wessam Hussein, Medhat Mokhtar
Abstract:
The crucial concern of satellite operations is to ensure the health and safety of satellites. The worst case from this perspective is probably the loss of a mission, but the more common interruption of satellite functionality can also result in compromised mission objectives. All data acquired from the spacecraft are known as telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e. a status or a measurement) to be checked. As a consequence, TM monitoring systems are continuously being improved in order to reduce the time required to respond to changes in a satellite's state of health. A fast grasp of the current state of the satellite is thus very important in order to respond to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle the aforementioned problem coherently. Information extraction from such rich data sources using advanced statistical methodologies is a challenging task due to the massive volume of data. To solve this problem, in this paper we present a proposed unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to an actual remote sensing spacecraft. Data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions: normal and faulty states. The models were built and tested under these conditions, and the results show that the algorithm could successfully differentiate between them. Furthermore, the algorithm provides competent information for prediction as well as adding insight and physical interpretation to ADCS operation.
Keywords: space telemetry monitoring, multivariate analysis, PCA algorithm, space operations
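A minimal sketch of this kind of PCA-based monitoring, assuming synthetic correlated telemetry in place of real ADCS data: a principal subspace is fitted to normal-operation samples, and the squared prediction error (the Q statistic) of new samples against that subspace flags departures from normal behaviour. The model structure and fault injection below are illustrative, not the paper's actual implementation.

```python
import numpy as np

def fit_pca(X_normal, n_components):
    """Fit a PCA subspace to normal-operation telemetry (rows = samples)."""
    mu = X_normal.mean(axis=0)
    Xc = X_normal - mu
    # SVD gives the principal directions; keep the leading components.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T
    return mu, P

def spe(X, mu, P):
    """Squared prediction error (Q statistic): residual energy after
    projecting onto the retained subspace. Large values flag anomalies."""
    Xc = X - mu
    resid = Xc - Xc @ P @ P.T
    return np.sum(resid ** 2, axis=1)

# Synthetic "normal" telemetry: 5 sensors driven by 2 latent factors.
rng = np.random.default_rng(0)
T = rng.normal(size=(200, 2))
W = np.array([[1.0, 1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0, 1.0]])
normal = T @ W + 0.01 * rng.normal(size=(200, 5))

mu, P = fit_pca(normal, n_components=2)
fault = normal[:5].copy()
fault[:, 2] += 5.0  # biased sensor breaks the correlation structure
print(spe(fault, mu, P).mean() > spe(normal, mu, P).mean())  # True
```

The key property exploited here is the one the abstract relies on: faults violate the correlation structure learned from normal data, so they leave a large residual outside the principal subspace.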
Procedia PDF Downloads 415
3300 Optimization of Assembly and Welding of Complex 3D Structures on the Base of Modeling with Use of Finite Elements Method
Authors: M. N. Zelenin, V. S. Mikhailov, R. P. Zhivotovsky
Abstract:
It is known that residual welding deformations negatively affect the processability and operational quality of welded structures, complicating their assembly and reducing strength. Therefore, selection of an optimal technology ensuring minimum welding deformations is one of the main goals in developing a technology for manufacturing welded structures. Over the years, JSC SSTC has been developing a theory for the estimation of welding deformations and practical measures for reducing and compensating such deformations during the welding process. For a long time, a methodology based on analytic dependences was used. This methodology allowed defining the volumetric changes of metal due to welding heating and subsequent cooling. However, the dependences for determining structural deformations arising as a result of volumetric changes of metal in the weld area allowed calculations only for simple structures, such as units, flat sections, and sections with small curvature. In the case of complex 3D structures, estimations based on analytic dependences gave significant errors. To eliminate this shortcoming, it was suggested to use the finite elements method for solving the deformation problem. Here, one first calculates the longitudinal and transverse shortenings of welding joints using the method of analytic dependences and then, from the obtained shortenings, calculates forces whose action is equivalent to the action of active welding stresses. Next, a finite-elements model of the structure is developed and the equivalent forces are added to this model. Based on the results of the calculations, an optimal sequence of assembly and welding is selected, and special measures to reduce and compensate welding deformations are developed and taken.
Keywords: residual welding deformations, longitudinal and transverse shortenings of welding joints, method of analytic dependences, finite elements method
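The conversion step described above, from a computed joint shortening to a statically equivalent force applied to the FE model, can be illustrated for a single prismatic member. This is a one-dimensional sketch under hedged assumptions (linear elasticity, uniform cross-section, illustrative material values); the actual procedure applies such forces across a full 3D finite-elements model.

```python
def equivalent_shrinkage_force(E, area, delta_l, length):
    """Axial force statically equivalent to a longitudinal weld
    shortening delta_l of a member of given length: F = E * A * (dL / L)."""
    strain = delta_l / length
    return E * area * strain

# Illustrative steel member: E = 210 GPa, A = 1 cm^2,
# 0.5 mm shortening over a 1 m weld length.
f = equivalent_shrinkage_force(E=210e9, area=1e-4, delta_l=0.5e-3, length=1.0)
print(round(f / 1e3, 1), "kN")  # 10.5 kN
```

In the workflow of the abstract, forces of this kind (derived from the analytically computed shortenings) are applied as nodal loads on the FE mesh, and the resulting displacement field predicts the assembly distortion.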
Procedia PDF Downloads 409
3299 Dogmatic Analysis of Legal Risks of Using Artificial Intelligence: The European Union and Polish Perspective
Authors: Marianna Iaroslavska
Abstract:
ChatGPT is becoming commonplace. However, few people think about the legal risks of using large language models in their daily work. The main dilemmas concern the following areas: who owns the copyright to what somebody creates through ChatGPT; what OpenAI can do with the prompt you enter; whether you can accidentally infringe on another creator's rights through ChatGPT; and what happens to the data somebody enters into the chat. This paper will present these and other legal risks of using large language models at work, using dogmatic methods and case studies. The paper will present a legal analysis of AI risks against the background of European Union law and Polish law. This analysis will answer questions about how to protect data, how to make sure you do not violate copyright, and what is at stake with the AI Act, which recently came into force in the EU. If your work is related to the EU area and you use AI in your work, this paper will be a real goldmine for you. The copyright law in force in Poland does not protect your rights to a work created with the help of AI. So if you start selling such a work, you may face two main problems. First, someone may steal your work, and you will not be entitled to any protection, because a work created with AI has no legal protection. Second, the AI may have created the work by infringing on another person's copyright, in which case that person will be able to claim damages from you. In addition, the EU's AI Act imposes a number of additional obligations related to the use of large language models. The AI Act divides artificial intelligence into four risk levels and imposes different requirements depending on the level of risk. The EU regulation is aimed primarily at those developing and marketing artificial intelligence systems in the EU market. In addition to the above obstacles, personal data protection comes into play, which is very strictly regulated in the EU.
If you violate personal data protection rules by entering information into ChatGPT, you will be liable for the violations. When using AI within the EU or in cooperation with entities located in the EU, you have to take many risks into account. This paper will highlight such risks and explain how they can be avoided.
Keywords: EU, AI Act, copyright, Polish law, LLM
Procedia PDF Downloads 21
3298 Decision Support Tool for Selecting Appropriate Sustainable Rainwater Harvesting Based System in Ibadan, Nigeria
Authors: Omolara Lade, David Oloke
Abstract:
The approach to water management worldwide is currently in transition, with a shift from centralised infrastructure to greater consideration of decentralised technologies, such as rainwater harvesting (RWH). In Nigeria, however, implementation of sustainable water management such as RWH systems is inefficient, and social, environmental, and technical barriers, concerns, and knowledge gaps exist that currently restrict its widespread utilisation. This inefficiency contributes to water scarcity, water-borne diseases, and loss of lives and property due to flooding. Meanwhile, several RWH technologies have been developed to improve sustainable water management through both demand and storm-water management. Such technologies involve the use of reinforced cement concrete (RCC) storage tanks, surface water reservoirs, and ground-water recharge pits as storage systems. A framework was developed to assess the significance and extent of water management problems, match the problems with existing RWH-based solutions, and develop a robust ready-to-use decision support tool that can quantify the costs and benefits of implementing several RWH-based storage systems. The methodology adopted was a mixed-method approach involving a detailed literature review, followed by a questionnaire survey of household respondents and Nigerian architects and civil engineers, and focus group discussions with stakeholders. Eighteen selection attributes were defined and three alternatives identified in this research. The questionnaires were analysed using SPSS, Excel, and selected statistical methods to derive weightings of the attributes for the tool. Following this, three case studies were modelled using RainCycle software. From the results, the MDA model chose the RCC tank as the most appropriate storage system for RWH.
Keywords: rainwater harvesting, modelling, hydraulic assessment, whole life cost, decision support system
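The core of such a decision support tool, scoring each storage alternative against weighted attributes, can be sketched as a simple weighted sum. The three attributes, their weights, and the 0-10 scores below are hypothetical placeholders; the actual tool derives weightings for 18 attributes from the survey data.

```python
def rank_alternatives(weights, scores):
    """Weighted-sum multi-attribute score for each storage alternative.

    weights: {attribute: weight}; scores: {alternative: {attribute: 0-10}}.
    Returns alternatives sorted best-first. Names and numbers here are
    illustrative assumptions, not values from the actual tool.
    """
    total_w = sum(weights.values())
    ranked = {
        alt: sum(weights[a] * s[a] for a in weights) / total_w
        for alt, s in scores.items()
    }
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

weights = {"whole_life_cost": 0.5, "water_saving": 0.3, "ease_of_install": 0.2}
scores = {
    "RCC tank": {"whole_life_cost": 8, "water_saving": 9, "ease_of_install": 6},
    "surface reservoir": {"whole_life_cost": 6, "water_saving": 7, "ease_of_install": 7},
    "recharge pit": {"whole_life_cost": 7, "water_saving": 5, "ease_of_install": 8},
}
best, _ = rank_alternatives(weights, scores)[0]
print(best)  # RCC tank
```

With survey-derived weights in place of these placeholders, the same computation yields the ranking the abstract reports.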
Procedia PDF Downloads 371
3297 Proportional and Integral Controller-Based Direct Current Servo Motor Speed Characterization
Authors: Adel Salem Bahakeem, Ahmad Jamal, Mir Md. Maruf Morshed, Elwaleed Awad Khidir
Abstract:
Direct current (DC) servo motors, or simply DC motors, play an important role in many industrial applications such as the manufacturing of plastics, precise positioning of equipment, and operation of computer-controlled systems where the speed of feed control, maintaining the position, and ensuring a constantly desired output are critical. These parameters can be controlled with the help of control systems such as the proportional-integral-derivative (PID) controller. The aim of the current work is to investigate the effects of proportional (P) and integral (I) controllers on the steady-state and transient response of the DC motor. The controller gains are varied to observe their effects on the error, damping, and stability of the steady-state and transient motor response. The investigation is conducted experimentally on a servo trainer CE 110 using an analog PI controller CE 120, and theoretically using Simulink in MATLAB. Both experimental and theoretical work involve varying the integral controller gain to obtain the response to a steady-state input; varying, individually, the proportional and integral controller gains to obtain the response to a step input at a certain frequency; and theoretically obtaining the proportional and integral controller gains for desired values of damping ratio and response frequency. Results reveal that a proportional controller helps reduce the steady-state and transient error between the input signal and output response and makes the system more stable. In addition, it also speeds up the response of the system. On the other hand, the integral controller eliminates the steady-state error but tends to make the system unstable, with induced oscillations and a slow approach to eliminating the error.
From the current work, it is desired to achieve a stable response of the servo motor in terms of its angular velocity, subjected to steady-state and transient input signals, by utilizing the strengths of both P and I controllers.
Keywords: DC servo motor, proportional controller, integral controller, controller gain optimization, Simulink
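The qualitative findings above, P reducing but not eliminating the error and I eliminating it, can be reproduced with a small time-stepped simulation. This is a hedged sketch: the motor is reduced to a first-order model with an illustrative gain and time constant, not the CE 110 trainer's identified parameters.

```python
def simulate_pi(kp, ki, setpoint=1.0, tau=0.5, k_motor=1.0, dt=0.001, t_end=5.0):
    """Discrete-time PI speed control of a first-order DC motor model
    (gain k_motor, time constant tau). Returns the final speed."""
    omega, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        error = setpoint - omega
        integral += error * dt
        u = kp * error + ki * integral               # PI control law
        omega += dt * (k_motor * u - omega) / tau    # first-order plant
    return omega

# P-only control leaves a steady-state error; adding I drives it to zero.
print(round(simulate_pi(kp=2.0, ki=0.0), 3))  # 0.667
print(round(simulate_pi(kp=2.0, ki=5.0), 3))  # 1.0
```

With kp alone, the speed settles at k_motor*kp / (1 + k_motor*kp) of the setpoint (here 2/3), matching the residual steady-state error the abstract describes; the integral term removes it at the cost of extra oscillatory dynamics.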
Procedia PDF Downloads 110
3296 Multi-Sensory Coding as Intervention Therapy for ESL Spellers with Auditory Processing Delays: A South African Case-Study
Authors: A. Van Staden, N. Purcell
Abstract:
Spelling development is complex and multifaceted and relies on several cognitive-linguistic processes. This paper explored the spelling difficulties of English second language learners with auditory processing delays. This empirical study aims to address these issues by means of an intervention design. Specifically, the objectives are: (a) to develop and implement a multi-sensory spelling program for second language learners with auditory processing difficulties (APD) over a period of 6 months; (b) to assess the efficacy of the multi-sensory spelling program and whether this intervention could significantly improve experimental learners' spelling, phonological awareness and processing (PA), rapid automatized naming (RAN), working memory (WM), word reading, and reading comprehension; and (c) to determine the relationship (or interplay) between these cognitive and linguistic skills and how they influence spelling development. Forty-four English second language learners with APD were sampled from one primary school in the Free State province. The learners were randomly assigned to either an experimental (n=22) or a control group (n=22). During the implementation of the spelling program, several visual, tactile, and kinesthetic exercises, including the utilization of fingerspelling, were introduced to support the experimental learners' spelling development. Post-test results showed the efficacy of the multi-sensory spelling program, with the experimental group, who were trained in utilising multi-sensory coding and fingerspelling, outperforming learners from the control group on the cognitive-linguistic, spelling, and reading measures.
The results and efficacy of this multi-sensory spelling program and the utilisation of fingerspelling for hearing second language learners with APD open up innovative perspectives for the prevention and targeted remediation of spelling difficulties.
Keywords: English second language spellers, auditory processing delays, spelling difficulties, multi-sensory intervention program
Procedia PDF Downloads 136
3295 Clinical Evaluation of Neutrophil to Lymphocytes Ratio and Platelets to Lymphocytes Ratio in Immune Thrombocytopenic Purpura
Authors: Aisha Arshad, Samina Naz Mukry, Tahir Shamsi
Abstract:
Background: Immune thrombocytopenia (ITP) is an autoimmune disorder. Besides platelet counts, the immature platelet fraction (IPF) can be used as a tool to predict megakaryocytic activity in ITP patients. Clinical biomarkers like the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) predict inflammation and can be used as prognostic markers. The present study was planned to assess these ratios in ITP and their utility in predicting prognosis after treatment. Methods: A total of 111 patients with ITP and the same number of healthy individuals were included in this case-control study during the period of January 2015 to December 2017. All the ITP patients were grouped according to the guidelines of the International Working Group on ITP. A 3 cc blood sample was collected in an EDTA tube and blood parameters were evaluated using a Sysmex 1000 analyzer. The ratios were calculated using the absolute counts of neutrophils, lymphocytes, and platelets. The significance (p<0.05) of differences between ITP patients and the healthy control group was determined by the Kruskal-Wallis test and Dunn's test, and Spearman's correlation test was done, using SPSS version 23. Results: Significantly raised total leucocyte counts (TLC) and IPF along with low platelet counts were observed in ITP patients as compared to healthy controls. Among the ITP groups, a very low platelet count, with a median (IQR) of 2(3.8) x 10^9/L, together with the highest mean (IQR) IPF of 25.4(19.8)%, was observed in the newly diagnosed ITP group. The NLR increased with progression of the disease, as higher levels were observed in P-ITP. The PLR was significantly low in ND-ITP, P-ITP, C-ITP, and R-ITP compared to controls (p<0.001), as platelets were fewer in number in all ITP patients. Conclusion: The IPF can be used in the evaluation of bone marrow response in ITP.
The simple, reliably calculated NLR and PLR ratios can be used in predicting prognosis and response to treatment in ITP and, to some extent, the severity of disease.
Keywords: neutrophils, platelets, lymphocytes, infection
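Both ratios are simple functions of the absolute counts reported by the analyzer. A hedged sketch with illustrative counts (not patient data from this study):

```python
def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio from absolute counts (x10^9/L)."""
    return neutrophils / lymphocytes

def plr(platelets, lymphocytes):
    """Platelet-to-lymphocyte ratio from absolute counts (x10^9/L)."""
    return platelets / lymphocytes

# Illustrative absolute counts in x10^9/L:
print(round(nlr(4.2, 2.1), 2))   # 2.0
print(round(plr(150.0, 2.1), 1)) # 71.4
```

In ITP the denominator is unaffected but the platelet count collapses, which is why the PLR falls across all patient groups in the study.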
Procedia PDF Downloads 95
3294 Integration of Agroforestry Shrub for Diversification and Improved Smallholder Production: A Case of Cajanus cajan-Zea Mays (Pigeonpea-Maize) Production in Ghana
Authors: F. O. Danquah, F. Frimpong, E. Owusu Danquah, T. Frimpong, J. Adu, S. K. Amposah, P. Amankwaa-Yeboah, N. E. Amengor
Abstract:
In the face of global concerns such as population increase, climate change, and limited natural resources, sustainable agriculture practices are critical for ensuring food security and environmental stewardship. The study was conducted in the forest zones of Ghana during the major and minor seasons of the 2023 cropping year to evaluate maize yield improvement and the profitability of integrating Cajanus cajan (pigeonpea) into a maize production system, described as a pigeonpea-maize cropping system. This works towards integrated soil fertility management (ISFM) with the legume shrub pigeonpea for sustainable maize production while improving smallholder farmers' resilience to climate change. A split-plot design was used, with the cropping system (pigeonpea-maize intercrop, MPP, and no pigeonpea/sole maize, NPP) as the main plot treatment and the inorganic fertilizer rate (250 kg/ha of 15-15-15 N-P2O5-K2O + 250 kg/ha sulphate of ammonia (SoA), full rate (FR); 125 kg/ha of 15-15-15 N-P2O5-K2O + 125 kg/ha SoA, half rate (HR); and no inorganic fertilizer (NF) as control) as the subplot treatment. The results indicated a significant interaction of the cropping system and inorganic fertilizer rate on the growth and yield of maize, with better and similar maize productivity when HR and FR were used with pigeonpea biomass. Thus, the integration of pigeonpea and its biomass would allow the recommended fertiliser rate to be halved. This would improve farmers' income and profitability for sustainable maize production in the face of climate change.
Keywords: agroforestry tree, climate change, integrated soil fertility management, resource use efficiency
Procedia PDF Downloads 58
3293 Using Time Series NDVI to Model Land Cover Change: A Case Study in the Berg River Catchment Area, Western Cape, South Africa
Authors: Adesuyi Ayodeji Steve, Zahn Munch
Abstract:
This study investigates the use of MODIS NDVI to identify agricultural land cover change areas on an annual time step (2007-2012) and to characterize the trend in the study area. An ISODATA classification was performed on the MODIS imagery to select only the agricultural class, producing three class groups, namely: agriculture, agriculture/semi-natural, and semi-natural. NDVI signatures were created for the time series to identify areas dominated by cereals and vineyards, with the aid of ancillary, pictometry, and field sample data. The NDVI signature curve and training samples aided in creating a decision tree model in WEKA 3.6.9. From the training samples, two classification models were built in WEKA using the decision tree classifier (J48) algorithm; Model 1 included the ISODATA classification and Model 2 did not, with accuracies of 90.7% and 88.3%, respectively. The two models were used to classify the whole study area, producing two land cover maps with classification accuracies of 77% and 80% for Models 1 and 2, respectively. Model 2 was used to create change detection maps for all the other years. Subtle changes and areas of consistency (unchanged areas) were observed in the agricultural classes and crop practices over the years, as predicted by the land cover classification. 41% of the catchment comprises cereals, with 35% possibly following a crop rotation system. Vineyard largely remained constant over the years, with some conversion to vineyard (1%) from other land cover classes. Some of the changes might be a result of misclassification and the crop rotation system.
Keywords: change detection, land cover, MODIS, NDVI
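The J48 workflow above builds a tree of threshold tests on the monthly NDVI values. As a hedged, dependency-free sketch of the idea, here is a single-split "stump" trained on synthetic NDVI signatures, standing in for WEKA's full C4.5-style tree; the signature shapes below are illustrative, not the study's actual signatures.

```python
import random

def best_stump(X, y):
    """One-node decision tree: the (feature, threshold) split that best
    separates two classes (a minimal stand-in for WEKA's J48). The split
    direction is not recorded; accuracy is symmetric under label flip."""
    best = (0, 0.0, 0.0)  # (feature index, threshold, accuracy)
    n = len(y)
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            pred = [1 if row[j] > t else 0 for row in X]
            acc = sum(p == yi for p, yi in zip(pred, y)) / n
            acc = max(acc, 1 - acc)  # allow either side to be class 1
            if acc > best[2]:
                best = (j, t, acc)
    return best

random.seed(1)
months = list(range(12))

def cereal():  # NDVI peaks early in the season (illustrative shape)
    return [0.3 + (0.4 if m in (2, 3, 4) else 0.0)
            + random.uniform(-0.03, 0.03) for m in months]

def vine():    # NDVI peaks late in the season (illustrative shape)
    return [0.3 + (0.3 if m in (7, 8, 9) else 0.0)
            + random.uniform(-0.03, 0.03) for m in months]

X = [cereal() for _ in range(30)] + [vine() for _ in range(30)]
y = [0] * 30 + [1] * 30
j, t, acc = best_stump(X, y)
print(acc)  # 1.0: one month's NDVI separates the two signatures
```

The real classifier recurses on such splits; here one threshold suffices because the two synthetic signatures diverge in the peak months.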
Procedia PDF Downloads 402
3292 Multi-Stakeholder Involvement in Construction and Challenges of Building Information Modeling Implementation
Authors: Zeynep Yazicioglu
Abstract:
Project development is a complex process in which many stakeholders work together. Employers and main contractors are the base stakeholders, whereas designers, engineers, sub-contractors, suppliers, supervisors, and consultants are other stakeholders. The combination of a complex building process with a large number of stakeholders often leads to time and cost overruns and irregular resource utilization. Failure to comply with the work schedule and inefficient use of resources in construction processes indicate that it is necessary to accelerate production and increase productivity. The development of computer software called Building Information Modeling, abbreviated as BIM, is a major technological breakthrough in this area. The use of BIM enables architectural, structural, mechanical, and electrical projects to be drawn in coordination. BIM is a tool that should be considered by every stakeholder for the opportunities it offers, such as minimizing construction errors, reducing construction time, forecasting, and determination of the final construction cost. It is a process spreading over the years, enabling all stakeholders associated with the project and construction to use it. The main goal of this paper is to explore the problems associated with the adoption of BIM in multi-stakeholder projects. The paper is a conceptual study summarizing the author's practical experience with design offices and construction firms working with BIM. Three challenges of the transition period to BIM are examined in this paper: 1. the compatibility of supplier companies with BIM; 2. the continuing need for two-dimensional drawings; 3. contractual issues related to BIM. The paper reviews the literature on BIM usage and the challenges of the transition stage. Even on an international scale, suppliers that can work in harmony with BIM are not very common, which shows that the transition to BIM is still ongoing.
In parallel, employers, local approval authorities, and material suppliers still need 2D drawings. In the BIM environment, different stakeholders can work on the same project simultaneously, giving rise to design ownership issues. Practical applications and problems encountered are also discussed, providing a number of suggestions for the future.
Keywords: BIM opportunities, collaboration, contract issues about BIM, stakeholders of project
Procedia PDF Downloads 102
3291 Energy Consumption Estimation for Hybrid Marine Power Systems: Comparing Modeling Methodologies
Authors: Kamyar Maleki Bagherabadi, Torstein Aarseth Bø, Truls Flatberg, Olve Mo
Abstract:
Hydrogen fuel cells and batteries are among the promising solutions aligned with carbon emission reduction goals for the marine sector. However, the higher installation and operation costs of hydrogen-based systems compared to conventional diesel gensets raise questions about the appropriate hydrogen tank size and about energy and fuel consumption estimates. Ship designers need methodologies and tools to calculate energy and fuel consumption for different component sizes, to facilitate decision-making regarding feasibility and performance for retrofit and design cases. The aim of this work is to compare three alternative modeling approaches for the estimation of energy and fuel consumption with various hydrogen tank sizes, battery capacities, and load-sharing strategies. A fishery vessel is selected as an example, using load demand data logged over a year of operations. The modeled power system consists of a PEM fuel cell, a diesel genset, and a battery. The methodologies used are: first, an energy-based model; second, a model considering load variations in the time domain with a rule-based power management system (PMS); and third, a load variation model with a dynamic PMS strategy based on optimization with perfect foresight. The errors and potentials of the methods are discussed, and design sensitivity studies for this case are conducted. The results show that the energy-based method can estimate fuel and energy consumption with acceptable accuracy. However, models that consider time variation of the load provide more realistic estimates of energy and fuel consumption with respect to hydrogen tank and battery size, while still keeping computational time low.
Keywords: fuel cell, battery, hydrogen, hybrid power system, power management system
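The gap between the energy-based and rule-based estimates can be illustrated with a toy load profile. This sketch assumes hypothetical numbers (hourly loads, a 250 kW fuel cell rating) and omits the battery, efficiency curves, and optimization layer that the full models include.

```python
def rule_based_split(load_profile, fc_max_kw, dt_h=1.0):
    """Rule-based PMS sketch: the fuel cell covers the load up to its
    rating at each time step; the diesel genset covers the remainder.
    Returns (fuel cell energy, genset energy) in kWh."""
    fc = sum(min(p, fc_max_kw) for p in load_profile) * dt_h
    total = sum(load_profile) * dt_h
    return fc, total - fc

def energy_based_split(load_profile, fc_max_kw, dt_h=1.0):
    """Energy-based sketch: the same rule applied to the average load,
    ignoring time variation entirely."""
    avg = sum(load_profile) / len(load_profile)
    hours = len(load_profile) * dt_h
    fc = min(avg, fc_max_kw) * hours
    return fc, avg * hours - fc

load = [120, 300, 450, 200, 500, 80]  # kW, hypothetical hourly samples
print(rule_based_split(load, 250))    # (1150.0, 500.0)
print(energy_based_split(load, 250))  # (1500.0, 150.0)
```

Because the energy-based view never sees the peaks, it overstates the fuel-cell share here, and with it the hydrogen tank size, which is exactly the kind of error the time-domain models correct.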
Procedia PDF Downloads 38
3290 Comprehensive Risk Analysis of Decommissioning Activities with Multifaceted Hazard Factors
Authors: Hyeon-Kyo Lim, Hyunjung Kim, Kune-Woo Lee
Abstract:
The decommissioning of nuclear facilities can be viewed as a sequence of problem-solving activities, partly because working environments may be contaminated by radiological exposure, and partly because industrial hazards such as fire, explosion, toxic materials, and electrical and physical hazards may also be present. For individual hazard factors, risk assessment techniques are becoming familiar to industrial workers as safety technology advances, but methods for integrating their results are not. Furthermore, few workers have extensive past experience with decommissioning operations. Many countries have therefore been trying to develop appropriate techniques to guarantee the safety and efficiency of the process. Even so, no domestic or international standard yet exists, since nuclear facilities are too diverse and unique. Consequently, the whole risk must inevitably be anticipated and assessed situation by situation. This paper aimed to find an appropriate technique for integrating individual risk assessment results from the viewpoint of experts. On one hand, the whole risk assessment activity for decommissioning operations was modeled as a sequence of individual risk assessment steps; on the other, a hierarchical risk structure was developed. A risk assessment procedure that can elicit individual hazard factors one by one was then introduced, with reference to the standard operating procedure (SOP) and hierarchical task analysis (HTA). Assuming quantification and normalization of individual risks, a technique to estimate relative weight factors was tested using the conventional Analytic Hierarchy Process (AHP), and its results were reviewed against the judgment of experts.
In addition, to account for the ambiguity of human judgment, a discussion based on fuzzy inference was added with a mathematical case study.
Keywords: decommissioning, risk assessment, analytic hierarchy process (AHP), fuzzy inference
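The AHP weight-estimation step mentioned above can be sketched as follows. The pairwise comparison matrix, the three hazard factors named in the comments, and the use of the row geometric-mean approximation are illustrative assumptions, not the authors' actual expert judgments or method details.

```python
import math

# Illustrative AHP sketch (hypothetical judgments, not the paper's data).
# A reciprocal pairwise comparison matrix over three hazard factors, e.g.
# radiological exposure vs. fire/explosion vs. electrical hazards,
# expressed on Saaty's 1-9 scale.
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def ahp_weights(matrix):
    """Approximate the principal eigenvector via the row geometric-mean method."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]          # normalized relative weights

def consistency_ratio(matrix, weights):
    """Saaty's CR = CI / RI; CR below about 0.1 indicates acceptable consistency."""
    n = len(matrix)
    # lambda_max estimated as the mean of (A w)_i / w_i over all rows.
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)                # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # random consistency index (Saaty)
    return ci / ri

w = ahp_weights(A)
cr = consistency_ratio(A, w)
```

The fuzzy-inference extension the abstract mentions would replace the crisp 1-9 entries with fuzzy numbers before the same aggregation, to reflect the ambiguity of human judgment.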
Procedia PDF Downloads 424
3289 The Continuation of Trauma through Transcribing: Second Generation Survivors and the Inability for a 'Post-Holocaust'
Authors: Sarah Snyder
Abstract:
Historians use the term ‘post-Holocaust’ to indicate the period from 1945 onward; however, for survivors of the Holocaust and their families, the Holocaust did not end in 1945. In fact, for some, it was just the beginning of their struggles. There are those who could not return to their homes, find loved ones, or fight off night terrors. Many continue to suffer from mental illness or physical disease stemming from the Holocaust. For historians to gain a clearer understanding of the trauma survivors have endured, it is necessary to approach time differently. Trauma does not operate on a timeline, and thereby our understanding of ‘before,’ ‘during,’ and ‘after’ is flawed. To convey this flaw, this study examines memoirs of second- and third-generation survivors and of child survivors. Within the second- and third-generation group, two types of generational memoirs are scrutinized for this case study. The first is when a child or grandchild records the stories of their parent(s) or grandparent(s) without any of the second or third generation’s own stories implicitly written; ‘implicitly’ is used here because it is impossible for any writer not to impose at least some stylistic portion of themselves into the writing, but the intent is to focus on the parent or grandparent. The other type of memoir is when the writer intertwines their parent(s)’ or grandparent(s)’ story with their own. Additionally, the child survivor has a unique role in memory and trauma studies: much like later generations who write about the Holocaust but have not experienced the trauma firsthand, the child survivor must write about what they lived through but cannot remember without the assistance of research or other survivors. This study shows that survivors continue to demonstrate trauma-related paranoia. They fear experiencing another Holocaust, and in their minds they replay the horrors that they experienced.
A pilgrimage to present-day Europe, unlike the Europe of the 1940s, causes uncertainty, confusion, and additional paranoia. Through these findings it becomes evident that historians must learn to study trauma without imposing strict timelines that prevent understanding of how trauma affects those who have experienced complex trauma.
Keywords: Holocaust, generational, memoirs, trauma
Procedia PDF Downloads 203