Search results for: event study
49788 Beyond the White Cube: A Study on the Site Specific Curatorial Practice of Kochi Muziris Biennale
Authors: Girish Chandran, Milu Tigi
Abstract:
Brian O'Doherty's seminal essay, Inside the White Cube, theorized and named the dominant mode of display and exhibition in modern art museums. Since the advent of biennales and other site-specific public art projects, we have seen a departure from the white cube mode of exhibition. The physicality, materiality, and context within which an artwork is framed play a role in the production of meaning of public art. Equally, artworks contribute to the meaning and identity of a place. This to-and-fro relationship between site and artwork, and its influence on the sense of place and the production of meaning, is explored in this paper in the context of the Kochi Muziris Biennale (KMB). Known as the People's Biennale, with over 5 lakh visitors, it is India's first biennale and its largest exhibition of contemporary art. The paper employs place theory and contemporary curatorial theories to present the case. The KMB has an interesting mix of exhibition spaces, which includes existing galleries and halls, site-specific projects in public spaces, infill developments, and adaptive reuse of heritage and other unused architecture. The biennale was envisioned as an event connecting to the history and socio-political peculiarities of the cultural landscape of Kerala, and more specifically Kochi. The paper explains the role of spatial elements in forming a curatorial narrative connected to the above-mentioned ambitions. The site-specific nature of the exhibition and its use of unused architecture help in the formation of exhibition spaces unique in type and materiality. The paper argues that this helps in the creation of an 'archaeology of the place'. The research elucidates how a composite nature of experience helps connect with the thematic ambitions of the biennale and how it brings about an aesthetics distinct to the KMB.
Keywords: public art, curatorial practice, architecture, place, contemporary art, site specificity
Procedia PDF Downloads 159
49787 Fixed-Frequency Pulse Width Modulation-Based Sliding Mode Controller for Switching Multicellular Converter
Authors: Rihab Hamdi, Amel Hadri Hamida, Ouafae Bennis, Fatima Babaa, Sakina Zerouali
Abstract:
This paper features a sliding mode controller (SMC) for closed-loop voltage control of a DC-DC three-cell buck converter connected in parallel, operating in continuous conduction mode (CCM), based on pulse-width modulation (PWM). To maintain a constant switching frequency, the approach incorporates a pulse-width modulator that uses the equivalent control, derived by applying the SM control method, to produce a control signal that is compared with the fixed-frequency ramp within the modulator. Detailed stability and transient performance analyses have been conducted using Lyapunov stability criteria to restrict the switching frequency variation in the face of wide variations in output load, input voltage, and set point. The results obtained confirm the effectiveness of the proposed control scheme in achieving an enhanced output transient performance while faithfully realizing its control objective in the event of abrupt and uncertain parameter variations. Simulation studies in the MATLAB/Simulink environment confirm the approach.
Keywords: DC-DC converter, pulse width modulation, power electronics, sliding mode control
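The core idea of PWM-based sliding mode control described in the abstract, producing a saturated duty-cycle command from an equivalent-control term plus a term proportional to the sliding surface, can be sketched as follows. This is a minimal, hypothetical illustration: the function name and the gains k1, k2, and k_sw are illustrative placeholders, not the paper's design values.

```python
def smc_pwm_duty(v_out, v_ref, dv_dt, v_in, k1=0.5, k2=1e-4, k_sw=2.0):
    """One control-loop step of a PWM-based sliding mode voltage controller.

    Sliding surface on the output-voltage error and its derivative:
        s = k1*(v_ref - v_out) - k2*dv_dt
    The duty cycle is an equivalent-control term (the steady-state duty of
    an ideal buck stage, v_ref/v_in) plus a term proportional to s,
    saturated to [0, 1] so it can be compared against the fixed-frequency
    ramp of the modulator. All gains here are illustrative assumptions.
    """
    s = k1 * (v_ref - v_out) - k2 * dv_dt
    duty = v_ref / v_in + k_sw * s
    return min(1.0, max(0.0, duty))
```

At the regulation point (v_out equal to v_ref, zero slope) the switching term vanishes and the duty cycle reduces to the ideal buck ratio v_ref/v_in; the saturation models the physical limits of the modulator.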
Procedia PDF Downloads 147
49786 Evaluation of SCS-Curve Numbers and Runoff across Varied Tillage Methods
Authors: Umar Javed, Kristen Blann, Philip Adalikwu, Maryam Sahraei, John McMaine
Abstract:
The soil conservation service curve number (SCS-CN) method is widely used to assess direct runoff depth for specific rainfall events. "Actual" runoff depth was estimated by subtracting the change in soil moisture from the depth of precipitation for each discrete rain event during the growing seasons from 2021 to 2023. The fields under investigation were situated in a HUC-12 watershed in southeastern South Dakota, selected for a common soil series (Nora-Crofton complex and Moody-Nora complex) to minimize the influence of soil texture on soil moisture. Two soil moisture probes were installed from May 2021 to October 2023, with exceptions during planting and harvest periods. For each field, "textbook" CN estimates were derived from the TR-55 table based on the corresponding mapped land use/land cover (LULC) class and hydrologic soil group (HSG) from web soil survey maps. The TR-55 method incorporated the HSG and crop rotation within the study area fields. These textbook values were then compared to actual CN values to determine the impact of tillage practices on CN and runoff. Most fields were mapped as having a textbook C or D HSG, but the actual CNs corresponded to a B or C hydrologic group. Actual CNs were consistently lower than textbook CNs for all management practices; actual CNs in conventionally tilled fields were the highest (and closest to textbook CNs), while actual CNs in no-till fields were the lowest. Preliminary results suggest that no-till practice reduces runoff compared to conventional tillage. This research highlights the need to use CNs that incorporate agricultural management to more accurately estimate runoff at the field and watershed scale.
Keywords: curve number hydrology, hydrologic soil groups, runoff, tillage practices
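The curve number calculations underlying this abstract follow the standard NRCS equations: S = 1000/CN - 10 (inches), Ia = 0.2S, and Q = (P - Ia)^2 / (P - Ia + S) for P > Ia. A minimal sketch, assuming depths in inches and the conventional 0.2 initial-abstraction ratio (the function names are illustrative, and the back-calculation is the standard inversion of the runoff equation for S rather than the study's exact soil-moisture-based procedure):

```python
def scs_runoff(p_inches, cn):
    """Direct runoff depth (inches) from the SCS curve number method.

    Standard form: S = 1000/CN - 10, Ia = 0.2*S,
    Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0.
    """
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s
    if p_inches <= ia:
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches - ia + s)


def backcalc_cn(p_inches, q_inches):
    """Back-calculate an 'actual' CN from an observed P/Q pair by
    inverting the runoff equation for S:
        S = 5*(P + 2Q - sqrt(4Q^2 + 5PQ)),  CN = 1000/(S + 10).
    """
    s = 5.0 * (p_inches + 2.0 * q_inches
               - (4.0 * q_inches ** 2 + 5.0 * p_inches * q_inches) ** 0.5)
    return 1000.0 / (s + 10.0)
```

For example, a 3-inch storm on CN 75 ground yields roughly 0.96 inches of runoff, and feeding that P/Q pair back through `backcalc_cn` recovers CN 75, which is the round-trip logic behind comparing "textbook" and "actual" curve numbers.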
Procedia PDF Downloads 50
49785 Theoretical Discussion on the Classification of Risks in Supply Chain Management
Authors: Liane Marcia Freitas Silva, Fernando Augusto Silva Marins, Maria Silene Alexandre Leite
Abstract:
The adoption of a network structure, as in supply chains, favors increased dependence between companies and, by consequence, their vulnerability. Environmental disasters, sociopolitical and economic events, and the dynamics of supply chains elevate the uncertainty of their operation, favoring the occurrence of events that can cause breaks in operations and other undesired consequences. Thus, supply chains are exposed to various risks that can influence the profitability of the companies involved, and several previous studies have proposed risk classification models in order to categorize risks and manage them. The objective of this paper is to analyze and discuss thirty of these risk classification models by means of a theoretical survey. The research method adopted for the analysis and discussion includes three phases: the identification of the types of risks proposed in each of the thirty models; the grouping of these types, considering equivalent concepts associated with their definitions; and the analysis of the resulting risk groups, evaluating their similarities and differences. After these analyses, it was possible to conclude that there are, in fact, more than thirty risk types identified in the supply chain literature, but some of them are identical despite the distinct terms used to characterize them, because researchers adopt different criteria for risk classification. In short, it is observed that some types of risks are identified as risk sources for supply chains, such as demand risk, environmental risk, and safety risk. On the other hand, other types of risks are identified by the consequences that they can generate for supply chains, such as reputation risk, asset depreciation risk, and competitive risk. These results are a consequence of the disagreements among researchers on risk classification, mainly about what constitutes a risk event and what constitutes the consequence of a risk occurrence.
An additional study is under development to clarify how risks can be generated, and which characteristics of the components in a supply chain lead to the occurrence of risk.
Keywords: risks classification, survey, supply chain management, theoretical discussion
Procedia PDF Downloads 633
49784 Long-Range Transport of Biomass Burning Aerosols over South America: A Case Study in the 2019 Amazon Rainforest Wildfires Season
Authors: Angel Liduvino Vara-Vela, Dirceu Luis Herdies, Debora Souza Alvim, Eder Paulo Vendrasco, Silvio Nilo Figueroa, Jayant Pendharkar, Julio Pablo Reyes Fernandez
Abstract:
Biomass-burning episodes are quite common in the central Amazon rainforest and represent a dominant source of aerosols during the dry season, between August and October. The increase in the occurrence of fires in 2019 in the world’s largest biomes has captured the attention of the international community. In particular, a rare and extreme smoke-related event occurred in the afternoon of Monday, August 19, 2019, in the most populous city in the Western Hemisphere, the São Paulo Metropolitan Area (SPMA), located in southeastern Brazil. The sky over the SPMA suddenly blackened, with the day turning into night, as reported by several news media around the world. In order to clarify whether or not the smoke that plunged the SPMA into sudden darkness was related to wildfires in the Amazon rainforest region, a set of 48-hour simulations over South America were performed using the Weather Research and Forecasting with Chemistry (WRF-Chem) model at 20 km horizontal resolution, on a daily basis, during the period from August 16 to August 19, 2019. The model results were satisfactorily compared against satellite-based data products and in situ measurements collected from air quality monitoring sites. Although a very strong smoke transport coming from the Amazon rainforest was observed in the middle of the afternoon on August 19, its impact on air quality over the SPMA took place in upper levels far above the surface, where, conversely, low air pollutant concentrations were observed.
Keywords: Amazon rainforest, biomass burning aerosols, São Paulo metropolitan area, WRF-Chem model
Procedia PDF Downloads 139
49783 Simulated Translator-Client Relations in Translator Training: Translator Behavior around Risk Management
Authors: Maggie Hui
Abstract:
Risk management is not a new concept; however, it is an uncharted area as applied to the translation process and translator training. Risk managers are responsible for managing risk, i.e., adopting strategies with the intention of minimizing loss and maximizing gains in spite of uncertainty. Which risk strategy to use often depends on the frequency of an event (i.e., probability) and the severity of its outcomes (i.e., impact). This is basically the way translation/localization project managers handle risk management. Although risk management can involve both positive and negative impacts, impact seems to be always negative in professional translators’ management models, e.g., how many days of project time are lost or how many clients are lost. However, for the analysis of translation performance, the impact can be positive (e.g., increased readability of the translation) or negative (e.g., loss of source-text information). In other words, the straight business model of risk management is not directly applicable to the study of risk management in the rendition process. This research aims to explore trainee translators’ risk management while translating in a simulated setting that involves translator-client relations. A two-cycle experiment involving two roles, the translator and the simulated client, was carried out with a class of translation students to test the effects of the main variable of peer-group interaction. The researcher made use of a user-friendly screen and voice recording freeware to record subjects’ screen activities, including every word the translator typed and every change they made to the rendition, the websites they browsed, and the reference tools they used, in addition to the verbalization of their thoughts throughout the process. The research observes the translation procedures subjects considered and finally adopted, and looks into the justifications for their procedures, in order to interpret their risk management.
The qualitative and quantitative results of this study have some implications for translator training: (a) the experience of being a client seems to reinforce the translator’s risk aversion; (b) there is a wide gap between the translator’s internal risk management and their external presentation of risk; and (c) the use of role-playing simulation can empower students’ learning by enhancing their attitudinal or psycho-physiological competence, interpersonal competence and strategic competence.
Keywords: risk management, role-playing simulation, translation pedagogy, translator-client relations
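The frequency/impact logic the abstract attributes to project managers can be sketched as a simple quadrant rule. This is a generic illustration of the classic risk matrix, not the study's own model: the 0.5 thresholds and the accept/transfer/mitigate/avoid strategy names are the standard textbook taxonomy, assumed here for demonstration.

```python
def pick_risk_strategy(probability, impact,
                       p_threshold=0.5, i_threshold=0.5):
    """Classic frequency/impact quadrant used in project risk management.

    probability and impact are normalized to [0, 1]; the thresholds
    (0.5 here, an illustrative choice) split the matrix into quadrants:
      low p / low i   -> accept
      low p / high i  -> transfer (e.g. insure)
      high p / low i  -> mitigate
      high p / high i -> avoid
    """
    if probability < p_threshold:
        return "transfer" if impact >= i_threshold else "accept"
    return "avoid" if impact >= i_threshold else "mitigate"
```

The abstract's point is precisely that this straight business model does not transfer cleanly to translation, where "impact" can be positive (improved readability) as well as negative (information loss), so a signed or two-sided impact scale would be needed.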
Procedia PDF Downloads 261
49782 Effects of the Compressive Eocene Tectonic Phase in the Bou Kornine-Ressas-Messella Structure and Surroundings (Northern Tunisia)
Authors: Aymen Arfaoui, Abdelkader Soumaya
Abstract:
The Messella-Ressas-Bou Kornine (MRB) and Hammamet-Korbous (HK) major north-south-trending fault zones provide a good opportunity to show the effects of the Eocene compressive phase in northern Tunisia. They acted as paleogeographic boundaries during the Mesozoic and belonged to a significant strike-slip corridor called the «North-South Axis,» extending from the Saharan platform in the south to the Gulf of Tunis in the north. Our study area is situated in a relay zone between two significant strike-slip faults (HK and MRB), separating the Atlas domain from the Pelagian Block. We used a multidisciplinary approach, including fieldwork, stress inversion, and geophysical profiles, to document the shortening event that affected the study region. The MRB and HK contractional duplex is a privileged area for a local stress field and stress nucleation. The stress inversion of fault slip data reveals an Eocene compression with an average NW-SE-trending SHmax, reactivating most of the ancient Mesozoic normal faults in the region. This shortening phase is represented in the MRB belt by an angular unconformity between the Upper Eocene and various Cretaceous strata. Under this shortening, the major N-S faults are reactivated as sinistral oblique faults. The orientation of SHmax deviates from NW-SE to E-W near the preexisting deep faults of the MRB and HK. This E-W stress direction generated the emergent overlap of Ressas-Messella and blind thrust faults in the Cretaceous deposits. The connection of the sub-meridian reverse faults at depth creates "flower structures" under an E-W local compressive stress. In addition, we detected a reorientation of SHmax into an N-S direction in the central part of the MRB-HK contractional duplex, creating E-W reverse faults and overlapping zones.
Finally, the Eocene compression constituted the first major tectonic phase that inverted the Mesozoic preexisting extensional fault system in northern Tunisia.
Keywords: Tunisia, Eocene compression, tectonic stress field, Bou Kornine-Ressas-Messella
Procedia PDF Downloads 72
49781 De-Securitizing Identity: Narrative (In)Consistency in Periods of Transition
Authors: Katerina Antoniou
Abstract:
When examining conflicts around the world, it is evident that the majority of intractable conflicts are steeped in identity. Identity seems to be not only a causal variable for conflict, but also a catalytic parameter for the process of reconciliation that follows ceasefire. This paper focuses on the process of identity securitization that occurs between rival groups of heterogeneous collective identities – ethnic, national or religious – as well as on the relationship between identity securitization and the ability of the groups involved to reconcile. Are securitized identities obstacles to the process of reconciliation, able to hinder any prospects of peace? If the level to which an identity is securitized is catalytic to a conflict’s discourse and settlement, then which factors act as indicators of identity de-securitization? The level of an in-group’s identity securitization can be estimated through a number of indicators, one of which is narrative. The stories, views and stances each in-group adopts in relation to its history of conflict and relation with their rival out-group can clarify whether that specific in-group feels victimized and threatened or safe and ready to reconcile. Accordingly, this study discusses identity securitization through narrative in relation to intractable conflicts. Are there conflicts around the world that, despite having been identified as intractable, stagnated or insoluble, show signs of identity de-securitization through narrative? This inquiry uses the case of the Cyprus conflict and its partitioned societies to present official narratives from the two communities and assess whether these narratives have transformed, indicating a less securitized in-group identity for the Greek and Turkish Cypriots. 
Specifically, the study compares the official historical overviews presented on each community’s Ministry of Foreign Affairs website and discusses the extent to which the two official narratives present a securitized collective identity. In addition, the study observes whether official stances by the two communities – as adopted by community leaders – have transformed to depict less securitization over time. Additionally, the leaders’ reflection of popular opinion is evaluated through recent opinion polls from each community. Cyprus is currently experiencing renewed optimism for reunification, with the leaders of its two communities engaging in rigorous negotiations, and with rumors calling for a potential referendum on reunification to take place as early as 2016. Although the leaders have shown a shift in their rhetoric and have moved away from narratives of victimization, this is not the case for the official narratives used by their respective ministries of foreign affairs. The study’s findings explore whether this narrative inconsistency proves that Cyprus is transitioning towards reunification, or whether the leaders risk sending a securitized population to the polls to reject a potential reunification. More broadly, this study suggests that in the event that intractable conflicts are moving towards viable peace, in-group narratives (official narratives in particular) can act as indicators of the extent to which rival entities have managed to reconcile.
Keywords: conflict, identity, narrative, reconciliation
Procedia PDF Downloads 324
49780 The Communication Library DIALOG for iFDAQ of the COMPASS Experiment
Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius
Abstract:
Modern experiments in high energy physics impose great demands on the reliability, the efficiency, and the data rate of Data Acquisition Systems (DAQ). This contribution focuses on the development and deployment of the new communication library DIALOG for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. The iFDAQ, utilizing a hardware event builder, is designed to be able to read out data at the maximum rate of the experiment. The DIALOG library is a communication system for both distributed and mixed environments; it provides a network-transparent inter-process communication layer. Using the high-performance and modern C++ framework Qt and its Qt Network API, the DIALOG library presents an alternative to the previously used DIM library. The DIALOG library was fully incorporated into all processes in the iFDAQ during the 2016 run. From the software point of view, it may be considered a significant improvement of the iFDAQ in comparison with the previous run. To extend the possibilities of debugging, online monitoring of communication among processes via the DIALOG GUI is a desirable feature. In the paper, we present the DIALOG library from several perspectives and discuss it in detail. Moreover, an efficiency measurement and comparison with the DIM library with respect to the iFDAQ requirements is provided.
Keywords: data acquisition system, DIALOG library, DIM library, FPGA, Qt framework, TCP/IP
Procedia PDF Downloads 316
49779 Understanding Hydrodynamic in Lake Victoria Basin in a Catchment Scale: A Literature Review
Authors: Seema Paul, John Mango Magero, Prosun Bhattacharya, Zahra Kalantari, Steve W. Lyon
Abstract:
The purpose of this review paper is to develop an understanding of lake hydrodynamics and the potential climate impact at the Lake Victoria (LV) catchment scale. The paper briefly discusses the main problems of lake hydrodynamics and their solutions as related to quality assessment and climate effects. An empirical modeling and mapping methodology was considered for understanding lake hydrodynamics and visualizing long-term observational daily, monthly, and yearly mean datasets using geographic information system (GIS) and COMSOL techniques. Data were obtained for the whole lake and five different meteorological stations, and several geoprocessing tools with spatial analysis were used to produce results. Linear regression analyses were developed to build climate scenarios and a linear trend on lake rainfall data over a long period. Potential evapotranspiration rates were described using MODIS data and the Thornthwaite method. The rainfall effect on lake water level was observed using partial differential equations (PDEs), and water quality was characterized by a few nutrient parameters. The study revealed that monthly and yearly rainfall varies with monthly and yearly maximum and minimum temperatures, that rainfall is high during cool years, and that high temperatures are associated with below-average and average rainfall patterns. Rising temperatures are likely to accelerate evapotranspiration rates, and more evapotranspiration is likely to lead to more rainfall; drought is more correlated with temperature, and cloud cover is more correlated with rainfall. There is a trend in lake rainfall, and long-term rainfall on the lake water surface has affected the lake level. The onshore and offshore zones have been characterized using initial literature nutrient data.
The study recommends that further work consider full lake bathymetry development with flow analysis and water balance, hydro-meteorological processes, solute transport, wind hydrodynamics, pollution, and eutrophication, which are crucial for lake water quality, climate impact assessment, and water sustainability.
Keywords: climograph, climate scenarios, evapotranspiration, linear trend flow, rainfall event on LV, concentration
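The Thornthwaite method mentioned above estimates monthly potential evapotranspiration from air temperature alone. A minimal sketch of the standard uncorrected form is given below, assuming 30-day months and 12-hour days (in practice a day-length correction factor is applied per latitude and month; the function name is illustrative and this is not necessarily the exact variant used in the review):

```python
def thornthwaite_pet(monthly_temps_c):
    """Uncorrected monthly potential evapotranspiration (mm) by the
    Thornthwaite method.

    monthly_temps_c: 12 mean monthly air temperatures in deg C.
    I = sum((T/5)**1.514) over months with T > 0 (annual heat index),
    a = 6.75e-7*I**3 - 7.71e-5*I**2 + 1.792e-2*I + 0.49239,
    PET = 16 * (10*T/I)**a   (assumes 30-day months, 12-hour days).
    """
    heat_index = sum((t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0)
    a = (6.75e-7 * heat_index ** 3 - 7.71e-5 * heat_index ** 2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * t / heat_index) ** a if t > 0 else 0.0
            for t in monthly_temps_c]
```

For a near-equatorial catchment like Lake Victoria's, with mean monthly temperatures around 25 °C year-round, this yields on the order of 100 mm of PET per month, which is the quantity compared against rainfall in the kind of water-balance reasoning the review describes.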
Procedia PDF Downloads 99
49778 Enhanced Iron Accumulation in Chickpea Through Expression of Iron-Regulated Transport and Ferritin Genes
Authors: T. M. L. Hoang, G. Tan, S. D. Bhowmik, B. Williams, A. Johnson, M. R. Karbaschi, Y. Cheng, H. Long, S. G. Mundree
Abstract:
Iron deficiency is a worldwide problem affecting both developed and developing countries. Currently, two major approaches, iron supplementation and food fortification, have been used to combat this issue. These measures, however, are limited by the economic status of the targeted demographics. Iron biofortification, genetic modification to enhance the inherent iron content and bioavailability of crops, has been employed more recently. Several important crops, such as rice, wheat, and banana, have reportedly been improved in iron content via this method, but there is no known study in legumes. Chickpea (Cicer arietinum) is an important leguminous crop that is widely consumed, particularly in India, where iron deficiency anaemia is prevalent. Chickpea is also an ideal pulse for the formulation of complementary foods combining pulses and cereals to improve micronutrient content. This project aims at generating chickpea with enhanced iron accumulation and bioavailability through the exogenous expression of genes related to iron transport and iron homeostasis in chickpea plants. Iron-Regulated Transporter (IRT) and ferritin genes in combination were transformed into chickpea half-embryonic axes by Agrobacterium-mediated transformation. Independent transgenic events were confirmed by Southern blot analysis. T3 leaves and seeds of transgenic chickpea were assessed for iron content using LA-ICP-MS (laser ablation inductively coupled plasma mass spectrometry) and ICP-OES (inductively coupled plasma optical emission spectrometry). The correlation between transgene expression levels and iron content in T3 plants and seeds was assessed using qPCR. Results show that iron content in transgenic chickpea expressing the above genes significantly increased compared to that in non-transgenic controls.
Keywords: iron biofortification, chickpea, IRT, ferritin, Agrobacterium-mediated transformation, LA-ICP-MS, ICP-OES
Procedia PDF Downloads 441
49777 Teacher’s Role in the Process of Identity Construction in Language Learners
Authors: Gaston Bacquet
Abstract:
The purpose of this research is to explore how language and culture shape a learner’s identity as they immerse themselves in the world of second language learning and how teachers can assist in the process of identity construction within a classroom setting. The study will be conducted as an in-classroom ethnography, using a qualitative methods approach and analyzing students’ experiences as language learners, their degree of investment, inclusion/exclusion, and attitudes, both towards themselves and their social context; the research question the study will attempt to answer is: What kind of pedagogical interventions are needed to help language learners in the process of identity construction so they can offset unequal conditions of power and gain further social inclusion? The following methods will be used for data collection: i) Questionnaires to investigate learners’ attitudes and feelings in different areas divided into four strands: themselves, their classroom, learning English and their social context. ii) Participant observations, conducted in a naturalistic manner. iii) Journals, which will be used in two different ways: on the one hand, learners will keep semi-structured, solicited diaries to record specific events as requested by the researcher (event-contingent). On the other, the researcher will keep his journal to maintain a record of events and situations as they happen to reduce the risk of inaccuracies. iv) Person-centered interviews, which will be conducted at the end of the study to unearth data that might have been occluded or be unclear from the methods above. The interviews will aim at gaining further data on experiences, behaviors, values, opinions, feelings, knowledge and sensory, background and demographic information. 
This research seeks to understand issues of socio-cultural identities and thus make a significant contribution to knowledge in this area by investigating the type of pedagogical interventions needed to assist language learners in the process of identity construction to achieve further social inclusion. It will also have applied relevance for those working with diverse student groups, especially taking our present social context into consideration: we live in a highly mobile world, with migrants relocating to wealthier, more developed countries that pose their own particular set of challenges for these communities. This point is relevant because an individual’s insight and understanding of their own identity shape their relationship with the world and their ability to continue constructing this relationship. At the same time, because a relationship is influenced by power, the goal of this study is to help learners feel and become more empowered by increasing their linguistic capital, which we hope might result in a greater ability to integrate themselves socially. Exactly how this help will be provided will vary as data is unearthed through questionnaires, focus groups and the actual participant observations being carried out.
Keywords: identity construction, second-language learning, investment, second-language culture, social inclusion
Procedia PDF Downloads 103
49776 The Renewal of Chinese Urban Village on Cultural Ecology: Hubei Village as an Example
Authors: Shaojun Zheng, Lei Xu, Yunzi Wang
Abstract:
The main purpose of this research is to use cultural ecology to analyze the renewal of Shenzhen's urban villages in the process of China's urbanization, and to evaluate and guide that renewal so that it combines social value with economic efficiency and activates urban villages. The urban village has a long history. There are also many old buildings, various residents, and a strong connection with the surrounding environment. Cultural ecology, which uses the knowledge of ecology to study culture, provides a cultural perspective on renewal. We take Hubei village in Shenzhen as our example. By using cultural ecology, we find a new way of dealing with the relationship between culture and other factors. It helps us give buildings and spaces cultural meanings at different scales, and enables us to find a unique development pattern for the urban village. After analyzing several famous cultural block cases, we find it possible to connect the unique culture of the urban village with the renovation of its buildings, community, and commerce. We propose the following strategies, each with a specific target: 1. Building renovation: we repair and rebuild the original buildings as little as possible and retain the original urban space tissue as much as possible to keep the original sense of place and cultural atmosphere. 2. Community upgrade: we reshape the village stream, restore the original functions, and add events that activate people to complete the existing cultural circle. 3. District commerce: we introduce food and drink districts, boutique retail, and creative industries to make full use of the historical atmosphere of the site and enhance the cultural experience. For the renewal of a seemingly chaotic, mixed urban village, it is important to break from the conventional practice of building shopping malls or residential towers.
Without creating such building landmarks, cultural ecology activates the urban village by exploiting its unique culture, combining the old and the new into a new stream of energy and forming a new cultural, commercial, and stylish landmark for the city.
Keywords: cultural ecology, urban village, renewal, combination
Procedia PDF Downloads 392
49775 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are critical part of loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines against earthquake and liquefaction has been observed during some of previous earthquakes and there are a lot of comprehensive reports about this event. One of the main reasons for impairment of buried pipelines during earthquake is liquefaction. Necessary conditions for this phenomenon are loose sandy soil, saturation of soil layer and earthquake intensity. Because of this fact that pipelines structure are very different from other structures (being long and having light mass) by paying attention to the results of previous earthquakes and compare them with other structures, it is obvious that the danger of liquefaction for buried pipelines is not high risked, unless effective parameters like earthquake intensity and non-dense soil and other factors be high. Recent liquefaction researches for buried pipeline include experimental and theoretical ones as well as damage investigations during actual earthquakes. The damage investigations have revealed that a damage ratio of pipelines (Number/km ) has much larger values in liquefied grounds compared with one in shaking grounds without liquefaction according to damage statistics during past severe earthquakes, and that damages of joints and pipelines connected with manholes were remarkable. The purpose of this research is numerical study of buried pipelines under the effect of liquefaction by case study of the 2013 Dashti (Iran) earthquake. Water supply and electrical distribution systems of this township interrupted during earthquake and water transmission pipelines were damaged severely due to occurrence of liquefaction. The model consists of a polyethylene pipeline with 100 meters length and 0.8 meter diameter which is covered by light sandy soil and the depth of burial is 2.5 meters from surface. 
Since the finite element method has been applied relatively successfully to geotechnical problems, it was used for the numerical analysis. Evaluating this case requires geotechnical data, a classification of earthquake levels, determination of the parameters governing liquefaction probability, and three-dimensional finite element modeling of the soil-pipeline interaction. The results indicate that the effect of liquefaction is a function of pipe diameter, soil type, and peak ground acceleration, and that the percentage of damage clearly increases with liquefaction severity. They also show that although in this form of analysis the damage is always attributed to a certain pipe material, the nominally defined "failures" consist largely of failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, retrofit suggestions are given to decrease the liquefaction risk for buried pipelines.
Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
Procedia PDF Downloads 513
49774 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans
Authors: Jelena Vucicevic
Abstract:
Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety and monetary costs. There are established ways to calculate reliability, unreliability, failure density and failure rate. This paper introduces another way of calculating reliability, using the R statistical software. R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and macOS. The R programming environment is a widely used open source system for statistical analysis and statistical programming. It includes thousands of functions for the implementation of both standard and new statistical methods, and it does not limit the user to these built-in functions. R has many benefits over similar programs: it is free and, as an open source system, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. A further advantage is that no deep technical detail is needed: the method can be applied to any component whose time to failure must be known in order to schedule appropriate maintenance, maximize usage and minimize costs. In this case, the calculations have been made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with a higher quality fan to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure or until the end of the study (whichever came first) was recorded. The dataset consists of two variables: hours and status.
Hours records the running time of each fan, and status records the event: 1 for failed, 0 for censored. Censored data represent cases that could no longer be tracked, so it is unknown whether the fan eventually failed or survived. Obtaining the result with R was easy and quick: the program takes the censoring into account and includes it in the results, which is not straightforward in a hand calculation. For the purposes of the paper, the results from R were compared to hand calculations in two variants: censored data treated as failures, and censored data treated as successes. The results of the three approaches differ significantly. For further calculations, R will give more precise results than hand calculation because of its proper treatment of censored data.
Keywords: censored data, R statistical software, reliability analysis, time to failure
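The statistical idea behind this workflow is a survival (reliability) estimate over right-censored data. As a language-neutral sketch of that idea, the Kaplan-Meier product-limit estimator below is a minimal Python implementation; the study itself used R, and the small dataset here is illustrative, not the fan data.

```python
def kaplan_meier(hours, status):
    """Kaplan-Meier reliability estimate for right-censored data.

    hours:  running time until failure or end of study
    status: 1 = failed, 0 = censored (still running at study end)
    Returns a list of (time, estimated survival probability) pairs.
    """
    data = sorted(zip(hours, status))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # all observations tied at this time point
        ties = [st for h, st in data[i:] if h == t]
        failures = sum(ties)
        if failures:
            survival *= (at_risk - failures) / at_risk
            curve.append((t, survival))
        at_risk -= len(ties)
        i += len(ties)
    return curve

# tiny illustrative dataset: failures at 5, 10 and 15 hours, one fan
# censored at 10 hours; survival steps down only at failure times
print(kaplan_meier([5, 10, 10, 15], [1, 0, 1, 1]))
```

The censored observation lowers the number at risk without forcing a drop in the survival estimate, which is exactly the bookkeeping that is awkward in a hand calculation.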
Procedia PDF Downloads 401
49773 The Impact of Trait and Mathematical Anxiety on Oscillatory Brain Activity during Lexical and Numerical Error-Recognition Tasks
Authors: Alexander N. Savostyanov, Tatyana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Yulia V. Kovas
Abstract:
The present study compared spectral-power indexes and the cortical topography of brain activity in a sample characterized by different levels of trait and mathematical anxiety. 52 healthy Russian speakers (age 17-32; 30 males) participated in the study. Participants solved an error recognition task under three conditions: a lexical condition (simple sentences in Russian) and two numerical conditions (simple arithmetic and complicated algebraic problems). Trait and mathematical anxiety were measured using self-report questionnaires. EEG activity was recorded simultaneously during task execution. Event-related spectral perturbations (ERSP) were used to analyze spectral-power changes in brain activity. Additionally, sLORETA was applied in order to localize the sources of brain activity. When exploring the EEG activity recorded after task onset during the lexical condition, sLORETA revealed increased activation in frontal and left temporal cortical areas, mainly in the alpha/beta frequency ranges. When examining the EEG activity recorded after task onset during the arithmetic and algebraic conditions, additional activation in the delta/theta band in the right parietal cortex was observed. The ERSP plots revealed alpha/beta desynchronizations within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three (lexical, arithmetic and algebraic) conditions. The level of trait anxiety was positively correlated with the amplitude of alpha/beta desynchronization. The level of mathematical anxiety was negatively correlated with the amplitudes of theta synchronization and alpha/beta desynchronization. Overall, trait anxiety was related to an increase in brain activation during task execution, whereas mathematical anxiety was associated with increased inhibition-related activity.
We gratefully acknowledge the support from the №11.G34.31.0043 grant from the Government of the Russian Federation.
Keywords: anxiety, EEG, lexical and numerical error-recognition tasks, alpha/beta desynchronization
Procedia PDF Downloads 525
49772 An Automatic Generating Unified Modelling Language Use Case Diagram and Test Cases Based on Classification Tree Method
Authors: Wassana Naiyapo, Atichat Sangtong
Abstract:
Software development with the object-oriented methodology involves many stages that take time and incur high costs. An undetected error in the system analysis process will affect the design and implementation processes, and unexpected outputs force a revision of the previous process; every such rollback adds expense and delay. With a good test process from the early phases, the implemented software is therefore efficient, reliable and meets the user's requirements. Unified Modelling Language (UML) is a notation that uses symbols to describe the work process in Object Oriented Analysis (OOA). This paper presents an approach for automatically generating a UML use case diagram and test cases. The use case diagram is generated from the event table, and test cases are generated from use case specifications and Graphic User Interfaces (GUI). Test cases are derived with the Classification Tree Method (CTM), which classifies data into nodes of a hierarchical structure. The paper also describes the program that generates the use case diagram and test cases. As a result, the approach can reduce working time and increase work efficiency.
Keywords: classification tree method, test case, UML use case diagram, use case specification
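The core of the classification tree method is to partition each input aspect into disjoint classes and build test cases as combinations of one class per classification. A minimal sketch of that combination step; the "login" classifications below are an invented example, not taken from the paper.

```python
from itertools import product

def generate_test_cases(classifications):
    """Build the exhaustive set of test cases from a classification tree:
    each test case picks exactly one leaf class per classification."""
    names = list(classifications)
    return [dict(zip(names, combo))
            for combo in product(*(classifications[n] for n in names))]

# hypothetical classification tree for a login use case
tree = {
    "username": ["valid", "unknown", "empty"],
    "password": ["correct", "wrong"],
}
cases = generate_test_cases(tree)
print(len(cases))  # 3 classes x 2 classes = 6 combinations
```

In practice, CTM tools prune this Cartesian product with coverage rules (e.g. pairwise coverage) so the generated suite stays manageable for larger trees.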
Procedia PDF Downloads 162
49771 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea
Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal
Abstract:
Tectonism-induced tsunami, landslide, and ground shaking leading to liquefaction, infrastructure collapse and conflagration are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built-up infrastructure such as roads, bridges, buildings and other property is a collateral consequence. Appropriate planning, based on proper evaluation and assessment of the potential level of earthquake hazard at a site, must come first, with a view to safeguarding people's welfare, infrastructure and other property. The resulting information can serve as a tool to help minimize earthquake risk and to inform construction design and the formulation of building codes at a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, the potential of GIS and Remote Sensing was utilized to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were the common factors assessed and integrated within a GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude and earthquake depth, to prepare liquefaction potential zones (LPZ) culminating in an earthquake hazard zonation of the study sites. Liquefaction can eventuate in the aftermath of severe ground shaking given amenable site soil conditions, geology and geomorphology. These site conditions, the wave propagation media, were assessed to identify the potential zones. The precept is that during any earthquake event a seismic wave is generated and propagates from the earthquake focus to the surface.
As it propagates, it passes through particular geological or geomorphological and soil features, which, depending on their strength, stiffness and moisture content, amplify or attenuate the wave on its way to the surface. Accordingly, the resulting intensity of shaking may or may not lead to the collapse of built-up infrastructure. For the earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-criteria Evaluation (MCE) with Saaty's Analytical Hierarchy Process (AHP) was adopted for this study. This GIS technique integrates several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. The factors are weighted and ranked in the order of their contribution to earthquake-induced liquefaction, and the weights and ranks assigned to each factor are normalized with the AHP technique. The spatial analysis tools in ArcGIS 10, i.e., the raster calculator, reclassify, and overlay analysis, were mainly employed in the study. The final outputs of the LPZ and earthquake hazard zones were reclassified into 'Very High', 'High', 'Moderate', 'Low' and 'Very Low' levels of hazard within the study region.
Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism
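The AHP normalization step mentioned above derives factor weights from a pairwise comparison matrix via its principal eigenvector and checks judgment consistency. A stdlib-only sketch of that computation; the 3x3 matrix of judgments below is an invented illustration, not the study's actual factor comparisons.

```python
def ahp_weights(m, iters=200):
    """Weights from a pairwise comparison matrix by power iteration,
    plus Saaty's consistency ratio (CR < 0.1 is conventionally acceptable)."""
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):  # power iteration toward the principal eigenvector
        w = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    # estimate the principal eigenvalue lambda_max from A.w
    aw = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)                      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index
    return w, ci / ri

# invented pairwise judgments for three thematic layers
# (say geology vs geomorphology vs PGA); m[i][j] = importance of i over j
matrix = [[1.0, 2.0, 3.0],
          [0.5, 1.0, 2.0],
          [1 / 3, 0.5, 1.0]]
weights, cr = ahp_weights(matrix)
```

The resulting weights are what get multiplied into the thematic raster layers during the overlay analysis; an inconsistent judgment matrix (CR >= 0.1) would normally be revised before use.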
Procedia PDF Downloads 266
49770 The Trigger-DAQ System in the Mu2e Experiment
Authors: Antonio Gioiosa, Simone Doanti, Eric Flumerfelt, Luca Morescalchi, Elena Pedreschi, Gianantonio Pezzullo, Ryan A. Rivera, Franco Spinella
Abstract:
The Mu2e experiment at Fermilab aims to measure the charged-lepton flavour violating neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus. With the expected experimental sensitivity, Mu2e will improve the previous limit by four orders of magnitude. The Mu2e data acquisition (DAQ) system provides hardware and software to collect digitized data from the tracker, calorimeter, cosmic ray veto, and beam monitoring systems. Mu2e's trigger and data acquisition system (TDAQ) uses otsdaq as its solution. Developed at Fermilab, otsdaq uses the artdaq DAQ framework and the art analysis framework under the hood for event transfer, filtering, and processing. Otsdaq is an online DAQ software suite with a focus on flexibility and scalability, providing a multi-user, web-based interface accessible through the Chrome or Firefox web browser. The detector readout controllers (ROCs) of the tracker and calorimeter stream zero-suppressed data continuously to the data transfer controller (DTC). Data are then read over the PCIe bus by a software filter algorithm that selects events, which are finally combined with the data flux that comes from the cosmic ray veto system (CRV).
Keywords: trigger, DAQ, Mu2e, Fermilab
Procedia PDF Downloads 155
49769 Electroencephalogram Based Approach for Mental Stress Detection during Gameplay with Level Prediction
Authors: Priyadarsini Samal, Rajesh Singla
Abstract:
Many mobile games entertain while also introducing stress to the human brain. In recognizing this mental stress, the brain-computer interface (BCI) plays an important role. BCIs draw on various neuroimaging approaches that help in analyzing brain signals; electroencephalography (EEG) is the most commonly used among them, as it is non-invasive, portable, and economical. This paper investigates the pattern in brain signals when mental stress is introduced. Two healthy volunteers played a game whose aim was to find hidden words in a grid, with the levels chosen randomly. The EEG signals during gameplay were recorded to investigate the impact of stress as the levels changed from easy to medium to hard. A total of 16 EEG features were analyzed in this experiment, including band power features with relative powers and event-related desynchronization, along with statistical features. A support vector machine was used as the classifier, which resulted in an accuracy of 93.9% for the three-level stress analysis; for two levels, accuracies of 92% and 98% were achieved. In addition, another game of a similar nature was played by the volunteers. A regression model was designed for level prediction, in which the feature sets of the second and first games were used for training and testing, respectively, and an accuracy of 73% was found.
Keywords: brain computer interface, electroencephalogram, regression model, stress, word search
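Relative band power, one of the feature families mentioned above, is simply the share of spectral power falling inside a frequency band. A stdlib-only sketch using a plain DFT; the band limits are the conventional EEG ranges, and the one-second test signal is synthetic, not study data.

```python
import math

def relative_band_power(signal, fs, bands):
    """Relative spectral power per band from a plain DFT (no FFT library).

    signal: list of samples, fs: sampling rate in Hz,
    bands: {name: (low_hz, high_hz)} with inclusive band edges.
    """
    n = len(signal)
    power = []
    for k in range(n // 2):  # one-sided spectrum
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append(re * re + im * im)
    freqs = [k * fs / n for k in range(n // 2)]
    total = sum(power[1:])  # ignore the DC component
    return {name: sum(p for f, p in zip(freqs, power) if lo <= f <= hi) / total
            for name, (lo, hi) in bands.items()}

# synthetic 1-second epoch: a pure 10 Hz (alpha-range) oscillation
fs = 128
epoch = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
rel = relative_band_power(epoch, fs,
                          {"theta": (4, 7), "alpha": (8, 13), "beta": (14, 30)})
# nearly all power lands in the alpha band for this signal
```

Real pipelines would use an FFT and windowed epochs, but the per-band ratio computed here is the same quantity that feeds the classifier as a feature.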
Procedia PDF Downloads 187
49768 The Role of Instruction in Knowledge Construction in Online Learning
Authors: Soo Hyung Kim
Abstract:
Two different learning approaches were compared: focusing on factual knowledge, or focusing on the meaning embedded in the statements. Each way of learning benefits different question categories: factual knowledge helps more with simple fact questions, while searching for meaning in the given information helps in learning causal relationships and the embedded meaning. To test this, two groups of learners (12 male and 39 female adults aged 18-37) watched a ten-minute YouTube video about various factual events of American history, their meaning, and the causal relations between the events. The fact group was asked to focus on the factual knowledge in the video, and the meaning group was asked to focus on the embedded meaning. After watching the video, both groups answered multiple-choice questions: 10 questions on the factual knowledge addressed in the video and 10 on its embedded meaning, such as the causal relationship between historical events and the significance of an event. An ANCOVA analysis found that the fact group performed better on the factual questions than the meaning group, while there was no group difference on the meaning questions. The finding suggests that teacher instruction plays an important role in the type of knowledge learners construct in online learning.
Keywords: factual knowledge, instruction, meaning-based knowledge, online learning
Procedia PDF Downloads 134
49767 Short-Term Effects of an Open Monitoring Meditation on Cognitive Control and Information Processing
Authors: Sarah Ullrich, Juliane Rolle, Christian Beste, Nicole Wolff
Abstract:
Inhibition and cognitive flexibility are essential parts of the executive functions in our daily lives, as they enable the avoidance of unwanted responses or selective switching between mental processes to generate appropriate behavior. There is growing interest in improving inhibition and response selection through brief mindfulness-based meditations. Arguably, open-monitoring meditation (OMM) improves inhibitory and flexibility performance by optimizing cognitive control and information processing, yet the underlying neurophysiological processes have been poorly studied. Using the Simon-Go/Nogo paradigm, the present work examined the effect of a single 15-minute smartphone-app-based OMM on inhibitory performance and response selection in meditation novices. We used both behavioral and neurophysiological measures (event-related potentials, ERPs) to investigate which subprocesses of response selection and inhibition are altered after OMM. The study was conducted in a randomized crossover design with N = 32 healthy adults, investigating Go and Nogo trials in the paradigm. The results show that as little as 15 minutes of OMM can improve response selection and inhibition at behavioral and neurophysiological levels. More specifically, OMM reduces the rate of false alarms, especially during Nogo trials, regardless of congruency. OMM appears to optimize conflict processing and response inhibition compared to no meditation, which is also reflected in the ERP N2 and P3 time windows. The results may be explained by the meta-control model, which posits a specific processing mode with increased flexibility and inclusive decision-making under OMM. Importantly, however, the effects of OMM were only evident when there was prior experience with the task. OMM likely frees up cognitive resources, as the amplitudes of these ERPs decreased.
After familiarization with the task, OMM novices thus seem to make finer adjustments during conflict processing.
Keywords: EEG, inhibition, meditation, Simon Nogo
Procedia PDF Downloads 211
49766 Assessment on the Conduct of Arnis Competition in Pasuc National Olympics 2015: Basis for Improvement of Rules in Competition
Authors: Paulo O. Motita
Abstract:
The Philippine Association of State Colleges and Universities (PASUC) is an association of State-owned and operated higher learning institutions in the Philippines; it spearheads the conduct of the annual national athletic competitions for State Colleges and Universities, and Arnis is one of the regular sports. In 2009, Republic Act 9850 declared Arnis the national sport and martial art of the Philippines. Arnis, an ancient Filipino martial art, is a major sport in the annual Palarong Pambansa and other school-based sports events. The researcher, a Filipino martial arts master and former athlete, sought to determine the extent of acceptability of the Arnis rules in competition, which serves as the basis for the development of those rules. The study aimed to assess the conduct of the Arnis competition in PASUC Olympics 2015 in Tuguegarao City, Cagayan, Philippines, covering both the rules and the conduct itself, as perceived by officiating officials, coaches and athletes during the competition last February 7-15, 2015. The descriptive method of research was used, and the survey questionnaire used as the data gathering instrument was validated. The respondents were composed of 12 officiating officials, 19 coaches and 138 athletes representing the different regions. Their responses were treated using the mean, percentage and one-way analysis of variance. The study revealed that the conduct of the Arnis competition in PASUC Olympics 2015 was at a low to moderate extent as perceived by the three groups of respondents in terms of officiating, scoring and the giving of violations. Furthermore, there is no significant difference among the assessments of the three groups of respondents for Anyo and Labanan. Considering the findings of the study, the following conclusions were drawn: 1) There is a need to identify the criteria for judging in Anyo and to scrutinize closely the rules of the game for Labanan.
2) The three groups of respondents share the view that, for the overall Anyo competition, there were no clear technical guidelines for judging the performance of the Anyo event. 3) The three groups likewise share the view that, for the overall Labanan competition, there were no clear technical guidelines for the majority rule of giving scores. 4) Anyo performance should be rated according to the effectiveness of the techniques and the handling of the weapon or weapons being used. 5) On other issues and concerns, the rules of competition for Labanan should be improved by focusing on the application of the majority rule for scoring; players should be given rest intervals; and clear guidelines and standard qualifications for officiating officials should be set.
Keywords: PASUC Olympics 2015, Arnis rules of competition, Anyo, Labanan, officiating
Procedia PDF Downloads 458
49765 The Psychological Impact of Memorials on People: The Case of Northern-Cyprus
Authors: Ma'in Abushaikha
Abstract:
Memorials are usually landmarks: an object, a sculpture or a statue. They are built for a specific group, or for a person who died after making a historical contribution, or they may refer to an important hub, event or culture. Their purpose is to keep past events alive in common memory through this kind of physical representation in public areas, or to satisfy the desire to honour something: a person who suffered or died during a conflict, a group of people, or even a whole society for a character it possessed during a specific period. The aim of the research is to look more deeply at the importance of a memorial's placement and environment for more successful outcomes for people's psychology and, in turn, their behavior, manners and characteristics, bearing in mind that memorials are usually set up for functional purposes, so that people are engaged meaningfully and psychologically more than aesthetically. What contribution, positive or negative, does memorialization through its physical and urban elements make to people? Does it act on locals' social reconstruction over time, including their understanding of current conflicts, or on their general behavior, manners and characteristics in psychological terms? How important is a memorial's placement for the observer, and how does placement reduce or increase its value, attractiveness and effectiveness? This paper takes the memorials of Northern Cyprus as the main case study to examine the research hypothesis: a comparison between different memorials is the main approach to addressing these questions. The research therefore requires a field survey, interviewing both dwellers and general observers, as well as a library survey reviewing similar studies.
As its main result, this research assesses how important memorial placement is for conveying a memorial's impact to observers: the most successfully placed memorials have the greatest effect on observers' psychology over time through the mental associations this kind of physical representation introduces.
Keywords: memorials, placement, environment, impact, psychology, characteristics, manners, behavior
Procedia PDF Downloads 264
49764 Urban Resilience and Its Prioritised Components: Analysis of Industrial Township Greater Noida
Authors: N. Mehrotra, V. Ahuja, N. Sridharan
Abstract:
Resilience is an all-hazard and proactive approach that requires multidisciplinary input into the interrelated variables of the city system. This research identifies and operationalizes indicators for assessment in the domains of institutions, infrastructure and knowledge, all three operating in task-oriented community networks. The paper gives a brief account of the methodology developed for the assessment of urban resilience and its prioritised components for a target population within a newly planned urban complex integrating Surajpur and Kasna villages as nodes. People's perception of urban resilience has been examined through a questionnaire survey among the target population of Greater Noida. As defined by experts, the urban resilience of a place is considered both a product and a process of operation to regain normalcy after a disturbance of a certain level. Based on this methodology, six indicators are identified that contribute to the perception of urban resilience, both in the process of evolution and as an outcome, and the relative significance of these six R's has also been identified. The dependency factors of the various resilience indicators are explored in this paper, which helps generate new perspectives for future research in disaster management. Based on the stated factors, this methodology can be applied to assess the urban resilience requirements of a well-planned town, which is not an end in itself but a call for new beginnings.
Keywords: disaster, resilience, system, urban
Procedia PDF Downloads 458
49763 Tsunami Vulnerability of Critical Infrastructure: Development and Application of Functions for Infrastructure Impact Assessment
Authors: James Hilton Williams
Abstract:
Recent tsunami events, including the 2011 Tohoku Tsunami in Japan and the 2015 Illapel Tsunami in Chile, have highlighted the potential for tsunami impacts on the built environment. International research in the tsunami impacts domain has largely focused on impacts on buildings and casualty estimation, while only limited attention has been paid to impacts on the infrastructure that is critical for the recovery of affected communities. New Zealand, with 75% of its population within 10 km of the coast, has a large amount of coastal infrastructure exposed to local, regional and distant tsunami sources. To manage tsunami risk effectively for New Zealand's critical infrastructure, including energy, transportation, and communications, the vulnerability of infrastructure networks and components must first be determined. This research develops infrastructure asset vulnerability, functionality and repair-cost functions based on international post-event tsunami impact assessment data from technologically similar countries, including Japan and Chile, and adapts these to New Zealand. These functions are then utilized within a New Zealand-based impact framework, allowing cost-benefit analyses, effective tsunami risk management strategies and mitigation options for exposed critical infrastructure to be determined; the approach can also be applied internationally.
Keywords: impact assessment, infrastructure, tsunami impacts, vulnerability functions
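Vulnerability (fragility) functions of this kind are commonly fitted as lognormal CDFs of a hazard intensity measure such as inundation depth. A sketch of that common functional form, not the study's fitted functions: the median and dispersion values below are invented placeholders, not New Zealand parameters.

```python
import math

def fragility(depth_m, median_m, beta):
    """P(asset reaches the damage state | inundation depth), assuming the
    widely used lognormal-CDF fragility form with median depth `median_m`
    (metres) and lognormal dispersion `beta`."""
    if depth_m <= 0:
        return 0.0
    z = math.log(depth_m / median_m) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# illustrative curve for a hypothetical roadside-cabinet asset class:
# the exceedance probability rises monotonically with flow depth
for depth in (0.5, 1.0, 2.0, 4.0):
    print(depth, round(fragility(depth, median_m=2.0, beta=0.6), 3))
```

By construction the probability is exactly 0.5 at the median depth, which is what makes the median a convenient parameter to report for each asset class and damage state.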
Procedia PDF Downloads 161
49762 Quality Characteristics of Road Runoff in Coastal Zones: A Case Study in A25 Highway, Portugal
Authors: Pedro B. Antunes, Paulo J. Ramísio
Abstract:
Road runoff is a linear source of diffuse pollution that can cause significant environmental impacts. During rainfall events, pollutants from both stationary and mobile sources that have accumulated on the road surface are dragged along by the surface runoff. Road runoff in coastal zones may present high levels of salinity and chlorides due to the proximity of the sea and transported marine aerosols, and organic matter concentrations, which appear to be correlated with this process, may also be significant. This study assesses this phenomenon with the purpose of identifying the relationships between monitored water quality parameters and intrinsic site variables. To achieve this objective, an extensive monitoring program was conducted on a Portuguese coastal highway. The study included thirty rainfall events under different weather, traffic and salt deposition conditions over a three-year period, and various water quality parameters were evaluated in over 200 samples. In addition, meteorological, hydrological and traffic parameters were continuously measured. Salt deposition rates (SDR) were determined by means of a wet candle device, an innovative feature of the monitoring program. The SDR, variable throughout the year, shows a high correlation with wind speed and direction, but mostly with wave propagation, so that it is lower in the summer despite the favorable wind direction in the case study. The distance to the sea, topography, ground obstacles and the platform altitude also seem to be relevant. The high salinity of the runoff was confirmed: it increases the concentrations of the water quality parameters analyzed, with significant seawater signatures. To estimate the correlations and patterns among water quality parameters and variables related to weather, road section and salt deposition, the study included exploratory data analysis using different techniques (e.g., Pearson correlation coefficients, cluster analysis and principal component analysis), confirming some specific features of the investigated road runoff. Significant correlations among pollutants were observed. Organic matter was highlighted as strongly dependent on salinity. Indeed, the data analysis showed that some important water quality parameters could be divided into two major clusters based on their correlations with salinity (including organic matter-associated parameters) and total suspended solids (including some heavy metals). Furthermore, the concentrations of the most relevant pollutants seemed to depend strongly on some meteorological variables, particularly the duration of the antecedent dry period prior to each rainfall event and the average wind speed. Based on the results of this coastal-zone monitoring case study, it was shown that the SDR, combined with the hydrological characteristics of road runoff, can contribute to a better knowledge of the runoff and help estimate its specific nature and the related water quality parameters.
Keywords: coastal zones, monitoring, road runoff pollution, salt deposition
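Of the exploratory techniques listed, the Pearson coefficient is the simplest to state precisely: the covariance of two parameter series normalized by their standard deviations. A stdlib-only sketch; the sample values are invented for illustration, not the monitored A25 data.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# invented example: organic matter rising with salinity across five events
salinity = [1.2, 2.5, 3.1, 4.8, 6.0]
organic  = [0.8, 1.9, 2.2, 3.9, 4.6]
r = pearson(salinity, organic)  # close to +1 for this near-linear pair
```

A matrix of such pairwise coefficients is the usual starting point for the clustering and PCA steps, which then group the parameters by their shared dependence on salinity or suspended solids.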
Procedia PDF Downloads 239
49761 Multiscale Model of Blast Explosion Human Injury Biomechanics
Authors: Raj K. Gupta, X. Gary Tan, Andrzej Przekwas
Abstract:
Bomb blasts from Improvised Explosive Devices (IEDs) account for the vast majority of terrorist attacks worldwide. Injuries caused by IEDs result from a combination of the primary blast wave, penetrating fragments, and human body accelerations and impacts. This paper presents a multiscale computational model coupling blast physics, whole human body biodynamics and the injury biomechanics of sensitive organs. The disparity of the involved space and time scales is used to conduct sequential modeling of an IED explosion event, CFD simulation of blast loads on the human body, and FEM modeling of body biodynamics and injury biomechanics. The paper presents simulation results for blast-induced brain injury coupling macro-scale brain biomechanics and the micro-scale response of sensitive neuro-axonal structures. Validation results on animal models and physical surrogates are discussed. The results of our model can be used to 'replicate' field blast loadings in laboratory-controlled experiments using animal models and in vitro neuro-cultures.
Keywords: blast waves, improvised explosive devices, injury biomechanics, mathematical models, traumatic brain injury
Procedia PDF Downloads 249
49760 PerformAnts: Making the Organization of Concerts Easier
Authors: Ioannis Andrianakis, Panagiotis Panagiotopoulos, Kyriakos Chatzidimitriou, Dimitrios Tampakis, Manolis Falelakis
Abstract:
Live music, whether performed in organized venues, restaurants, hotels or any other spots, creates value chains that support and develop local economies and tourism. In this paper, we describe PerformAnts, a platform that increases the mobility of musicians and their accessibility to remotely located venues by rationalizing the cost of live acts. By analyzing event history and taking potential availability into account, the platform provides bespoke recommendations to both bands and venues, while also facilitating the organization of tours and helping rationalize transportation expenses through an innovative mechanism called "chain booking". Moreover, the platform provides an environment where complicated tasks such as technical and financial negotiations, concert promotion or copyright management are easily handled by users following best practices. The proposed solution provides important benefits to the whole spectrum of small- and medium-sized concert organizers, as the complexity and cost of production are rationalized. The environment is also very beneficial for local talent, musicians who are very mobile, venues located away from large urban areas or in touristic destinations, and managers who will be in a position to coordinate a larger number of musicians without extra effort. Keywords: machine learning, music industry, creative industries, web applications
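The abstract does not detail how "chain booking" works internally; one plausible reading, sketched below purely as an illustration, is ordering several nearby bookings into a single tour so that transportation costs are shared. The venue names, coordinates and the greedy nearest-neighbour heuristic are all assumptions, not the platform's actual mechanism.

```python
import math

# Hypothetical venue coordinates (km on a plane); names are illustrative only.
venues = {
    "Thessaloniki": (0.0, 0.0),
    "Katerini": (60.0, -20.0),
    "Larissa": (120.0, -60.0),
    "Volos": (150.0, -40.0),
}

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def chain_booking(start, venues):
    """Greedy nearest-neighbour ordering of venue stops: a simple way to
    chain several bookings into one tour and share transportation costs."""
    remaining = dict(venues)
    order, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda v: dist(pos, remaining[v]))
        order.append(nxt)
        pos = remaining.pop(nxt)
    return order

tour = chain_booking((0.0, 0.0), venues)
print(tour)  # → ['Thessaloniki', 'Katerini', 'Larissa', 'Volos']
```

A production system would also weigh venue availability, fees and musician schedules; the sketch isolates the routing idea only.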
Procedia PDF Downloads 97
49759 Adaptive Swarm Balancing Algorithms for Rare-Event Prediction in Imbalanced Healthcare Data
Authors: Jinyan Li, Simon Fong, Raymond Wong, Mohammed Sabah, Fiaidhi Jinan
Abstract:
Clinical data analysis and forecasting have made great contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced class distributions. In this paper, we target the binary imbalanced dataset, where the positive samples make up only a small minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat-inspired algorithm, and combine both of them with the synthetic minority over-sampling technique (SMOTE) for processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reveal that while the performance improvements obtained by the former method do not scale to larger data sizes, the latter, which we call Adaptive Swarm Balancing Algorithms, leads to significant efficiency and effectiveness improvements on large datasets. We also find it better suited in practice to typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE, leading to more credible classifier performance and shortening the running time compared with the brute-force method. Keywords: imbalanced dataset, meta-heuristic algorithm, SMOTE, big data
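The segment-by-segment balancing idea described above can be sketched as follows. This is a simplified illustration with a hand-rolled SMOTE and toy data, not the authors' implementation; it omits the swarm-based tuning of the SMOTE parameters, and all names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def smote(X_min, n_new, k=3):
    """Minimal SMOTE: synthesize n_new minority samples by interpolating
    each random seed with one of its k nearest minority neighbours."""
    X_min = np.asarray(X_min, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)  # distances from seed
        neighbours = np.argsort(d)[1 : k + 1]         # skip the seed itself
        j = rng.choice(neighbours)
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)

def balance_in_segments(X, y, segment_size):
    """Adaptive variant: rebalance the dataset one segment at a time
    instead of resampling the full dataset in a single pass."""
    balanced = []
    for s in range(0, len(X), segment_size):
        Xs, ys = X[s : s + segment_size], y[s : s + segment_size]
        minority = Xs[ys == 1]
        deficit = int((ys == 0).sum() - (ys == 1).sum())
        if deficit > 0 and len(minority) >= 2:
            k = min(3, len(minority) - 1)
            Xs = np.vstack([Xs, smote(minority, deficit, k=k)])
            ys = np.concatenate([ys, np.ones(deficit, dtype=int)])
        balanced.append((Xs, ys))
    return balanced

# Toy imbalanced data: 90 majority (class 0), 10 minority (class 1) samples.
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(3, 1, (10, 2))])
y = np.array([0] * 90 + [1] * 10)
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]

segments = balance_in_segments(X, y, segment_size=50)
```

In the paper's scheme, a swarm algorithm would additionally search over the over-sampling rate and neighbourhood size per segment; here those are fixed for brevity.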
Procedia PDF Downloads 441