Search results for: solar panel efficiency technique
577 Establishment of Precision System for Underground Facilities Based on 3D Absolute Positioning Technology
Authors: Yonggu Jang, Jisong Ryu, Woosik Lee
Abstract:
The study addresses the limitations of existing underground facility exploration equipment in terms of exploration depth range, relative (rather than absolute) depth measurement, data processing time, and human-centered interpretation of ground penetrating radar (GPR) images. Its aim is to establish a precision exploration system for underground facilities based on 3D absolute positioning technology, capable of accurately surveying to a depth of 5 m and measuring the 3D absolute location of underground facilities. Both software and hardware technologies were developed to build the system. The software technologies include absolute positioning, ground-surface location synchronization of the GPR exploration equipment, AI interpretation of GPR exploration images, and integrated underground-space-map-based composite data processing. The hardware comprises a vehicle-type exploration system and a cart-type exploration system. Data were collected using the developed system, the GPR exploration images were analyzed using AI, and the three-dimensional location information of the detected underground facilities was compared against the integrated underground space map.
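As a rough, hedged sketch (not the authors' implementation), the conversion of a GPR two-way travel time into an absolute 3D position can be illustrated as follows; the relative permittivity value and the vertical-ray assumption are illustrative choices, not details from the study:

```python
import math

C = 0.299792458  # speed of light, m/ns


def gpr_depth_m(two_way_time_ns, rel_permittivity):
    """Depth of a reflector from its two-way travel time.

    Wave speed in the ground: v = c / sqrt(eps_r); depth = v * t / 2.
    """
    v = C / math.sqrt(rel_permittivity)  # m/ns
    return v * two_way_time_ns / 2.0


def absolute_position(surface_xyz, two_way_time_ns, rel_permittivity=9.0):
    """3D absolute location of a buried target, assuming a vertical ray
    below the GPS-synchronized antenna position (x, y, z in metres)."""
    x, y, z = surface_xyz
    return (x, y, z - gpr_depth_m(two_way_time_ns, rel_permittivity))
```

For a typical soil permittivity of around 9, a 100 ns two-way time corresponds to roughly 5 m, which is consistent with the 5 m depth range the abstract quotes.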
The system comprises software technologies that build a 3D precise DEM, synchronize the GPR sensor's ground-surface 3D location coordinates, automatically detect underground facility information in GPR exploration images, and improve accuracy through comparative analysis of the three-dimensional location information, together with the vehicle-type and cart-type hardware systems. The developed system can accurately survey to a depth of 5 m and measure the 3D absolute location of underground facilities, providing the precise location information that is essential for underground safety management in Korea and improving the accuracy and efficiency of exploration.
Keywords: 3D absolute positioning, AI interpretation of GPR exploration images, complex data processing, integrated underground space maps, precision exploration system for underground facilities
Procedia PDF Downloads 62
576 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees
Authors: Alexandru-Ion Marinescu
Abstract:
There exist a plethora of methods in the scientific literature which tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc. and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. 
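A minimal sketch of the tree representation described above, in Python rather than the C# LINQ expression trees the study actually uses; the operator set and variable names are illustrative:

```python
import operator
import random

# hypothetical minimal GP setup: binary operators and variable leaves x0..x3
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
VARS = ['x0', 'x1', 'x2', 'x3']


def random_tree(depth=2):
    """Grow a random expression tree: (op, left, right) or a variable leaf."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))


def evaluate(tree, row):
    """Evaluate a tree against a dict of client properties."""
    if isinstance(tree, str):
        return row[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, row), evaluate(right, row))


def preorder(tree):
    """Flatten via pre-order traversal, the representation on which the
    mutation and crossover operators act."""
    if isinstance(tree, str):
        return [tree]
    op, left, right = tree
    return [op] + preorder(left) + preorder(right)


def used_variables(tree):
    """Variables that take part in the final score; unused client
    properties are effectively discarded (dimensionality reduction)."""
    if isinstance(tree, str):
        return {tree}
    _, left, right = tree
    return used_variables(left) | used_variables(right)
```

For example, the tree `('+', 'x0', ('*', 'x1', 'x2'))` evolves a score from three of the four client properties, silently discarding `x3`.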
A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, among them the well-known Australian credit and German credit data sets, and the performance indicators are: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution
Procedia PDF Downloads 117
575 Electrohydrodynamic Patterning for Surface Enhanced Raman Scattering for Point-of-Care Diagnostics
Authors: J. J. Rickard, A. Belli, P. Goldberg Oppenheimer
Abstract:
Medical diagnostics, environmental monitoring, homeland security and forensics increasingly demand specific and field-deployable analytical technologies for quick point-of-care diagnostics. Although technological advancements have made optical methods well-suited for miniaturization, a highly sensitive detection technique for minute sample volumes is required. Raman spectroscopy is a well-known analytical tool, but its signals are very weak, making it unsuitable for trace-level analysis. Enhancement via localized optical fields (surface plasmon resonances) on nanoscale metallic materials generates huge signals in surface-enhanced Raman scattering (SERS), enabling single-molecule detection. This enhancement can be tuned by manipulating the surface roughness and architecture at the sub-micron level. Nevertheless, the development and application of SERS have been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplex-able and addressable SERS substrates with high enhancements is of profound interest for SERS-based sensing devices. While most SERS substrates are manufactured by conventional lithographic methods, the development of a cost-effective approach to create nanostructured surfaces is a much sought-after goal in the SERS community. Here, a method is established to create controlled, self-organized, hierarchical nanostructures using hierarchical electrohydrodynamic (HEHD) instabilities. The created structures are readily fine-tuned, which is an important requirement for optimizing SERS to obtain the highest enhancements. HEHD pattern formation enables the fabrication of multiscale 3D structured arrays as SERS-active platforms. Importantly, each of the HEHD-patterned individual structural units yields a considerable SERS enhancement, enabling each single unit to function as an isolated sensor.
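For context, the analytical SERS enhancement factor commonly quoted in the literature is a simple ratio of signal per molecule on the substrate versus in normal Raman; this is the standard definition, not a result of this study:

```python
def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    """Common analytical SERS enhancement factor:
    EF = (I_SERS / N_SERS) / (I_ref / N_ref),
    where I is measured intensity and N is the number of probed molecules."""
    return (i_sers / n_sers) / (i_ref / n_ref)
```

With far fewer molecules in the enhanced hot spots than in a bulk reference, even a modest absolute SERS signal can correspond to an enhancement factor of many orders of magnitude.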
Each of the formed structures can be effectively tuned and tailored to provide high SERS enhancement, while arising from different HEHD morphologies. The HEHD fabrication of sub-micrometer architectures is straightforward and robust, providing an elegant route for high-throughput biological and chemical sensing. The superior detection properties and the ability to fabricate SERS substrates on the miniaturized scale will facilitate the development of advanced and novel opto-fluidic devices, such as portable detection systems, and will offer numerous applications in biomedical diagnostics, forensics, ecological warfare and homeland security.
Keywords: hierarchical electrohydrodynamic patterning, medical diagnostics, point-of care devices, SERS
Procedia PDF Downloads 345
574 Quantitative Analysis of Camera Setup for Optical Motion Capture Systems
Authors: J. T. Pitale, S. Ghassab, H. Ay, N. Berme
Abstract:
Biomechanics researchers commonly use marker-based optical motion capture (MoCap) systems to extract human body kinematic data. These systems use cameras to detect passive or active markers placed on the subject. The cameras use triangulation methods to form images of the markers, which typically requires each marker to be visible to at least two cameras simultaneously. Cameras in a conventional optical MoCap system are mounted at a distance from the subject, typically on walls, the ceiling, or fixed or adjustable frame structures. To accommodate space constraints, and as portable force measurement systems become popular, there is a need for smaller and smaller capture volumes. When the efficacy of a MoCap system is investigated, it is important to consider the tradeoff among the camera distance from the subject, pixel density, and the field of view (FOV). If cameras are mounted relatively close to a subject, the area corresponding to each pixel is reduced, thus increasing the image resolution; however, the cross section of the capture volume also decreases, reducing the visible area, so additional cameras may be required in such applications. On the other hand, mounting cameras relatively far from the subject increases the visible area but reduces the image quality. The goal of this study was to develop a quantitative methodology to investigate marker occlusions and optimize camera placement for a given capture volume and subject postures using three-dimensional computer-aided design (CAD) tools. We modeled a 4.9m x 3.7m x 2.4m (LxWxH) MoCap volume and designed a mounting structure for cameras using SOLIDWORKS (Dassault Systèmes, MA, USA). The FOV was used to generate the capture volume for each camera placed on the structure. A human body model with configurable posture was placed at the center of the capture volume in the CAD environment. We studied three postures: initial contact, mid-stance, and early swing.
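The distance versus pixel-density tradeoff described above can be sketched with a pinhole-camera model; the FOV angle and pixel count below are illustrative assumptions, not the cameras used in the study:

```python
import math


def capture_width_m(distance_m, fov_deg):
    """Cross-section of the visible area at a given camera distance,
    for a pinhole camera: width = 2 * d * tan(FOV / 2)."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)


def pixel_footprint_mm(distance_m, fov_deg, pixels_across):
    """Size of the scene patch imaged by one pixel (horizontal).

    Grows linearly with distance: closer cameras give finer resolution
    but a smaller visible cross-section."""
    return capture_width_m(distance_m, fov_deg) * 1000.0 / pixels_across
```

Doubling the camera distance doubles both the visible width and the ground footprint of each pixel, which is exactly the resolution-versus-coverage tradeoff the abstract discusses.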
The human body CAD model was adjusted for each posture based on the range of joint angles. Markers were attached to the model to enable a full-body capture. The cameras were placed around the capture volume at a maximum distance of 2.7m from the subject. We used the Camera View feature in SOLIDWORKS to generate images of the subject as seen by each camera, and the number of markers visible to each camera was tabulated. The approach presented in this study provides a quantitative method to investigate the efficacy and efficiency of a MoCap camera setup. It enables optimization of a camera setup by adjusting the position and orientation of cameras in the CAD environment and quantifying marker visibility, and it allows different camera setup options to be compared on the same quantitative basis. The flexibility of the CAD environment enables accurate representation of the capture volume, including any objects that may cause obstructions between the subject and the cameras.
Keywords: motion capture, cameras, biomechanics, gait analysis
Procedia PDF Downloads 310
573 Community Communications and Micro-Level Shifts: The Case of Video Volunteers’ IndiaUnheard Program
Authors: Pooja Ichplani, Archna Kumar, Jessica Mayberry
Abstract:
Community Video (CV) is a participatory medium with immense potential to strengthen community communications and amplify the voice of people for their empowerment. By building the capacities of marginalized community groups in particular and providing a platform to freely voice their ideas, CV endeavours to shift development processes towards more participatory, bottom-up forms and to place greater power in the hands of the people, especially the disadvantaged. In various parts of the world, community video initiatives among marginalized groups have become instrumental in facilitating micro-level, yet significant, changes in communities. Video Volunteers (VV) is an organization that promotes community media and works to provide disadvantaged communities with the journalistic, critical thinking and creative skills they need to catalyse change in their communities. Working since 2002, VV has evolved a unique community media model fostering locally owned and managed media production, as well as building people’s capacities to articulate and share their perspectives on the issues that matter to them, on both a local and a global scale. Further, by integrating a livelihood aspect within its model, VV has actively involved people from poor, marginalized communities and provided them a new tool for serving their communities whilst keeping their identities intact. This paper, based on qualitative research, seeks to map the range of VV impacts in communities and provide an in-depth analysis of the factors contributing to VV impacting change in communities. Study tools included content analysis of a longitudinal sample of impact videos produced, narratives of community correspondents using the Most Significant Change Technique (MSCT), and interviews with key informants. Using a multi-fold analysis, the paper seeks to gain holistic insights.
At the first level, the paper profiles the Community Correspondents (CCs) spearheading change, mapping their personal and social contexts and their perceptions of VV in their personal lives. Secondly, at the organizational level, the paper maps the significance of the impacts brought about in the CCs' communities and their association, challenges and achievements while working with VV. Lastly, at the community level, it analyzes the nature of the impacts achieved and the aspects influencing them. Finally, the study critiques the functioning of Video Volunteers as a community media initiative using the tipping point theory, emphasizing the power of context constituted by the socio-cultural environment. It concludes that the empowerment of its Community Correspondents, multifarious activities before and after video production, and other innovative mechanisms have enabled the center-staging of issues of marginalized communities and snowballed processes of change in communities.
Keywords: community media, empowerment, participatory communication, social change
Procedia PDF Downloads 137
572 Impact of pH Control on Peptide Profile and Antigenicity of Whey Hydrolysates
Authors: Natalia Caldeira De Carvalho, Tassia Batista Pessato, Luis Gustavo R. Fernandes, Ricardo L. Zollner, Flavia Maria Netto
Abstract:
Protein hydrolysates are ingredients of enteral diets and hypoallergenic formulas. Enzymatic hydrolysis is the most commonly used method for reducing the antigenicity of milk protein. The antigenicity and physicochemical characteristics of protein hydrolysates depend on the reaction parameters, among which pH is of major importance. Hydrolysis reactions at laboratory scale are commonly carried out under controlled pH (pH-stat); from the industrial point of view, however, controlling pH during the hydrolysis reaction may be infeasible. This study evaluated the impact of pH control on the physicochemical properties and antigenicity of whey protein hydrolysates obtained with Alcalase. Whey protein isolate (WPI) solutions containing 3 and 7% protein (w/v) were hydrolyzed with Alcalase at 50 and 100 U g-1 protein at 60°C for 180 min. The reactions were carried out under controlled and uncontrolled pH conditions. Hydrolyses performed under controlled pH (pH-stat) were adjusted to and maintained at pH 8.5; hydrolyses carried out without pH control were only initially adjusted to pH 8.5. The degree of hydrolysis (DH) was determined by the OPA method, the peptide profile was evaluated by RP-HPLC, and the molecular mass distribution by SDS-PAGE/Tricine. The residual α-lactalbumin (α-La) and β-lactoglobulin (β-Lg) concentrations were determined using commercial ELISA kits. The specific IgE and IgG binding capacity of the hydrolysates was evaluated by ELISA, using polyclonal antibodies obtained by immunizing female BALB/c mice with α-La, β-Lg and BSA. In hydrolysis under uncontrolled pH, the pH dropped from 8.5 to 7.0 during the first 15 min and then remained constant throughout the process. No significant difference was observed between the DH of the hydrolysates obtained under controlled and uncontrolled pH conditions.
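As an aside, the OPA-based degree of hydrolysis mentioned above is commonly computed as DH = h/h_tot x 100, with h = (serine-NH2 - beta)/alpha; the default constants below are the values commonly tabulated for whey protein and are an assumption on our part, not taken from this study:

```python
def degree_of_hydrolysis(serine_nh2_meqv_per_g, alpha=1.00, beta=0.40, h_tot=8.8):
    """Degree of hydrolysis (%) from the OPA method.

    h = (serine-NH2 - beta) / alpha, in meqv per g protein;
    DH = 100 * h / h_tot. Defaults are the whey constants commonly
    tabulated in the OPA literature (an assumption, not from this study)."""
    h = (serine_nh2_meqv_per_g - beta) / alpha
    return 100.0 * h / h_tot
```

For example, with the whey defaults, a measured serine-NH2 value of 2.6 meqv/g corresponds to a DH of 25%.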
Although all hydrolysates showed a hydrophilic character and low molecular mass peptides, hydrolysates obtained with and without pH control exhibited different chromatographic profiles. Hydrolysis under uncontrolled pH released predominantly peptides between 3.5 and 6.5 kDa, while hydrolysis under controlled pH released peptides smaller than 3.5 kDa. Hydrolysis with Alcalase under all conditions studied decreased the α-La and β-Lg concentrations detected by the commercial kits by 99.9%. In general, the β-Lg concentrations detected in the hydrolysates obtained under uncontrolled pH were significantly higher (p<0.05) than those detected in hydrolysates produced with pH control. The anti-α-La and anti-β-Lg IgE and IgG responses to all hydrolysates decreased significantly compared to WPI. Levels of specific IgE and IgG to the hydrolysates were below 25 and 12 ng ml-1, respectively. Despite the differences in peptide composition and in α-La and β-Lg concentrations, no significant difference was found between the IgE and IgG binding capacities of hydrolysates obtained with or without pH control. These results highlight the impact of pH on the characteristics of the hydrolysates and on their concentrations of antigenic protein. A divergence between antigen detection by commercial ELISA kits and the specific IgE and IgG binding response was found in this study, showing that lower protein detection does not imply lower antigenicity. Thus, the use of commercial kits for allergen contamination analysis should be approached with caution.
Keywords: allergy, enzymatic hydrolysis, milk protein, pH conditions, physicochemical characteristics
Procedia PDF Downloads 302
571 Corporate Governance and Disclosure Practices of Listed Companies in the ASEAN: A Conceptual Overview
Authors: Chen Shuwen, Nunthapin Chantachaimongkol
Abstract:
Since the world has moved into a transitional period known as globalization, the business environment is now more complicated than ever before. Corporate information has become a matter of great importance for stakeholders seeking to understand the current situation. As a result, the concept of corporate governance has been broadly introduced to manage and control the affairs of corporations, while businesses are required to disclose both financial and non-financial information to the public via various communication channels such as the annual report, the financial report, the company’s website, etc. However, several other issues related to asymmetric information, such as moral hazard or adverse selection, still occur intensively in workplaces. To prevent such problems in business, an understanding is required of what factors strengthen transparency, accountability, fairness, and responsibility. Under the aforementioned arguments, this paper aims to propose a conceptual framework that enables an investigation of how corporate governance mechanisms influence the disclosure efficiency of listed companies in the Association of Southeast Asian Nations (ASEAN), and of the factors that should be considered for further development of good behaviors, particularly in regard to voluntary disclosure practices. To achieve this purpose, an extensive literature review is applied as the research methodology, divided into three main steps. Firstly, the theories involved with both corporate governance and disclosure practices, such as agency theory, contract theory, signaling theory, moral hazard theory, and information asymmetry theory, are examined to provide theoretical backgrounds.
Secondly, the relevant literature based on multiple perspectives of corporate governance, its attributes and their roles in business processes, the influences of corporate governance mechanisms on business performance, and the factors determining corporate governance characteristics and capability are reviewed to outline the parameters that should be included in the proposed model. Thirdly, the well-known OECD principles and previous empirical studies on corporate disclosure procedures are evaluated to identify similarities and differences with the disclosure patterns in the ASEAN. Following the literature review, abundant factors and variables are found. Additional critical factors that also have an impact on disclosure behaviors are addressed in two groups: in the first group, the factors linked to national characteristics, such as the quality of the national code, legal origin, culture, and the level of economic development; in the second group, those referring to the firm’s characteristics, such as ownership concentration, ownership rights, and the controlling group. However, because of research limitations, only selected literature is chosen and summarized to form part of the conceptual framework that explores the relationship between corporate governance and the disclosure practices of listed companies in ASEAN.
Keywords: corporate governance, disclosure practice, ASEAN, listed company
Procedia PDF Downloads 192
570 Safety and Maternal Anxiety in Mother's and Baby's Sleep: Cross-sectional Study
Authors: Rayanne Branco Dos Santos Lima, Lorena Pinheiro Barbosa, Kamila Ferreira Lima, Victor Manuel Tegoma Ruiz, Monyka Brito Lima Dos Santos, Maria Wendiane Gueiros Gaspar, Luzia Camila Coelho Ferreira, Leandro Cardozo Dos Santos Brito, Deyse Maria Alves Rocha
Abstract:
Introduction: The lack of regulation of the baby's sleep-wake pattern in the first years of life affects the health of thousands of women. Maternal sleep deprivation can trigger or aggravate psychosomatic problems such as depression, anxiety and stress that can directly influence maternal security, with consequences for the baby's and mother's sleep. Such conditions can affect the family's quality of life and child development. Objective: To correlate maternal security with maternal state anxiety scores and the mother's and baby's total sleep time. Method: Cross-sectional study carried out with 96 mothers of babies aged 10 to 24 months, accompanied by nursing professionals linked to a Federal University in Northeast Brazil. Study variables were maternal security, maternal state anxiety scores, infant sleep latency and sleep time, and the total nocturnal sleep time of mother and infant. Maternal security was measured using a four-point Likert scale (1=not at all secure, 2=somewhat secure, 3=very secure, 4=completely secure). Maternal anxiety was measured by the State-Trait Anxiety Inventory, state-anxiety subscale, whose scores vary from 20 to 80 points; the higher the score, the higher the anxiety level. Scores below 33 are considered mild; from 33 to 49, moderate; and above 49, high. For total nocturnal sleep time, values between 7 and 9 hours of sleep were considered adequate for mothers, and between 9 and 12 hours for the baby, according to the guidelines of the National Sleep Foundation. For sleep latency, a time equal to or less than 20 min was considered adequate. It is noteworthy that the latency time and the nocturnal sleep times of the mother and the baby were obtained from the mother's subjective report. To correlate the data, Spearman's correlation was used in the statistical package R, version 3.6.3. Results: 96 women and babies participated, aged 22 to 38 years (mean 30.8) and 10 to 24 months (mean 14.7), respectively.
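The Spearman correlation used in the analysis is simply the Pearson correlation computed on ranks; a self-contained sketch follows (the study itself used the R statistical package, not this code):

```python
def rank(values):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend the tie group
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it works on ranks, Spearman's rho captures any monotone association, positive toward +1 and negative toward -1, which matches the direction-of-association language used in the Results.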
The mean maternal security score was 2.89 (insecure); the mean maternal state anxiety score was 43.75 (moderate anxiety). The babies' mean sleep latency was 39.6 min (>20 min). The mean sleep times of mother and baby were, respectively, 6 h 42 min and 8 h 19 min, both below the recommended nocturnal sleep time. Maternal security was positively correlated with maternal state anxiety scores (rho=0.266, p=0.009) and negatively correlated with infant sleep latency (rho=-0.30, p=0.003). Baby sleep time was positively correlated with maternal sleep time (rho=0.46, p<0.001). Conclusion: The more secure the mothers considered themselves, the higher the anxiety scores and the shorter the baby's sleep latency. Also, the longer the baby sleeps, the longer the mother sleeps. Thus, interventions are needed to promote the quality and efficiency of sleep for both mother and baby.
Keywords: sleep, anxiety, infant, mother-child relations
Procedia PDF Downloads 102
569 Simulation Research of the Aerodynamic Drag of 3D Structures for Individual Transport Vehicle
Authors: Pawel Magryta, Mateusz Paszko
Abstract:
In today's world, individual mobility is a major problem, especially in large urban areas. Commonly used modes of mass transport such as buses, trains, or cars do not fulfill their tasks, i.e., they cannot meet the increasing mobility needs of the growing urban population. In addition, civil infrastructure construction in cities faces limitations. Nowadays, the most common idea is to transfer part of urban transport to the air. To do this, however, an individual flying transport vehicle needs to be developed. The biggest problem in this concept is the type of propulsion system from which the vehicle will obtain lifting force. Standard propeller drives appear to be too noisy. One idea is to provide the required take-off and flight power using an innovative ejector system. This system will be designed through a suitable choice of three-dimensional geometric structure, with a specially shaped nozzle, in order to generate overpressure. The authors' idea is to make a device that cumulates overpressure using a five-sided geometrical structure bounded on one side by a blowing air jet. To test this hypothesis, a computer simulation study of the aerodynamic drag of such 3D structures was carried out. Based on the results of these studies, tests on a real model were also performed. The final stage of the work was a comparative analysis of the simulation and real test results. The CFD simulations of air flow were conducted using the Star CD - Star Pro 3.2 software. The virtual model was designed using the Catia v5 software. Apart from the objective of obtaining an advanced aviation propulsion system, all tests and modifications of the 3D structures were also aimed at achieving high device efficiency while maintaining the ability to generate high overpressure values.
This was possible only with a large mass flow rate of air. All these aspects could be verified using CFD methods by observing the flow of the working medium in the tested model. During the simulation tests, the distribution and magnitude of pressure and velocity vectors were analyzed. Simulations were made with different boundary conditions (supply air pressure) but fixed external conditions (ambient temperature, ambient pressure, etc.). The maximum overpressure obtained is 2 kPa, which is too low to exploit for the individual transport vehicle. Both the simulation model and the real object show a linear dependence of the obtained overpressure values on the geometrical parameters of the three-dimensional structures. Computational software greatly simplifies and streamlines the design and simulation process. This work has been financed by the Polish Ministry of Science and Higher Education.
Keywords: aviation propulsion, CFD, 3d structure, aerodynamic drag
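The linear dependence reported above can be illustrated with an ordinary least-squares fit; the supply-pressure and overpressure readings below are hypothetical values invented for the sketch, chosen only to stay near the reported 2 kPa maximum:

```python
def linear_fit(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx


# hypothetical readings: supply pressure (kPa) vs measured overpressure (kPa)
supply = [100.0, 200.0, 300.0, 400.0]
over = [0.5, 1.0, 1.5, 2.0]  # capped near the reported 2 kPa maximum
slope, intercept = linear_fit(supply, over)
```

A fitted slope close to constant across the operating range is one simple way to quantify the linearity observed in both the simulation and the real-object tests.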
Procedia PDF Downloads 310
568 Interoperability of 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment
Authors: Ryan C. Igama
Abstract:
The complexity of disaster risk reduction management paved the way for various innovations and approaches to mitigate the loss of lives and casualties during disaster-related situations. The efficiency of response operations during disasters relies on the timely and organized deployment of search, rescue and retrieval teams; the assistance these teams provide during disaster operations is a critical service needed to further minimize the loss of lives and casualties. The Armed Forces of the Philippines is mandated to provide humanitarian assistance and disaster relief (HADR) operations during calamities and disasters. Thus, this study, “Interoperability of the 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment”, was intended to provide substantial information to further strengthen and promote search and rescue capabilities in the Philippines, and to assess the interoperability of the 505th Search and Rescue Group (505th SRG) and the 205th Tactical Helicopter Wing (205th THW) of the Philippine Air Force. The study covered these two component units of the Philippine Air Force of the Armed Forces of the Philippines, whose personnel also acted as the respondents of the study. A qualitative approach was utilized, in the form of focused group discussions, key informant interviews, and documentary analysis, as the primary means of obtaining the needed data. Essentially, this study was geared towards evaluating the effectiveness of the interoperability of the two (2) involved PAF units during search and rescue operations.
Further, it also identified the impacts, gaps, and challenges confronting interoperability in terms of training, equipment, and coordination mechanisms, vis-à-vis the measures needed for improvement. The results of the study showed that there was duplication of functions or tasks in HADR activities, specifically during the conduct of air rescue operations in situations such as calamities. In addition, it was revealed that there was a lack of equipment and training for the personnel involved in search and rescue operations, which is a vital element during calamity response activities. Based on the findings, it was recommended that a strategic planning workshop be conducted on the duties and responsibilities of personnel involved in search and rescue operations to address the command-and-control and interoperability issues of these units. Additionally, intensive HADR-related training must be conducted for the personnel of the two (2) PAF units so they can become more proficient in their skills and sustainably increase their knowledge of search and rescue scenarios, including the capabilities of the respective units. Lastly, existing doctrines and policies must be updated to adapt to evolving situations in search and rescue operations.
Keywords: interoperability, search and rescue capability, humanitarian assistance, disaster response
Procedia PDF Downloads 94
567 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around aggregates. Incorporating these complex structures and material properties into numerical simulation leads to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under the influence of compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices. A total of 49 slices are obtained from a 150 mm cube, giving a slice interval of approximately 3 mm. Because CT scanning is non-destructive, the same cube can later be tested in compression in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. A digital colour image consists of red, green, and blue (RGB) pixels. The RGB image is converted to a black and white (BW) image, and the mesoscale constituents are identified from pixel values between 0 and 255. The pixel matrix is created for modeling of mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale reflecting relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated depending on the option chosen. 
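As a rough illustration of the pixel-normalization step described above, the following Python sketch maps an 8-bit grayscale CT slice onto the 0-9 relative-strength scale. The intensity cut-offs (VOID_MAX, ITZ_MAX, MORTAR_MAX) are hypothetical assumptions, since the abstract does not report the actual threshold values used:

```python
import numpy as np

# Hypothetical intensity cut-offs; the study's actual thresholds are not reported.
VOID_MAX, ITZ_MAX, MORTAR_MAX = 20, 60, 170

def classify_slice(gray):
    """Map an 8-bit grayscale CT slice to the 0-9 relative-strength scale:
    0 = void, 1-3 = aggregate/mortar boundary, 4-6 = mortar, 7-9 = aggregate."""
    gray = np.asarray(gray, dtype=np.float64)
    labels = np.zeros(gray.shape, dtype=np.uint8)  # voids stay at 0

    def spread(mask, lo, hi, base):
        # Spread a phase over its three-step sub-range in proportion to intensity.
        frac = (gray[mask] - lo) / (hi - lo)
        labels[mask] = base + np.minimum(2, (frac * 3).astype(np.uint8))

    spread((gray > VOID_MAX) & (gray <= ITZ_MAX), VOID_MAX, ITZ_MAX, 1)      # boundary
    spread((gray > ITZ_MAX) & (gray <= MORTAR_MAX), ITZ_MAX, MORTAR_MAX, 4)  # mortar
    spread(gray > MORTAR_MAX, MORTAR_MAX, 255, 7)                            # aggregate
    return labels
```

The resulting label matrix can then be passed to the element-generation step, one element per pixel or per pixel group.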
Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ layer thickness on load carrying capacity, stress-strain response, and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete.
Keywords: concrete, image processing, plane strain, interfacial transition zone
566 Pueraria mirifica Replacement Improves Skeletal Muscle Performance Associated with Increasing Parvalbumin Levels in Ovariectomized Rat
Authors: Uraporn Vongvatcharanon, Kochakorn Sukjan, Wandee Udomuksorn, Ekkasit Kumarnsit, Surapong Vongvatcharanon
Abstract:
Sarcopenia, a loss of muscle mass and strength, is frequently found in menopause. Estrogen replacement has been shown to improve such losses of muscle function. However, estrogen replacement therapy carries an increased risk of cancer that has to be considered. Thus, phytoestrogen supplementation has been suggested as an alternative therapy. Pueraria mirifica (PM) is a phytoestrogen-rich plant in the family Leguminosae that has been traditionally used for the treatment of menopausal symptoms. It contains isoflavones and other compounds such as miroestrol and its derivatives. Parvalbumin (PV) is a calcium-binding protein that functions as a relaxing factor in fast-twitch muscle fibers. A decrease in the PV level results in a reduction of the speed of twitch relaxation. Therefore, this study aimed to investigate the effect of an ethanolic extract of Pueraria mirifica on estrogen levels, skeletal muscle function, and PV levels in the extensor digitorum longus (EDL) and gastrocnemius of ovariectomized rats. Twelve-week-old female Wistar rats (200-250 g) were divided into 6 groups: SHAM (un-ovariectomized rats receiving double-distilled water), PM-0 (ovariectomized rats, OVX, receiving double-distilled water), E (OVX receiving estradiol benzoate at a dose of 0.04 mg/kg), PM-50 (OVX receiving PM 50 mg/kg), PM-500 (OVX receiving PM 500 mg/kg), and PM-1000 (OVX receiving PM 1000 mg/kg), all for 90 days. The PM-0 group had estrogen levels, uterus weights, muscle mass, myofiber cross-section areas, peak tension, fatigue resistance, speed of relaxation, and parvalbumin levels of both the EDL and gastrocnemius that were significantly reduced compared to those of the SHAM group (p<0.05). The α and β estrogen receptor immunoreactivities and the parvalbumin immunoreactivities of both the EDL and gastrocnemius were also decreased in the PM-0 group. 
In contrast, the E, PM-50, PM-500, and PM-1000 groups had estrogen levels, uterus weights, muscle mass, myofiber cross-section areas, peak tension, fatigue resistance, and speed of relaxation of both the EDL and gastrocnemius that were significantly increased compared with the PM-0 group (p<0.05). In addition, the α and β estrogen receptor immunoreactivities and parvalbumin immunoreactivity of both the EDL and gastrocnemius were increased in the E, PM-50, PM-500, and PM-1000 groups. The Pueraria mirifica replacement at 50 and 500 mg/kg significantly increased parvalbumin levels in the EDL muscle, but in the gastrocnemius only the 500 mg/kg dose increased parvalbumin levels (p<0.05). These results demonstrate that using the Pueraria mirifica extract as a replacement therapy for estrogen produced estrogenic activity similar to that produced by estradiol benzoate replacement. It seems that the phytoestrogens could bind to the estrogen receptors and stimulate transcriptional activity to synthesize muscle protein, causing an increase in muscle mass and parvalbumin levels. Thus, muscle synthesis may restore parvalbumin levels, resulting in enhanced relaxation efficiency and a shortened latent period before the next contraction.
Keywords: Pueraria mirifica, parvalbumin, estrogen, ovariectomized rats
565 Investigation into the Socio-ecological Impact of Migration of Fulani Herders in Anambra State of Nigeria Through a Climate Justice Lens
Authors: Anselm Ego Onyimonyi, Maduako Johnpaul O.
Abstract:
The study was designed to investigate the socio-ecological impact of the migration of Fulani herders in Anambra state of Nigeria through a climate justice lens. Nigeria is one of the world's most populous countries, with a population of over 200 million people, half of whom are considered to be in abject poverty. There is no doubt that livestock production makes sustainable contributions to food security and poverty reduction in the Nigerian economy, but not without some environmental implications, as with any other economic activity. Nigeria is recognized as being vulnerable to climate change. Climate change and global warming, if left unchecked, will cause adverse effects on livelihoods in Nigeria, such as livestock production, crop production, fisheries, forestry, and post-harvest activities: rainfall regimes and patterns will be altered; floods that devastate farmlands will occur; increases in temperature and humidity will increase pests and diseases; and other natural disasters such as desertification, drought, floods, and ocean and storm surges, which not only damage Nigerians' livelihoods but also cause harm to life and property, will occur. These and other climatic issues, as they affect Fulani herdsmen, were what this study investigated. In carrying out this research, a survey research design was adopted with a simple random sampling technique. One local government area (LGA) was selected purposively from each of the four agricultural zones in the state based on its predominance of Fulani herders. For appropriate sampling, 25 respondents from each of the four agricultural zones were randomly selected, making up the 100 respondents sampled. Primary data were generated using a set of structured 5-point Likert scale questionnaires. The data generated were analyzed using SPSS, and the results are presented using descriptive statistics. 
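The Likert-item summary this study reports (item means, with low-scoring factors rejected) can be reproduced with a few lines of Python. The responses below are synthetic, and the 3.0 acceptance cutoff is an assumed decision rule, since the abstract does not state the cutoff explicitly:

```python
# Illustrative summary of one 5-point Likert item; responses are synthetic
# and the 3.0 acceptance cutoff is an assumed decision rule.
def likert_mean(responses):
    """Mean score of responses coded 1 (strongly disagree) to 5 (strongly agree)."""
    return sum(responses) / len(responses)

item = [5, 4, 4, 3, 5, 2, 4]      # one factor's coded responses
mean_score = likert_mean(item)    # about 3.86
accepted = mean_score >= 3.0      # factor retained as a migration driver
```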
From the data analyzed, the study identified unpredicted rainfall (mean = 3.56), forest fire (mean = 4.63), drying water sources (mean = 3.99), dwindling grazing (mean = 4.43), desertification (mean = 4.44), and fertile land scarcity (mean = 3.42) as major factors predisposing Fulani herders to migrate southward, while rejecting a natural inclination to migrate (mean = 2.38) and migration to cause trouble as factors. On the reasons why Fulani herders are trying to establish a permanent camp in Anambra state, moderate temperature (mean = 3.60), avoiding overgrazing (mean = 4.42), search for fodder and water (mean = 4.81 and mean = 4.70, respectively), need for a market (mean = 4.28), favorable environment (mean = 3.99), and access to fertile land (mean = 3.96) were identified. It was concluded that changing climatic variables necessitated the migration of herders from northern Nigeria to areas in the south where the variables are most favorable to the herders and their animals.
Keywords: socio-ecological, migration, Fulani, climate justice lens
564 Identification and Quantification of Lisinopril from Pure, Formulated and Urine Samples by Micellar Thin Layer Chromatography
Authors: Sudhanshu Sharma
Abstract:
Lisinopril, 1-[N-{(S)-1-carboxy-3-phenylpropyl}-L-lysyl]-L-proline dihydrate, is a lysine analog of enalaprilat, the active metabolite of enalapril. It is a long-acting, non-sulfhydryl angiotensin-converting enzyme (ACE) inhibitor that is used for the treatment of hypertension and congestive heart failure at a daily dosage of 10-80 mg. The pharmacological activity of lisinopril has been proven in various experimental and clinical studies. Owing to its importance and widespread use, efforts have been made towards the development of simple and reliable analytical methods. As per our literature survey, lisinopril in pharmaceutical formulations has been determined by various analytical methodologies, such as polarography, potentiometry, and spectrophotometry, but most of these methods are not well suited to the identification of lisinopril in clinical samples because of the interference caused by the amino acids and amino-group-containing metabolites present in biological samples. This report is an attempt towards developing a simple and reliable method for on-plate identification and quantification of lisinopril in pharmaceutical formulations as well as in human urine samples, using silica gel H layers developed with a new mobile phase comprising micellar solutions of N-cetyl-N,N,N-trimethylammonium bromide (CTAB). Micellar solutions have found numerous practical applications in many areas of separation science. Micellar liquid chromatography (MLC) has gained immense popularity and wide applicability due to its operational simplicity, cost effectiveness, relative non-toxicity, enhanced separation efficiency, and low aggressiveness. The incorporation of aqueous micellar solutions as mobile phases was pioneered by Armstrong and Terrill, who accentuated the importance of TLC where simultaneous separation of ionic or non-ionic species in a variety of matrices is required. 
A peculiarity of micellar mobile phases (MMPs) is that they have no macroscopic analogues; as a result, separations that are difficult with aqueous-organic mobile phases can often be achieved easily using MMPs. Previously, MMPs were successfully employed in TLC-based critical separations of aromatic hydrocarbons, nucleotides, vitamins K1 and K5, o-, m-, and p-aminophenol, amino acids, and penicillins. Human urine analysis for the identification of selected drugs and their metabolites has emerged as an important investigative tool in forensic drug analysis. Among the available chromatographic methods, only thin layer chromatography (TLC) enables a simple, fast, and effective separation of the complex mixtures present in various biological samples, and it is approved for forensic drug testing by federal law. TLC has proved its applicability in the successful separation of bioactive amines, carbohydrates, enzymes, porphyrins and their precursors, alkaloids, and drugs from urine samples.
Keywords: lisinopril, surfactant, chromatography, micellar solutions
563 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software
Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi
Abstract:
Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy that overlooks the Adriatic Sea, the province of Macerata. The aim is to analyze spatial climate changes, for precipitation and temperature, over the last three climatological standard normals (1961-1990; 1971-2000; 1981-2010) using GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subjected to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. The results of cross validation were used to assess the best geostatistical technique for interpolation. Among the methods analyzed, the co-kriging method with altitude as the independent variable produced the best cross validation results for all time periods, with a 'root mean square error standardized' close to 1, a 'mean standardized error' close to 0, and similar values of 'average standard error' and 'root mean square error'. The maps resulting from the analysis were compared by subtraction between rasters, producing 3 maps of annual variation and 3 further maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1 °C between 1961-1990 and 1971-2000 and 0.6 °C between 1961-1990 and 1981-2010. Annual precipitation shows the opposite trend, with an average decrease from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences between the areas have been highlighted with area graphs and summarized in several tables as descriptive analysis. 
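The raster-subtraction step described above amounts to a per-cell difference between two interpolated grids. A minimal sketch, with hypothetical grid values and assuming both normals have already been interpolated onto the same grid, is:

```python
import numpy as np

# Hypothetical mean annual temperature grids (degC) for two climate normals,
# already co-kriged onto the same raster grid.
normal_1961_1990 = np.array([[13.2, 12.8],
                             [11.9, 10.4]])
normal_1981_2010 = np.array([[13.8, 13.4],
                             [12.5, 11.1]])

change = normal_1981_2010 - normal_1961_1990  # per-cell variation map
mean_change = change.mean()                   # area-averaged change (0.625 degC here)
```

In a GIS workflow the same subtraction is applied band by band, with NoData cells masked out before averaging.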
For temperature between 1961-1990 and 1971-2000, the most areally represented difference is 0.08 °C (77.04 km² out of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The temperature differences from 1961-1990 to 1981-2010 instead show a most areally represented difference of 0.83 °C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. It can therefore be said that the distribution is more peaked for 1961/1990-1971/2000 and smoother, but with stronger growth, for 1961/1990-1981/2010. In contrast, precipitation shows a very similar shape of distribution, although with different intensities, for both variation periods (the first 1961/1990-1971/2000 and the second 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62), and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology of analysis allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
Keywords: climate change, GIS, interpolation, co-kriging
562 Investigating Income Diversification Strategies into Off-Farm Activities Among Rural Households in Ethiopia
Authors: Kibret Berhanu Getinet
Abstract:
Off-farm income diversification by rural farm households has gained the attention of researchers and policymakers because agriculture has failed to meet the needs of people in developing countries like Ethiopia. The objective of this study was to investigate income diversification strategies into off-farm activities among rural households in Hawassa Zuria Woreda, Sidama National Regional State, Ethiopia. The study used primary and secondary data sources; for primary data collection, a questionnaire was employed as the data collection instrument. A multistage sampling technique was used to collect data from a total of 197 sample households from four kebeles of the study area. Both descriptive statistics and econometric methods of data analysis were employed. The descriptive statistics indicate that the majority of the sample rural households (68.53%) engaged in off-farm income diversification activities, while the remaining 31.47% of households did not participate in diversification in the study area. Among the participants, 6.60% of respondents engaged in off-farm wage employment, 30.46% in off-farm self-employment, and about 31.47% in both off-farm wage and self-employment. The study revealed that the share of off-farm income in the total annual earnings of households was about 48.457%; thus, off-farm diversification contributes significantly to rural household income. Moreover, binary and multinomial logistic regression models were employed to identify the factors that affect participation in, and the choice among, the off-farm income diversification strategies, respectively. 
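To illustrate the binary logit specification, the sketch below fits a logistic regression by Newton-Raphson on synthetic data; the covariates, sample size, and coefficient values are invented stand-ins for the study's variables, not its actual data or estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Columns: intercept, e.g. education (standardized), e.g. on-farm income (standardized).
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_beta = np.array([0.5, 1.0, -0.8])   # negative sign: higher on-farm income deters diversification
prob = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, prob)                # 1 = participates in off-farm activities

beta = np.zeros(3)
for _ in range(25):                      # Newton-Raphson iterations for the logit MLE
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)                        # score vector
    hess = (X * (mu * (1 - mu))[:, None]).T @ X  # Fisher information
    beta += np.linalg.solve(hess, grad)
```

The multinomial logit used for the choice among strategies generalizes this to one coefficient vector per employment category.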
The binary logit model results indicated that agro-ecological zone, education status of the household, available technical skills of the household, household saving, total livestock owned by the household, access to electricity, road access, and the household head being married were significant and positively affected the chance of diversification into off-farm activities, while the on-farm income of households negatively affected the chance of diversification. Similarly, the multinomial logistic regression estimates revealed that agro-ecological zone, on-farm income, available technical skills, household savings, and access to electricity are positively related to, and significantly influenced, the household's choice of off-farm wage employment. The off-farm self-employment choice is significantly influenced by on-farm income, available technical skills, household savings, total livestock owned, and access to electricity. Moreover, the results showed that the factors affecting the choice of farm households to engage in both off-farm wage and self-employment are agro-ecological zone, education status, on-farm income, available technical skills, household savings, market access, total livestock owned, access to electricity, and road access. Thus, due attention should be given to addressing the demographic, socio-economic, and institutional constraints in order to strengthen off-farm income diversification strategies and improve the income of rural households.
Keywords: off-farm, income, diversification, logit model
561 Mindfulness and the Purpose of Being in the Present
Authors: Indujeeva Keerthila Peiris
Abstract:
The secular view of mindfulness has some connection to the original meaning of mindfulness mentioned in the Theravada Buddhist texts (Pāli Canon), but there is a substantial difference in the meaning of the two. Secular Mindfulness Based Interventions (MBIs) focus on stilling the mind, which may provide short-term benefits and help individuals to deal with physical pain, grief, and distress. However, as with many popular educational innovations, the foundational values of mindfulness strategies have been distorted and subverted in a number of instances, in which 'McMindfulness' programmes have been implemented with a view to reducing mindfulness meditation to a self-help technique that is easily misappropriated for the exclusive pursuit of corporate objectives, employee pacification, and commercial profit. The intention of this paper is not to critique the misappropriations of mindfulness but instead to go back to the root source and bring insights from the Buddhist Pāli Canon and its associated teachings on mindfulness on its own terms. In the Buddha's discourses, as preserved in the Pāli Canon, there is nothing more significant than the understanding and practice of 'Satipatthāna'. The Satipatthāna Sutta, the 'Discourse on the Establishment of Mindfulness', opens with a proclamation highlighting both the purpose of this training and its methodology. The right practice of mindfulness is the gateway to understanding the Buddha's teaching. However, although this concept is widely discussed among Dhamma practitioners, it is the least understood of them all. The purpose of this paper is to understand the deeper meaning of mindfulness as it was originally intended by the Teacher. The natural state of the mind is that it wanders: into the past, the present, and the future. One's ability to hold attention on a mind object (emotion, thought, feeling, sensation, sense impression) is called 'concentration'. The intentional concentration process does not, by itself, lead to wisdom. 
However, the development of wisdom starts when the mind is calm, concentrated, and unified. The practice of insight contemplation aims at gaining a direct understanding of the real nature of phenomena. According to the Buddha's teaching, there are three basic facts of all existence: 1) impermanence (anicca in Pāli); 2) fabrication, also commonly known as suffering or unsatisfactoriness (sankhāra or dukkha in Pāli); 3) not-self, insubstantiality or impersonality (anattā in Pāli). The entire Buddhist doctrine is based on these three facts. The problem is that our ignorance covers reality. It is not that a person sees the emptiness of phenomena, or that we try to see the emptiness of our experience by conceptually thinking that it is empty. It is an experiential outcome that happens when insight into cause and effect overrides the self-view (sakkāya-diṭṭhi), and ignorance is known as ignorance and eradicated once and for all. Therefore, the right view (sammā-diṭṭhi) is the starting point of the path, not ethical conduct (sīla) or samādhi (jhāna). In order to develop the right view, we first need to listen to the correct Dhamma and possess yoniso manasikāra (right comprehension) to know the five aggregates as five aggregates.
Keywords: mindfulness, spirituality, Buddhism, Pāli Canon
560 Functional Ingredients from Potato By-Products: Innovative Biocatalytic Processes
Authors: Salwa Karboune, Amanda Waglay
Abstract:
Recent studies indicate that health-promoting functional ingredients and nutraceuticals can help support and improve overall public health, which is timely given the aging of the population and the increasing cost of health care. The development of novel 'natural' functional ingredients is increasingly challenging, and biocatalysis offers powerful approaches to achieve this goal. Our recent research has focused on developing innovative biocatalytic approaches for the isolation of protein isolates from potato by-products and the generation of peptides. Potato is a vegetable whose high-quality proteins are underestimated. In addition to their high proportion of essential amino acids, potato proteins possess angiotensin-converting enzyme-inhibitory potency and an ability to reduce plasma triglycerides, associated with a reduced risk of atherosclerosis, and they stimulate the release of the appetite-regulating hormone CCK. Potato proteins have long been considered economically unfeasible due to the low protein content (27% of dry matter) found in the tuber (Solanum tuberosum). However, potato ranks as the second largest protein-supplying crop grown per hectare, after wheat. Potato proteins include patatin (40-45 kDa), protease inhibitors (5-25 kDa), and various high molecular weight proteins. Non-destructive techniques for the extraction of proteins from potato pulp and for the generation of peptides are needed in order to minimize functional losses and enhance quality. A promising approach for isolating the potato proteins was developed, which involves the use of multi-enzymatic systems containing selected glycosyl hydrolase enzymes that work synergistically to open the plant cell wall network. This enzymatic approach is advantageous due to: (1) the use of milder reaction conditions, (2) the high selectivity and specificity of enzymes, (3) the low cost, and (4) the ability to market natural ingredients. 
Another major benefit of this enzymatic approach is the elimination of a costly purification step; indeed, these multi-enzymatic systems are able to isolate proteins while fractionating them, owing to their specificity and selectivity and their minimal proteolytic activities. The isolated proteins were used for the enzymatic generation of active peptides. In addition, they were applied in a reduced-gluten cookie formulation, as consumers are placing a high demand on easy, ready-to-eat snack foods with high nutritional quality and limited to no gluten incorporation. The addition of potato protein significantly improved the textural hardness of reduced-gluten cookies, making them more comparable to those made with wheat flour alone. The presentation will focus on our recent 'proof-of-principle' results illustrating the feasibility and efficiency of new biocatalytic processes for the production of innovative functional food ingredients from potato by-products, whose potential health benefits are increasingly being recognized.
Keywords: biocatalytic approaches, functional ingredients, potato proteins, peptides
559 Comparative Study for Neonatal Outcome and Umbilical Cord Blood Gas Parameters in Balanced and Inhalant Anesthesia for Elective Cesarean Section in Dogs
Authors: Agnieszka Antończyk, MałGorzata Ochota, Wojciech Niżański, ZdzisłAw Kiełbowicz
Abstract:
The goal of the cesarean section (CS) is the delivery of healthy, vigorous pups with the provision of a surgical plane of anesthesia, appropriate analgesia, and rapid recovery of the dam. In human medicine, spinal or epidural anesthesia is preferred for a cesarean section, as it is associated with a lower risk of neonatal asphyxia and a reduced need for resuscitation. Nevertheless, the specificity of veterinary patients makes the application of regional anesthesia as a sole technique impractical; thus, to obtain patient compliance, general anesthesia is required. This study aimed to compare the influence of balanced (inhalant with epidural) and inhalant anesthesia on neonatal umbilical cord blood gas (UCBG) parameters and vitality (modified Apgar scoring). Thirty-one bitches undergoing elective CS were enrolled in this study. All females received a single dose of 0.2 mg/kg s.c. meloxicam and were randomly assigned to two groups: Gr I (Isoflurane, n=16) and Gr IE (Isoflurane plus Epidural, n=15). Anesthesia was induced with propofol at 4-6 mg/kg to effect and maintained with isoflurane in oxygen; in the IE group, epidural anesthesia was also performed using lidocaine (3-4 mg/kg) injected into the lumbosacral space. CSs were performed using a standard midline approach. Directly after extraction of each puppy, the umbilical cord was double clamped before placental detachment. The vessels were gently stretched between forceps to allow blood sampling, and at least 100 µl of mixed umbilical cord blood was collected into a heparinized syringe for analysis. The modified Apgar scoring system (AS) was used to objectively score neonatal health and vitality immediately after birth (before first aid or neonatal care was instituted) and at 5 and 20 min after birth. The neonates were scored as normal (AS 7-10), weak (AS 4-6), or critical (AS 0-3). During surgery, the IE group required a lower isoflurane concentration than group I (MAC 1.05±0.2 and 1.4±0.13, respectively, p<0.01). 
None of the investigated UCBG parameters differed statistically between groups. All pups had mild acidosis (pH 7.21±0.08 and 7.21±0.09 in Gr I and IE, respectively) with moderately elevated pCO2 (Gr I 57.18±11.48, Gr IE 58.74±15.07), HCO3- at the lower border (Gr I 22.58±3.24, Gr IE 22.83±3.6), lowered BE (Gr I -6.1±3.57, Gr IE -5.6±4.19), and mildly elevated lactate levels (Gr I 2.58±1.48, Gr IE 2.53±1.03). Glucose levels were above the reference limits in both groups of puppies (74.50±25.32 in Gr I, 79.50±29.73 in Gr IE). The initial Apgar scores were similar in the I and IE groups. However, the subsequent measurements revealed significant differences between the groups: puppies from the IE group received better AS scores at 5 and 20 min than those from the I group (6.86±2.23 and 8.06±2.06 vs 5.11±2.40 and 7.83±2.05, respectively). The obtained results demonstrate that the administration of epidural anesthesia reduced the isoflurane requirement in dams undergoing cesarean section and did not affect the neonatal umbilical blood gas results. Moreover, newborns from the epidural anesthesia group scored significantly higher in AS at 5 and 20 min, indicating better vitality and quicker improvement post-surgery.
Keywords: Apgar scoring, balanced anesthesia, cesarean section, umbilical blood gas
558 Magnetofluidics for Mass Transfer and Mixing Enhancement in a Micro Scale Device
Authors: Majid Hejazian, Nam-Trung Nguyen
Abstract:
Over the past few years, microfluidic devices have attracted significant attention from industry and academia due to advantages such as small sample volume, low cost, and high efficiency. Microfluidic devices have applications in chemical, biological, and industrial analysis and can facilitate assays of biomaterials, chemical reactions, separation, and sensing. Micromixers are one of the important microfluidic concepts. Micromixers can work as stand-alone devices or be integrated into a more complex microfluidic system such as a lab on a chip (LOC). Micromixers are categorized as passive or active. Passive micromixers rely only on the arrangement of the phases to be mixed; they contain no moving parts and require no external energy. Active micromixers require external fields such as pressure, temperature, electric, or acoustic fields. Rapid and efficient mixing is important for many applications, such as biological, chemical, and biochemical analysis. Achieving fast and homogeneous mixing of multiple samples in microfluidic devices has been studied and discussed in the recent literature. Improvements in mixing rely on effective mass transport at the microscale, which is currently limited to molecular diffusion because of the predominantly laminar flow at this size scale. Using a magnetic field to enhance mass transport is an effective solution for mixing enhancement in microfluidics. The use of a non-uniform magnetic field to improve mass transfer performance in a microfluidic device is demonstrated in this work. The phenomenon of mixing ferrofluid and DI-water streams has been reported before, but mass transfer enhancement for other, non-magnetic species through a magnetic field has not been studied and evaluated extensively. In the present work, permanent magnets were used in a simple microfluidic device to create a non-uniform magnetic field. 
Two streams are introduced into the microchannel: one contains fluorescent dye mixed with diluted ferrofluid to induce enhanced mass transport of the dye, and the other is a non-magnetic DI-water stream. Mass transport enhancement of the fluorescent dye is evaluated using fluorescence measurement techniques. The concentration field is measured for different flow rates. Due to the effect of the magnetic field, a body force is exerted on the paramagnetic stream, expanding the ferrofluid stream into the non-magnetic DI-water flow. The experimental results demonstrate that without a magnetic field, both the magnetic nanoparticles of the ferrofluid and the fluorescent dye rely solely on molecular diffusion to spread. The non-uniform magnetic field created by the permanent magnets around the microchannel, together with the diluted ferrofluid, can improve the mass transport of non-magnetic solutes in a microfluidic device. The susceptibility mismatch between the fluids results in a magnetoconvective secondary flow towards the magnets and subsequently enhances the mass transport of the non-magnetic fluorescent dye. A significant enhancement in mass transport of the fluorescent dye was observed. The platform presented here could be used as a microfluidics-based micromixer for chemical and biological applications.
Keywords: ferrofluid, mass transfer, micromixer, microfluidics, magnetic
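To illustrate why mixing limited to molecular diffusion is slow at this scale, the following back-of-envelope sketch (ours, not the authors') estimates how far a small fluorescent dye spreads by diffusion alone during its residence time in a microchannel. All numerical values (diffusivity, channel length, velocity) are assumed, typical of such experiments, and are not taken from the paper.

```python
import math

def diffusion_length(D, t):
    """Characteristic 1D diffusive spreading distance, L = sqrt(2*D*t)."""
    return math.sqrt(2.0 * D * t)

def residence_time(channel_length, mean_velocity):
    """Time a fluid element spends in the channel at the mean flow speed."""
    return channel_length / mean_velocity

D_dye = 4e-10        # m^2/s, assumed diffusivity of a small fluorescent dye
L_channel = 0.02     # m, assumed channel length (2 cm)
u = 1e-3             # m/s, assumed mean flow velocity

t_res = residence_time(L_channel, u)     # 20 s in the channel
spread = diffusion_length(D_dye, t_res)  # lateral spread by diffusion alone

print(f"residence time: {t_res:.1f} s, diffusive spread: {spread * 1e6:.0f} um")
```

With these assumed numbers, the dye spreads only on the order of a hundred micrometres, which is why a magnetoconvective secondary flow can improve mixing so markedly.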
557 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models
Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg
Abstract:
Storm surge is an abnormal water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques that combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature. For instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have drawbacks. For instance, MOS is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting. This application creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast. Therefore, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights in combining different forecast models. Third, we use these ensembles and compare them with several existing models in the literature to forecast storm surge level. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark.
Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial, so we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction
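The correlation-weighting idea described above can be sketched as follows: each model's forecast is weighted by its correlation with the observations, and the weighted ensemble is scored by RMSE against a simple average. The surge values below are invented for illustration and are not NYHOPS data, and this is our reading of the weighting scheme, not the authors' exact implementation.

```python
import math

def rmse(pred, obs):
    """Root mean square error between a forecast and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical observed surge levels (m) and three individual model forecasts.
obs = [0.5, 0.9, 1.4, 2.1, 1.6, 1.0]
models = [
    [0.6, 1.0, 1.3, 2.0, 1.5, 1.1],
    [0.4, 0.7, 1.2, 1.8, 1.4, 0.8],
    [0.9, 1.3, 1.9, 2.6, 2.1, 1.5],
]

# Correlation-based weights, normalized to sum to 1.
corrs = [pearson(m, obs) for m in models]
weights = [c / sum(corrs) for c in corrs]

ensemble = [sum(w * m[i] for w, m in zip(weights, models))
            for i in range(len(obs))]
simple_avg = [sum(m[i] for m in models) / len(models)
              for i in range(len(obs))]

print("weighted ensemble RMSE:", round(rmse(ensemble, obs), 3))
print("simple average   RMSE:", round(rmse(simple_avg, obs), 3))
```

Standard-deviation-based weights would follow the same pattern, replacing `pearson` with an inverse error spread.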
556 How to Assess the Attractiveness of Business Location According to the Mainstream Concepts of Comparative Advantages
Authors: Philippe Gugler
Abstract:
Goal of the study: The concept of competitiveness has been addressed by economic theorists and policymakers for several hundred years, with both groups trying to understand the drivers of economic prosperity and social welfare. The goal of this contribution is to review the major theoretical contributions that help identify the main drivers of a territory's competitiveness. We first present the major contributions found in the classical and neo-classical theories. Then, we concentrate on two major schools providing significant thoughts on the competitiveness of locations: the Economic Geography (EG) school and the International Business (IB) school. Methodology: The study is based on a literature review of the classical and neo-classical theories, the economic geography theories, and the international business theories. This literature review establishes links between these theoretical mainstreams. The work follows the academic framework for a meaningful literature review, aimed at responding to our research question and at developing further research in this field. Results: The pioneering classical and neo-classical theories provide initial insights that territories are different and that these differences explain the discrepancies in their levels of prosperity and standards of living. These theories emphasized different factors impacting the level and growth of productivity in a given area and therefore the degree of its competitiveness. However, they are not sufficient to identify more precisely the drivers and enablers of location competitiveness and to explain, in particular, the factors that drive the creation of economic activities, the expansion of economic activities, the creation of new firms, and the attraction of foreign firms. Prosperity is due to economic activities created by firms.
Therefore, we need more theoretical insights to scrutinize the competitive advantages of territories or, in other words, their ability to offer the best conditions that enable economic agents to achieve higher rates of productivity in open markets. Two major theories provide, to a large extent, the needed insights: economic geography theory and international business theory. The economic geography studies scrutinized here, from Marshall to Porter, aim to explain the drivers of the concentration of specific industries and activities in specific locations. These agglomerations of activity may be due to the creation of new enterprises, the expansion of existing firms, and the attraction of firms located elsewhere. Regarding this last possibility, the international business (IB) theories focus on the comparative advantages of locations as far as the strategies of multinational enterprises (MNEs) are concerned. According to international business theory, the comparative advantages of a location serve firms not only by exploiting their ownership advantages (mostly as far as market-seeking, resource-seeking and efficiency-seeking investments are concerned) but also by augmenting and/or creating new ownership advantages (strategic asset-seeking investments). The impact of a location on the competitiveness of firms is considered from both sides: the MNE's home country and the MNE's host country.
Keywords: competitiveness, economic geography, international business, attractiveness of businesses
555 Multi-Label Approach to Facilitate Test Automation Based on Historical Data
Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally
Abstract:
The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which in most cases conflicts with the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, this approach exploits historical data from test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge by means of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems.
The most prominent EC is 'Subset Accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is thereby independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
Keywords: machine learning, multi-class, multi-label, supervised learning, test automation
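The 'Subset Accuracy' criterion named above is strict: a multi-label prediction counts as correct only if the predicted set of automation components exactly matches the true set for that test step. A minimal sketch, with invented component names and labels (not from the paper's systems):

```python
def subset_accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label set equals the true set exactly."""
    exact = sum(1 for t, p in zip(y_true, y_pred) if set(t) == set(p))
    return exact / len(y_true)

# Each test step is labelled with the automation components that implement it.
y_true = [{"click"}, {"click", "wait"}, {"assert"}, {"type", "submit"}, {"wait"}]
y_pred = [{"click"}, {"click"},         {"assert"}, {"type", "submit"}, {"wait"}]

print(subset_accuracy(y_true, y_pred))  # 4 of 5 exact matches -> 0.8
```

Note that the second sample is scored as wrong even though one of its two labels was predicted, which is why subset accuracy drops sharply on complex multi-label problems compared to the 1:1 multi-class case.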
554 Examining the Effect of Online English Lessons on Nursery School Children
Authors: Hidehiro Endo, Taizo Shigemichi
Abstract:
Introduction & Objectives: In 2008, the revised course of study for elementary schools was published by MEXT, and from the beginning of the 2011-2012 academic year, foreign language activities (English lessons) became mandatory for 5th and 6th graders in Japanese elementary schools. Foreign language activities are currently offered once a week for approximately 50 minutes by elementary school teachers, assistant language teachers who are native speakers of English, volunteers, and others, with the purpose of helping children become accustomed to functional English. However, the new policy has revealed a myriad of issues in conducting foreign language activities, since the majority of current elementary school teachers have neither English teaching experience nor English proficiency. Nevertheless, MEXT currently envisages converting foreign language activities into English as a formal subject in Japanese elementary schools (for 5th and 6th graders) from 2020, with the purpose of reforming English education in Japan. According to this new proposal, foreign language activities will be mandatory for 3rd and 4th graders from 2020. Consequently, gaining better access to English learning opportunities becomes a primary concern even in early childhood education. Thus, in this project, we aim to explore some nursery schools' attempts at providing toddlers with online English lessons via Skype. The main purpose of this project is to look deeply into what roles online English lessons in nursery schools play in guiding nursery school children to enjoy learning the English language as well as to acquire English communication skills. Research Methods: Setting; The main research site is a nursery school located in the northern part of Japan. The nursery school has been offering a 20-minute online English lesson via Skype twice a week to 7 toddlers since September 2015. The teacher of the online English lessons is a man who lives in the Philippines.
Fieldwork & Data; We have just begun collecting data by attending the Skype English lessons. Direct observations are the principal component of the fieldwork. By closely observing how the toddlers respond to what the teacher does via Skype, we examine which components stimulate the toddlers to pay attention to the English lessons. Preliminary Findings & Expected Outcomes: Although both data collection and analysis are ongoing, we found that the online English teacher remembers the first name of each toddler and calls them by their first names via Skype, a technique that is crucial in motivating the toddlers to participate actively in the lessons. In addition, when the teacher asks the toddlers the name of a plastic object, such as grapes, in English, the toddlers tend to respond to the teacher in Japanese. Accordingly, the effective use of Japanese in teaching English to nursery school children needs to be further examined. The anticipated results of this project are an increased recognition of the significance of creating English language learning opportunities for nursery school children and a significant contribution to the field of early childhood education.
Keywords: teaching children, English education, early childhood education, nursery school
553 Entrepreneurial Dynamism and Socio-Cultural Context
Authors: Shailaja Thakur
Abstract:
Managerial literature abounds with discussions of business strategies, success stories, and cases of failure, which indicate the parameters that should be considered in gauging the dynamism of an entrepreneur. Neoclassical economics has reduced entrepreneurship to a mere factor of production, driven solely by the profit motive, thus stripping the entrepreneur of all creativity and restricting his decision making to mechanical calculations. His 'dynamism' is gauged simply by the amount of profit he earns, marginalizing any discussion of the means he employs to attain this objective. With theoretical backing, we have developed an Index of Entrepreneurial Dynamism (IED), giving weights to the different moves that the entrepreneur makes during his business journey. Strategies such as changes in product lines, markets and technology are gauged as very important (weight of 4), while adaptations in terms of technology and raw materials used, and upgrades in skill set, are given a slightly lesser weight of 3. Use of formal market analysis and diversification into related products are considered moderately important (weight of 2), and being a first-generation entrepreneur, employing managers, and having plans to diversify are taken to be only slightly important business strategies (weight of 1). The maximum that an entrepreneur can score on this index is 53. A semi-structured questionnaire is employed to solicit responses from the entrepreneurs on the various strategies they have employed during the course of their business. Binary as well as graded responses are obtained, weighted, and summed up to give the IED. This index was tested on about 150 tribal entrepreneurs in Mizoram, a state of India, and was found to be highly effective in gauging their dynamism. The index has universal applicability but is devoid of the socio-cultural context, which is central to the success and performance of entrepreneurs.
We hypothesize that a society that respects risk taking, takes failures in its stride, glorifies entrepreneurial role models, and promotes merit and achievement is one with a socio-cultural environment conducive to entrepreneurship. To obtain an idea of this social acceptability, we put questions related to the social acceptability of business to another set of respondents from different walks of life: bureaucracy, academia, and other professional fields. A similar weighting technique is employed, and an index is generated. This index is used for discounting the IED of the respondent entrepreneurs from that region or society. The methodology is being tested on a sample of entrepreneurs from two very different socio-cultural milieus, a tribal society and a 'mainstream' society, with the hypothesis that the entrepreneurs in the tribal milieu might show a higher level of dynamism than their counterparts in other regions. An entrepreneur who scores high on the IED and belongs to a society and culture that holds entrepreneurship in high esteem might not in reality be as dynamic as a person who shows similar dynamism in a relatively discouraging or even outright hostile environment.
Keywords: index of entrepreneurial dynamism, India, social acceptability, tribal entrepreneurs
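The weighting scheme described above can be sketched as a simple weighted sum. The item names below are illustrative labels for the strategies mentioned in the abstract, not the full instrument: with binary responses this subset sums to 28, whereas the actual IED, with graded responses and additional items, reaches a maximum of 53.

```python
# Illustrative subset of IED items with the weights given in the abstract
# (4 = very important, 3 = important, 2 = moderate, 1 = slight).
WEIGHTS = {
    "changed_product_lines": 4,
    "changed_markets": 4,
    "changed_technology": 4,
    "adapted_technology": 3,
    "adapted_raw_materials": 3,
    "upgraded_skills": 3,
    "formal_market_analysis": 2,
    "diversified_related_products": 2,
    "first_generation": 1,
    "employs_managers": 1,
    "plans_to_diversify": 1,
}

def ied_score(responses):
    """Sum the weights of the strategies an entrepreneur reports (binary here)."""
    return sum(WEIGHTS[item] for item, used in responses.items() if used)

respondent = {item: True for item in WEIGHTS}  # endorses every listed strategy
print(ied_score(respondent), "out of", sum(WEIGHTS.values()))
```

The society-level acceptability index described in the text would be built the same way from a second weighted questionnaire and then used to discount these scores.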
552 Preparation, Solid State Characterization of Etravirine Co-Crystals with Improved Solubility for the Treatment of Human Immunodeficiency Virus
Authors: B. S. Muddukrishna, Karthik Aithal, Aravind Pai
Abstract:
Introduction: The main focus of this study was the preparation of binary cocrystals of etravirine (ETR) using tartaric acid (TAR) as a coformer. Etravirine is a Class IV drug as per the BCS classification system. Methods: Cocrystals were prepared by the slow evaporation technique. A mixture totalling 500 mg of ETR:TAR was weighed in a 1:1 molar ratio (371.72 mg of ETR and 128.27 mg of TAR). A saturated solution of etravirine was prepared in an acetone:methanol (50:50) mixture, in which the tartaric acid was dissolved by sonication; this solution was then stirred with a magnetic stirrer until the solvent evaporated. A Shimadzu FTIR-8300 system was used to acquire the FTIR spectra of the prepared cocrystals. A Shimadzu thermal analyzer was used for the DSC measurements. An X-ray diffractometer was used to obtain the X-ray powder diffraction pattern. The shake flask method was used to determine the equilibrium dynamic solubility of the pure drug, the physical mixture, and the cocrystals of ETR, with USP buffer (pH 6.8) containing 1% Tween 80 as the medium. The pure drug, the physical mixture, and the optimized cocrystal of ETR were accurately weighed in amounts sufficient to maintain sink conditions and filled into hard gelatine capsules (size 4). Dissolution was carried out on an Electrolab tablet dissolution tester using the basket apparatus at a rotational speed of 50 rpm, with USP phosphate buffer (900 mL, pH 6.8, 37 °C) + 1% Tween 80 as the medium. The analysis was performed on a Shimadzu LC-10 series chromatographic system with a PDA detector. A Hypersil BDS C18 (150 mm × 4.6 mm × 5 µm) column was used for separation, with a mobile phase comprising a mixture of acetonitrile and 20 mM phosphate buffer, pH 3.2, in the ratio 60:40 v/v. The flow rate was 1.0 mL/min and the column temperature was set to 30 °C. Detection was carried out at 304 nm for ETR. Results and discussion: The cocrystals were subjected to various solid state characterizations, and the results confirmed the formation of cocrystals.
The C=O stretching vibration (1741 cm-1) of tartaric acid disappeared in the cocrystal, and the peak broadening of the primary amine indicates hydrogen bond formation. The difference between the melting point of the cocrystals and that of pure etravirine (265 °C) indicates interaction between the drug and the coformer, shown by the disappearance of the first-order transformation, i.e., the melting endotherm. The difference in 2θ values between the pure drug and the cocrystals likewise indicates interaction between the drug and the coformer. Dynamic solubility and dissolution studies were conducted by the shake flask method and USP apparatus I, respectively; a 3.6-fold increase in dynamic solubility was observed, and the in-vitro dissolution study showed a four-fold increase in solubility for the ETR:TAR (1:1) cocrystals. The ETR:TAR (1:1) cocrystals thus show improved solubility and dissolution compared to the pure drug, as clearly demonstrated by the solid state characterization and dissolution studies.
Keywords: dynamic solubility, Etravirine, in vitro dissolution, slurry method
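As a back-of-envelope check (ours, not the authors'), the reported weighing of 371.72 mg ETR and 128.27 mg TAR can be recovered from the 1:1 molar ratio, assuming molar masses of about 435.3 g/mol for etravirine and 150.1 g/mol for tartaric acid:

```python
MW_ETR = 435.28   # g/mol, etravirine (assumed literature value)
MW_TAR = 150.09   # g/mol, tartaric acid (assumed literature value)

def masses_for_molar_ratio(total_mg, mw_a, mw_b, ratio=(1, 1)):
    """Split a total mass between two components at a given molar ratio."""
    na, nb = ratio
    frac_a = na * mw_a / (na * mw_a + nb * mw_b)
    return total_mg * frac_a, total_mg * (1 - frac_a)

etr_mg, tar_mg = masses_for_molar_ratio(500.0, MW_ETR, MW_TAR)
print(f"ETR: {etr_mg:.1f} mg, TAR: {tar_mg:.1f} mg")  # close to 371.7 / 128.3
```

The small discrepancy from the reported figures comes from the exact molar masses used; the split depends only on the ratio of molar masses, not on the total batch size.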
551 Field Performance of Cement Treated Bases as a Reflective Crack Mitigation Technique for Flexible Pavements
Authors: Mohammad R. Bhuyan, Mohammad J. Khattak
Abstract:
Deterioration of flexible pavements due to crack reflection from the soil-cement base layer is a major concern around the globe. The service life of flexible pavement diminishes significantly because of these reflective cracks, and highway agencies have struggled for decades to prevent or mitigate them in order to increase pavement service lives. The root cause of reflective cracking is the shrinkage cracking that occurs in soil-cement bases during the cement hydration process. The primary factor that causes the shrinkage is the cement content of the soil-cement mixture. With increasing cement content, the soil-cement base gains the strength and durability necessary to withstand traffic loads; but at the same time, higher cement content creates more shrinkage, resulting in more reflective cracks in the pavement. Historically, various US states have used soil-cement bases for constructing flexible pavements. The state of Louisiana (USA) has used 8 to 10 percent cement content to manufacture soil-cement bases. Such traditional soil-cement bases yield a 2.0 MPa (300 psi) 7-day compressive strength and are termed cement stabilized design (CSD). As these CSD bases generate significant reflective cracking, another soil-cement base design has been utilized, adding 4 to 6 percent cement content, called cement treated design (CTD), which yields a 1.0 MPa (150 psi) 7-day compressive strength. The reduced cement content in the CTD base is expected to minimize shrinkage cracking, thus increasing pavement service lives. Hence, this research study evaluates the long-term field performance of CTD bases relative to CSD bases used in flexible pavements. The Pavement Management System of the state of Louisiana was utilized to select flexible pavement projects with CSD and CTD bases that had good historical records and time-series distress performance data.
It should be noted that the state collects roughness and distress data for each 1/10th-mile section every two years. In total, 120 CSD and CTD projects were analyzed in this research, in which more than 145 miles (CTD) and 175 miles (CSD) of roadway data were accepted for performance evaluation and benefit-cost analyses. Here, the service life extension and the area under the distress performance curve were considered as benefits. It was found that CTD bases added 1 to 5 years of pavement service life based on transverse cracking as compared to CSD bases. On the other hand, the service lives based on longitudinal and alligator cracking, rutting, and roughness index remained the same. Hence, CTD bases provide some service life extension (2.6 years, on average) for the controlling distress, transverse cracking, while being inexpensive due to their lower cement content. Consequently, CTD bases are 20% more cost-effective than the traditional CSD bases when both are compared by the net benefit-cost ratio obtained from all distress types.
Keywords: cement treated base, cement stabilized base, reflective cracking, service life, flexible pavement
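The kind of benefit-cost comparison described above can be sketched with a ratio of service-life benefit to construction cost. All figures below are hypothetical placeholders, not the study's data; they simply illustrate how a cheaper base with a modest life extension can come out ahead on a benefit-cost ratio.

```python
def benefit_cost_ratio(benefit, cost):
    """Benefit (here: service life in years) per unit of construction cost."""
    return benefit / cost

# Hypothetical per-lane-mile figures for the two base designs.
csd = {"service_life_yr": 15.0, "cost": 100_000.0}   # higher cement content
ctd = {"service_life_yr": 17.6, "cost": 92_000.0}    # +2.6 yr, cheaper base

bcr_csd = benefit_cost_ratio(csd["service_life_yr"], csd["cost"])
bcr_ctd = benefit_cost_ratio(ctd["service_life_yr"], ctd["cost"])
print(f"CTD/CSD BCR ratio: {bcr_ctd / bcr_csd:.2f}")  # > 1 -> CTD more cost-effective
```

The study's actual net benefit also folds in the area under the distress performance curves, which this sketch omits.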
550 Enhancing the Effectiveness of Witness Examination through Deposition System in Korean Criminal Trials: Insights from the U.S. Evidence Discovery Process
Authors: Qi Wang
Abstract:
With the expansion of trial-centered principles, the importance of witness examination in Korean criminal proceedings has been increasingly emphasized. However, several practical challenges have emerged in courtroom examinations, including concerns about witnesses’ memory deterioration due to prolonged trial periods, the possibility of inaccurate testimony due to courtroom anxiety and tension, risks of testimony retraction, and witnesses’ refusal to appear. These issues have led to a decline in the effective utilization of witness testimony. This study analyzes the deposition system, which is widely used in the U.S. evidence discovery process, and examines its potential implementation within the Korean criminal procedure framework. Furthermore, it explores the scope of application, procedural design, and measures to prevent potential abuse if the system were to be adopted. Under the adversarial litigation structure that has evolved through several amendments to the Criminal Procedure Act, the deposition system, although conducted pre-trial, serves as a preliminary procedure to facilitate efficient and effective witness examination during trial. This system not only aligns with the goal of discovering substantive truth but also upholds the practical ideals of trial-centered principles while promoting judicial economy. Furthermore, with the legal foundation established by Article 266 of the Criminal Procedure Act and related provisions, this study concludes that the implementation of the deposition system is both feasible and appropriate for the Korean criminal justice system. The specific functions of depositions include providing case-related information to refresh witnesses’ memory as a preliminary to courtroom examination, pre-reviewing existing statement documents to enhance trial efficiency, and conducting preliminary examinations on key issues and anticipated questions. 
The subsequent courtroom witness examination focuses on verifying testimony through public and cross-examination, identifying and analyzing contradictions in testimony, and conducting double verification of testimony credibility under judicial supervision. Regarding operational aspects, both prosecution and defense may request depositions, subject to court approval. The deposition process involves video or audio recording, complete documentation by court reporters, and the preparation of transcripts, with copies provided to all parties and the original included in court records. The admissibility of deposition transcripts is recognized under Article 311 of the Criminal Procedure Act. Given prosecutors’ advantageous position in evidence collection, which may lead to indifference or avoidance of depositions, the study emphasizes the need to reinforce prosecutors’ public interest status and objective duties. Additionally, it recommends strengthening pre-employment ethics education and post-violation disciplinary measures for prosecutors.
Keywords: witness examination, deposition system, Korean criminal procedure, evidence discovery, trial-centered principle
549 High Strain Rate Behavior of Harmonic Structure Designed Pure Nickel: Mechanical Characterization, Microstructure Analysis and 3D Modelisation
Authors: D. Varadaradjou, H. Kebir, J. Mespoulet, D. Tingaud, S. Bouvier, P. Deconick, K. Ameyama, G. Dirras
Abstract:
The development of new architectured metallic alloys with controlled microstructures is one of the strategic ways of designing materials with high innovation potential and, particularly, with the improved mechanical properties required for structural materials. Indeed, unlike their conventional counterparts, metallic materials with a so-called harmonic structure display a strength and ductility synergy. The latter occurs due to a unique microstructure design: a coarse-grained structure surrounded by a 3D continuous network of ultra-fine grains, known as the "core" and "shell," respectively. In the present study, pure harmonic-structured (HS) nickel samples were processed via controlled mechanical milling followed by spark plasma sintering (SPS). The present work aims at characterizing the mechanical properties of HS pure nickel under room-temperature dynamic loading through Split Hopkinson Pressure Bar (SHPB) tests, along with the underlying microstructure evolution. A stopper ring was used to maintain the strain at a fixed value of about 20%. Five samples (named B1 to B5) were impacted using different striker bar velocities from 14 m/s to 28 m/s, yielding strain rates in the range 4000-7000 s-1. Results were considered up to a 10% deformation value, which is the deformation threshold for the constant strain rate assumption. The non-deformed (INIT, post-SPS) and post-SHPB microstructures (B1 to B5) were investigated by EBSD. It was observed that as the strain rate increases, the average grain size within the core decreases. An in-depth analysis of grains and grain boundaries was made to highlight the thermal (such as dynamic recrystallization) or mechanical (such as grain fragmentation by dislocations) contributions within the core and shell. One of the most widely used methods for determining the dynamic behavior of materials is the SHPB technique developed by Kolsky. A 3D simulation of the SHPB test was created in ABAQUS using the dynamic explicit solver.
This 3D simulation allows all modes of vibration to be taken into account. An inverse approach was used to identify the material parameters of the Johnson-Cook (JC) equation by minimizing the difference between the numerical and experimental data. The JC parameters were identified using the B1 and B5 sample configurations. The identified JC parameters show good predictive results for the other sample configurations. Furthermore, the mean temperature rise within the harmonic nickel sample can be obtained through ABAQUS and shows an elevation of about 35 °C for all five samples. At this temperature, a thermal mechanism cannot be activated. Therefore, grain fragmentation within the core is mainly due to mechanical phenomena for a fixed final strain of 20%.
Keywords: 3D simulation, fragmentation, harmonic structure, high strain rate, Johnson-Cook model, microstructure
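For reference, the Johnson-Cook flow stress used in such inverse identifications has the standard multiplicative form below. The parameter values in this sketch are placeholders, not the identified values for harmonic-structure nickel; only the strain rate, strain cutoff, and ~35 °C temperature rise echo the figures quoted above.

```python
import math

def johnson_cook(strain, strain_rate, T,
                 A, B, n, C, m,
                 eps0=1.0, T_room=293.0, T_melt=1728.0):
    """JC flow stress: sigma = (A + B*eps^n) * (1 + C*ln(rate/eps0)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room) / (T_melt - T_room)."""
    T_star = (T - T_room) / (T_melt - T_room)
    return (A + B * strain ** n) \
        * (1.0 + C * math.log(strain_rate / eps0)) \
        * (1.0 - T_star ** m)

# Placeholder parameters (MPa) at 10% strain, a 5000 1/s strain rate within the
# SHPB range above, and a ~35 degC temperature rise over room temperature.
sigma = johnson_cook(strain=0.10, strain_rate=5000.0, T=328.0,
                     A=160.0, B=900.0, n=0.6, C=0.02, m=1.1)
print(f"flow stress: {sigma:.0f} MPa")
```

An inverse approach fits (A, B, n, C, m) by minimizing the gap between this model, embedded in the FE simulation, and the experimental stress-strain records.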
548 Influence of Recycled Concrete Aggregate Content on the Rebar/Concrete Bond Properties through Pull-Out Tests and Acoustic Emission Measurements
Authors: L. Chiriatti, H. Hafid, H. R. Mercado-Mendoza, K. L. Apedo, C. Fond, F. Feugeas
Abstract:
Substituting natural aggregate with recycled aggregate coming from concrete demolition represents a promising alternative to face the issues of both the depletion of natural resources and the congestion of waste storage facilities. However, the crushing process of concrete demolition waste, currently in use to produce recycled concrete aggregate, does not allow the complete separation of natural aggregate from a variable amount of adhered mortar. Given the physicochemical characteristics of the latter, the introduction of recycled concrete aggregate into a concrete mix modifies, to a certain extent, both fresh and hardened concrete properties. As a consequence, the behavior of recycled reinforced concrete members could likely be influenced by the specificities of recycled concrete aggregates. Beyond the mechanical properties of concrete, and as a result of the composite character of reinforced concrete, the bond characteristics at the rebar/concrete interface have to be taken into account in an attempt to describe accurately the mechanical response of recycled reinforced concrete members. Hence, a comparative experimental campaign, including 16 pull-out tests, was carried out. Four concrete mixes with different recycled concrete aggregate content were tested. The main mechanical properties (compressive strength, tensile strength, Young’s modulus) of each concrete mix were measured through standard procedures. A single 14-mm-diameter ribbed rebar, representative of the diameters commonly used in the domain of civil engineering, was embedded into a 200-mm-side concrete cube. The resulting concrete cover is intended to ensure a pull-out type failure (i.e. exceedance of the rebar/concrete interface shear strength). A pull-out test carried out on the 100% recycled concrete specimen was enriched with exploratory acoustic emission measurements. 
Acoustic event location was performed by means of eight piezoelectric transducers distributed over the whole surface of the specimen. The resulting map was compared to existing data on natural aggregate concrete. The damage distribution around the reinforcement and the main features of the characteristic bond stress/free-end slip curve appeared to be similar to previous results obtained through comparable studies carried out on natural aggregate concrete. This seems to show that the usual bond mechanism sequence ('chemical adhesion', mechanical interlocking and friction) remains unchanged despite the addition of recycled concrete aggregate. However, the results also suggest that bond efficiency is somewhat improved through the use of recycled concrete aggregate. This observation appears counter-intuitive given the decline of the main concrete mechanical properties with increasing recycled concrete aggregate content. As a consequence, the impact of recycled concrete aggregate content on bond characteristics seemingly represents an important factor which should be taken into account and is likely to be further explored in order to determine flexural parameters such as deflection or crack distribution.
Keywords: acoustic emission monitoring, high-bond steel rebar, pull-out test, recycled aggregate concrete
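The bond stress plotted in such bond stress/free-end slip curves is conventionally the pull-out force averaged over the embedded interface area, tau = F / (pi * d * l_embed). The sketch below uses this standard formula (ours, not taken from the paper); the 14 mm diameter matches the rebar described above, while the embedded length and peak force are hypothetical.

```python
import math

def average_bond_stress(force_N, bar_diameter_m, embed_length_m):
    """Mean shear stress over the rebar/concrete interface during pull-out."""
    return force_N / (math.pi * bar_diameter_m * embed_length_m)

d = 0.014          # m, the 14-mm ribbed rebar used in the study
l_embed = 0.070    # m, assumed embedded length (5 bar diameters, a common choice)
F_max = 60_000.0   # N, hypothetical peak pull-out force

tau = average_bond_stress(F_max, d, l_embed)
print(f"average bond stress: {tau / 1e6:.1f} MPa")
```

Plotting this stress against the measured free-end slip for each mix gives the characteristic curves whose features the study compares across recycled aggregate contents.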