Search results for: applied statistics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9783

1233 Sustainable Organization for Sustainable Strategy: Empirical Evidence

Authors: Lucia Varra, Marzia Timolo

Abstract:

The interest of scholars in corporate sustainability has strengthened in recent years, in parallel with the growing need to undertake paths of cultural and organizational change as a way to greater competitiveness and stakeholder satisfaction. Studies on business sustainability, while integrating the three dimensions of sustainability long present in economic approaches (economic, environmental and social), have not given rise to an organic construct that brings together the aspects of strategic management with corporate social responsibility, and even less with organizational issues. Some important questions therefore remain open: Which organizational structure and which operational mechanisms are coherent with, or propitious to, a sustainability strategy? Existing studies appear fragmented, although some aspects are of shared importance: knowledge management, human resource management, leadership, innovation, etc. The construction of a model of sustainable organization that supports the sustainability strategy can no longer be postponed, nor can its connection with the main practices for measuring corporate social responsibility performance. The paper aims to identify the organizational characteristics of a sustainable corporation. To this end, from a theoretical point of view the work examines the main existing contributions in the literature and, from a practical point of view, it presents a business case referring to a service organization that has pursued a sustainability strategy for years. The paper is divided into two parts: the first part reviews the main articles on strategic management and the main organizational issues raised by the literature, such as knowledge management, leadership and innovation; it then proposes a modeling of the main variables examined by scholars and their integration with the international measurement standards of CSR. In the second part, using the case study methodology, the hypotheses and the structure of the proposed model, which aims to integrate strategic issues with organizational aspects and the measurement of sustainability performance, are applied to an Italian company that has put organizational and human resource management interventions in place to align strategic decisions with its structure and operating mechanisms. The case presented supports the hypotheses of the model.

Keywords: CSR, strategic management, sustainable leadership, sustainable human resource management, sustainable organization

Procedia PDF Downloads 81
1232 Electroactivity of Clostridium saccharoperbutylacetonicum 1-4N during Carbon Dioxide Reduction in a Bioelectrosynthesis System

Authors: Carlos A. Garcia-Mogollon, Juan C. Quintero-Diaz, Claudio Avignone-Rossa

Abstract:

Clostridium saccharoperbutylacetonicum 1-4N (Csb 1-4N) is an industrial reference strain for Acetone-Butanol-Ethanol (ABE) fermentation. Csb 1-4N is a solventogenic clostridium and H₂ producer with a metabolic profile that makes it a good candidate for a Bioelectrosynthesis System (BES). The aim of this study was to evaluate the electroactivity of Csb 1-4N by the cyclic voltammetry (CV) technique. The BES fermentation started in a Tryptone-Yeast extract (TY) medium with trace elements and vitamins, a Complex Nitrogen Source (CNS), and bicarbonate (NaHCO₃, 4 g/L) as a carbon source, run at -600 mV vs. Ag/AgCl with 200 µM NADH added. Six BES batches were performed with different media compositions, with and without NADH, CNS, HCO₃⁻, and applied potential. CV was performed in a three-electrode system: a platinum sheet working electrode (WE), a nickel counter electrode (CE), and an Ag/AgCl reference electrode (RE). CVs were run in a potential range of -0.7 V to 0.7 V vs. Ag/AgCl at a scan rate of 10 mV/s. CVs recorded at different NaHCO₃ concentrations (0.25, 0.5, 1.0, and 4 g/L) were obtained. BES fermentation samples were centrifuged (3000 rpm, 5 min, 4 °C), and the supernatant (7 mL) was used. CVs were obtained for cell-free supernatant of the Csb 1-4N BES culture at 0 h, 24 h, and 48 h. The electrochemical analysis was carried out with a PalmSens 4.0 potentiostat/galvanostat controlled with the PStrace 5.7 software, and the CV curves were characterized by reduction and oxidation currents and reduction and oxidation peaks. The CVs obtained for NaHCO₃ solutions showed that the reduction and oxidation currents decreased as the NaHCO₃ concentration decreased. All reduction and oxidation currents decreased until exponential growth stopped (24 h), independently of the initial cathodic current, except in the medium with trace elements, vitamins, and NaHCO₃, in which the reduction current fell by around half at 24 h and continued to decrease at 48 h. In this medium, Csb 1-4N did not grow, but the pH increased, indicating that NaHCO₃ was reduced as the reduction current decreased. In general, at 48 h the reduction currents did not show important changes between the different media in the BES cultures. Peak intensities (Ip) did not vary appreciably, except for Ipa and Ipc in the BES culture with NaHCO₃ and NADH added, which were higher than the peaks in the other cultures. Based on these results, the changes in cathodic and anodic currents were induced by NaHCO₃ reduction reactions during Csb 1-4N metabolic activity in the different BES experiments.

Keywords: clostridium saccharoperbutylacetonicum 1-4N, bioelectrosynthesis, carbon dioxide fixation, cyclic voltammetry

Procedia PDF Downloads 110
1231 Calcein Release from Liposomes Mediated by Phospholipase A₂ Activity: Effect of Cholesterol and Amphipathic Di- and Tri-Block Copolymers

Authors: Marco Soto-Arriaza, Eduardo Cena-Ahumada, Jaime Melendez-Rojel

Abstract:

Background: Liposomes have been widely used as a lipid bilayer model to study the physicochemical properties of biological membranes and the encapsulation, transport and release of different molecules. Furthermore, extensive research has focused on improving the efficiency of drug transport by developing tools that improve the release of the encapsulated drug from liposomes. In this context, the enzymatic activity of PLA₂, despite having been shown to be an effective tool to promote drug release from liposomes, is still an open field of research. Aim: The aim of the present study is to explore the effect of cholesterol (Cho) and amphipathic di- and tri-block copolymers on calcein release mediated by the enzymatic activity of PLA₂ in dipalmitoylphosphatidylcholine (DPPC) liposomes under physiological conditions. Methods: Different dispersions of DPPC, cholesterol, di-block POE₄₅-PCL₅₂ or tri-block PCL₁₂-POE₄₅-PCL₁₂ were prepared by the extrusion method after five freezing/thawing cycles, in 10 mM phosphate buffer, pH 7.4, in the presence of calcein. DPPC liposome/calcein preparations were centrifuged at 15,000 rpm for 10 min to separate free calcein. Enzymatic activity assays of PLA₂ were performed at 37 °C using TBS buffer, pH 7.4. The size distribution, polydispersity, Z-potential and calcein encapsulation of the DPPC liposomes were monitored. Results: PLA₂ activity showed slower kinetics of calcein release up to 20 mol% cholesterol, with a minimum at 10 mol% and a maximum at 18 mol%. Regardless of the percentage of cholesterol, up to 18 mol%, one hundred percent calcein release was observed. At higher cholesterol concentrations, PLA₂ proved inefficient or not involved in calcein release. In assays where copolymers were added at a concentration lower than their cmc, a behavior similar to that observed in the presence of Cho was seen, that is, slower calcein release kinetics. In both experimental approaches, one hundred percent calcein release was observed. PLA₂ was shown to be sensitive to the inhibitor 4-(4-octadecylphenyl)-4-oxobutenoic acid and to calcium, which reduced calcein release to 0%. The viability of HeLa cells decreased by 7% in the presence of DPPC liposomes after 3 hours of incubation, and by 17% and 23% at 5 and 15 hours, respectively. Conclusion: Calcein release from DPPC liposomes mediated by PLA₂ activity depends on the percentage of cholesterol and the presence of copolymers. Both cholesterol up to 20 mol% and copolymers below their cmc could be applied to regulate the release kinetics of antitumoral drugs without inducing cell toxicity per se.

Keywords: amphipathic copolymers, calcein release, cholesterol, DPPC liposome, phospholipase A₂

Procedia PDF Downloads 131
1230 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India

Authors: Disha Bhanot, Vinish Kathuria

Abstract:

This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticulture crops, seasonal production and the paucity of post-harvest produce management links. Distress sale, from a farmer's perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually characterized by unfavorable conditions for the seller (farmer). Small and marginal farmers, often engaged in subsistence farming, stand to lose substantially if they receive prices lower than expected (expectations typically framed in relation to the cost of production). Distress sale maximizes the price uncertainty of produce, leading to substantial income loss; with rising input costs of farming, the high variability in harvest prices severely affects farmers' profit margins and thereby their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, covering the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, the role of middlemen in selling, and other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale is then modelled as a function of farm, household and institutional characteristics. The Heckman two-stage model is applied to estimate the probability of a farmer falling into distress sale, as well as to ascertain how the extent of distress sale varies with the presence or absence of various factors. The findings of the study would recommend suitable interventions and promote strategies that help farmers better manage price uncertainties, avoid distress sale and increase profit margins, with direct implications for poverty.
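
A minimal sketch of how such a Heckman two-step estimation could be set up in Python with statsmodels: a first-stage probit for selection into distress sale, the inverse Mills ratio from that stage, and a second-stage OLS on the selected subsample. The file and variable names (farmer_survey.csv, distress_sale, land_size, etc.) are hypothetical placeholders, not the study's actual survey fields.

```python
# Illustrative Heckman two-step sketch; variable names are invented, not the survey's fields.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("farmer_survey.csv")          # hypothetical survey file

# Stage 1: probit for the probability that a farmer sells in distress
Z = sm.add_constant(df[["land_size", "irrigated", "storage_access", "distance_market"]])
probit = sm.Probit(df["distress_sale"], Z).fit()

# Inverse Mills ratio from the probit linear predictor
xb = probit.fittedvalues
imr = pd.Series(norm.pdf(xb) / norm.cdf(xb), index=df.index)

# Stage 2: OLS on the distress-sale subsample, augmented with the Mills ratio
sel = df["distress_sale"] == 1
X = sm.add_constant(
    df.loc[sel, ["land_size", "irrigated", "middleman", "time_gap_days"]].assign(imr=imr[sel])
)
outcome = sm.OLS(df.loc[sel, "price_gap_below_cost"], X).fit()
print(outcome.summary())
```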

Keywords: distress sale, horticulture, income loss, India, price uncertainty

Procedia PDF Downloads 216
1229 Use of Pig as an Animal Model for Assessing the Differential MicroRNA Profiling in Kidney after Aristolochic Acid Intoxication

Authors: Daniela E. Marin, Cornelia Braicu, Gina C. Pistol, Roxana Cojocneanu-Petric, Ioana Berindan Neagoe, Mihail A. Gras, Ionelia Taranu

Abstract:

Aristolochic acid (AA) is a carcinogenic, mutagenic, and nephrotoxic compound commonly found in the Aristolochiaceae family of plants. AA is frequently associated with urothelial carcinoma of the upper urinary tract in humans and animals and is considered responsible for Balkan Endemic Nephropathy. The pig provides a good animal model because the porcine urological system is very similar to that of humans, both in physiology and in anatomy. MicroRNAs (miRNAs) are small non-coding RNAs that affect a wide range of biological processes by regulating gene expression at the post-transcriptional level. The objective of this study was to analyze the miRNA profile in the kidneys of AA-intoxicated swine. For this purpose, ten TOPIGS-40 crossbred weaned piglets, 4 weeks old, male and female, with an initial average body weight of 9.83 ± 0.5 kg, were studied for 28 days. They were given ad libitum access to water and feed and randomly allotted to one of the following groups: control group (C) or aristolochic acid group (AA). They were fed a maize-soybean-meal-based diet contaminated or not with 0.25 mg AA/kg. To profile miRNA in the kidneys of the pigs, microarray and bioinformatics approaches were applied to the kidneys of control and AA-intoxicated animals. After normalization, our results showed that a total of 5 known miRNAs and 4 novel miRNAs had different profiles in the kidneys of intoxicated animals versus controls. Expression of miR-32-5p, miR-497-5p, miR-423-3p, miR-218-5p and miR-128-3p was up-regulated by 0.25 mg AA/kg feed, while the expression of miR-9793-5p, miR-9835-3p, miR-9840-3p and miR-4334-5p was down-regulated. The microRNA profile in the kidneys of intoxicated animals was associated with modified expression of target genes such as RICTOR, LASP1, SFRP2, DKK2, BMI1, RAF1, IGF1R, MAP2K1, WEE1, HDGF, BCL2 and EIF4E, involved in the cell division cycle, apoptosis, cell differentiation and migration, cell signaling, cancer, etc. In conclusion, this study provides new data concerning the microRNA profile in the kidney after aristolochic acid intoxication, with important implications for human and animal health.

Keywords: aristolochic acid, kidney, microRNA, swine

Procedia PDF Downloads 255
1228 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos

Authors: Nassima Noufail, Sara Bouhali

Abstract:

In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the K-means algorithm; the goal is to find groups of frames based on similarity in the video. Applying k-means clustering to all frames is time-consuming; therefore, we start by identifying transition frames, where the scene in the video changes significantly, and then apply K-means clustering to these transition frames. We use two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames: the Gaussian filter blurs the image and suppresses the higher frequencies, while the Laplacian of Gaussian detects regions of rapid intensity change. This vector of filter responses is used as input to the k-means algorithm, whose output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map in which similar pixels are grouped. We then compute a cluster score indicating how near the clusters are to each other and plot a signal of clustering score versus frame number. Our hypothesis is that the evolution of this signal does not change while semantically related events are happening in the scene. We mark the breakpoints at which the root mean square level of the signal changes significantly, and each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from the first part, we randomly select a 16-frame clip and extract spatiotemporal features using the convolutional 3D network C3D with a pre-trained model. The final C3D output is a 512-dimensional feature vector; hence we use principal component analysis (PCA) for dimensionality reduction. The final part is classification: the C3D feature vectors are used to train a multi-class linear support vector machine (SVM), which is then used to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%.
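
A brief sketch of the frame-clustering stage described above is given here; the video file name, the transition-detection rule, the filter scale and the number of clusters are illustrative assumptions rather than the authors' settings.

```python
# Sketch of transition-frame detection and per-pixel K-means on filter responses.
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

def filter_features(gray, sigma=2.0):
    """Stack Gaussian (blur/low-pass) and Laplacian-of-Gaussian (edge) responses per pixel."""
    g = gaussian_filter(gray.astype(np.float32), sigma)
    log = gaussian_laplace(gray.astype(np.float32), sigma)
    return np.stack([g.ravel(), log.ravel()], axis=1)

cap = cv2.VideoCapture("video.mp4")            # hypothetical input clip
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()

# Crude transition detection: unusually large mean difference between consecutive frames
diffs = [np.mean(cv2.absdiff(a, b)) for a, b in zip(frames, frames[1:])]
transitions = [i + 1 for i, d in enumerate(diffs) if d > np.mean(diffs) + 2 * np.std(diffs)]

# K-means on the filter responses of each transition frame; pixels map to cluster centres
visual_maps = {}
for idx in transitions:
    feats = filter_features(frames[idx])
    labels = KMeans(n_clusters=5, n_init=10).fit_predict(feats)
    visual_maps[idx] = labels.reshape(frames[idx].shape)   # "painted" cluster map of the frame
```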

Keywords: video segmentation, action detection, classification, K-means, C3D

Procedia PDF Downloads 51
1227 Cross-Cultural Collaboration Shaping Co-Creation Methodology to Enhance Disaster Risk Management Approaches

Authors: Jeannette Anniés, Panagiotis Michalis, Chrysoula Papathanasiou, Selby Knudsen

Abstract:

The RiskPACC project aims to bring together researchers, practitioners, and first responders from nine European countries, following a co-creation approach to develop customised solutions that meet the needs of end users. The co-creation workshops aim to enhance the communication pathways between local civil protection authorities (CPAs) and citizens, in an effort to close the risk perception-action gap (RPAG). The participants in the workshops include a variety of stakeholders as well as citizens, fostering dialogue between the groups and supporting citizen participation in disaster risk management (DRM). The co-creation methodology in place implements co-design elements through the integration of four ICT tools. These ICT tools include web-based and mobile application solutions at different development stages, ranging from formulation and validation of concepts to pilot demonstrations. In total, seven case studies are foreseen in RiskPACC. The workflow of the workshops is designed to be adaptable to the particular needs of each of the seven case study countries and their cultures. This work provides an overview of the preparation and conduct of the workshops, in which researchers and practitioners focused on mapping the different needs of the end users. The latter included first responders, but also volunteers and citizens who actively participated in the co-creation workshops. The strategies to improve communication between CPAs and citizens differ between the countries, and the modules of the co-creation methodology are adapted in response to such differences. Moreover, the project partners experienced how the structure of such workshops is perceived differently across the seven case studies. The co-creation methodology itself is therefore a design method undergoing several iterations, which are eventually shaped by cross-cultural collaboration. For example, some case studies applied different modules according to the participant group recruited. The participants were technical experts, teachers, citizens, first responders, or volunteers, among others. This work presents the divergent approaches of the seven case studies in implementing the proposed co-creation methodology, in response to different perceptions of the modules. An analysis of the adaptations and their implications is also provided, to assess where the case studies' objective of improving disaster resilience has been achieved.

Keywords: citizen participation, co-creation, disaster resilience, risk perception, ICT tools

Procedia PDF Downloads 53
1226 Evaluating the Potential of a Fast Growing Indian Marine Cyanobacterium by Reconstruction and Analysis of a Genome-Scale Metabolic Model

Authors: Ruchi Pathania, Ahmad Ahmad, Shireesh Srivastava

Abstract:

Cyanobacteria are promising microbes that can capture and convert atmospheric CO₂ and light into valuable industrial bio-products like biofuels, biodegradable plastics, etc. Among their most attractive traits are fast autotrophic growth, year-round cultivation on non-arable land, high photosynthetic activity, high biomass productivity, and ease of genetic manipulation. Cyanobacteria store carbon in the form of glycogen, which can be hydrolyzed to release glucose and fermented to form bioethanol or other valuable products. Marine cyanobacterial species are especially attractive for countries with scarce freshwater. We recently identified a native marine cyanobacterium, Synechococcus sp. BDU 130192, which has a good growth rate and a high level of polyglucan accumulation compared to Synechococcus PCC 7002. In this study, we first sequenced the whole genome and annotated the sequences using the RAST server. A genome-scale metabolic model (GSMM) was then reconstructed using the COBRA toolbox. A GSMM is a computational representation of the metabolic reactions and metabolites of the target strain. GSMMs are analyzed through Flux Balance Analysis (FBA), which uses external nutrient uptake rates to estimate steady-state intracellular and extracellular reaction fluxes, typically while maximizing cell growth. The model, which we have named iSyn942, includes 942 reactions and 913 metabolites, comprising 831 metabolic, 78 transport and 33 exchange reactions. The phylogenetic tree obtained by BLAST search revealed that the strain is a close relative of Synechococcus PCC 7002. FBA was applied to the model iSyn942 to predict the theoretical yields (mol product produced/mol CO₂ consumed) for native and non-native products like acetone, butanol, etc. under phototrophic conditions by applying metabolic engineering strategies. The reported strain can be a viable strain for biotechnological applications, and the model will be helpful to researchers interested in understanding its metabolism as well as in designing metabolic engineering strategies for the enhanced production of various bioproducts.
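
For illustration, a minimal flux balance analysis of this kind could be run with the COBRApy package as sketched below; the SBML file name, exchange-reaction identifiers and uptake bounds are placeholders and do not correspond to the published iSyn942 reconstruction.

```python
# Hedged FBA sketch with COBRApy; identifiers and bounds are assumptions, not model data.
import cobra

model = cobra.io.read_sbml_model("iSyn942.xml")            # hypothetical SBML export

# Phototrophic condition: allow CO2 and photon uptake through their exchange reactions
model.reactions.get_by_id("EX_co2_e").lower_bound = -10.0      # mmol/gDW/h, assumed
model.reactions.get_by_id("EX_photon_e").lower_bound = -100.0  # assumed light availability

# Maximise growth with the default biomass objective
growth = model.optimize()
print("Predicted growth rate:", growth.objective_value)

# Theoretical product yield: temporarily switch the objective to a product exchange reaction
with model:
    model.objective = "EX_btoh_e"                          # hypothetical butanol exchange ID
    sol = model.optimize()
    mol_yield = sol.objective_value / abs(sol.fluxes["EX_co2_e"])
    print("mol butanol per mol CO2 consumed:", mol_yield)
```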

Keywords: cyanobacteria, flux balance analysis, genome scale metabolic model, metabolic engineering

Procedia PDF Downloads 124
1225 Maneuvering Modelling of a One-Degree-of-Freedom Articulated Vehicle: Modeling and Experimental Verification

Authors: Mauricio E. Cruz, Ilse Cervantes, Manuel J. Fabela

Abstract:

The evaluation of the maneuverability of road vehicles is generally carried out with specialized computer programs because of the advantages they offer over the experimental method. These programs are based on purely geometric considerations of the characteristics of the vehicles, such as main dimensions, the location of the axles, and points of articulation, without considering parameters such as weight magnitude and distribution, tire properties, etc. In this paper, we address the problem of the maneuverability of a semi-trailer truck navigating urban streets, maneuvering yards, and parking lots, using the Ackermann principle to propose a kinematic model that, through geometric considerations, makes it possible to determine the space necessary to maneuver safely. The model was experimentally validated by conducting maneuverability tests with an articulated vehicle. The measurements were made with a GPS receiver that provides the position, trajectory, and speed of the vehicle; an inertial measurement unit (IMU) that measures the accelerations and angular speeds of the semi-trailer; and an instrumented steering wheel that measures the steering-wheel angle, its angular velocity, and the torque applied to it. To obtain the steering angle of the tires, a parameterization of the complete travel of the steering wheel and its equivalent at the tires was carried out. For the tests, three different angles were selected, and three turns were made for each angle in both directions of rotation (left and right turn). The results showed that the proposed kinematic model achieved 95% accuracy for speeds below 5 km/h. The experiments revealed that tighter maneuvers significantly increased the space required and that the vehicle maneuverability was limited by the size of the semi-trailer. The maneuverability was also tested as a function of the vehicle load, with three load levels: light, medium, and heavy. It was found that the internal turning radii also increased with the load, probably due to changes in the tires' adhesion to the pavement, since heavier loads produce larger wheel-road contact surfaces. The load was found to be an important factor affecting the precision of the model (up to 30%) and should therefore be considered. The model obtained is expected to be used to improve maneuverability through a robust control system.
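
As a rough illustration of such a low-speed kinematic model, the sketch below integrates an Ackermann-type tractor-semitrailer with a forward-Euler step; the wheelbase, hitch length, speed and steering input are assumed values, not those of the instrumented test vehicle.

```python
# Simplified kinematic tractor-semitrailer model; all parameters are illustrative.
import numpy as np

L_TRACTOR = 3.8      # tractor wheelbase [m] (assumed)
L_TRAILER = 7.7      # hitch-to-trailer-axle length [m] (assumed)
DT = 0.05            # integration step [s]

def step(state, v, delta):
    """One forward-Euler step of the low-speed (kinematic) articulated-vehicle model."""
    x, y, th1, th2 = state
    x   += v * np.cos(th1) * DT
    y   += v * np.sin(th1) * DT
    th1 += v / L_TRACTOR * np.tan(delta) * DT        # tractor yaw from the steering angle
    th2 += v / L_TRAILER * np.sin(th1 - th2) * DT    # trailer heading follows through the hitch
    return np.array([x, y, th1, th2])

# Constant low-speed turn (5 km/h) at a fixed steering angle, as in the validation tests
state = np.zeros(4)
path = [state]
for _ in range(600):
    state = step(state, v=5 / 3.6, delta=np.radians(20))
    path.append(state)
path = np.array(path)    # swept path used to estimate the space needed to maneuver safely
```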

Keywords: articulated vehicle, experimental validation, kinematic model, maneuverability, semi-trailer truck

Procedia PDF Downloads 98
1224 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to sensor data about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of buildings by using the collected data to monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use a Generalised Additive Model (GAM) for anomaly detection in the power consumption pattern of Air Handling Units (AHUs). There is ample research on the use of GAMs for the prediction of power consumption at the office-building and nation-wide level; however, there is limited illustration of their anomaly detection capabilities, of prescriptive analytics case studies, and of their integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of a building on an education campus in Singapore, collected between Jan 2018 and Aug 2019, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Nevertheless, the domain expert is involved through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of the GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns and illustrate it with real-world use cases.
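
A minimal sketch of the GAM-based anomaly flagging described above, written with the pygam library; the column names, smoothing terms and the 95% band width are assumptions, not the deployed model's configuration.

```python
# Hedged GAM anomaly-detection sketch; data columns and terms are illustrative.
import pandas as pd
from pygam import LinearGAM, s, f

df = pd.read_csv("ahu_history.csv", parse_dates=["timestamp"])   # hypothetical digital-twin export
df["hour"] = df["timestamp"].dt.hour
df["dow"] = df["timestamp"].dt.dayofweek

X = df[["hour", "dow", "cooling_load"]].values
y = df["ahu_power_kw"].values

# Smooth terms for time-of-day and cooling load, a factor term for day-of-week
gam = LinearGAM(s(0) + f(1) + s(2)).fit(X, y)

# Forward prediction with a 95% interval; points outside the band are candidate anomalies
band = gam.prediction_intervals(X, width=0.95)
df["anomaly"] = (y < band[:, 0]) | (y > band[:, 1])
df["deviation"] = y - gam.predict(X)          # magnitude of deviation used to rank alerts
print(df.loc[df["anomaly"], ["timestamp", "ahu_power_kw", "deviation"]].head())
```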

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 126
1223 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Partitioned Solution Approach and an Exponential Model

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

The solution of the nonlinear dynamic equilibrium equations of base-isolated structures with a conventional monolithic solution approach, i.e. an implicit single-step time integration method combined with an iteration procedure, and the use of existing nonlinear analytical models, such as differential equation models, to simulate the dynamic behavior of seismic isolators can require significant computational effort. In order to reduce the numerical computations, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit, conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations within each time step of a nonlinear dynamic analysis. The proposed mathematical model is able to simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as is the case for the widely used differential equation models. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are adopted to analyze a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and the significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model adopted in this work. Furthermore, the low stiffness of the base isolation system with lead rubber bearings allows a critical time step considerably larger than the time step of the imposed ground acceleration, thus avoiding stability problems in the proposed mixed method.
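
To make the explicit half of the partitioned scheme concrete, the sketch below advances a single-degree-of-freedom isolation level with the central difference method; the exponential restoring-force law and all parameter values are generic stand-ins chosen for illustration, not the authors' published model.

```python
# Central-difference sketch for a nonlinear SDOF isolation level; parameters are illustrative.
import numpy as np

m, c = 500e3, 50e3                    # isolated mass [kg], viscous damping [N s/m] (assumed)
fy, k2, alpha = 400e3, 2e6, 50.0      # assumed parameters of a generic exponential law

def f_iso(u):
    """Generic exponential-type restoring force (stand-in for the proposed isolator model)."""
    return k2 * u + fy * np.sign(u) * (1.0 - np.exp(-alpha * np.abs(u)))

dt = 0.001
ag = np.loadtxt("ground_motion.txt")  # hypothetical ground acceleration record [m/s^2]
u = np.zeros(len(ag))                 # isolation-level displacement history

a0 = m / dt**2 + c / (2 * dt)
a1 = 2 * m / dt**2
a2 = m / dt**2 - c / (2 * dt)

for n in range(1, len(ag) - 1):
    p = -m * ag[n]                                                  # effective earthquake load
    u[n + 1] = (p - f_iso(u[n]) + a1 * u[n] - a2 * u[n - 1]) / a0   # explicit update, no iteration
# The superstructure would be advanced implicitly (Newmark constant average acceleration)
# at each of these steps, which is the other half of the mixed explicit-implicit scheme.
```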

Keywords: base-isolated structures, earthquake engineering, mixed time integration, nonlinear exponential model

Procedia PDF Downloads 258
1222 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (e.g., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industry, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and both are notoriously produced by high-temperature combustion processes (e.g., car engines or thermal power stations). The same applies to industrial plants. What has to be investigated, and is the topic of this paper, is whether or not there really is a correlation between noise pollution and air pollution (taking into account NO₂) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise app will be installed on an Android phone. The smartphone will be placed inside a waterproof box to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board into which the sensors and all the other components are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time, covering both week and weekend days, so that it will be possible to see how the situation changes during the week. The novelty is that the data will be compared to check if there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained with the sensors. To do so, the data will be rescaled to a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions to be applied in the analysis area, because it will make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper describes in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
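
A small sketch of the planned comparison step, rescaling both pollutant series to a 0-100% scale and checking their correlation overall and by day type; the file and column names are placeholders for whatever the OpeNoise and AirMonitor loggers actually produce.

```python
# Hedged sketch of the noise/NO2 comparison; column names are assumptions.
import pandas as pd

df = pd.read_csv("mestre_sensors.csv", parse_dates=["timestamp"])

def to_percent(series):
    """Map raw sensor readings onto a 0-100% scale for visual comparison."""
    return 100 * (series - series.min()) / (series.max() - series.min())

df["noise_pct"] = to_percent(df["leq_db"])     # assumed OpeNoise equivalent-level column
df["no2_pct"] = to_percent(df["no2_ppb"])      # assumed AirMonitor NO2 column

# Pearson correlation, overall and split into week and weekend days
print(df[["noise_pct", "no2_pct"]].corr())
weekend = df["timestamp"].dt.dayofweek >= 5
print(df.loc[~weekend, ["noise_pct", "no2_pct"]].corr())
print(df.loc[weekend, ["noise_pct", "no2_pct"]].corr())
```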

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 182
1221 The Situation in Afghanistan as a Step Forward in Putting an End to Impunity

Authors: Jelena Radmanovic

Abstract:

On 5 March 2020, the International Criminal Court decided to authorize the investigation into the crimes allegedly committed on the territory of Afghanistan after 1 May 2003. This determination has raised several controversies, including the recently imposed sanctions by the United States, furthering the United States' long-standing rejection of the authority of the International Criminal Court. The purpose of this research is to address this investigation in light of its importance for the prevention of impunity in cases where the perpetrators are nationals of states that are not party to the Rome Statute. The difficulties that the International Criminal Court has been facing in establishing its jurisdiction where an involved state is not a party to the Rome Statute have become the most significant stumbling block undermining the importance, integrity, and influence of the Court. The situation in Afghanistan raises even further concern, bearing in mind that the Prosecutor's request of 20 November 2017 for authorization of an investigation pursuant to Article 15 was initially rejected, with the 'interests of justice' as the applied rationale. The first method used in the present research is the description of the actual events surrounding the aforementioned decisions and the following reactions in the international community, while with the second method, conceptual analysis, the research addresses the decisions pertaining to the International Criminal Court's jurisdiction and treats the decision of 5 March 2020 as an example of good practice and a precedent that should be followed in all similar situations. The research attempts to parse the reasoning used by the International Criminal Court, giving greater attention to the latter decision that authorized the investigation and to the points raised by officials of the United States. It is a finding of this research that the International Criminal Court, together with other similar judicial instances (the Nuremberg and Tokyo Tribunals, the International Criminal Tribunal for the former Yugoslavia, the International Criminal Tribunal for Rwanda), has presented the world with the possibility of non-impunity, attempting to prosecute those responsible for the gravest crimes known to humanity and showing that such persons should not enjoy the benefits of their immunities, with its focus primarily on the victims of such crimes. While this issue will most certainly be addressed further in the future, through the situations that will be brought before the International Criminal Court, the present research attempts to point to the significance of the situation in Afghanistan, of the International Criminal Court as such, and of international criminal justice as a whole, for the purpose of putting an end to impunity.

Keywords: Afghanistan, impunity, international criminal court, sanctions, United States

Procedia PDF Downloads 101
1220 An Investigation of the Structural and Microstructural Properties of Zn1-xCoxO Thin Films Applied as Gas Sensors

Authors: Ariadne C. Catto, Luis F. da Silva, Khalifa Aguir, Valmor Roberto Mastelaro

Abstract:

Pure or doped zinc oxide (ZnO) is one of the most promising metal oxide semiconductors for gas-sensing applications due to its well-known high surface-to-volume ratio and surface conductivity. ZnO has been shown to be an excellent gas-sensing material for different gases such as CO, O₂, NO₂ and ethanol. In this context, pure and doped ZnO exhibiting different morphologies and a high surface/volume ratio can be a good option given the limitations of current commercial sensors. Different studies have shown that doping ZnO with metal ions (e.g., Co, Fe, Mn) enhances its gas-sensing properties. Motivated by these considerations, the aim of this study was to investigate the role of Co ions in the structural, morphological and gas-sensing properties of nanostructured ZnO samples. ZnO and Zn1-xCoxO (0 < x < 5 wt%) thin films were obtained via the polymeric precursor method. The sensitivity, selectivity, response time and long-term stability were investigated when the samples were exposed to different concentrations of ozone (O₃) at different working temperatures. The gas-sensing properties were probed by electrical resistance measurements. The long- and short-range order around the Zn and Co atoms was investigated by X-ray diffraction and X-ray absorption spectroscopy. X-ray photoelectron spectroscopy was performed in order to identify the elements present on the film surface as well as to determine the sample composition. Microstructural characteristics of the films were analyzed by field-emission scanning electron microscopy (FE-SEM). The Zn1-xCoxO XRD patterns were indexed to the wurtzite ZnO structure, and no second phase was observed even at higher cobalt contents. Co K-edge XANES spectra revealed the predominance of Co²⁺ ions. XPS characterization revealed that the Co-doped ZnO samples possessed a higher percentage of oxygen vacancies than the ZnO samples, which also contributed to their excellent gas-sensing performance. Gas sensor measurements showed that the ZnO and Co-doped ZnO samples exhibit good gas-sensing performance in terms of reproducibility and a fast response time (around 10 s). Furthermore, the Co addition contributed to reducing the working temperature for ozone detection and to improving the selectivity.

Keywords: cobalt-doped ZnO, nanostructured, ozone gas sensor, polymeric precursor method

Procedia PDF Downloads 217
1219 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods are suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction and therefore require many input variables that depend on ground behavior. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, smooth bounds on system responses are obtained with a relatively small number of simulations compared to fully probabilistic methods; therefore, the random set approach has been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e. lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models has been compared to the in situ measurements, and good agreement is observed. The comparison also showed that the Random Set Finite Element Method is suitable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
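
A conceptual sketch of the random-set combination step is shown below; a closed-form placeholder stands in for the finite element model, and the focal elements, probability assignments and threshold are invented for illustration only.

```python
# Toy random-set calculation; the response function and all numbers are illustrative.
from itertools import product

# Two focal elements (ranges) per input variable with their basic probability assignments
friction_angle = [((28.0, 32.0), 0.6), ((30.0, 36.0), 0.4)]   # degrees, assumed
cohesion       = [((5.0, 15.0), 0.5), ((10.0, 25.0), 0.5)]    # kPa, assumed

def top_displacement(phi, c):
    """Placeholder response; in the study this is a full FE analysis of the excavation."""
    return 80.0 - 1.2 * phi - 0.8 * c                         # mm, illustrative only

bounds = []
for (phi_rng, p_phi), (c_rng, p_c) in product(friction_angle, cohesion):
    # Evaluate the model at all corner combinations of the focal element
    vals = [top_displacement(phi, c) for phi in phi_rng for c in c_rng]
    bounds.append((min(vals), max(vals), p_phi * p_c))

threshold = 40.0    # mm, admissible horizontal displacement (assumed)
plausibility = sum(p for lo, hi, p in bounds if hi > threshold)   # response could exceed it
belief       = sum(p for lo, hi, p in bounds if lo > threshold)   # response must exceed it
print(f"Belief = {belief:.2f}, Plausibility = {plausibility:.2f} of exceeding {threshold} mm")
```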

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 244
1218 Understanding the Nature of Blood Pressure as Metabolic Syndrome Component in Children

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Pediatric overweight and obesity need attention because they may cause morbid obesity, which may develop into metabolic syndrome (MetS). Criteria used for the definition of adult MetS cannot be applied to pediatric MetS. The dynamic physiological changes that occur during childhood and adolescence require the evaluation of each parameter based upon age intervals. The aim of this study is to investigate the distribution of blood pressure (BP) values within diverse pediatric age intervals and the possible use and clinical utility of a recently introduced Diagnostic Obesity Notation Model Assessment Tension (DONMA tense) Index derived from systolic BP (SBP) and diastolic BP (DBP) as (SBP + DBP)/200. Such a formula may enable a more integrative picture for the assessment of pediatric obesity and MetS due to the use of both SBP and DBP. A total of 554 children aged 6-16 years participated in the study; the study population was divided into two groups based upon age. The first group comprises 280 cases aged 6-10 years (72-120 months), while those aged 10-16 years (121-192 months) constituted the second group. The values of SBP, DBP and the formula (SBP + DBP)/200 covering both were evaluated. Each group was divided into seven subgroups with varying degrees of obesity and MetS criteria. Two clinical definitions of MetS were described: MetS3 (children with three major components) and MetS2 (children with two major components). The other groups were morbid obese (MO), obese (OB), overweight (OW), normal (N) and underweight (UW). The children were assigned to the groups according to the age- and sex-based body mass index (BMI) percentile values tabulated by the WHO. Data were evaluated by SPSS version 16, with p < 0.05 as the level of statistical significance. The tension index was evaluated in the groups above and below 10 years of age. This index differed significantly between the N and MetS groups as well as between the OW and MetS groups (p = 0.001) above 120 months. However, below 120 months, significant differences existed between MetS3 and MetS2 (p = 0.003) as well as between MetS3 and MO (p = 0.001). In comparison with the SBP and DBP values, the tension index values enabled a more clear-cut separation between the groups. The tension index was capable of discriminating MetS3 from MetS2 in the group composed of children aged 6-10 years; this was not possible in the older group of children, so the index was more informative for the first group. This study also confirmed that the 130 mm Hg and 85 mm Hg cut-off points for SBP and DBP, respectively, are too high to serve as MetS criteria in children, because the mean value of the tension index was calculated as 1.00 among MetS children. This finding shows that much lower cut-off points must be set for SBP and DBP for the diagnosis of pediatric MetS, especially for children under 10 years of age. This index may be recommended to discriminate MO, MetS2 and MetS3 in the 6-10 years age group, whose MetS diagnosis is problematic.
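
A tiny worked illustration of the index defined above; the second pair of readings is invented purely to show the arithmetic behind the argument, not taken from the study data.

```python
def tension_index(sbp, dbp):
    """DONMA tense index as defined in the abstract: (SBP + DBP) / 200."""
    return (sbp + dbp) / 200.0

# The adult MetS cut-offs of 130/85 mmHg correspond to an index of 1.075,
# whereas the MetS children in this study averaged an index of about 1.00.
print(tension_index(130, 85))   # 1.075
print(tension_index(118, 82))   # 1.00, an invented reading near the observed pediatric mean
```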

Keywords: blood pressure, children, index, metabolic syndrome, obesity

Procedia PDF Downloads 97
1217 Evaluating the Ability to Cycle in Cities Using Geographic Information Systems Tools: The Case Study of Greek Modern Cities

Authors: Christos Karolemeas, Avgi Vassi, Georgia Christodoulopoulou

Abstract:

Although over the past decades planning a cycle network has become an inseparable part of all transportation plans, there is still a lot of room for improvement in the way planning is done, in order to create safe and direct cycling networks that incorporate the parameters that positively influence one's decision to cycle. The aim of this article is to study, evaluate and visualize the bikeability of cities. The term is often used to mean 'the ability of a person to bike'; this study, however, adopts it in the sense of 'the ability of the urban landscape to be biked'. The methodology included assessing cities' accessibility by cycling, based on international literature and corresponding walkability methods, and the creation of a 'bikeability index'. Initially, a literature review was carried out to identify the factors that positively affect the use of bicycle infrastructure. Those factors were used to create the spatial index and quantitatively compare the city networks. Finally, the bikeability index was applied in two case studies: two Greek municipalities that, although similar in terms of land uses, population density and traffic congestion, are totally different in terms of geomorphology. The factors suggested by the international literature were (a) safety, (b) directness, (c) comfort and (d) the quality of the urban environment. Those factors were quantified through the following parameters: slope, junction density, traffic density, traffic speed, natural environment, built environment, activities coverage, centrality and accessibility to public transport stations. Each road section was graded for the above-mentioned parameters, and the overall grade shows the level of bicycle accessibility (low, medium, high). Each parameter, as well as the overall accessibility levels, was analyzed and visualized through Geographic Information Systems. This paper presents the bikeability index, its results, the problems that arose and the conclusions from its implementation, through a Strengths-Weaknesses-Opportunities-Threats (SWOT) analysis. The purpose of this index is to make it easy for researchers, practitioners, politicians, and stakeholders to quantify, visualize and understand which parts of the urban fabric are suitable for cycling.
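
A hedged sketch of how the per-segment grading could be computed and mapped with GIS tooling; the layer, column names, equal weights and class breaks are illustrative assumptions, not the index actually calibrated in the study.

```python
# Illustrative bikeability scoring of road segments; weights and thresholds are assumptions.
import geopandas as gpd
import pandas as pd

roads = gpd.read_file("road_network.geojson")   # hypothetical segment layer with attribute columns

criteria = ["slope", "junction_density", "traffic_density", "traffic_speed",
            "natural_env", "built_env", "activities", "centrality", "pt_access"]
weights = {c: 1.0 for c in criteria}            # equal weights assumed for the sketch

# Normalise each criterion to 0-1; criteria where a higher raw value is worse for cycling
# (e.g. slope, traffic speed) would be inverted before this step.
for c in criteria:
    col = roads[c]
    roads[c + "_norm"] = (col - col.min()) / (col.max() - col.min())
roads["bikeability"] = sum(weights[c] * roads[c + "_norm"] for c in criteria) / sum(weights.values())

# Three accessibility levels (low, medium, high) for visualisation in GIS
roads["level"] = pd.cut(roads["bikeability"], bins=[0, 0.33, 0.66, 1.0],
                        labels=["low", "medium", "high"], include_lowest=True)
roads[["geometry", "bikeability", "level"]].to_file("bikeability_map.geojson", driver="GeoJSON")
```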

Keywords: accessibility, cycling, green spaces, spatial data, urban environment

Procedia PDF Downloads 92
1216 Eco-Friendly Silicone/Graphene-Based Nanocomposites as Superhydrophobic Antifouling Coatings

Authors: Mohamed S. Selim, Nesreen A. Fatthallah, Shimaa A. Higazy, Hekmat R. Madian, Sherif A. El-Safty, Mohamed A. Shenashen

Abstract:

After the 2003 prohibition on employing TBT-based antifouling coatings, polysiloxane antifouling nano-coatings have gained popularity as environmentally friendly and cost-effective replacements. A series of non-toxic polydimethylsiloxane nanocomposites filled with graphene oxide (GO) nanosheets decorated with magnetite nanospheres (GO-Fe₃O₄ nanospheres) were developed and cured via a catalytic hydrosilation method. Various GO-Fe₃O₄ hybrid concentrations were mixed with the silicone resin via a solution casting technique to evaluate the structure-property connection. To generate the GO nanosheets, a modified Hummers method was applied. A simple co-precipitation method was used to make spherical magnetite particles under inert nitrogen. The hybrid GO-Fe₃O₄ composite fillers were developed by a simple ultrasonication method. A superhydrophobic PDMS/GO-Fe₃O₄ nanocomposite surface with micro/nano-roughness, reduced surface free energy (SFE), and high fouling-release (FR) efficiency was achieved. The physical, mechanical, and anticorrosive features of the virgin and GO-Fe₃O₄-filled nanocomposites were investigated. The synergistic effects of the well-dispersed GO-Fe₃O₄ hybrid on the water repellency and surface topological roughness of the PDMS/GO-Fe₃O₄ nanopaints were extensively studied. The addition of the GO-Fe₃O₄ hybrid fillers up to 1 wt.% could increase the coating's water contact angle (158° ± 2°), minimize its SFE to 12.06 mN/m, develop outstanding micro/nano-roughness, and improve its bulk mechanical and anticorrosion properties. Several microorganisms were employed to examine the fouling resistance of the coated specimens for 1 month. Silicone coatings filled with 1 wt.% GO-Fe₃O₄ nanofiller showed the lowest biodegradability% with all the tested microorganisms, whereas coatings with 5 wt.% GO-Fe₃O₄ nanofiller showed the highest biodegradability% with all the microorganisms. We successfully developed a non-toxic and low-cost nanostructured FR composite coating with high antifouling resistance, a reproducible superhydrophobic character, and enhanced service time for maritime navigation.

Keywords: silicone antifouling, environmentally friendly, nanocomposites, nanofillers, fouling repellency, hydrophobicity

Procedia PDF Downloads 73
1215 The Structure of Financial Regulation: The Regulators' Perspective

Authors: Mohamed Aljarallah, Mohamed Nurullah, George Saridakis

Abstract:

This paper's aims and objectives are to investigate how structural change in financial regulatory bodies affects financial supervision, and how regulators can design such a structure taking into account the central bank, the conduct-of-business regulator and the prudential regulator; it also considers the structure of international regulatory bodies and the barriers encountered. Five questions are to be answered: Should conduct-of-business and prudential regulation be separated? Should financial supervision and financial stability be separated? Should financial supervision sit under the central bank? To what extent should politicians intervene in changing the regulatory and supervisory structure? What should the regulatory and supervisory structure be when there is a financial conglomerate? A semi-structured interview design was applied. The research sample comprises financial regulators and supervisors from developed and emerging countries. Moreover, the financial regulators and supervisors had to be at a senior level in their organisations. Additionally, the senior financial regulators and supervisors come from different authorities and from around the world; for instance, one of the participants comes from the Bank for International Settlements, others come from the European Central Bank, and another from the Hong Kong Monetary Authority, among others. Such variety aims to fulfil the aims and objectives of the research and to cover the research questions. The analysis process starts with transcription of the interviews, followed by coding in NVivo software and thematic analysis to generate the main themes. The major findings of the study are as follows. First, organisational structure changes quite frequently if the mandates are not clear. Second, measuring structural change is difficult, which makes the whole process unclear. Third, effective coordination and communication are what regulators look for when they change the structure, and that requires openness, trust, and incentives. In addition, issues that appear during a crisis tend to be the reason why the structure changes. Also, the development of the market sometimes causes a change in the regulatory structure, and some structural change occurs simply because of international trends, fashion, or other countries' experiences. Furthermore, when the top management changes, the structure tends to change. Moreover, the structure changes due to political change, or because politicians try to show they are doing something. Finally, fear of being blamed can be a driver of structural change. In conclusion, this research provides insight from senior regulators and supervisors from fifty different countries, in order to gain a clear understanding of why the regulatory structure keeps changing from time to time, through a qualitative approach, namely semi-structured interviews.

Keywords: financial regulation bodies, financial regulatory structure, global financial regulation, financial crisis

Procedia PDF Downloads 116
1214 Forced-Choice Measurement Models of Behavioural, Social, and Emotional Skills: Theory, Research, and Development

Authors: Richard Roberts, Anna Kravtcova

Abstract:

Introduction: The realisation that personality can change over the course of a lifetime has led to a new companion model to the Big Five, the behavioural, emotional, and social skills approach (BESSA). BESSA hypothesizes that this set of skills represents how the individual is thinking, feeling, and behaving when the situation calls for it, as opposed to traits, which represent how someone tends to think, feel, and behave averaged across situations. The five major skill domains share parallels with the Big Five Factor (BFF) model: creativity and innovation (openness), self-management (conscientiousness), social engagement (extraversion), cooperation (agreeableness), and emotional resilience (emotional stability) skills. We point to noteworthy limitations in the current operationalisation of BESSA skills (i.e., via Likert-type items) and offer a different measurement approach: forced choice. Method: In this forced-choice paradigm, individuals were given three skill items (e.g., managing my time) and asked to select the one they believed they were “best at” and the one they were “worst at”. Thurstonian IRT models allow these responses to be placed on a normative scale. Two multivariate studies (N = 1178) were conducted with a 22-item forced-choice version of the BESSA, a published measure of the BFF, and various criteria. Findings: Confirmatory factor analysis of the forced-choice assessment showed acceptable model fit (RMSEA < 0.06), while reliability estimates were reasonable (around 0.70 for each construct). Convergent validity evidence was as predicted (correlations between 0.40 and 0.60 for corresponding BFF and BESSA constructs). Notable was the extent to which the forced-choice BESSA assessment improved upon test-criterion relationships over and above the BFF. For example, typical regression models find BFF personality accounting for 25% of the variance in life satisfaction scores; both studies showed incremental gains over the BFF exceeding 6% (i.e., BFF and BESSA together accounted for over 31% of the variance in both studies). Discussion: Forced-choice measurement models offer the promise of creating equated test forms that may unequivocally measure skill gains and are less prone to fakability and reference bias effects. Implications for practitioners are discussed, especially those interested in selection, succession planning, and training and development. We also discuss how the forced-choice method can be applied to other constructs like emotional immunity, cross-cultural competence, and self-estimates of cognitive ability.

Keywords: Big Five, forced-choice method, BFF, methods of measurements

Procedia PDF Downloads 68
1213 Obtainment of Systems with Efavirenz and Lamellar Double Hydroxide as an Alternative for Solubility Improvement of the Drug

Authors: Danilo A. F. Fontes, Magaly A. M.Lyra, Maria L. C. Moura, Leslie R. M. Ferraz, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim, Giovanna C. R. M. Schver, Ping I. Lee, Severino Alves-Júnior, José L. Soares-Sobrinho, Pedro J. Rolim-Neto

Abstract:

Efavirenz (EFV) is a first-choice drug in antiretroviral therapy, with high efficacy in the treatment of infection by the Human Immunodeficiency Virus, which causes Acquired Immune Deficiency Syndrome (AIDS). EFV has low solubility in water, resulting in a decrease in its dissolution rate and, consequently, in its bioavailability. Among the technological alternatives to increase solubility, Lamellar Double Hydroxides (LDH) have been applied in the development of systems with poorly water-soluble drugs. The use of analytical techniques such as X-Ray Diffraction (XRD), Infrared Spectroscopy (IR) and Differential Scanning Calorimetry (DSC) allowed the elucidation of the drug's interaction with the lamellar compounds. The objective of this work was to develop and characterize binary systems of EFV and LDH in order to increase the solubility of the drug. The LDH-CaAl was synthesized by co-precipitation from salt solutions of calcium nitrate and aluminum nitrate in basic medium. The EFV-LDH systems and their physical mixtures (PM) were obtained at different concentrations (5-60% of EFV) using the solvent technique described by Takahashi & Yamaguchi (1991). The characterization of the systems and the PMs was performed by XRD, IR, DSC and dissolution testing under non-sink conditions. The results showed improvements in the solubility of EFV when associated with LDH, due to a possible change in its crystal structure and the formation of an amorphous material. From the DSC results, the endothermic peak at 173 °C, the temperature corresponding to the melting of EFV in its crystalline form, was present in the PM results; for the EFV-LDH systems (with 5, 10 and 30% drug loading), this peak was not observed. The XRD profiles of the PMs showed well-defined peaks for EFV, whereas the XRD profiles of all the systems showed complete attenuation of the characteristic peaks of the crystalline form of EFV. The IR results of the PMs showed the appearance of one band and the overlap of other bands, while the IR results of the systems with 5, 10 and 30% drug loading showed the disappearance of some bands and a reduced intensity of a few others. The dissolution test under non-sink conditions showed that the systems with 5, 10 and 30% drug loading promoted a great increase in the solubility of EFV, but the system with 10% drug loading was the only one able to keep a substantial amount of drug in solution at different pHs.

Keywords: Efavirenz, Lamellar Double Hydroxides, Pharmaceutical Technology, Solubility

Procedia PDF Downloads 550
1212 Effects of Roasting as Preservative Method on Food Value of the Runner Groundnuts, Arachis hypogaea

Authors: M. Y. Maila, H. P. Makhubele

Abstract:

Roasting is one of the oldest preservation methods used for foods such as nuts and seeds. It is a process by which heat is applied to dry foodstuffs without the use of oil or water as a carrier. Groundnut seeds, also known as peanuts when sun-dried or roasted, are among the oldest oil crops and are mostly consumed as a roasted snack in many parts of South Africa. However, roasting can denature proteins, destroy amino acids, decrease nutritive value and induce undesirable chemical changes in the final product. The aim of this study, therefore, was to evaluate the effect of various roasting times on the food value of runner groundnut seeds. A constant temperature of 160 °C and various time intervals (20, 30, 40, 50 and 60 min) were used for roasting groundnut seeds in an oven. The roasted groundnut seeds were then cooled and milled to flour. The milled sun-dried, raw groundnuts served as reference. The proximate analysis (moisture, energy and crude fat) was performed using standard methods. The antioxidant content was determined using HPLC. Mineral (cobalt, chromium, silicon and iron) contents were determined by first digesting the ash of sun-dried and roasted seed samples in 3 M hydrochloric acid and then measuring by Atomic Absorption Spectrometry. All results were subjected to ANOVA through SAS software. Relative to the reference, roasting time significantly (p ≤ 0.05) reduced the moisture (71%–88%), energy (74%) and crude fat (5%–64%) of the runner groundnut seeds, whereas the antioxidant content was significantly (p ≤ 0.05) increased (35%–72%) with increasing roasting time. Similarly, the tested mineral contents of the roasted runner groundnut seeds were also significantly (p ≤ 0.05) reduced at all roasting times: cobalt (21%–83%), chromium (48%–106%) and silicon (58%–77%). However, the iron content was not significantly affected. Generally, the tested runner groundnut seeds had higher food value in the raw state than in the roasted state, except for the antioxidant content. Moisture is a critical factor affecting the shelf life, texture and flavor of the final product. Loss of moisture ensures prolonged shelf life, which contributes to the stability of the roasted peanuts. Also, the increased antioxidant content of roasted groundnuts adds to their health-promoting value. In conclusion, the overall reduction in the proximate and mineral contents of the runner groundnut seeds due to roasting is sufficient to suggest that roasting time influences the food value and shelf life of the final product.
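
As an illustration of the analysis step ("All results were subjected to ANOVA through SAS software"), the sketch below runs an equivalent one-way ANOVA in Python on made-up replicate values; the numbers, group sizes, and units are hypothetical and serve only to show how a roasting-time effect would be tested.

```python
# Illustrative sketch only (hypothetical replicate values, not the study's data):
# one-way ANOVA testing whether roasting time affects a mineral content.
from scipy import stats

# cobalt content (mg/kg, hypothetical), three replicates per roasting treatment
raw    = [4.8, 5.1, 4.9]   # sun-dried reference
min_20 = [3.9, 4.0, 3.8]   # 20 min at 160 °C
min_40 = [2.1, 2.3, 2.0]   # 40 min at 160 °C
min_60 = [0.9, 1.0, 0.8]   # 60 min at 160 °C

f_stat, p_value = stats.f_oneway(raw, min_20, min_40, min_60)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("Roasting time has a significant effect at p <= 0.05.")
```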

Keywords: dry roasting, legume, oil source, peanuts

Procedia PDF Downloads 256
1211 Maturity Level of Knowledge Management in Whole Life Costing in the UK Construction Industry: An Empirical Study

Authors: Ndibarefinia Tobin

Abstract:

The UK construction industry has been under pressure for many years to produce economical buildings which offer value for money, not only during the construction phase but, more importantly, over the full life of the building. Whole life costing is an economic analysis tool that takes into account the total cost of investment in, ownership and operation of, and subsequent disposal of the product or system to which the method is applied. In spite of its importance, the practice is still crippled by the lack of tangible evidence, ‘know-how’ skills and knowledge of the practice, i.e. the lack of professionals with the knowledge of and training in the use of the practice in construction projects. This situation is compounded by the absence of available data on whole life costing from relevant projects, the lack of data collection mechanisms and so on. The aforementioned problems have forced many construction organisations to adopt project enhancement initiatives to boost their performance in the use of whole life costing techniques, so as to produce economical buildings which offer value for money during the construction stage as well as over the whole life of the building or asset. The management of knowledge in whole life costing is one of these project enhancement initiatives, and it is becoming imperative for the performance and sustainability of an organisation. Procuring building projects using the whole life costing technique is heavily reliant on the knowledge, experience, ideas and skills of workers, which come from many sources including other individuals, electronic media and documents. Given the diversity of knowledge, capabilities and skills of employees across an organisation, it is important that they are directed and coordinated efficiently so as to capture, retrieve and share knowledge in order to improve the performance of the organisation. The implementation of the knowledge management concept reaches different levels in each organisation. Measuring the maturity level of knowledge management in whole life costing practice will paint a comprehensible picture of how knowledge is managed in construction organisations. Purpose: The purpose of this study is to identify the knowledge management maturity of UK construction organisations adopting whole life costing in construction projects. Design/methodology/approach: This study adopted a survey method and was conducted by distributing questionnaires to large construction companies that implement knowledge management activities in whole life costing practice in construction projects. Four levels of knowledge management maturity were proposed in this study. Findings: The results show that 34 contractors were at the practised level, 26 contractors at the managed level and 12 contractors at the continuously improved level.

Keywords: knowledge management, whole life costing, construction industry, knowledge

Procedia PDF Downloads 219
1210 Placebo Analgesia in Older Age: Evidence from Event-Related Potentials

Authors: Angelika Dierolf, K. Rischer, A. Gonzalez-Roldan, P. Montoya, F. Anton, M. Van der Meulen

Abstract:

Placebo analgesia is a powerful cognitive endogenous pain modulation mechanism with high relevance for pain treatment. Older people would especially benefit from non-pharmacological pain interventions, since this age group is disproportionately affected by acute and chronic pain, while pharmacological treatments are less suitable due to polypharmacy and age-related changes in drug metabolism. Although aging is known to affect neurobiological and physiological aspects of pain perception, such as changes in pain threshold and pain tolerance, its effects on cognitive pain modulation strategies, including placebo analgesia, have hardly been investigated so far. In the present study, we are assessing placebo analgesia in 35 older adults (60 years and older) and 35 younger adults (between 18 and 35 years). Acute pain was induced with short transdermal electrical pulses to the inner forearm, using a concentric stimulating electrode. Stimulation intensities were individually adjusted to each participant’s threshold. Next to the stimulation site, we applied sham transcutaneous electrical nerve stimulation (TENS). Participants were informed that sometimes the TENS device would be switched on (placebo condition) and sometimes it would be switched off (control condition). In reality, it was always switched off. Participants received alternating blocks of painful stimuli in the placebo and control conditions and were asked to rate the intensity and unpleasantness of each stimulus on a visual analog scale (VAS). Pain-related evoked potentials were recorded with a 64-channel EEG. Preliminary results show a reduced placebo effect in older compared to younger adults in both the behavioral and neurophysiological data. Older people experienced less subjective pain reduction under sham TENS treatment than younger adults, as evidenced by the VAS ratings. The N1 and P2 event-related potential components were generally reduced in the older group. While younger adults showed reduced N1 and P2 amplitudes under sham TENS treatment, this reduction was considerably smaller in older people. The reduced placebo effect in the older group suggests that cognitive pain modulation is altered in aging and may at least partly explain why older adults experience more pain. Our results highlight the need for a better understanding of the efficacy of non-pharmacological pain treatments in older adults and how these can be optimized to meet the specific requirements of this population.
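
The ERP quantification described above (trial averaging and extraction of N1/P2 amplitudes) can be sketched as follows; the sampling rate, latency windows, and placeholder data are assumptions for illustration and do not reproduce the study's actual EEG pipeline.

```python
# Minimal sketch of the ERP quantification step, assuming epochs have already
# been segmented from the 64-channel EEG (hypothetical sampling rate and windows).
import numpy as np

fs = 500                                   # Hz, assumed sampling rate
t = np.arange(-0.1, 0.6, 1 / fs)           # epoch from -100 ms to 600 ms
epochs = np.random.randn(80, t.size)       # placeholder: 80 trials x samples at one electrode

evoked = epochs.mean(axis=0)               # average across trials -> event-related potential

def peak_amplitude(window, sign):
    """Return the most extreme evoked amplitude within a latency window (s)."""
    mask = (t >= window[0]) & (t <= window[1])
    segment = evoked[mask]
    return segment.min() if sign == "neg" else segment.max()

n1 = peak_amplitude((0.08, 0.15), "neg")   # N1: negative peak, assumed 80-150 ms window
p2 = peak_amplitude((0.15, 0.30), "pos")   # P2: positive peak, assumed 150-300 ms window
print(f"N1 = {n1:.2f} µV, P2 = {p2:.2f} µV")
```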

Keywords: placebo analgesia, aging, acute pain, TENS, EEG

Procedia PDF Downloads 115
1209 Cytokine Profiling in Cultured Endometrial Cells after Hormonal Treatment

Authors: Mark Gavriel, Ariel J. Jaffa, Dan Grisaru, David Elad

Abstract:

The human endometrium-myometrium interface (EMI) is the uterine inner barrier without a separating layer. It is composed of endometrial epithelial cells (EEC) and endometrial stromal cells (ESC) in the endometrium and myometrial smooth muscle cells (MSMC) in the myometrium. The EMI undergoes structural remodeling during the menstrual cycle, which is essential for human reproduction. Recently, we co-cultured a layer-by-layer in vitro model of EEC, ESC and MSMC on a synthetic membrane for mechanobiology experiments. We also treated the model with progesterone and β-estradiol in order to mimic the in vivo receptive uterus. In the present study, we analyzed the cytokine profile of a single EEC layer of the hormonally treated in vitro model of the EMI. The methodology of this research is based on simple tissue engineering. First, we cultured commercial EEC (RL95-2, ATCC® CRL-1671™) in a 24-well plate. Then, we applied a hormonal stimulation protocol with 17-β-estradiol and progesterone in time-dependent concentrations according to human physiology, mimicking the menstrual cycle. We collected cell supernatant samples for the control, pre-ovulation, ovulation and post-ovulation periods for analysis of the secreted proteins and cytokines. The cytokine profiling was performed using the Proteome Profiler Human XL Cytokine Array Kit (R&D Systems, Inc., USA), which can detect 105 human soluble cytokines. The relative quantification of all the cytokines will be analyzed using xMAP – LUMINEX. We conducted an exploratory screening with the four Proteome Profiler membranes. We processed the images, quantified the spot intensities and normalized these values against the negative control and reference spots on each membrane. Analysis of the relative quantities that reflected a change higher than 5% relative to the control spots of the kit clearly showed significant changes in the cytokine levels of inflammation and angiogenesis pathways. Analysis of tissue-engineered models of the uterine wall will enable deeper investigation of molecular and biomechanical aspects of early reproductive stages (e.g., the window of implantation) or the development of pathologies.
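
A short sketch of the spot-quantification step described above (background correction by the negative-control spots, normalization by the reference spots, then flagging changes above 5%) is given below; the array layout and intensity values are hypothetical.

```python
# Hedged sketch of the membrane quantification step: background-correct spot
# intensities with negative-control spots and scale by the reference spots
# of the membrane (values and layout are placeholders).
import numpy as np

spot_intensity = np.array([1200., 860., 450., 300.])   # measured cytokine spots (a.u.)
negative_ctrl = np.array([95., 105., 100.])             # negative-control spots
reference = np.array([2400., 2350., 2450.])             # reference spots on this membrane

corrected = spot_intensity - negative_ctrl.mean()       # background subtraction
normalized = corrected / reference.mean()               # scale to the membrane reference level

# flag changes larger than 5% relative to the control spots, as in the analysis above
flagged = normalized > 0.05
print(normalized.round(3), flagged)
```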

Keywords: tissue-engineering, hormonal stimuli, reproduction, multi-layer uterine model, progesterone, β-estradiol, receptive uterine model, fertility

Procedia PDF Downloads 99
1208 Comparing Two Unmanned Aerial Systems in Determining Elevation at the Field Scale

Authors: Brock Buckingham, Zhe Lin, Wenxuan Guo

Abstract:

Accurate elevation data is critical in deriving topographic attributes for the precision management of crop inputs, especially water and nutrients. Traditional ground-based elevation data acquisition is time-consuming, labor-intensive, and often inconvenient at the field scale. Various unmanned aerial systems (UAS) provide the capability of generating digital elevation data from high-resolution images. The objective of this study was to compare the performance of two UAS with different global positioning system (GPS) receivers in determining elevation at the field scale. A DJI Phantom 4 Pro and a DJI Phantom 4 RTK (real-time kinematic) were used to acquire images at three heights: 40 m, 80 m, and 120 m above ground. Forty ground control panels were placed in the field, and their geographic coordinates were determined using an RTK GPS survey unit. For each image acquisition with a UAS at a particular height, two elevation datasets were generated using the Pix4D stitching software: a calibrated dataset using the surveyed coordinates of the ground control panels and an uncalibrated dataset without them. Elevation values for each panel derived from the elevation model of each dataset were compared to the corresponding surveyed coordinates of the ground control panels. The coefficient of determination (R²) and the root mean squared error (RMSE) were used as evaluation metrics to assess the performance of each image acquisition scenario. RMSE values for the uncalibrated elevation datasets were 26.613 m, 31.141 m, and 25.135 m for images acquired at 120 m, 80 m, and 40 m, respectively, using the Phantom 4 Pro UAS. With calibration for the same UAS, the accuracies were significantly improved, with RMSE values of 0.161 m, 0.165 m, and 0.030 m, respectively. The best results showed an RMSE of 0.032 m and an R² of 0.998 for the calibrated dataset generated using the Phantom 4 RTK UAS at 40 m height. The accuracy of elevation determination decreased as the flight height increased for both UAS, with RMSE values greater than 0.160 m for the datasets acquired at 80 m and 120 m. The results of this study show that calibration with ground control panels improves the accuracy of elevation determination, especially for the UAS with a regular GPS receiver. The Phantom 4 Pro provides accurate elevation data for the 40 m dataset when a substantial number of surveyed ground control panels is used. The Phantom 4 RTK UAS provides accurate elevation at 40 m without calibration for practical precision agriculture applications. This study provides valuable information on selecting appropriate UAS and flight heights for determining elevation for precision agriculture applications.
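
The evaluation metrics reported above (RMSE and R² between model-derived and surveyed panel elevations) can be computed as in the following sketch; the elevation values shown are placeholders, not the study's measurements.

```python
# Minimal sketch of the accuracy metrics: RMSE and R² between elevations derived
# from the UAS elevation model and the RTK-surveyed ground control panels.
import numpy as np

surveyed = np.array([812.41, 812.95, 813.30, 814.02, 814.55])   # m, GCP survey (placeholder)
derived = np.array([812.38, 812.99, 813.27, 814.08, 814.50])    # m, from UAS model (placeholder)

rmse = np.sqrt(np.mean((derived - surveyed) ** 2))
ss_res = np.sum((surveyed - derived) ** 2)
ss_tot = np.sum((surveyed - surveyed.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"RMSE = {rmse:.3f} m, R² = {r2:.3f}")
```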

Keywords: unmanned aerial system, elevation, precision agriculture, real-time kinematic (RTK)

Procedia PDF Downloads 141
1207 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms

Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson

Abstract:

This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total-field magnetometer arrays. Our research has focused on the development of a vertically integrated suite of platforms, all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system, providing immediate access to data and meta-data for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
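
As a minimal illustration of the dipole modeling referred to above, the sketch below evaluates the standard point-dipole forward model that such classification schemes rely on; it is a generic magnetostatics formula, not the authors' processing code, and the target geometry and moment are assumptions.

```python
# Sketch of the point-dipole forward model (standard magnetostatics, SI units).
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability, T*m/A

def dipole_field(r, m):
    """Magnetic field (T) at position r (m) from a dipole of moment m (A*m^2) at the origin."""
    r = np.asarray(r, dtype=float)
    m = np.asarray(m, dtype=float)
    rn = np.linalg.norm(r)
    r_hat = r / rn
    return MU0 / (4 * np.pi) * (3 * r_hat * np.dot(m, r_hat) - m) / rn**3

# total-field anomaly seen by a scalar magnetometer 2 m above a hypothetical buried target
b_anomaly = dipole_field([0.5, 0.0, 2.0], m=[0.0, 0.0, 5.0])
print(np.linalg.norm(b_anomaly) * 1e9, "nT")
```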

Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection

Procedia PDF Downloads 442
1206 The Removal of Commonly Used Pesticides from Wastewater Using Golden Activated Charcoal

Authors: Saad Mohamed Elsaid Onaizah

Abstract:

One of the reasons for the intensive use of pesticides is to protect agricultural crops and orchards from pests and agricultural worms. The period of time that pesticides stay inside the soil is estimated at about 2 to 12 weeks. Perhaps the most important cause of groundwater pollution is the easy leakage of these harmful pesticides from the soil into the aquifers. This research aims to find the best way to use activated charcoal treated with gold nitrate solution for the removal of deadly pesticides from aqueous solution by the adsorption phenomenon. The pesticides most used in Egypt were selected, namely Malathion, Methomyl, Abamectin and Thiamethoxam. Activated charcoal doped with gold ions was prepared by applying chemical and thermal treatments to activated charcoal using gold nitrate solution. Adsorption of the studied pesticides onto activated carbon/Au occurred mainly by chemical adsorption, forming complexes with the gold metal immobilised on the activated carbon surface; the gold atoms were also considered to act as a catalyst for cracking the pesticide molecules. Gold-activated charcoal is a low-cost material because only very low concentrations of gold nitrate solution are used. The great ability of the activated charcoal to remove the selected pesticides is attributed to the positive charge of the gold ion, in addition to other active groups such as oxygen-containing functional groups and lignin-cellulose. The presence of pores of different sizes on the surface of the activated charcoal is the driving force for the good adsorption efficiency in removing the pesticides under study. The surface area of the prepared charcoal as well as its active groups were determined using infrared spectroscopy and scanning electron microscopy. Several factors affecting the adsorption capacity of the activated charcoal were investigated in order to reach the highest adsorption capacity, such as the weight of the charcoal, the concentration of the pesticide solution, the duration of the experiment, and the pH. The batch adsorption study showed that maximum adsorption of the selected pesticides was reached at a contact time of 80 minutes and pH 7.70. These promising results were confirmed by equilibrium, kinetic and thermodynamic studies of the effects of the various operating factors; the Langmuir model described the effectiveness of the adsorbent material, with adsorption capacities higher than those of most other adsorbents, supporting the practical application of the developed system.
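
As a sketch of how the Langmuir analysis mentioned above is typically carried out, the following fits the Langmuir isotherm, q = q_max·K_L·C/(1 + K_L·C), to batch equilibrium data; the concentration and uptake values below are illustrative placeholders rather than the study's measurements.

```python
# Hedged sketch: fitting the Langmuir isotherm to batch equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Adsorbed amount (mg/g) as a function of equilibrium concentration (mg/L)."""
    return q_max * k_l * c_eq / (1 + k_l * c_eq)

c_eq = np.array([1.0, 2.5, 5.0, 10.0, 20.0, 40.0])     # mg/L (placeholder data)
q_eq = np.array([8.2, 16.5, 26.0, 36.4, 44.1, 48.9])   # mg/g (placeholder data)

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q_eq, p0=[50.0, 0.1])
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")
```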

Keywords: wastewater, pesticide pollution, adsorption, activated carbon

Procedia PDF Downloads 43
1205 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This situation of relative underdevelopment of this potentially very important range of the electromagnetic spectrum is known as the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled and continuously radiating terahertz radiation sources. Therefore, the development of new techniques serving this purpose, as well as various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest of suitable physical systems as the major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g. laser) field, can radiate continuously at a much lower (e.g. terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent, non-equal diagonal matrix elements. This assumption contradicts the conventional assumption routinely made in quantum optics that only the non-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible ways to experimentally observe and practically implement the predicted effect are discussed as well.
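
The key assumption can be written compactly in standard two-level notation (a notational sketch, not the authors' exact formulation): the dipole moment operator has permanent, unequal diagonal elements in addition to the usual transition element.

```latex
% Notational sketch (standard two-level notation; not the authors' exact formulation)
\[
\hat{d} =
\begin{pmatrix}
d_{11} & d_{12}\\
d_{12}^{*} & d_{22}
\end{pmatrix},
\qquad d_{11} \neq d_{22},
\]
% versus the conventional quantum-optics assumption d_{11} = d_{22} = 0,
% which keeps only the off-diagonal transition dipole d_{12}.
```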

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot

Procedia PDF Downloads 243
1204 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information—for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data is used. Usually, however, the proportion of complete data is rather small, which leads to most of the information being neglected. Moreover, the complete data may be strongly distorted, i.e., not representative of the whole sample. In addition, the reason that data is missing might itself contain information, which is ignored by that approach. An interesting issue is, therefore, whether, for economic analyses such as the one at hand, there is added value in using the whole data set with imputed missing values compared to using the usually small proportion of complete data (baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, or neural network techniques, are applied. By training the model iteratively on the imputed data and, thereby, including the information of all data in the model, the distortion of the first training set—the complete data—vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After having found the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
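
The two computational steps described above, imputing missing subscription fields and deriving a survival function for willingness to pay, can be sketched as follows; the toy data, the choice of a k-nearest-neighbours imputer, and the column layout are illustrative assumptions only.

```python
# Minimal sketch (toy data): (1) impute missing fields of search subscriptions with
# an unsupervised method, (2) turn the willingness-to-pay column into an empirical
# survival function, i.e. the share of searchers willing to pay more than a given price.
import numpy as np
from sklearn.impute import KNNImputer

# columns: desired rooms, maximum price (CHF); NaN = not specified by the user
subscriptions = np.array([
    [3.0, 2500.0],
    [4.0, np.nan],
    [np.nan, 1800.0],
    [2.0, 1600.0],
    [3.5, np.nan],
])
imputed = KNNImputer(n_neighbors=2).fit_transform(subscriptions)

prices = np.sort(imputed[:, 1])
survival = 1.0 - np.arange(1, prices.size + 1) / prices.size   # P(willingness to pay > p)
for p, s in zip(prices, survival):
    print(f"share willing to pay more than {p:.0f} CHF: {s:.2f}")
```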

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 261