Search results for: Alkire-Foster methodology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5348

4688 Carbon Footprint of Educational Establishments: The Case of the University of Alicante

Authors: Maria R. Mula-Molina, Juan A. Ferriz-Papi

Abstract:

Environmental concerns are gaining higher priority on the sustainability agendas of educational establishments. This matters not only for an institution's environmental performance in its own right, but also because it presents a model for its students. At the same time, universities play an important role in research and in innovative solutions for measuring, analyzing, and reducing the environmental impacts of different activities. Assessment and decision-making during the operation of educational establishments depend on robust indicators. The carbon footprint is a developing sustainability indicator that helps capture the direct impact on climate change, but it is not easy to implement: the large number of factors involved increases its complexity, such as different simultaneous uses (research, lecturing, administration), different users (students, staff), and different levels of activity (lecturing, exam, or holiday periods). The aim of this research is to develop a simplified methodology for calculating and comparing carbon emissions per user on a university campus, considering the two main components of the carbon account: building operations and transport. Different methodologies applied on other Spanish university campuses are analyzed and compared to obtain a final proposal suitable for this type of establishment. First, the building-operation calculation considers the different uses and the energy sources consumed. Second, for the transport calculation, the different users and working hours are treated separately, together with users' origins and travel preferences; each transport mode is assigned a conversion factor according to the carbon emissions it produces. The final result is expressed as average carbon emissions per user. The methodology is applied as a case study to the University of Alicante campus in San Vicente del Raspeig (Spain), where the carbon footprint is calculated.
While building-operation consumption is known per building and per month, the same is not true for transport: only one survey of users' transport habits was carried out, in 2009/2010, so no evolution of results can be shown in this case. Moreover, building operations are not split by use, as building services are not monitored separately. The results are analyzed in depth considering all factors and limitations, and they are compared with estimates from other campuses. Finally, the applicability of the presented methodology is also studied. The recommendations drawn from this study aim to enhance carbon-emission monitoring and control, and a Carbon Action Plan emerges as a primary measure to be developed. Beyond that, the application at the University of Alicante campus can not only further refine the methodology itself but also make its adoption by other educational establishments easier, while retaining considerable flexibility to cater for their specific requirements.
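The per-user accounting described above (building energy by source, plus transport by mode and conversion factor, averaged over users) can be sketched as follows. All emission factors, consumption figures, and trip counts here are hypothetical illustrations, not data from the study.

```python
# Hypothetical sketch of the per-user carbon accounting described above.
# All factors and activity data are illustrative, not values from the study.

ENERGY_FACTORS = {"electricity_kWh": 0.25, "natural_gas_kWh": 0.20}  # kg CO2e per kWh
TRANSPORT_FACTORS = {"car": 0.17, "bus": 0.07, "bicycle": 0.0}       # kg CO2e per passenger-km

def building_emissions(consumption):
    """consumption: {energy_source: kWh consumed per year, campus-wide}."""
    return sum(kwh * ENERGY_FACTORS[src] for src, kwh in consumption.items())

def transport_emissions(trips):
    """trips: list of (mode, km_per_trip, trips_per_year), campus-wide."""
    return sum(TRANSPORT_FACTORS[mode] * km * n for mode, km, n in trips)

def footprint_per_user(consumption, trips, n_users):
    """Average annual kg CO2e per campus user."""
    total = building_emissions(consumption) + transport_emissions(trips)
    return total / n_users

campus = {"electricity_kWh": 1_000_000, "natural_gas_kWh": 400_000}
commutes = [("car", 10, 200), ("bus", 10, 200)]
print(round(footprint_per_user(campus, commutes, 20_000), 2))  # 16.52
```

In practice the conversion factors would come from national or sectoral emission-factor databases, and the trip data from the kind of user survey mentioned above.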

Keywords: building operations, built environment, carbon footprint, climate change, transport

Procedia PDF Downloads 295
4687 A Critical Reflection of Ableist Methodologies: Approaching Interviews and Go-Along Interviews

Authors: Hana Porkertová, Pavel Doboš

Abstract:

Based on a research project studying the experience of visually disabled people with urban space in the Czech Republic, this conference contribution discusses the limits of social-science methodologies used in sociology and human geography. It draws on actor-network theory, which assumes that science does not merely describe reality but produces it. Methodology connects theory, research questions, the ways of answering them (methods), and results; a research design built on ableist methodologies can therefore produce ableist realities. It was thus necessary to adjust the methods so that they could mediate blind experience to the scientific community without reproducing ableism. The researchers faced multiple challenges, ranging from questionable validity to the question of how to research experience that differs from that of able-bodied researchers. The first step was finding a suitable theory that could serve as an analytical tool demonstrating space and blind experience as multiple, dynamic, and mutually constructed; such a theory could offer a range of potentially productive methods and research questions and yield critically reflected results. Poststructural theory, mainly Deleuze-Guattarian philosophy, was chosen, and two methods were used: interviews and go-along interviews, both adjusted to explore blind experience. Despite thorough preparation of these methods, new difficulties kept emerging, exposing the ableist character of scientific knowledge. From the beginning of data collection, the researchers agreed to work in teams, with slightly different roles for each researcher, which was especially significant during go-along interviews. In some cases, the anticipations of the researchers and participants differed, leading to unexpected and potentially dangerous situations. These were caused not only by differences between the scientific and lay communities but also by differences between able-bodied and disabled people.
Researchers were sometimes assigned to assistants' roles, and this new position of doing research together required further negotiation, which also opened various ethical questions.

Keywords: ableist methodology, blind experience, go-along interviews, research ethics, scientific knowledge

Procedia PDF Downloads 165
4686 CdS Quantum Dots as Fluorescent Probes for Detection of Naphthalene

Authors: Zhengyu Yan, Yan Yu, Jianqiu Chen

Abstract:

A novel sensing system has been designed for naphthalene detection based on the quenched fluorescence signal of CdS quantum dots. The fluorescence intensity of the system decreased significantly after the CdS quantum dots were added to the water-pollution model, owing to a static fluorescence-quenching mechanism. The facile methodology offers convenient operation and low analysis cost, with recovery rates of 97.43%-103.2%, and therefore has promising application prospects.
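Quenching-based detection of this kind is commonly quantified with the Stern-Volmer relation F0/F = 1 + Ksv[Q]. The sketch below shows how a concentration and a recovery rate could be computed under that assumption; the quenching constant and concentrations are hypothetical, since the abstract does not give calibration details.

```python
# Illustrative Stern-Volmer treatment of quenching-based detection.
# K_SV and all concentrations are hypothetical, not values from the study.

K_SV = 1.2e4  # L/mol, hypothetical Stern-Volmer quenching constant

def quenched_intensity(f0, conc_mol_l):
    """Fluorescence after quenching: F = F0 / (1 + Ksv*[Q])."""
    return f0 / (1 + K_SV * conc_mol_l)

def estimate_concentration(f0, f):
    """Invert the Stern-Volmer relation to recover the analyte concentration."""
    return (f0 / f - 1) / K_SV

spiked = 5e-5   # mol/L naphthalene spiked into a water sample
f0 = 1000.0     # intensity without quencher (arbitrary units)
f = quenched_intensity(f0, spiked)
recovered = estimate_concentration(f0, f)
recovery_pct = 100 * recovered / spiked
print(round(recovery_pct, 2))  # 100.0 for noiseless synthetic data
```

A real recovery rate in the 97-103% range, as reported, reflects measurement noise and matrix effects absent from this noiseless sketch.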

Keywords: CdS quantum dots, modification, detection, naphthalene

Procedia PDF Downloads 493
4685 Pleated Surfaces: Experimentation and Examples

Authors: Maritza Granados Manjarrés

Abstract:

This paper is part of an investigation project that experiments with flat surfaces in order to pleat them using tessellations and flat-origami conditions. The aim of the investigation is eventually to propose not only a methodology for pleating such surfaces but also a structural system that allows them to work as building skins. This stage of the investigation emphasizes experimentation with flat surfaces and different kinds of folding patterns, and it shows the many examples that can be produced from this experimentation.
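The flat-origami conditions mentioned above can be checked computationally. A minimal sketch, assuming the classical Kawasaki criterion (alternating sector angles around an interior vertex sum to 180 degrees) and Maekawa criterion (mountain and valley crease counts differ by exactly two) are the conditions intended:

```python
# Sketch of two classical flat-foldability checks for pleat patterns.

def kawasaki_flat(angles, tol=1e-9):
    """angles: consecutive sector angles (degrees) around an interior vertex.
    Flat-foldable iff the two alternating angle sums are both 180."""
    if abs(sum(angles) - 360) > tol:
        return False
    return abs(sum(angles[0::2]) - sum(angles[1::2])) <= tol

def maekawa_flat(mountains, valleys):
    """Mountain and valley creases at a flat-foldable vertex differ by two."""
    return abs(mountains - valleys) == 2

# A vertex with alternating sums 180 + 180: foldable.
print(kawasaki_flat([45, 45, 90, 45, 45, 90]))  # True
print(maekawa_flat(mountains=4, valleys=2))     # True
```

These are necessary per-vertex conditions; global flat-foldability of a whole tessellation additionally requires a consistent layer ordering.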

Keywords: flat origami, fold, space, surface

Procedia PDF Downloads 291
4684 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. 
Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
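The graph representation and probabilistic sampling steps might be sketched as below, with synthetic transactions standing in for data read from an Ethereum node. For brevity this uses an address-to-address adjacency, a common simplification that differs from the transaction-as-node representation described above.

```python
# Minimal sketch of graph building plus uniform probabilistic sampling.
# Transactions are synthetic; a real pipeline would read them from a node.
import random
from collections import defaultdict

random.seed(42)

# Synthetic transactions: (block_number, sender, receiver)
txs = [(b, f"addr{random.randrange(50)}", f"addr{random.randrange(50)}")
       for b in range(10_000)]

def sample_transactions(transactions, fraction):
    """Uniform probabilistic sample keeping roughly `fraction` of the data."""
    return [t for t in transactions if random.random() < fraction]

def build_graph(transactions):
    """Adjacency map: sender -> list of (receiver, block)."""
    graph = defaultdict(list)
    for block, sender, receiver in transactions:
        graph[sender].append((receiver, block))
    return graph

sampled = sample_transactions(txs, 0.10)
graph = build_graph(sampled)
print(len(sampled) < len(txs), len(graph) > 0)  # True True
```

A stratified or importance-weighted sampler would better preserve heavy-tailed transaction-volume statistics than the uniform sampler shown here.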

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 76
4683 Fully Eulerian Finite Element Methodology for the Numerical Modeling of the Dynamics of Heart Valves

Authors: Aymen Laadhari

Abstract:

During the last decade, an increasing number of contributions have been made in the fields of scientific computing and numerical methodologies applied to the study of hemodynamics in the heart. In contrast, the numerical aspects concerning the interaction of pulsatile blood flow with highly deformable thin leaflets have been much less explored. This coupled problem remains extremely challenging, and the numerical difficulties include, e.g., the resolution of the full fluid-structure interaction problem with large deformations of extremely thin leaflets, substantial mesh deformations, high transvalvular pressure discontinuities, and contact between leaflets. Although a Lagrangian description of the structural motion and strain measures is natural, many numerical complexities can arise when studying large deformations of thin structures. Eulerian approaches represent a promising alternative for readily modeling large deformations and handling contact issues. We present a fully Eulerian finite element methodology tailored for the simulation of pulsatile blood flow in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets. Our method enables the use of a fluid solver on a fixed mesh, while still easily modeling the mechanical properties of the valve. We introduce a semi-implicit time integration scheme based on a consistent Newton-Raphson linearization. A variant of the classical Newton method is introduced that guarantees third-order convergence. High-fidelity computational geometries are built, and simulations are performed under physiological conditions. We address in detail the main features of the proposed method, and we report several experiments with the aim of illustrating its accuracy and efficiency.
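The abstract does not specify its third-order Newton variant. As a generic illustration of a cubically convergent Newton-type iteration, here is Halley's method on a scalar equation; this is a sketch of the convergence idea only, not the authors' FSI scheme.

```python
# Halley's method: a classical Newton variant with third-order convergence.
# x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')

def halley(f, df, d2f, x0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            return x
    return x

# Solve x^3 - 2 = 0 (cube root of 2); convergence from x0 = 1 takes few steps.
root = halley(lambda x: x**3 - 2, lambda x: 3 * x**2, lambda x: 6 * x, x0=1.0)
print(abs(root - 2 ** (1 / 3)) < 1e-12)  # True
```

The cubic rate roughly triples the number of correct digits per iteration, versus doubling for plain Newton-Raphson.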

Keywords: eulerian, level set, newton, valve

Procedia PDF Downloads 278
4682 Enriched Education: The Classroom as a Learning Network through Video Game Narrative Development

Authors: Wayne DeFehr

Abstract:

This study is rooted in a pedagogical approach that emphasizes student engagement as fundamental to meaningful learning in the classroom. This approach creates a paradigmatic shift from a teaching practice that reinforces the teacher's central authority to one that disperses that authority among the students through networks that they themselves develop. The methodology of this study of creating optimal conditions for learning includes providing a conceptual framework within which the students work, as well as clearly stated expectations for work standards, content quality, group methodology, and learning outcomes. These learning conditions are nurtured in a variety of ways. First, nearly every class includes a lecture from the professor covering key concepts that students need in order to complete their work successfully. Second, students build on this scholarly material by forming their own networks, in which they face and engage with each other in order to collaborate their way to solving a particular problem related to the course content. Third, students are given short-, medium-, and long-term goals. Short-term goals relate to the week's topic and involve workshopping particular issues at that stage of the course. Medium-term goals involve submitting term assignments that are evaluated according to a well-defined rubric. Finally, long-term goals are achieved by creating a capstone project, which is celebrated and shared with classmates and interested friends on the final day of the course. The essential conclusions of the study are drawn from courses that focus on video game narrative. Enthusiastic student engagement is created not only by the dynamic energy and expertise of the instructor, but also by the interdependence of the students on each other to build knowledge, acquire skills, and achieve successful results.

Keywords: collaboration, education, learning networks, video games

Procedia PDF Downloads 115
4681 Valorisation of Mango Seed: Response Surface Methodology Based Optimization of Starch Extraction from Mango Seeds

Authors: Tamrat Tesfaye, Bruce Sithole

Abstract:

Box-Behnken response surface methodology was used to determine the optimum processing conditions giving maximum extraction yield and whiteness index for starch from mango seed. Steeping time ranged from 2 to 12 hours, and slurrying of the steeped seed was carried out in sodium metabisulphite solution (0.1 to 0.5 w/v). Experiments were designed according to a Box-Behnken design with these three factors, and a total of 15 experimental runs were analyzed. At the linear level, the concentration of sodium metabisulphite had a significant positive influence on percentage yield and whiteness index at p < 0.05. At the quadratic level, sodium metabisulphite concentration and its squared term had a significant negative influence on starch yield, while sodium metabisulphite concentration and the steeping time x temperature interaction had a significant (p < 0.05) positive influence on whiteness index. Adjusted R² values above 0.8 for starch yield (0.906465) and whiteness index (0.909268) showed a good fit of the model to the experimental data. The optimum conditions for starch isolation with maximum starch yield (66.428%) and whiteness index (85%), set as the optimization goals with a desirability of 0.91939, were 0.255 w/v sodium metabisulphite concentration, 2 hours of steeping, and 50 °C. The experimentally determined value of each response under the optimal conditions was statistically in accordance with the predicted levels at p < 0.05. Mango seeds are by-products of mango processing and pose a disposal problem if not handled properly. Substituting food-based sizing agents with mango seed starch can contribute to resource deployment for value-added manufacturing and waste utilization, which could play a significant role in food security in Ethiopia.
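A three-factor Box-Behnken design with three center replicates indeed yields the 15 runs mentioned above: 12 edge-midpoint combinations plus 3 center points. A sketch in coded units (-1, 0, +1):

```python
# Generate a three-factor Box-Behnken design in coded units.
# Each pair of factors is varied over (+/-1, +/-1) while the third is held
# at its center, then center replicates are appended: 3*4 + 3 = 15 runs.
from itertools import combinations, product

def box_behnken(n_factors=3, n_center=3):
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0] * n_factors for _ in range(n_center)]
    return runs

design = box_behnken()
print(len(design))  # 15
```

Coded levels map linearly onto the actual ranges; for the 2-12 h steeping range above, -1/0/+1 would correspond to 2/7/12 h.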

Keywords: mango, synthetic sizing agent, starch, extraction, textile, sizing

Procedia PDF Downloads 231
4680 Fabrication Methodologies for Anti-Microbial Polypropylene Surfaces with Leachable and Non-leachable Anti-Microbial Agents

Authors: Saleh Alkarri, Dimple Sharma, Teresa M. Bergholz, Muhammad Rabnawaz

Abstract:

Aims: To develop a methodology for the fabrication of anti-microbial polypropylene (PP) surfaces with (i) leachable copper (II) chloride dihydrate (CuCl₂·2H₂O) and (ii) non-leachable magnesium hydroxide (Mg(OH)₂) biocides. Methods and Results: Two methodologies were used to develop anti-microbial PP surfaces. One method involves melt-blending and subsequent injection molding, in which the biocide additives were compounded with PP and then injection-molded. The other involves thermal embossing of the anti-microbial agents onto the surface of a PP substrate. The resulting biocide-bearing PP surfaces were evaluated against E. coli K-12 MG1655 for 0, 4, and 24 h to assess their anti-microbial properties. The injection-molded PP bearing 5% CuCl₂·2H₂O showed a 6-log reduction of E. coli K-12 MG1655 after 24 h, while only a 1-log reduction was observed for PP bearing 5% Mg(OH)₂. The thermally embossed PP surfaces bearing CuCl₂·2H₂O and Mg(OH)₂ particles (at a concentration of 10 mg/mL) showed 3-log and 4-log reductions, respectively, against E. coli K-12 MG1655 after 24 h. Conclusion: The results clearly demonstrate that CuCl₂·2H₂O conferred anti-microbial properties on PP surfaces prepared by both injection molding and thermal embossing, owing to the presence of leachable copper ions. In contrast, the non-leachable Mg(OH)₂ imparted anti-microbial properties only to surfaces prepared via the thermal embossing technique. Significance and Impact of the Study: Plastics with leachable biocides are effective anti-microbial surfaces, but their toxicity is a major concern. This study provides a fabrication methodology for non-leachable PP-based anti-microbial surfaces that are potentially safer. In addition, this strategy can be extended to many other plastic substrates.
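The log-reduction figures reported above follow from the standard definition log10(N0/Nt). A minimal sketch with hypothetical colony counts:

```python
# Log-reduction arithmetic used in anti-microbial assays.
# Colony counts here are hypothetical, not measurements from the study.
import math

def log_reduction(initial_cfu, final_cfu):
    """Orders of magnitude by which viable counts fell."""
    return math.log10(initial_cfu / final_cfu)

# A 6-log reduction means survivors dropped by a factor of one million.
print(log_reduction(1e8, 1e2))  # 6.0
```

So the 6-log result for injection-molded CuCl₂·2H₂O corresponds to a 99.9999% kill, versus 90% for the 1-log Mg(OH)₂ result.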

Keywords: anti-microbial activity, E. coli K-12 MG1655, copper (II) chloride dihydrate, magnesium hydroxide, leachable, non-leachable, compounding, thermal embossing

Procedia PDF Downloads 78
4678 Developing Research Involving Different Species: Opportunities and Empirical Foundations

Authors: A. V. Varfolomeeva, N. S. Tkachenko, A. G. Tishchenko

Abstract:

The problem of violations of internal validity in studies of psychological structures is considered. The role of researchers' epistemological attitudes in the planning of research within the methodology of the system-evolutionary approach is assessed. Alternative programs of psychological research involving representatives of different biological species are presented. Using the results of two series of studies as examples, variants of solving the problem are discussed.

Keywords: epistemological attitudes, experimental design, validity, psychological structure, learning

Procedia PDF Downloads 115
4677 Changes in Textural Properties of Zucchini Slices Under Effects of Partial Predrying and Deep-Fat-Frying

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

Changes in the textural properties of a food material during processing are significant for consumers' evaluation and directly affect their purchase decisions, so textural properties should be considered after any process. In the present study, zucchini slices were partially predried to control and reduce the product's final oil content. A conventional oven was used for the partial dehydration of the zucchini slices, and frying was then carried out in an industrial fryer equipped with a temperature controller. This study focused on the effect of the predrying process on the textural properties of fried zucchini slices. Texture profile analysis was performed, with hardness, elasticity, chewiness, and cohesiveness as the studied texture parameters. Temperature and weight loss were the monitored parameters of the predrying process, whereas oil temperature and process time were controlled in frying. Optimization of the two successive processes was done by response surface methodology, one of the most commonly used statistical tools for process optimization. The models developed for each texture parameter predicted their values as a function of the studied process conditions with high success. Process optimization was performed using target values for each property determined for directly fried zucchini slices, which received the highest score in sensory evaluation. The results indicated that the textural properties of predried and then fried zucchini slices could be controlled by well-established equations. This is thought to be significant for the fried-food industry, where controlling sensory properties, above all texture, is crucial for guiding consumer perception. This project (113R015) has been supported by TUBITAK.
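The texture parameters named above are conventionally derived from the two-cycle TPA force curve: cohesiveness as the ratio of the second to the first compression area, and chewiness as hardness times cohesiveness times springiness. A sketch with hypothetical values, not measurements from the study:

```python
# Conventional texture-profile-analysis (TPA) relationships.
# All input values are hypothetical illustrations.

def cohesiveness(area_cycle2, area_cycle1):
    """Ratio of work done in the second compression to the first."""
    return area_cycle2 / area_cycle1

def chewiness(hardness, cohes, springiness):
    """Chewiness = hardness * cohesiveness * springiness (for solid foods)."""
    return hardness * cohes * springiness

c = cohesiveness(area_cycle2=30.0, area_cycle1=50.0)                # 0.6
print(round(chewiness(hardness=12.0, cohes=c, springiness=0.8), 2))  # 5.76
```

Response-surface models like those in the study would then express each such parameter as a quadratic function of predrying and frying conditions.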

Keywords: optimization, response surface methodology, texture profile analysis, conventional oven, modelling

Procedia PDF Downloads 433
4676 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design

Authors: H. K. Esfahani, B. Datta

Abstract:

Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained by characterizing groundwater pollution sources, where measured data at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in predicting source flux injection, hydro-geological and geo-chemical parameters, and the concentration field measurements. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, available data are often sparse and limited in quantity. Therefore, this inverse problem of characterizing unknown groundwater pollution sources is often considered ill-posed, complex, and non-unique. Different methods have been utilized to identify pollution sources; among them, the linked simulation-optimization approach is an effective method for obtaining acceptable results under uncertainties in complex real-life scenarios. With this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between the measured concentrations and the estimated pollutant concentrations at observation locations. Concentration measurement data are very important for accurately estimating pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at the desired times and locations.
Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized for a preliminary identification of source location, magnitude, and duration of activity, and these results are used for monitoring network design. Further, feedback from the monitoring network is used as input for sequential monitoring network design, improving the identification of the unknown source characteristics. To design an effective network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions. However, the simulation of the transport processes becomes complex when the pollutants are chemically reactive. A three-dimensional transient flow and reactive contaminant transport process is considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for flow and transport processes with multiple chemically reactive species. Adaptive Simulated Annealing (ASA) is used as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics. The aim of the present study is therefore to develop a methodology for optimally designing an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example, an abandoned mine site in Queensland, Australia.
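The linked simulation-optimization loop can be sketched as below. A toy exponential-decay surrogate stands in for the real transport simulator (HYDROGEOCHEM), and the annealing routine is a plain Metropolis-style sketch, not the Adaptive Simulated Annealing variant used in the study.

```python
# Skeleton of linked simulation-optimization: a forward model predicts
# concentrations at observation wells for a candidate source, and simulated
# annealing minimizes the misfit. The forward model here is a toy surrogate.
import math
import random

random.seed(0)
WELLS = [2.0, 5.0, 9.0]  # hypothetical well distances from a datum

def forward_model(source_x, magnitude):
    """Toy surrogate: concentration decays with distance from the source."""
    return [magnitude * math.exp(-abs(w - source_x)) for w in WELLS]

TRUE = forward_model(4.0, 10.0)  # synthetic "measured" concentrations

def misfit(params):
    pred = forward_model(*params)
    return sum((p - m) ** 2 for p, m in zip(pred, TRUE))

def anneal(start, steps=20_000, temp0=1.0):
    best = cur = start
    for k in range(steps):
        temp = temp0 * (1 - k / steps) + 1e-9  # linear cooling schedule
        cand = (cur[0] + random.gauss(0, 0.1), cur[1] + random.gauss(0, 0.1))
        d = misfit(cand) - misfit(cur)
        if d < 0 or random.random() < math.exp(-d / temp):
            cur = cand
        if misfit(cur) < misfit(best):
            best = cur
    return best

x, m = anneal((1.0, 1.0))
print(round(x, 2), round(m, 2))
```

In the actual methodology, each misfit evaluation would call the external transport simulator, which is why evaluation count, and hence monitoring network design, matters so much.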

Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site

Procedia PDF Downloads 231
4675 Cost Overrun in Construction Projects

Authors: Hailu Kebede Bekele

Abstract:

Construction delays arise when project events do not occur at the expected time, owing to causes related to the client, the consultant, and the contractor. Delay is the major cause of cost overrun and leads to poor project efficiency. The difference between the cost at completion and the original estimate is known as cost overrun. Cost overruns are not simple issues that can be neglected; attention must be given to them to prevent the organization from failing and financial expenses from escalating. The reasons raised in different studies show that the problem may arise in construction projects due to errors in budgeting, unfavorable weather conditions, inefficient machinery, and extravagance. This study focuses on mega projects, whose pace can significantly change the cost-overrun calculation. Fifteen mega projects were identified to study the problem of cost overrun on site. The contractor, consultant, and client are the principal stakeholders in these mega projects, and 20 people from each sector were selected to participate in the investigation of current mega construction projects. The main objective of the study is to prioritize the major causes of the cost-overrun problem. A qualitative methodology was employed, mostly rating the causes of construction project cost overrun; interviews, open-ended and closed-ended questions, group discussions, and rating-based qualitative methods are well suited to studying construction project overruns.
The results show that design mistakes, labor shortages, payment delays, old equipment and scheduling, weather conditions, lack of skilled labor, transportation, inflation, order variations, market price fluctuation, and people's attitudes and philosophies are the leading causes of cost overrun that degrade project performance. Institutions should follow the scheduled activities to move the project forward positively over its life.
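The cost-overrun definition above reduces to simple arithmetic; the figures here are hypothetical:

```python
# Cost overrun as a percentage of the original estimate (hypothetical figures).

def overrun_pct(final_cost, estimated_cost):
    """Percentage by which the completion cost exceeds the estimate."""
    return 100 * (final_cost - estimated_cost) / estimated_cost

print(overrun_pct(final_cost=130_000_000, estimated_cost=100_000_000))  # 30.0
```

Ranking exercises like the one in this study typically weight each cause by how often respondents rate it as contributing to such a percentage.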

Keywords: cost overrun, delay, mega projects, design

Procedia PDF Downloads 62
4674 Applications of Digital Tools, Satellite Images and Geographic Information Systems in Data Collection of Greenhouses in Guatemala

Authors: Maria A. Castillo H., Andres R. Leandro, Jose F. Bienvenido B.

Abstract:

During the last 20 years, the globalization of economies, population growth, and the increase in the consumption of fresh agricultural products have generated greater demand for ornamentals, flowers, fresh fruits, and vegetables, mainly from tropical areas. This market situation has demanded greater competitiveness and control over production, with more efficient protected-agriculture technologies that provide greater productivity and make it possible to guarantee the required quality and quantity in a constant and sustainable way. Guatemala, located in the north of Central America, is one of the largest exporters of agricultural products in the region and exports fresh vegetables, flowers, fruits, ornamental plants, and foliage, most of which are grown in greenhouses. Although there are no official agricultural statistics on greenhouse production, several theses and congress reports have presented consistent estimates. A wide range of protection structures and roofing materials are used, from the most basic and simple ones for rain control to highly technical and automated structures connected with remote sensors for crop monitoring and control. Given this breadth of technological models, it is necessary to analyze georeferenced data on the cultivated area, the different existing models, and the covering materials, integrated with altitude, climate, and soil data. The georeferenced registration of production units, data collection with digital tools, the use of satellite images, and geographic information systems (GIS) provide reliable tools for elaborating more complete, agile, and dynamic information maps. This study details a methodology proposed for gathering georeferenced data on high protection structures (greenhouses) in Guatemala, structured in four phases: diagnosis of available information, definition of the geographic frame, selection of satellite images, and integration with a geographic information system (GIS).
It takes particular account of the current lack of complete data, which stands in the way of a reliable decision-making system; the proposed methodology is designed to close this gap. A summary of the results is presented for each phase, and finally, an evaluation with some improvements and tentative recommendations for further research is added. The main contribution of this study is to propose a methodology that reduces the gap in georeferenced data on protected agriculture in this specific area, where data are not generally available, and provides data of better quality, traceability, accuracy, and certainty for strategic agricultural decision-making, applicable to other crops, production models, and similar or neighboring geographic areas.
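The GIS-integration phase ultimately reduces to computing areas and attributes from digitised greenhouse footprints. As a minimal illustration (not drawn from the study itself), the shoelace formula gives the footprint area of a polygon whose vertices are in projected metric coordinates; the 50 m × 30 m greenhouse below is invented for the sketch:

```python
def polygon_area_m2(coords):
    """Shoelace area of a greenhouse footprint given projected
    (easting, northing) vertices in metres (lat/lon must be projected
    to a metric CRS first, e.g. a UTM zone covering Guatemala)."""
    n = len(coords)
    s = 0.0
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical digitised footprint: a 50 m x 30 m greenhouse.
footprint = [(0, 0), (50, 0), (50, 30), (0, 30)]
print(polygon_area_m2(footprint))  # 1500.0 square metres
```

In a full GIS workflow this per-polygon area would be summed by municipality or altitude band to produce the cultivated-area maps the study describes.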

Keywords: greenhouses, protected agriculture, GIS, Guatemala, satellite image, digital tools, precision agriculture

Procedia PDF Downloads 194
4673 Orbit Determination from Two Position Vectors Using Finite Difference Method

Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.

Abstract:

A novel approach is developed to determine the orbits of satellites/space objects. Orbit determination is treated as a boundary value problem and solved using the finite difference method (FDM). Only the positions of the satellites/space objects at two end times are known and are taken as boundary conditions. The finite difference technique is used to calculate the orbit between the end times. In this approach, the governing equation is the satellite's equation of motion with a perturbing acceleration. Using the finite difference method, the governing equations and boundary conditions are discretized, and the resulting system of algebraic equations is solved with the Tri-Diagonal Matrix Algorithm (TDMA) until convergence is achieved. The methodology was tested and evaluated using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for day of year (DOY) 125, 2023. Twelve two-hour arcs were considered, with only the positions at the end times of each arc taken as boundary conditions. The algorithm was applied to all GPS satellites, and the results achieved using FDM were compared with the NGA precise orbits. The maximum RSS error is 0.48 [m] for position and 0.43 [mm/sec] for velocity. The algorithm was also applied to the IRNSS satellites for DOY 220, 2023; the maximum RSS error is 0.49 [m] for position and 0.28 [mm/sec] for velocity. Next, a simulation was carried out for a highly elliptical orbit for DOY 63, 2023, over a duration of 6 hours. The RSS of the position difference is 0.92 [m] and of the velocity difference 1.58 [mm/sec] for orbital speeds above 5 km/sec, whereas the RSS of the position difference is 0.13 [m] and of the velocity difference 0.12 [mm/sec] for orbital speeds below 5 km/sec. The results show that the proposed method is reliable and accurate.
Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
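The solution pipeline described above, discretising a boundary value problem and solving the resulting tridiagonal system with TDMA, can be illustrated on a toy one-dimensional BVP. This sketch is not the authors' orbit code; it only shows the Thomas algorithm (TDMA) applied to the discretised equation x''(t) = 0 with fixed end positions, the same structural role the two end-time position vectors play in the orbit problem:

```python
import numpy as np

def tdma(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm (TDMA).

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    """
    n = len(d)
    cp = np.zeros(n)   # modified super-diagonal (forward sweep)
    dp = np.zeros(n)   # modified right-hand side (forward sweep)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)    # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy boundary-value problem: x''(t) = 0 with x(0) = 0, x(1) = 1,
# discretised on 5 interior nodes; the exact solution is the straight
# line x(t) = t, so the interior values should be i/6.
n = 5
a = np.ones(n)
b = -2.0 * np.ones(n)
c = np.ones(n)
d = np.zeros(n)
d[-1] -= 1.0  # boundary condition x(1) = 1 moved to the RHS
x = tdma(a, b, c, d)
print(np.round(x, 6))
```

In the orbit problem the same sweep is applied per coordinate, with the perturbed equation of motion supplying the diagonals and the two known position vectors supplying the boundary terms, iterated until convergence.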

Keywords: finite difference method, grid generation, NavIC system, orbit perturbation

Procedia PDF Downloads 84
4672 Hydroinformatics of Smart Cities: Real-Time Water Quality Prediction Model Using a Hybrid Approach

Authors: Elisa Coraggio, Dawei Han, Weiru Liu, Theo Tryfonas

Abstract:

Water is one of the most important resources for human society. The world is currently undergoing a wave of urban growth, and pollution problems have a great impact. Monitoring water quality is a key task for the future of the environment and the human species. In recent times, researchers using Smart City technologies have been trying to mitigate the problems generated by population growth in urban areas. The availability of huge amounts of data collected by a pervasive urban IoT can increase the transparency of decision-making. Several services have already been implemented in Smart Cities, and more will follow. Water quality monitoring can successfully be implemented in the urban IoT: the combination of water quality sensors, cloud computing, smart city infrastructure, and IoT technology can lead to a bright future for environmental monitoring. In past decades, much effort was put into monitoring and predicting water quality using traditional approaches based on manual collection and laboratory analysis, which are slow and laborious. The present study proposes a methodology for implementing a water quality prediction model using artificial intelligence techniques and compares the results obtained with different algorithms. Furthermore, a 3D numerical model will be created using the software D-Water Quality, and simulation results will be used as a training dataset for the artificial intelligence algorithm. This study derives the methodology and demonstrates its implementation based on information and data collected at the floating harbour in the city of Bristol (UK). The city of Bristol benefits from the Bristol Is Open infrastructure, which includes a Wi-Fi network and virtual machines, and it was named the UK's smartest city in 2017.
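As a hedged illustration of the kind of data-driven prediction model described above (not the study's actual model or data), the sketch below fits an ordinary least-squares baseline to synthetic sensor readings, dissolved oxygen versus water temperature, and reports a test RMSE; a real implementation would compare this baseline against more capable algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for sensor data: dissolved oxygen (mg/L) tends to
# fall as water temperature (degrees C) rises; noise mimics sensor scatter.
temp = rng.uniform(5, 25, 200)
do = 12.0 - 0.25 * temp + rng.normal(0, 0.3, 200)

# Train/test split and an ordinary least-squares fit as the baseline
# "algorithm"; a real study would swap in neural nets, SVMs, etc.
X = np.column_stack([np.ones_like(temp), temp])
train, test = slice(0, 150), slice(150, None)
coef, *_ = np.linalg.lstsq(X[train], do[train], rcond=None)
pred = X[test] @ coef
rmse = np.sqrt(np.mean((pred - do[test]) ** 2))
print(f"intercept={coef[0]:.2f} slope={coef[1]:.3f} RMSE={rmse:.3f}")
```

The same train/evaluate loop applies when the training set comes from the D-Water Quality simulation instead of field sensors, which is the hybrid element of the proposed approach.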

Keywords: artificial intelligence, hydroinformatics, numerical modelling, smart cities, water quality

Procedia PDF Downloads 187
4671 Human Factors Integration of Chemical, Biological, Radiological and Nuclear Response: Systems and Technologies

Authors: Graham Hancox, Saydia Razak, Sue Hignett, Jo Barnes, Jyri Silmari, Florian Kading

Abstract:

In the event of a Chemical, Biological, Radiological and Nuclear (CBRN) incident, rapidly gaining situational awareness is of paramount importance, and advanced technologies have an important role to play in improving detection, identification and monitoring (DIM) and patient tracking. Understanding how these advanced technologies can fit into current response systems is essential to ensure they are optimally designed, usable and meet end-users' needs. For this reason, Human Factors (Ergonomics) methods have been used within an EU Horizon 2020 project (TOXI-Triage), firstly to describe (map) the hierarchical structure of a CBRN response with adapted Accident Map (AcciMap) methodology. Secondly, Hierarchical Task Analysis (HTA) has been used to describe and review the sequence of steps (sub-tasks) in a CBRN scenario response as a task system. HTA methodology was then used to map one advanced technology, 'Tag and Trace', which tags an element (people, samples and equipment) with a Near Field Communication (NFC) chip in the Hot Zone to allow tracing (monitoring) of, for example, casualty progress through the response. This HTA mapping of the Tag and Trace system showed how the provider envisaged the technology being used, allowing a review of its fit with current CBRN response systems. These methodologies have been found to be very effective in promoting and supporting dialogue between end-users and technology providers. The Human Factors methods have given clear diagrammatic (visual) representations of how providers see their technology being used and how end-users would actually use it in the field, allowing for a more user-centred approach to the design process. For CBRN events, usability is critical, as sub-optimally designed technology could add to a responder's workload in what is already a chaotic, ambiguous and safety-critical environment.

Keywords: AcciMap, CBRN, ergonomics, hierarchical task analysis, human factors

Procedia PDF Downloads 222
4670 The Touristic Development of the Archaeological and Heritage Areas in Alexandria City, Egypt

Authors: Salma I. Dwidar, Amal A. Abdelsattar

Abstract:

Alexandria is one of the great cities of the world. It encountered different civilizations throughout the ages, owing to its special geographical location and climate, which left many archaeological areas of great heritage value (Ptolemaic, Greek, Roman, especially the sunken monuments, Coptic, Islamic, and finally, modern). The city also contains areas with different patterns of urban planning, both Hellenistic and compact, which account for the diversity of its planning. Despite the magnitude of this city, which contains all the elements of tourism, it has not been properly included in the tourism map of Egypt compared with similar Egyptian cities. This paper discusses the importance of the heritage areas in Alexandria and the relationship between heritage areas and modern buildings. It highlights the absence of a methodology for dealing with heritage areas as touristic areas. The paper also aims to develop multiple touristic routes to visit archaeological areas and other sights of significance in Alexandria. The research methodology is divided into two main frameworks. The first framework is a historical study of the urban development of Alexandria and the most important remaining monuments throughout the ages, as well as an analytical study of the sunken monuments and their importance in increasing tourism. Moreover, it covers a study of the importance of the Library of Alexandria and its effect on the international profile of the city. The second framework focuses on the proposal of tourism routes to visit the heritage areas, archaeological monuments, sunken monuments and sights of Alexandria. The study concludes with the proposal of three tourism routes. The first route, the longest, passes by all the famous monuments of the city as well as its modern sights. The second route passes through the heritage areas, the sunken monuments, and the Library of Alexandria. The third route includes the sunken monuments and the Library of Alexandria.
These three tourism routes will ensure the touristic development of the city, which in turn leads to economic growth for the city and the country.

Keywords: archeological buildings, heritage buildings, heritage tourism, planning of Islamic cities

Procedia PDF Downloads 142
4669 Study of Objectivity, Reliability and Validity of Pedagogical Diagnostic Parameters Introduced in the Framework of a Specific Research

Authors: Emiliya Tsankova, Genoveva Zlateva, Violeta Kostadinova

Abstract:

The challenges modern education faces undoubtedly require reforms and innovations aimed at the reconceptualization of existing educational strategies, the introduction of new concepts and novel techniques and technologies related to the recasting of the aims of education, and the remodeling of the content and methodology of education, which would guarantee the alignment of our education with basic European values. Aim: The aim of the current research is the development of a didactic technology for assessing the applicability and efficacy of game techniques in pedagogic practice, calibrated to specific content and the age specificity of learners, as well as for evaluating the efficacy of such approaches in facilitating the acquisition of biological knowledge at a higher theoretical level. Results: In this research, we examine the objectivity, reliability and validity of two newly introduced diagnostic parameters for assessing the durability of acquired knowledge. A pedagogic experiment has been carried out to verify the hypothesis that the introduction of game techniques in biological education leads to an increase in the quantity, quality and durability of the knowledge acquired by students. To monitor the effect of the game-based pedagogical technique on the durability of acquired knowledge, a test-based examination on the same content was administered to students from a control group (CG) and students from an experimental group (EG) after a six-month period. The analysis is based on: 1. A study of the statistical significance of the differences between the tests for the CG and the EG, applied after a six-month period, which, however, is not indicative of the presence or absence of a marked effect of the applied pedagogic technique when the entry levels of the two groups differ.
2. For a more reliable comparison, independent of the entry level of each group, another parameter, an 'indicator of the efficacy of game techniques for the durability of knowledge', has been used to assess the achievement results and durability of this methodology of education. The monitoring of the studied parameters in their dynamic unfolding in different age groups of learners unquestionably reveals a positive effect of the introduction of game techniques in education with respect to the durability and permanence of acquired knowledge. Methods: In the current research, the following battery of research and diagnostic methods and techniques has been employed: theoretical analysis and synthesis; an actual pedagogical experiment; a questionnaire; didactic testing; and mathematical and statistical methods. The data obtained have been used for the qualitative and quantitative analysis of the results, which reflect the efficacy of the applied methodology. Conclusion: The didactic model of the parameters researched in the framework of this specific study of pedagogic diagnostics is based on a general, interdisciplinary approach. The enhanced durability of the acquired knowledge proves the transition of that knowledge from short-term memory storage into the long-term memory of pupils and students, which justifies the conclusion that didactic games have beneficial effects on learners' cognitive skills. The innovations in teaching enhance motivation, creativity and independent cognitive activity in the process of acquiring the material taught. The innovative methods allow for untraditional means of assessing the level of knowledge acquisition. This makes possible the timely discovery of knowledge gaps and the introduction of compensatory techniques, which in turn leads to deeper and more durable acquisition of knowledge.

Keywords: objectivity, reliability and validity of pedagogical diagnostic parameters introduced in the framework of a specific research

Procedia PDF Downloads 393
4668 Probabilistic Building Life-Cycle Planning as a Strategy for Sustainability

Authors: Rui Calejo Rodrigues

Abstract:

Building refurbishment and maintenance is a major area of knowledge ultimately governed by user/occupant criteria. Optimizing the service life of a building requires a special background, as it is one of those concepts that needs proficiency to be implemented. ISO 15686-2, Buildings and constructed assets - Service life planning - Part 2: Service life prediction procedures, states a factorial method based on deterministic data for the life span of building components. A deterministic approach has major consequences: users/occupants cannot easily perceive the end of a component's life span, so they simply act on deterministic periods, and costly, resource-consuming solutions fail to meet global targets of planet sustainability. If the estimated two billion conventional buildings in the world were submitted to a probabilistic method for service life planning rather than a deterministic one, an immense amount of resources could be saved. Since 1989, the research team, known today as CEES (Center for Building in Service Studies), has developed a methodology based on the Monte Carlo method for a probabilistic treatment of building component life spans, costs and service life care time spans. The research question concerns the importance of a probabilistic approach to building life planning compared with deterministic methods. The mathematical model developed for the probabilistic life-span approach to buildings is presented, and experimental data are obtained for comparison with deterministic data. Assuming that a building's life cycle depends largely on component replacement, this methodology allows conclusions on the global impact of fixed replacement schedules such as those resulting from the use of deterministic models. Major conclusions based on the conventional buildings estimate are presented and evaluated from a sustainability perspective.
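A minimal sketch of the probabilistic idea, with invented numbers rather than CEES data, draws component lifespans from a distribution and counts replacements over a building horizon, so the expected replacement count can be compared with the fixed deterministic schedule:

```python
import numpy as np

rng = np.random.default_rng(42)

def replacements_in_horizon(mean_life, sd, horizon, n_sims=10_000):
    """Monte Carlo count of component replacements over a horizon (years).

    Lifespans are drawn from a normal distribution (floored at 1 year);
    replacements accumulate until the service horizon is exceeded.
    """
    counts = np.zeros(n_sims, dtype=int)
    for k in range(n_sims):
        t = 0.0
        while True:
            t += max(1.0, rng.normal(mean_life, sd))
            if t > horizon:
                break
            counts[k] += 1
    return counts

# Hypothetical roof membrane: 20 +/- 4 year life, 60-year building horizon.
counts = replacements_in_horizon(mean_life=20, sd=4, horizon=60)
det = 60 // 20 - 1  # deterministic plan: replace every 20 years -> 2 times
print(f"deterministic: {det}, probabilistic mean: {counts.mean():.2f}")
```

The gap between the fixed schedule and the simulated distribution (its mean, spread and tail) is exactly the information a deterministic plan discards, and it is what drives the resource-savings argument in the abstract.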

Keywords: building components life cycle, building maintenance, building sustainability, Monte Carlo simulation

Procedia PDF Downloads 205
4667 Using the Ecological Analysis Method to Justify the Environmental Feasibility of Biohydrogen Production from Cassava Wastewater Biogas

Authors: Jonni Guiller Madeira, Angel Sanchez Delgado, Ronney Mancebo Boloy

Abstract:

In recent years, the use of bioenergy has become a good alternative for reducing the emission of polluting gases. Several Brazilian and foreign companies are conducting studies on waste management as an essential tool in the search for energy efficiency, taking the ecological aspect into consideration as well. Brazil is one of the largest cassava producers in the world, and cassava sub-products are the food base of millions of Brazilians. The repertoire of results on the ecological impact of producing biohydrogen, by steam reforming, from cassava wastewater biogas is very limited because, in general, this commodity is more common in underdeveloped countries. This hydrogen, produced from cassava wastewater, appears as an alternative to fossil fuels, since cassava wastewater is a low-cost carbon source. This paper evaluates the environmental impact of biohydrogen production, by steam reforming, from cassava wastewater biogas. The ecological efficiency methodology developed by Cardu and Baica was used as a benchmark in this study. The methodology mainly assesses the emissions of equivalent carbon dioxide (CO₂, SOₓ, CH₄ and particulate matter). As a result, environmental parameters such as equivalent carbon dioxide emissions, the pollution indicator, and the ecological efficiency are evaluated, because they are important to energy production. The average values of the environmental parameters among different biogas compositions (different concentrations of methane) were calculated: the average pollution indicator was 10.11 kgCO₂e/kgH₂, with an average ecological efficiency of 93.37%. In conclusion, bioenergy production using biohydrogen from a cassava wastewater treatment plant is a good option from the environmental feasibility point of view. This is justified by the determination of environmental parameters and their comparison with those of hydrogen production via steam reforming from other types of fuels.
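The pollution indicator above aggregates several pollutants into kg of CO₂-equivalent per kg of hydrogen. The sketch below shows that aggregation using equivalence factors commonly cited for the Cardu-Baica method and an invented emission inventory; both the factors and the inventory are illustrative assumptions, not the paper's data:

```python
# Equivalence factors commonly cited for the Cardu-Baica ecological
# efficiency method (kg CO2-equivalent per kg of pollutant); treat
# these numbers as illustrative, not authoritative.
EQ = {"CO2": 1.0, "SOx": 80.0, "NOx": 50.0, "PM": 67.0}

def co2_equivalent(emissions):
    """Aggregate an emission inventory (kg pollutant per kg H2 produced)
    into a single CO2-equivalent pollution indicator."""
    return sum(EQ[p] * m for p, m in emissions.items())

# Hypothetical inventory per kg of H2 from biogas steam reforming.
inventory = {"CO2": 9.8, "SOx": 0.002, "NOx": 0.003, "PM": 0.0001}
pi = co2_equivalent(inventory)
print(f"pollution indicator = {pi:.2f} kg CO2e / kg H2")
```

With the inventory chosen here the indicator lands near the paper's reported average of 10.11 kgCO₂e/kgH₂, which shows how dominated the figure is by the direct CO₂ term when SOₓ, NOₓ and particulate emissions are small.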

Keywords: biohydrogen, ecological efficiency, cassava, pollution indicator

Procedia PDF Downloads 199
4666 Changing Misconceptions in Heat Transfer: A Problem Based Learning Approach for Engineering Students

Authors: Paola Utreras, Yazmina Olmos, Loreto Sanhueza

Abstract:

This work has the purpose of studying and incorporating Problem-Based Learning (PBL) for engineering students through the analysis of several thermal images of dwellings located at different geographical points of the Región de los Ríos, Chile. The students analyze how heat is transferred into and out of the houses and how heat transfer relates to the climatic conditions affecting each zone. As a result of this activity, students are able to acquire significant learning in the unit on heat and temperature and manage to reverse previous conceptual errors related to energy, temperature and heat. In addition, students are able to generate prototype solutions to increase thermal efficiency using low-cost materials. Students present their results publicly in a report following scientific writing standards and at a science fair open to the entire university community. The methodology used to measure prior conceptual errors was to apply, before the unit, diagnostic tests with everyday questions involving the concepts of heat, temperature, work and energy. After the unit, the same evaluation is carried out so that students themselves can see the evolution in their construction of knowledge. We found that in the initial test 90% of the students showed deficiencies in the concepts mentioned above, while in the subsequent test 47% showed deficiencies; these percentages differ between students taking the course for the first time and those who had previously taken it in a traditional format. The methodology used to measure significant learning was to compare results in subsequent thermodynamics courses between students who received problem-based learning and those who received traditional instruction. We have observed that learning becomes meaningful when applied to the daily lives of students, promoting the internalization of knowledge and understanding through critical thinking.

Keywords: engineering students, heat flow, problem-based learning, thermal images

Procedia PDF Downloads 232
4665 Investigation of a Natural Convection Heat Sink for LEDs Based on Micro Heat Pipe Array-Rectangular Channel

Authors: Wei Wang, Yaohua Zhao, Yanhua Diao

Abstract:

The exponential growth of the lighting industry has rendered traditional thermal technologies inadequate for the thermal management challenges inherent to high-power light-emitting diode (LED) technology. To enhance the thermal management of LEDs, this study proposes a heat sink configuration that integrates a micro heat pipe array (MHPA) based on phase change technology with rectangular channels. The thermal performance of the heat sink was evaluated through experimental testing. The results showed that at input powers of 100 W, 150 W, and 200 W, the temperatures of the LED substrate were 47.64 ℃, 56.78 ℃, and 69.06 ℃, respectively, while the maximum temperature difference of the MHPA in the vertical direction was 0.32 ℃, 0.30 ℃, and 0.30 ℃, respectively. The results demonstrate that the heat sink not only effectively dissipates the heat generated by the LEDs but also exhibits excellent temperature uniformity. Building on the experimental measurements, a corresponding numerical model was developed as part of this study. Following model validation, the effect of the structural parameters of the heat sink on its heat dissipation efficacy was examined using response surface methodology (RSM). The rectangular channel width, channel height, channel length, number of channel cross-sections, and channel cross-section spacing were selected as the input parameters, while the LED substrate temperature and the total mass of the heat sink were taken as the response variables. The response was then subjected to an analysis of variance (ANOVA), which yielded a regression model predicting the response from the input variables. This offers direction for the design of the heat sink.
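The RSM step described above amounts to fitting a second-order polynomial response surface to designed experiments and judging the quality of the fit. Below is a minimal sketch with synthetic data, using only two of the five factors and invented coefficients, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for design points: LED substrate temperature (C)
# as a function of channel width w (mm) and channel height h (mm).
w = rng.uniform(2, 8, 40)
h = rng.uniform(10, 40, 40)
temp = 80 - 1.5 * w - 0.4 * h + 0.08 * w**2 + rng.normal(0, 0.5, 40)

# Second-order response surface:
#   T ~ b0 + b1*w + b2*h + b3*w^2 + b4*h^2 + b5*w*h
X = np.column_stack([np.ones_like(w), w, h, w**2, h**2, w * h])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((temp - pred) ** 2) / np.sum((temp - temp.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

In the actual study the fitted regression model (validated by ANOVA) plays the role of a cheap surrogate for the numerical model, letting the substrate temperature and heat sink mass be traded off across the five structural parameters.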

Keywords: light-emitting diodes, heat transfer, heat pipe, natural convection, response surface methodology

Procedia PDF Downloads 34
4664 Organizational Innovativeness: Motivation in Employee’s Innovative Work Behaviors

Authors: P. T. Ngan

Abstract:

Purpose: The study aims to answer the question of which motivational conditions most strongly influence employees' innovative work behaviors, by investigating the case of SATAMANKULMA/Anya Productions Ky in Kuopio, Finland. Design/methodology: The main methodology was a qualitative single case study; the analysis followed an adapted thematic content analysis procedure applied to empirical material collected through interviews, observation and document review. Findings: The paper highlights the significance of combining relevant, synergistic extrinsic and intrinsic motivations in the organizational motivation system. The findings show that intrinsic drives are essential for the initiation phases, while extrinsic drives are more important for the implementation phases of innovative work behaviors. The study also offers the IDEA motivation model (interpersonal relationships and networks, development opportunities, economic constituent, and application supports) as a tool to optimize business performance. Practical limitations/implications: The research was conducted only from the perspective of SATAMANKULMA/Anya Productions Ky, with five interviews, a few observations and several reviewed documents; further research is required to include other stakeholders, such as customers and partner companies. The study also does not offer statistical validity of the findings; an extensive case study or a qualitative multiple case study is suggested to compare the findings and establish whether the IDEA model is relevant in other types of firms. Originality/value: Neither the innovation nor the human resource management literature provides a detailed overview of the specific motivational conditions that might be used to stimulate the innovative work behaviors of individual employees. This paper fills that void.

Keywords: employee innovative work behaviors, extrinsic motivation, intrinsic motivation, organizational innovativeness

Procedia PDF Downloads 267
4663 Total Synthesis of Natural Cyclic Depsipeptides by Convergent SPPS and Macrolactonization Strategy for Anti-TB Activity

Authors: Katharigatta N. Venugopala, Fernando Albericio, Bander E. Al-Dhubiab, T. Govender

Abstract:

Recent years have witnessed a renaissance in the field of peptides obtained from various natural sources such as bacteria, fungi, plants, seaweeds, vertebrates and invertebrates, which have been reported to have various pharmacological properties, including anti-TB, anticancer, antimalarial, anti-inflammatory, anti-HIV, antibacterial, antifungal, and antidiabetic activities. In view of the pharmacological significance of natural peptides, the research efforts of many scientific groups and pharmaceutical companies have consequently focused on exploring the possibility of developing their potential analogues as therapeutic agents. Solid-phase and solution-phase peptide synthesis are the two methodologies currently available for the synthesis of natural or synthetic, linear or cyclic depsipeptides. From a synthetic point of view, there is no doubt that the solid-phase methodology has gained advantages over the solution-phase methodology in terms of simplicity, purity of the compound, and the speed with which peptides can be synthesised. In the present study, the total synthesis, purification and structural elucidation of analogues of natural anti-TB cyclic depsipeptides such as depsidomycin, the massetolides and viscosin were attempted by the solid-phase method using standard Fmoc protocols, with final off-resin cyclization in solution phase. In the case of depsidomycin, synthesis of the linear peptide entirely on solid phase could not be achieved because of two turn-inducing amino acids in the peptide sequence, but total synthesis was accomplished by convergent solid-phase peptide synthesis followed by cyclization in solution phase. The title compounds were obtained in good yields and characterized by NMR and HRMS. Anti-TB results revealed that the most active title compound exhibited promising activity at 4 µg/mL against H37Rv and 16 µg/mL against MDR strains of tuberculosis.

Keywords: total synthesis, cyclic depsipeptides, anti-TB activity, tuberculosis

Procedia PDF Downloads 623
4662 Selection of Social and Sustainability Criteria for Public Investment Project Evaluation in Developing Countries

Authors: Pintip Vajarothai, Saad Al-Jibouri, Johannes I. M. Halman

Abstract:

Public investment projects are primarily aimed at achieving development strategies, increasing national economies of scale and bringing overall improvement to a country. However, experience shows that public projects, particularly in developing countries, often struggle or fail to fulfill the immediate needs of local communities. In many cases, the reason is that projects are selected in a subjective manner, and a major part of the problem relates to the evaluation criteria and techniques used. The evaluation process is often based on broad strategic economic effects rather than on the real benefits of projects to society, or on the various needs at different levels (e.g. national, regional, local) and under different conditions (e.g. long-term and short-term requirements). In this paper, an extensive literature review of the types of criteria used in the past by various researchers in the project evaluation and selection process is carried out, and the effectiveness of such criteria and techniques is discussed. The paper proposes substitute social and project sustainability criteria to improve the conditions of local people, and in particular the disadvantaged groups of the communities. Furthermore, it puts forward a way of modelling the interaction between the selected criteria and the achievement of the social goals of the affected community groups. The described work is part of developing a broader decision model for public investment project selection that integrates various aspects and techniques into a practical methodology. The paper uses Thailand as a case to review which evaluation techniques are currently used, how they are used, and how the project evaluation and selection process related to social and sustainability issues in the country can be improved. The paper also uses an example to demonstrate how to test the feasibility of various criteria and how to model the interaction between projects and communities.
The proposed model could be applied in the project evaluation and selection process of other developing and developed countries to improve its effectiveness in the long run.
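One simple way to make the interaction between criteria and project choice concrete, not the paper's model but a common weighted-sum screening it could generalise, is to score candidate projects against weighted social/sustainability criteria; all names, weights and scores below are invented for the sketch:

```python
# Illustrative weighted-sum screening of candidate public projects
# against social/sustainability criteria (scores on a 0-10 scale).
criteria = ["local_needs", "disadvantaged_groups", "sustainability"]
weights = {"local_needs": 0.40,
           "disadvantaged_groups": 0.35,
           "sustainability": 0.25}

projects = {
    "rural road": {"local_needs": 8, "disadvantaged_groups": 9,
                   "sustainability": 5},
    "convention centre": {"local_needs": 4, "disadvantaged_groups": 2,
                          "sustainability": 6},
}

def score(p):
    """Weighted sum of a project's criterion scores."""
    return sum(weights[c] * p[c] for c in criteria)

ranked = sorted(projects, key=lambda name: score(projects[name]),
                reverse=True)
print(ranked[0])  # the project best serving local/disadvantaged needs
```

The paper's argument is precisely that the weights attached to social criteria like these, rather than broad economic effects alone, should drive the ranking.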

Keywords: evaluation criteria, developing countries, public investment, project selection methodology

Procedia PDF Downloads 275
4661 Standardization of a Methodology for Quantification of Antimicrobials Used for the Treatment of Multi-Resistant Bacteria Using Two Types of Biosensors and Production of Anti-Antimicrobial Antibodies

Authors: Garzon V., Bustos R., Salvador J. P., Marco M. P., Pinacho D. G.

Abstract:

Bacterial resistance to antimicrobial treatment has increased significantly in recent years, making it a public health problem. Large numbers of bacteria are resistant to all or nearly all known antimicrobials, creating the need to develop new types of antimicrobials or to use “last line” antimicrobial drug therapies for the treatment of multi-resistant bacteria. Among the chemical groups of antimicrobials most used in the clinic for infections caused by multi-resistant bacteria are the glycopeptides (vancomycin), polymyxins (colistin), lipopeptides (daptomycin) and carbapenems (meropenem), all molecules that require therapeutic drug monitoring (TDM). For this reason, a nanobiotechnology-based methodology built on optical and electrochemical biosensors is being developed, which allows the plasma levels of antimicrobials such as glycopeptides, polymyxins, lipopeptides and carbapenems to be evaluated quickly, at low cost, and with high specificity and sensitivity, and which can be implemented in the future in public and private hospitals. To this end, the project was divided into four steps: i) design of specific anti-drug antibodies, produced in rabbits for each type of antimicrobial, with the results evaluated by immunoassay analysis (ELISA); ii) quantification by means of an electrochemical biosensor that measures the reference antimicrobials with high sensitivity and selectivity; iii) comparison of the antimicrobial quantification with an optical biosensor; iv) validation of the biosensor methodologies against an immunoassay. The results show that both the optical and the electrochemical biosensor can quantify antibiotics at concentrations of around 1,000 ng/mL, with antibodies that are sensitive and specific for each of the antibiotic molecules; these results were compared with immunoassays and HPLC chromatography. This work thus contributes to the safe use of these drugs, which are commonly used in clinical practice, and of new antimicrobial drugs.
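Quantification steps such as ii) and iii) commonly rely on a calibration curve that maps the biosensor signal to concentration; for immunoassay-type readouts, a four-parameter logistic (4PL) model is a standard choice. The sketch below shows the forward model and its analytic inverse (back-calculation of concentration from a measured signal). All parameter values are purely illustrative and are not taken from the abstract.

```python
# Hypothetical sketch: converting a biosensor/immunoassay readout into an
# antimicrobial concentration via a four-parameter logistic (4PL)
# calibration curve. Parameter values below are illustrative only.

def four_pl(x, a, b, c, d):
    """4PL response: a = upper asymptote, d = lower asymptote,
    c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def invert_four_pl(y, a, b, c, d):
    """Back-calculate the concentration that produced signal y."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Illustrative calibration parameters for one antibiotic assay (c in ng/mL)
a, b, c, d = 1.8, 1.2, 500.0, 0.05

signal = four_pl(1000.0, a, b, c, d)   # simulated reading at 1,000 ng/mL
conc = invert_four_pl(signal, a, b, c, d)
```

In practice, the four parameters would be fitted to a dilution series of calibration standards before the inverse is used on unknown samples.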

Keywords: antibiotics, electrochemical biosensor, optical biosensor, therapeutic drug monitoring

Procedia PDF Downloads 82
4660 Black-Hole Dimension: A Distinct Methodology of Understanding Time, Space and Data in Architecture

Authors: Alp Arda

Abstract:

Inspired by Nolan's ‘Interstellar’, this paper delves into speculative architecture, asking, ‘What if an architect could traverse time to study a city?’ It unveils the ‘Black-Hole Dimension,’ a groundbreaking concept that redefines urban identities beyond traditional boundaries. Moving past linear time narratives, this approach draws from the gravitational dynamics of black holes to enrich our understanding of urban and architectural progress. By envisioning cities and structures as influenced by black hole-like forces, it enables an in-depth examination of their evolution through time and space. The Black-Hole Dimension promotes a temporal exploration of architecture, treating spaces as narratives of their current state interwoven with historical layers. It advocates for viewing architectural development as a continuous, interconnected journey molded by cultural, economic, and technological shifts. This approach not only deepens our understanding of urban evolution but also empowers architects and urban planners to create designs that are both adaptable and resilient. Echoing themes from popular culture and science fiction, this methodology integrates the captivating dynamics of time and space into architectural analysis, challenging established design conventions. The Black-Hole Dimension champions a philosophy that welcomes unpredictability and complexity, thereby fostering innovation in design. In essence, the Black-Hole Dimension revolutionizes architectural thought by emphasizing space-time as a fundamental dimension. It reimagines our built environments as vibrant, evolving entities shaped by the relentless forces of time, space, and data. This approach heralds a future in architecture where the complexity of reality is acknowledged and embraced, leading to the creation of spaces that are both responsive to their temporal context and resilient against the unfolding tapestry of time.

Keywords: black-hole, timeline, urbanism, space and time, speculative architecture

Procedia PDF Downloads 73
4659 A Mathematical Model to Select Shipbrokers

Authors: Y. Smirlis, G. Koronakos, S. Plitsos

Abstract:

Shipbrokers assist ship companies in chartering or selling and buying vessels, acting as intermediaries between them and the market. They facilitate deals, providing their expertise, negotiating skills, and knowledge of ship market bargains. Their role is very important, as it affects the profitability and market position of a shipping company. Due to this significant contribution, shipping companies have to employ systematic procedures to evaluate shipbrokers’ services in order to select the best ones and, consequently, achieve the best deals. To this end, in this paper, we consider shipbrokers as financial service providers, and we formulate the evaluation and selection of shipbrokers’ services as a multi-criteria decision making (MCDM) procedure. The proposed methodology comprises a first normalization step, which adjusts the different scales and orientations of the criteria, and a second step that applies a mathematical model to evaluate the performance of the shipbrokers’ services involved in the assessment. The criteria along which the shipbrokers are assessed may refer to their size and reputation, the potential efficiency of their services, the terms and conditions imposed, the expenses (e.g., commission – brokerage), the expected time to accomplish a chartering or selling/buying task, etc.; in our modelling approach, these criteria may be assigned different levels of importance. The mathematical programming model performs a comparative assessment and estimates, for each shipbroker involved in the evaluation, a relative score that ranks the shipbrokers in terms of their potential performance. To illustrate the proposed methodology, we present a case study in which a shipping company evaluates and selects the most suitable among a number of sale and purchase (S&P) brokers.
Acknowledgment: This study is supported by the OptiShip project, implemented within the framework of the National Recovery and Resilience Plan “Greece 2.0” and funded by the European Union – NextGenerationEU programme.
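The two-step structure described above (normalization to reconcile scales and orientations, then a weighted comparative score) can be illustrated with a minimal sketch. The abstract does not specify the exact mathematical programming formulation, so the weighted-sum scoring below is one common MCDM simplification; the broker names, criteria, and weights are illustrative assumptions, not data from the paper.

```python
# Hypothetical two-step MCDM sketch: min-max normalization (benefit vs.
# cost criteria) followed by a weighted-sum relative score and ranking.

def normalize(values, benefit=True):
    """Min-max normalize to [0, 1]; cost criteria are inverted so that
    higher is always better after normalization."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    if benefit:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

def score_brokers(brokers, criteria, weights):
    """brokers: {name: [value per criterion]};
    criteria: [(name, is_benefit)]; weights: importance per criterion."""
    names = list(brokers)
    cols = list(zip(*(brokers[n] for n in names)))          # one column per criterion
    norm_cols = [normalize(list(col), benefit)
                 for col, (_, benefit) in zip(cols, criteria)]
    scores = {n: sum(w * col[i] for w, col in zip(weights, norm_cols))
              for i, n in enumerate(names)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative data: reputation is a benefit criterion; commission and
# time-to-close are cost criteria.
criteria = [("reputation", True), ("commission_pct", False), ("days_to_close", False)]
weights = [0.5, 0.3, 0.2]
brokers = {
    "Broker A": [8.0, 1.25, 30],
    "Broker B": [6.0, 1.00, 20],
    "Broker C": [9.0, 1.50, 45],
}
ranking = score_brokers(brokers, criteria, weights)
```

A full implementation along the lines of the paper would replace the weighted sum with the authors’ mathematical programming model, but the normalization-then-score pipeline is the same.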

Keywords: shipbrokers, multi-criteria decision making, mathematical programming, service-provider selection

Procedia PDF Downloads 88