Search results for: modeling methodology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8738

6758 An Efficient Hardware/Software Workflow for Multi-Cores Simulink Applications

Authors: Asma Rebaya, Kaouther Gasmi, Imen Amari, Salem Hasnaoui

Abstract:

Over the last years, applications such as telecommunications, signal processing, and digital communication with advanced features (multi-antenna, equalization, etc.) have witnessed a rapid evolution, accompanied by an increase in user requirements in terms of latency and computational power. To satisfy these requirements, hardware/software systems are a common solution, where the hardware consists of multiple cores and the software is represented by models of computation, for instance the synchronous data flow (SDF) graph. Moreover, most embedded system designers use Simulink for modeling. The issue is how to simplify the generation of C code, for a multi-core platform, from an application modeled in Simulink. To overcome this problem, we propose a workflow that automatically transforms the Simulink model into an SDF graph and provides an efficient schedule that optimizes the number of cores and minimizes latency. The workflow takes as input a Simulink application and a hardware architecture described in the IP-XACT language. Based on the synchronous and hierarchical behavior of both models, the Simulink block diagram is automatically transformed into an SDF graph. Once this process is successfully achieved, the scheduler calculates the optimal number of cores needed by minimizing the maximum density of the whole application. Then, a core is chosen to execute a specific graph task in a specific order and, subsequently, compatible C code is generated. To implement this proposal, we extend Preesm, a rapid prototyping tool, to take the Simulink model as input and to support the optimal schedule. Afterward, we compared our results to those of this tool using a simple illustrative application. The comparison shows that our results strictly dominate the Preesm results in terms of number of cores and latency: if Preesm needs m processors and latency L, our workflow needs fewer processors and a latency L' < L.
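
A minimal sketch of the two core computational steps, assuming a toy three-actor graph; the balance-equation solver below is standard SDF theory, while the greedy earliest-free-core scheduler is a simplified stand-in for the paper's density-minimizing scheduler, not the authors' implementation:

```python
# Hypothetical sketch: SDF repetition vector plus a greedy multi-core schedule.
from fractions import Fraction
from math import gcd

def repetition_vector(actors, edges):
    """edges: (src, dst, prod_rate, cons_rate). Solves the SDF balance
    equations q[src] * prod == q[dst] * cons by propagation."""
    q = {actors[0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for src, dst, p, c in edges:
            if src in q and dst not in q:
                q[dst] = q[src] * p / c; changed = True
            elif dst in q and src not in q:
                q[src] = q[dst] * c / p; changed = True
    lcm = 1
    for f in q.values():
        lcm = lcm * f.denominator // gcd(lcm, f.denominator)
    return {a: int(f * lcm) for a, f in q.items()}

def schedule(firings, durations, n_cores):
    """Greedy list scheduling: each firing goes to the earliest-free core
    (data-dependency ordering omitted for brevity)."""
    free = [0.0] * n_cores
    plan = []
    for task in firings:
        core = min(range(n_cores), key=lambda k: free[k])
        plan.append((task, core, free[core]))
        free[core] += durations[task]
    return plan, max(free)  # schedule and overall latency

edges = [("A", "B", 2, 1), ("B", "C", 1, 3)]
q = repetition_vector(["A", "B", "C"], edges)  # {'A': 3, 'B': 6, 'C': 2}
firings = [a for a, n in q.items() for _ in range(n)]
print(schedule(firings, {"A": 1.0, "B": 0.5, "C": 2.0}, n_cores=2))
```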

Keywords: hardware/software system, latency, modeling, multi-cores platform, scheduler, SDF graph, Simulink model, workflow

Procedia PDF Downloads 254
6757 Developing Research Involving Different Species: Opportunities and Empirical Foundations

Authors: A. V. Varfolomeeva, N. S. Tkachenko, A. G. Tishchenko

Abstract:

The problem of violation of internal validity in studies of psychological structures is considered. The role of researchers' epistemological attitudes in the planning of research within the methodology of the system-evolutionary approach is assessed. Alternative programs of psychological research involving representatives of different biological species are presented. Using the results of two series of studies as an example, variants of solving the problem are discussed.

Keywords: epistemological attitudes, experimental design, validity, psychological structure, learning

Procedia PDF Downloads 104
6756 Changes in Textural Properties of Zucchini Slices Under Effects of Partial Predrying and Deep-Fat-Frying

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

Changes in the textural properties of any food material during processing are significant for consumers' evaluation and directly affect their decisions. Thus, the textural properties of any food material should be considered after processing. In the present study, zucchini slices were partially predried to control and reduce the product's final oil content. A conventional oven was used for partial dehydration of the zucchini slices, and subsequent frying was carried out in an industrial fryer with a temperature controller. This study addressed the effect of this predrying process on the textural properties of fried zucchini slices. Texture profile analysis was performed; hardness, elasticity, chewiness, and cohesiveness were the texture parameters studied. Temperature and weight loss were the monitored parameters of the predrying process, whereas in frying, oil temperature and process time were controlled. Optimization of the two successive processes was done by response surface methodology, one of the most commonly used statistical process optimization tools. The models developed for each texture parameter predicted its value well as a function of the studied process conditions. Process optimization was performed according to target values for each property, determined from the directly fried zucchini slices that received the highest sensory evaluation score. The results indicated that the textural properties of predried and then fried zucchini slices can be controlled by well-established equations. This is significant for the fried food industry, where controlling sensorial properties, texture foremost among them, is crucial for guiding consumer perception. This project (113R015) has been supported by TUBITAK.
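
As an illustration of the response-surface step, a second-order polynomial can be fitted to texture measurements by least squares; the factors, data points, and response below are invented for demonstration and are not the study's data:

```python
# Illustrative response-surface fit (second-order polynomial) for one
# texture parameter; all numbers are made up for demonstration.
import numpy as np

# Factors: predrying temperature T (°C), frying time t (min); response: hardness.
T = np.array([60, 60, 80, 80, 70, 70, 70])
t = np.array([2, 4, 2, 4, 3, 3, 3])
hardness = np.array([12.1, 10.3, 9.8, 8.1, 9.9, 10.1, 10.0])

# Design matrix for y = b0 + b1*T + b2*t + b3*T*t + b4*T^2 + b5*t^2
X = np.column_stack([np.ones_like(T), T, t, T * t, T**2, t**2])
coeffs, *_ = np.linalg.lstsq(X, hardness, rcond=None)

def predict(T_new, t_new):
    return np.dot(coeffs, [1, T_new, t_new, T_new * t_new, T_new**2, t_new**2])

print(predict(75, 2.5))  # predicted hardness at an untested condition
```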

Keywords: optimization, response surface methodology, texture profile analysis, conventional oven, modelling

Procedia PDF Downloads 423
6755 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design

Authors: H. K. Esfahani, B. Datta

Abstract:

Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained using characterization of groundwater pollution sources, where the data measured at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in predicting source flux injection, hydro-geological and geo-chemical parameters, and the concentration field measurements. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, the available data are often sparse and limited in quantity. Therefore, this inverse problem of characterizing unknown groundwater pollution sources is often considered ill-posed, complex, and non-unique. Different methods have been utilized to identify pollution sources; the linked simulation-optimization approach is one effective method for obtaining acceptable results under uncertainties in complex real-life scenarios. In this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between the measured concentrations and the estimated pollutant concentrations at the observation locations. Concentration measurement data are very important for accurately estimating pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at the desired times and locations. Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized for preliminary identification of the source location, magnitude, and duration of source activity, and these results are used for monitoring network design. Further, feedback information from the monitoring network is used as input for sequential monitoring network design, to improve the identification of the unknown source characteristics. To design an effective monitoring network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions. However, the simulation of the transport processes becomes complex when the pollutants are chemically reactive. A three-dimensional transient flow and reactive contaminant transport process is considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for flow and transport processes with multiple chemically reactive species, and Adaptive Simulated Annealing (ASA) as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics.

Therefore, the aim of the present study is to develop a methodology to optimally design an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example, an abandoned mine site in Queensland, Australia.
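
The linked simulation-optimization loop can be sketched as follows; the one-dimensional plume function is a toy stand-in for HYDROGEOCHEM 5.0, the plain simulated-annealing loop a stand-in for ASA, and all numbers are illustrative:

```python
# Schematic linked simulation-optimization loop (toy stand-ins, not the
# actual HGCH/ASA codes used in the paper).
import math, random

wells = [20.0, 40.0, 60.0]                     # monitoring locations (m)

def simulate(src_x, flux):
    # toy concentration model: exponential decay away from the source
    return [flux * math.exp(-abs(x - src_x) / 15.0) for x in wells]

true_obs = simulate(33.0, 5.0)                 # synthetic "measured" data

def misfit(params):
    sim = simulate(*params)
    return sum((s - o) ** 2 for s, o in zip(sim, true_obs))

def anneal(x0, steps=20000, T0=1.0):
    x, fx = x0, misfit(x0)
    best, fbest = x, fx
    for k in range(steps):
        T = T0 * (1 - k / steps) + 1e-9
        cand = (x[0] + random.gauss(0, 2.0), x[1] + random.gauss(0, 0.5))
        fc = misfit(cand)
        if fc < fx or random.random() < math.exp((fx - fc) / T):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
    return best

print(anneal((50.0, 1.0)))   # recovered (source location, flux)
```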

Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site

Procedia PDF Downloads 219
6754 Wear Measurement of Thermomechanical Parameters of the Metal Carbide

Authors: Riad Harouz, Brahim Mahfoud

Abstract:

Ribs and rings on concrete-reinforcing bars are obtained by a hot-rolling process with metal-carbide finishing rolls, which have a rolling groove around their outside diameter. Our observation is that this groove exhibits geometrical wear by the end of its service cycle, which is specified in tonnage. In our study, we first determined experimental measurements of the wear in terms of the thermo-mechanical parameters (speed, load, and temperature) and the influence of these parameters on the wear. In the second stage, we developed a mathematical lifetime model useful for the prognosis of wear and its evolution.
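
A hedged sketch of the kind of lifetime model described, assuming a power-law dependence on speed and load and an Arrhenius-type temperature term; both the model form and the measurement values are illustrative, not the authors' fitted model:

```python
# Hypothetical lifetime model  L = C * V**a * F**b * exp(c/T), fitted in
# log space; the measurement values are illustrative only.
import numpy as np

V = np.array([1.0, 1.5, 2.0, 2.5])           # rolling speed
F = np.array([200.0, 250.0, 300.0, 350.0])   # load
T = np.array([900.0, 950.0, 1000.0, 1050.0]) # temperature (K)
L = np.array([5.0e4, 3.2e4, 2.1e4, 1.4e4])   # tonnage to wear limit

X = np.column_stack([np.ones_like(V), np.log(V), np.log(F), 1.0 / T])
beta, *_ = np.linalg.lstsq(X, np.log(L), rcond=None)
lnC, a, b, c = beta

def lifetime(v, f, t):
    """Predicted tonnage before the groove reaches its wear limit."""
    return np.exp(lnC) * v**a * f**b * np.exp(c / t)

print(lifetime(1.8, 280.0, 975.0))
```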

Keywords: lifetime, metal carbides, modeling, thermo-mechanical, wear

Procedia PDF Downloads 292
6753 Using Structural Equation Modeling to Measure the Impact of Young Adult-Dog Personality Characteristics on Dog Walking Behaviours during the COVID-19 Pandemic

Authors: Renata Roma, Christine Tardif-Williams

Abstract:

Engaging in daily walks with a dog (Canis lupus familiaris) during the COVID-19 pandemic may be linked to feelings of greater social connectedness and global self-worth and to lower stress, after controlling for mental health issues, lack of physical contact with others, and other stressors associated with the pandemic. Maintaining a routine of dog walking might therefore mitigate the effects of stressors experienced during the pandemic and promote well-being. However, many dog owners do not walk their dogs, for reasons related to both the owner's and the dog's personalities. The consistency of certain personality characteristics among dogs demonstrates that it is possible to accurately measure different dimensions of personality in both dogs and their human counterparts, and behavioural ratings (e.g., the dog personality questionnaire, DPQ) are reliable tools for assessing a dog's personality. Clarifying the relevance of personality factors in the context of young adult-dog relationships can shed light on interactional aspects that can potentially foster protective behaviours and promote well-being among young adults during the pandemic. This study examines if and how nine combinations of dog- and young adult-related personality characteristics (e.g., neuroticism-fearfulness) can amplify the influence of personality factors in the context of dog walking during the COVID-19 pandemic. Responses to a large-scale online survey of 440 young adults living with a dog in Canada (389 females, 47 males, 4 nonbinary; Mage = 20.7, SD = 2.13, range = 17-25) were analyzed using structural equation modeling (SEM). As extraversion, conscientiousness, and neuroticism, measured through the five-factor model (FFM) inventory, are related to maintaining a routine of physical activities, these dimensions were selected for this analysis. Following an approach successfully adopted in the field of dog-human interactions, the FFM was used as the organizing framework to measure and compare the human's and the dog's personality in the context of dog walking. The dog-related personality dimensions activity/excitability, responsiveness to training, and fearfulness, captured through the DPQ as correlated dimensions, were added to the analysis. Two questions were used to assess dog walking. The actor-partner interdependence model (APIM) was used to check whether the young adults' responses about their dogs were biased; no significant bias was observed. Activity/excitability and responsiveness to training in dogs were strongly associated with dog walking. For young adults, high scores in conscientiousness and extraversion predicted more walks with the dog; conversely, higher scores in neuroticism predicted less engagement in dog walking. For participants high in conscientiousness, the dog's responsiveness to training (standardized = 0.14, p = 0.02) and activity/excitability (standardized = 0.15, p < 0.01) levels moderated dog walking behaviours by promoting more daily walks. These results suggest that some combinations of young adult and dog personality characteristics are associated with greater synergy in the young adult-dog dyad, which might amplify the impact of personality factors on young adults' dog-walking routines. These results can inform programs designed to promote the mental and physical health of young adults during the COVID-19 pandemic by highlighting the impact of synergy and reciprocity in personality characteristics between young adults and dogs.
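
As a simplified stand-in for the SEM moderation test (not the model actually estimated in the study), an ordinary regression with an interaction term illustrates the idea; the data frame and column names are hypothetical:

```python
# Moderation via an interaction term: a significant conscientiousness x
# dog-responsiveness coefficient indicates the moderation effect described.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "walks_per_week":     [7, 3, 10, 5, 8, 2, 9, 4],
    "conscientiousness":  [4.2, 2.8, 4.8, 3.1, 4.0, 2.5, 4.5, 3.0],
    "dog_responsiveness": [3.9, 2.2, 4.6, 3.0, 4.1, 2.0, 4.4, 2.7],
})

# walks ~ C + R + C:R  (the '*' expands to main effects plus interaction)
model = smf.ols("walks_per_week ~ conscientiousness * dog_responsiveness",
                data=df).fit()
print(model.params)   # includes the interaction coefficient
```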

Keywords: COVID-19 pandemic, dog walking, personality, structural equation modeling, well-being

Procedia PDF Downloads 101
6752 Cost Overrun in Construction Projects

Authors: Hailu Kebede Bekele

Abstract:

Construction delays occur when project events take place later than expected due to causes related to the client, consultant, or contractor. Delay is the major cause of cost overrun, which leads to poor project efficiency. The difference between the final cost at completion and the originally estimated cost is known as cost overrun. Cost overruns are not simple issues that can be neglected; attention must be given to them to prevent the organization from being devastated by failure and from extended financial expenses. The reasons raised in different studies show that the problem may arise in construction projects due to errors in budgeting, unfavorable weather conditions, inefficient machinery, and wasteful spending. This study focuses on mega projects, whose pace can significantly change the cost overrun calculation. Fifteen mega projects were identified to study the problem of cost overrun on site. The contractor, the consultant, and the client are the principal stakeholders in these mega projects, and 20 people from each sector were selected to participate in the investigation of current mega construction projects. The main objective of the study is to prioritize the major causes of the cost overrun problem. The methodology employed is qualitative, mostly rating the causes of construction project cost overrun; interviews, open-ended and closed-ended questions, group discussions, and qualitative rating methods are the most suitable methodologies for studying construction project overruns. The results show that design mistakes, labor shortages, payment delays, old equipment, poor scheduling, weather conditions, lack of skilled labor, transportation, inflation, order variations, market price fluctuations, and people's attitudes and philosophies are the principal causes of cost overrun that degrade project performance. The institution should follow the scheduled activities to move the project forward positively over its life.

Keywords: cost overrun, delay, mega projects, design

Procedia PDF Downloads 50
6751 Comparison of Fundamental Frequency Model and PWM Based Model for UPFC

Authors: S. A. Al-Qallaf, S. A. Al-Mawsawi, A. Haider

Abstract:

Among all FACTS devices, the unified power flow controller (UPFC) is considered the most versatile, due to its capability to control all the transmission system parameters (impedance, voltage magnitude, and phase angle). With the growing interest in the UPFC, attention to developing mathematical models has increased, and several models have been introduced in the literature for different types of power system studies. This paper presents a comparison study between two dynamic models of the UPFC together with their proposed control strategies.

Keywords: FACTS, UPFC, dynamic modeling, PWM, fundamental frequency

Procedia PDF Downloads 332
6750 Applications of Digital Tools, Satellite Images and Geographic Information Systems in Data Collection of Greenhouses in Guatemala

Authors: Maria A. Castillo H., Andres R. Leandro, Jose F. Bienvenido B.

Abstract:

During the last 20 years, the globalization of economies, population growth, and the increase in the consumption of fresh agricultural products have generated greater demand for ornamentals, flowers, fresh fruits, and vegetables, mainly from tropical areas. This market situation has demanded greater competitiveness and control over production, with more efficient protected-agriculture technologies that provide greater productivity and make it possible to guarantee the required quality and quantity in a constant and sustainable way. Guatemala, located in the north of Central America, is one of the largest exporters of agricultural products in the region and exports fresh vegetables, flowers, fruits, ornamental plants, and foliage, most of which are grown in greenhouses. Although there are no official agricultural statistics on greenhouse production, several theses and congress reports have presented consistent estimates. A wide range of protection structures and roofing materials is used, from the most basic and simple, intended only for rain control, to highly technical and automated structures connected to remote sensors for crop monitoring and control. With this breadth of technological models, it is necessary to analyze georeferenced data on the cultivated area, the different existing models, and the covering materials, integrated with altitude, climate, and soil data. The georeferenced registration of production units, data collection with digital tools, the use of satellite images, and geographic information systems (GIS) provide reliable tools for producing more complete, agile, and dynamic information maps. This study details a proposed methodology for gathering georeferenced data on high protection structures (greenhouses) in Guatemala, structured in four phases: diagnosis of available information, definition of the geographic frame, selection of satellite images, and integration with a geographic information system (GIS). It especially takes into account the actual lack of complete data, a gap that the proposed methodology solves in order to obtain a reliable decision-making system. A summary of the results is presented for each phase, and finally, an evaluation with some improvements and tentative recommendations for further research is added. The main contribution of this study is to propose a methodology that reduces the gap in georeferenced data on protected agriculture in this specific area, where data are not generally available, and that provides data of better quality, traceability, accuracy, and certainty for strategic agricultural decision-making, applicable to other crops, production models, and similar or neighboring geographic areas.
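
The GIS-integration phase can be illustrated with a hypothetical geopandas spatial join of surveyed greenhouse points to administrative polygons; the file names and column names are placeholders, not the study's datasets:

```python
# Sketch of the GIS phase: join georeferenced greenhouse points to
# municipal polygons and aggregate covered area (placeholder data).
import geopandas as gpd

greenhouses = gpd.read_file("greenhouses_points.geojson")   # surveyed points
municipios = gpd.read_file("municipios.geojson")            # admin boundaries

# attach municipality attributes to each greenhouse point
joined = gpd.sjoin(greenhouses, municipios, how="left", predicate="within")

# cultivated area under cover per municipality
area_by_muni = joined.groupby("muni_name")["cover_area_m2"].sum()
print(area_by_muni.sort_values(ascending=False).head())
```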

Keywords: greenhouses, protected agriculture, GIS, Guatemala, satellite image, digital tools, precision agriculture

Procedia PDF Downloads 182
6749 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction

Authors: Joy Cao, Min Zhou

Abstract:

Purpose: Acute type A aortic dissection is well known for its extremely high mortality rate. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions, and their requirements for tedious pre-processing and demanding calibration procedures further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network, with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the biomarkers aortic blood pressure, WSS, and OSI, which are used to predict potential type A aortic dissection and avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model generated aortic blood pressure, WSS, and OSI results matching the expected patient health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.
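
A schematic of the physics-informed residual network idea, with a deliberately simplified architecture and a toy physics penalty; this is an assumption-laden sketch, not the authors' network, loss, or data:

```python
# Schematic physics-informed residual net: data loss on measured fields
# plus a toy "physics" penalty on spatial gradients (placeholder only).
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(width, width), nn.Tanh(),
                                 nn.Linear(width, width))
    def forward(self, x):
        return torch.tanh(x + self.net(x))   # skip connection

class PIResNet(nn.Module):
    def __init__(self, width=64, depth=4):
        super().__init__()
        self.inp = nn.Linear(4, width)       # (x, y, z, t) coordinates
        self.blocks = nn.Sequential(*[ResBlock(width) for _ in range(depth)])
        self.out = nn.Linear(width, 3)       # e.g. pressure, WSS, OSI proxies
    def forward(self, coords):
        return self.out(self.blocks(self.inp(coords)))

model = PIResNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

coords = torch.rand(256, 4, requires_grad=True)
measured = torch.rand(256, 3)                # stand-in for 4D-MRI-derived data

pred = model(coords)
data_loss = nn.functional.mse_loss(pred, measured)
# toy physics term: penalize the spatial gradient of the first output channel
grads = torch.autograd.grad(pred[:, 0].sum(), coords, create_graph=True)[0]
physics_loss = (grads[:, :3] ** 2).mean()
loss = data_loss + 0.1 * physics_loss
loss.backward(); opt.step()
```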

Keywords: type-A aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence

Procedia PDF Downloads 75
6748 Orbit Determination from Two Position Vectors Using Finite Difference Method

Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.

Abstract:

An unusual approach is developed to determine the orbits of satellites and space objects. Orbit determination is treated as a boundary value problem and solved using the finite difference method (FDM), with only the positions of the satellite/space object at two end times known and taken as boundary conditions. The finite difference technique is used to calculate the orbit between the end times. In this approach, the governing equation is the satellite's equation of motion with a perturbed acceleration. Using the finite difference method, the governing equations and boundary conditions are discretized, and the resulting system of algebraic equations is solved using the Tri-Diagonal Matrix Algorithm (TDMA) until convergence is achieved. The methodology was tested and evaluated using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for day of year (DOY) 125, 2023. Twelve two-hour arcs were considered, with only the positions at the end times of each arc taken as boundary conditions, and the algorithm was applied to all GPS satellites. Comparing the FDM results with the NGA precise orbits gives a maximum RSS error of 0.48 m for position and 0.43 mm/s for velocity. The algorithm was also applied to the IRNSS satellites for DOY 220, 2023, giving a maximum RSS error of 0.49 m for position and 0.28 mm/s for velocity. Next, a six-hour simulation was done for a highly elliptical orbit for DOY 63, 2023. The RSS of the difference is 0.92 m in position and 1.58 mm/s in velocity for orbital speeds above 5 km/s, whereas it is 0.13 m in position and 0.12 mm/s in velocity for orbital speeds below 5 km/s. The results show that the newly created method is reliable and accurate. Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
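
The tridiagonal solve at the heart of the discretized system is the standard Thomas algorithm (TDMA); a textbook implementation, shown on a small example system rather than the orbit equations themselves:

```python
# Thomas algorithm (TDMA) for tridiagonal systems, as produced by the
# finite-difference discretization; standard textbook form.
def tdma(a, b, c, d):
    """a: sub-diagonal (len n-1), b: diagonal (len n),
    c: super-diagonal (len n-1), d: right-hand side (len n)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check on a small system: [[4,1,0],[1,4,1],[0,1,4]] x = [6,12,6]
print(tdma([1.0, 1.0], [4.0, 4.0, 4.0], [1.0, 1.0], [6.0, 12.0, 6.0]))
```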

Keywords: finite difference method, grid generation, NavIC system, orbit perturbation

Procedia PDF Downloads 69
6747 Hydroinformatics of Smart Cities: Real-Time Water Quality Prediction Model Using a Hybrid Approach

Authors: Elisa Coraggio, Dawei Han, Weiru Liu, Theo Tryfonas

Abstract:

Water is one of the most important resources for human society. The world is currently undergoing a wave of urban growth, and pollution problems have a great impact. Monitoring water quality is a key task for the future of the environment and the human species. In recent times, researchers using Smart City technologies have been trying to mitigate the problems generated by population growth in urban areas. The availability of huge amounts of data collected by a pervasive urban IoT can increase the transparency of decision making. Several services have already been implemented in Smart Cities, and more and more services will be involved in the future. Water quality monitoring can successfully be implemented in the urban IoT: the combination of water quality sensors, cloud computing, smart city infrastructure, and IoT technology can lead to a bright future for environmental monitoring. In past decades, a lot of effort has been put into monitoring and predicting water quality using traditional approaches based on manual collection and laboratory-based analysis, which are slow and laborious. The present study proposes a methodology for implementing a water quality prediction model using artificial intelligence techniques and compares the results obtained with different algorithms. Furthermore, a 3D numerical model will be created using the software D-Water Quality, and simulation results will be used as a training dataset for the artificial intelligence algorithm. This study derives the methodology and demonstrates its implementation based on information and data collected at the floating harbour in the city of Bristol (UK). The city of Bristol is blessed with the Bristol-Is-Open infrastructure, which includes a Wi-Fi network and virtual machines, and it was named the UK's smartest city in 2017.
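
A minimal sketch of the hybrid idea, training one candidate algorithm (a random forest, standing in here for the compared AI techniques) on a synthetic dataset that plays the role of the D-Water Quality simulation output; the features and response are illustrative:

```python
# Data-driven model trained on stand-in "simulated" water-quality data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(5, 25, n),      # water temperature (°C)
    rng.uniform(0, 10, n),      # flow proxy
    rng.uniform(6.5, 8.5, n),   # pH
])
# stand-in simulated dissolved-oxygen response with noise
y = 14 - 0.3 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out simulated data:", model.score(X_te, y_te))
```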

Keywords: artificial intelligence, hydroinformatics, numerical modelling, smart cities, water quality

Procedia PDF Downloads 164
6746 Human Factors Integration of Chemical, Biological, Radiological and Nuclear Response: Systems and Technologies

Authors: Graham Hancox, Saydia Razak, Sue Hignett, Jo Barnes, Jyri Silmari, Florian Kading

Abstract:

In the event of a Chemical, Biological, Radiological and Nuclear (CBRN) incident, rapidly gaining situational awareness is of paramount importance, and advanced technologies have an important role to play in improving detection, identification, monitoring (DIM) and patient tracking. Understanding how these advanced technologies can fit into current response systems is essential to ensure they are optimally designed, usable, and meet end-users' needs. For this reason, Human Factors (Ergonomics) methods have been used within an EU Horizon 2020 project (TOXI-Triage), firstly, to describe (map) the hierarchical structure of a CBRN response with adapted Accident Map (AcciMap) methodology. Secondly, Hierarchical Task Analysis (HTA) was used to describe and review the sequence of steps (sub-tasks) in a CBRN scenario response as a task system. HTA methodology was then used to map one advanced technology, 'Tag and Trace', which tags an element (people, samples, and equipment) in the Hot Zone with a Near Field Communication (NFC) chip to allow tracing (monitoring) of, for example, casualty progress through the response. This HTA mapping of the Tag and Trace system showed how the provider envisaged the technology being used, allowing for review of its fit with current CBRN response systems. These methodologies proved very effective in promoting and supporting a dialogue between end users and technology providers. The Human Factors methods gave clear diagrammatic (visual) representations of how providers see their technology being used and how end users would actually use it in the field, allowing for a more user-centered design process. For CBRN events, usability is critical, as sub-optimally designed technology could add to responders' workload in what is already a chaotic, ambiguous, and safety-critical environment.

Keywords: AcciMap, CBRN, ergonomics, hierarchical task analysis, human factors

Procedia PDF Downloads 197
6745 The Touristic Development of the Archaeological and Heritage Areas in Alexandria City, Egypt

Authors: Salma I. Dwidar, Amal A. Abdelsattar

Abstract:

Alexandria is one of the great cities of the world. It has hosted different civilizations throughout the ages thanks to its special geographical location and climate, which left many archaeological areas of great heritage (Ptolemaic, Greek, Roman, especially sunken monuments, Coptic, Islamic, and finally modern). Alexandria also contains areas with different patterns of urban planning, both Hellenistic and compact, which gives the city its diversity in planning. Despite the magnitude of this city, which contains all the elements of tourism, it has not been included in the tourism map of Egypt properly compared with similar Egyptian cities. This paper discusses the importance of the heritage areas in Alexandria and the relationship between heritage areas and modern buildings, and it highlights the absence of a methodology for treating heritage areas as tourist areas. The paper also aims to develop multiple tourist routes to visit the archaeological areas and other sights of significance in Alexandria. The research methodology is divided into two main frameworks. The first is a historical study of the urban development of Alexandria and the most important remaining monuments throughout the ages, together with an analytical study of the sunken monuments and their importance for increasing tourism, as well as a study of the importance of the Library of Alexandria and its effect on the international profile of the city. The second framework focuses on the proposal of tourism routes covering the heritage areas, archaeological monuments, sunken monuments, and sights of Alexandria. The study concludes with three proposed tourism routes. The first and longest route passes all the famous monuments of the city as well as its modern sights; the second passes through the heritage areas, the sunken monuments, and the Library of Alexandria; the third includes the sunken monuments and the Library of Alexandria. These three routes will support the touristic development of the city, leading to economic growth for the city and the country.

Keywords: archeological buildings, heritage buildings, heritage tourism, planning of Islamic cities

Procedia PDF Downloads 124
6744 Study of Objectivity, Reliability and Validity of Pedagogical Diagnostic Parameters Introduced in the Framework of a Specific Research

Authors: Emiliya Tsankova, Genoveva Zlateva, Violeta Kostadinova

Abstract:

The challenges modern education faces undoubtedly require reforms and innovations aimed at the reconceptualization of existing educational strategies; the introduction of new concepts and novel techniques and technologies related to recasting the aims of education; and the remodeling of the content and methodology of education, which would guarantee the alignment of our education with basic European values. Aim: The aim of the current research is the development of a didactic technology for assessing the applicability and efficacy of game techniques in pedagogic practice, calibrated to specific content and the age specificity of learners, as well as for evaluating the efficacy of such approaches in facilitating the acquisition of biological knowledge at a higher theoretical level. Results: In this research, we examine the objectivity, reliability, and validity of two newly introduced diagnostic parameters for assessing the durability of acquired knowledge. A pedagogic experiment was carried out to verify the hypothesis that the introduction of game techniques in biological education leads to an increase in the quantity, quality, and durability of the knowledge acquired by students. To monitor the effect of the game-based pedagogical technique on the durability of acquired knowledge, a test-based examination was administered to students from a control group (CG) and an experimental group (EG) on the same content after a six-month period. The analysis is based on: 1. A study of the statistical significance of the differences between the tests for the CG and the EG applied after the six-month period; this, however, is not indicative of the presence or absence of a marked effect of the applied pedagogic technique when the entry levels of the two groups differ. 2. For a more reliable comparison, independent of the entry level of each group, another parameter, an 'indicator of the efficacy of game techniques for the durability of knowledge', which was used for the assessment of achievement results and the durability of this methodology of education. Monitoring the studied parameters in their dynamic unfolding across different age groups of learners unquestionably reveals a positive effect of the introduction of game techniques in education with respect to the durability and permanence of acquired knowledge. Methods: In the current research, the following battery of research and diagnostic methods and techniques was employed: theoretical analysis and synthesis; an actual pedagogical experiment; a questionnaire; didactic testing; and mathematical and statistical methods. The data obtained were used for the qualitative and quantitative analysis of the results, which reflect the efficacy of the applied methodology. Conclusion: The didactic model of the parameters researched in the framework of this study of pedagogic diagnostics is based on a general, interdisciplinary approach. Enhanced durability of the acquired knowledge proves the transition of that knowledge from short-term into long-term memory in pupils and students, which justifies the conclusion that didactic play has beneficial effects on the betterment of learners' cognitive skills. The innovations in teaching enhance motivation, creativity, and independent cognitive activity in the process of acquiring the material taught. The innovative methods allow for untraditional means of assessing the level of knowledge acquisition. This makes possible the timely discovery of knowledge gaps and the introduction of compensatory techniques, which in turn leads to deeper and more durable acquisition of knowledge.

Keywords: objectivity, reliability and validity of pedagogical diagnostic parameters introduced in the framework of a specific research

Procedia PDF Downloads 380
6743 Probabilistic Building Life-Cycle Planning as a Strategy for Sustainability

Authors: Rui Calejo Rodrigues

Abstract:

Building refurbishment and maintenance is a major area of knowledge that is ultimately left to user/occupant criteria. Optimizing the service life of a building needs a special background to be assessed, as it is one of those concepts that requires proficiency to be implemented. ISO 15686-2, Buildings and constructed assets - Service life planning - Part 2: Service life prediction procedures, states a factorial method based on deterministic data for building components' life spans. A deterministic approach has major consequences because users/occupants cannot perceive the end of a component's life span and simply act on deterministic periods, so costly and resource-consuming solutions fail to meet global sustainability targets. If the estimated two billion conventional buildings in the world were submitted to a probabilistic method for service life planning rather than a deterministic one, immense resource savings would result. Since 1989, the research team, today the CEES (Center for Building in Service Studies), has developed a methodology based on the Monte Carlo method for a probabilistic approach to the life span of building components, cost, and service life care time spans. The research question concerns the importance of a probabilistic approach to building life planning compared with deterministic methods. The mathematical model developed for the probabilistic life-span approach of buildings is presented, and experimental data are obtained and compared with deterministic data. Assuming that a building's life cycle depends largely on component replacement, this methodology allows conclusions on the global impact of fixed replacement strategies such as those resulting from the use of deterministic models. Major conclusions based on estimates for conventional buildings are presented and evaluated from a sustainability perspective.
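
A minimal Monte Carlo sketch of the probabilistic idea: sampling a component's lifespan from a distribution instead of using a fixed value, and comparing the resulting replacement counts with a deterministic plan. The distribution, median, and horizon are illustrative, not the CEES model:

```python
# Monte Carlo service-life sketch for one component over a building horizon.
import numpy as np

rng = np.random.default_rng(42)
horizon = 60                       # years of building service
n_runs = 20_000

# lognormal lifespan (median ~15 y), e.g. a roof membrane (assumed values)
mu, sigma = np.log(15), 0.3

def replacements():
    t, count = 0.0, 0
    while True:
        t += rng.lognormal(mu, sigma)   # sample one service life
        if t > horizon:
            return count
        count += 1

counts = np.array([replacements() for _ in range(n_runs)])
deterministic = horizon // 15          # fixed-period plan: 4 replacements
print("mean replacements:", counts.mean())
print("deterministic plan:", deterministic)
print("P(fewer than deterministic):", (counts < deterministic).mean())
```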

Keywords: building components life cycle, building maintenance, building sustainability, Montecarlo Simulation

Procedia PDF Downloads 193
6742 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution

Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit

Abstract:

Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous multi-component complex systems in its processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and old processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools that include specific thermodynamic models and intends to develop computational methodologies, such as molecular dynamics simulations, to gain insight into the interactions in such complex media, especially hydrogen-bonding interactions. The issue at hand concerns aqueous mixtures of polyols with high dry-matter content. The polyols mannitol and sorbitol are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, the solubility of sorbitol in water is 2.5 kg/kg of water at 25°C, while that of mannitol is 0.25 kg/kg of water. Therefore, predicting liquid-solid equilibrium properties in this case requires sophisticated solution models that cannot be based solely on chemical group contributions, given that the chemical constitutive groups of mannitol and sorbitol are the same. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution. Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes within aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.
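
The hydrogen-bond lifetime comparison rests on an autocorrelation of bond existence; a generic post-processing sketch is shown below, assuming a 0/1 bond-existence matrix has already been extracted from the trajectory (the bond definition and trajectory handling are not shown and are not the authors' pipeline):

```python
# Hydrogen-bond autocorrelation C(t) from a bond-existence matrix h
# (n_bonds x n_frames of 0/1 values); generic post-processing sketch.
import numpy as np

def hb_autocorrelation(h, max_lag):
    """C(t) = <h(0) h(t)> / <h(0) h(0)>, averaged over time origins.
    A slower decay of C(t) indicates longer-lived hydrogen bonds."""
    n_bonds, n_frames = h.shape
    c = np.empty(max_lag)
    norm = (h * h).mean()
    for lag in range(max_lag):
        c[lag] = (h[:, : n_frames - lag] * h[:, lag:]).mean() / norm
    return c

# toy input: bonds that flicker on and off at random
rng = np.random.default_rng(1)
h = (rng.random((50, 5000)) < 0.4).astype(float)
C = hb_autocorrelation(h, 200)
print(C[:5])
```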

Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics

Procedia PDF Downloads 23
6741 Modeling and Analyzing the WAP Class 2 Wireless Transaction Protocol Using Event-B

Authors: Rajaa Filali, Mohamed Bouhdadi

Abstract:

This paper presents an incremental formal development of the Wireless Transaction Protocol (WTP) in Event-B. WTP is part of the Wireless Application Protocol (WAP) architecture and provides a reliable request-response service. To model and verify the protocol, we use the formal technique Event-B, which provides an accessible and rigorous development method. This interaction between modelling and proving reduces complexity and helps to eliminate misunderstandings, inconsistencies, and specification gaps. As a result, the verification of WTP allows us to find some deficiencies in the current specification.

Keywords: event-B, wireless transaction protocol, proof obligation, refinement, Rodin, ProB

Procedia PDF Downloads 303
6740 Using the Ecological Analysis Method to Justify the Environmental Feasibility of Biohydrogen Production from Cassava Wastewater Biogas

Authors: Jonni Guiller Madeira, Angel Sanchez Delgado, Ronney Mancebo Boloy

Abstract:

The use of bioenergy has, in recent years, become a good alternative for reducing the emission of polluting gases. Several Brazilian and foreign companies are conducting studies related to waste management as an essential tool in the search for energy efficiency, taking the ecological aspect into consideration as well. Brazil is one of the largest cassava producers in the world, and cassava sub-products are the food base of millions of Brazilians. The repertoire of results on the ecological impact of producing biohydrogen, by steam reforming, from cassava wastewater biogas is very limited because, in general, this commodity is more common in underdeveloped countries. This hydrogen, produced from cassava wastewater, appears as an alternative to fossil fuels since it comes from a low-cost carbon source. This paper evaluates the environmental impact of biohydrogen production, by steam reforming, from cassava wastewater biogas. The ecological efficiency methodology developed by Cardu and Baica was used as a benchmark; it mainly assesses emissions of equivalent carbon dioxide (CO₂, SOₓ, CH₄, and particulate matter). As a result, environmental parameters such as equivalent carbon dioxide emissions, the pollution indicator, and ecological efficiency are evaluated, as they are important to energy production. The average values of the environmental parameters among different biogas compositions (different concentrations of methane) were calculated: the average pollution indicator was 10.11 kg CO₂e/kg H₂, with an average ecological efficiency of 93.37%. In conclusion, bioenergy production using biohydrogen from a cassava wastewater treatment plant is a good option from the environmental feasibility point of view. This is justified by determining the environmental parameters and comparing them with those of hydrogen production via steam reforming from different types of fuels.
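
A back-of-envelope sketch of the calculation chain; the CO₂-equivalent weighting and the efficiency expression follow the form commonly attributed to Cardu and Baica, and the emission values, heating value, and conversion efficiency below are assumptions chosen only to land near the paper's averaged figures:

```python
# Back-of-envelope ecological-efficiency sketch (all inputs assumed).
import math

# assumed stack emissions per kg of H2 produced (kg/kg H2)
co2, sox, nox, pm = 9.0, 0.004, 0.01, 0.003
co2e = co2 + 80 * sox + 50 * nox + 67 * pm   # equivalent CO2 weighting
print("pollution indicator:", round(co2e, 2), "kg CO2e / kg H2")

lhv_h2 = 120.0               # MJ/kg, lower heating value of hydrogen
pi_g = co2e / lhv_h2         # indicator per unit of fuel energy (kg/MJ)
eta = 0.57                   # assumed overall conversion efficiency

# commonly cited Cardu-Baica form of the ecological efficiency
eps = math.sqrt(0.204 * eta / (eta + pi_g) * math.log(135 - pi_g))
print(f"ecological efficiency: {eps:.2%}")   # ~93% for these inputs
```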

Keywords: biohydrogen, ecological efficiency, cassava, pollution indicator

Procedia PDF Downloads 186
6739 Fracture and Fatigue Crack Growth Analysis and Modeling

Authors: Volkmar Nolting

Abstract:

Fatigue crack growth prediction has become an important topic in both engineering and non-destructive evaluation. Crack propagation is influenced by the mechanical properties of the material and is conveniently modelled by the Paris-Erdogan equation. The critical crack size and the total number of load cycles are calculated. From a Larson-Miller plot, the maximum operational temperature for a given stress level can be determined so that failure does not occur within a given time interval t. The study is used to determine a reasonable inspection cycle, thus enhancing operational safety and reducing costs.
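
A worked sketch of the Paris-Erdogan integration, da/dN = C(ΔK)^m, from an initial crack size to the critical size implied by the fracture toughness; the material constants are roughly steel-like and purely illustrative, not from the paper:

```python
# Illustrative Paris-Erdogan integration; constants are NOT from the paper.
import numpy as np

C, m = 1e-11, 3.0       # Paris constants, da/dN = C * dK**m (dK in MPa*sqrt(m))
d_sigma = 100.0         # applied stress range (MPa)
Y = 1.12                # geometry factor
K_Ic = 60.0             # fracture toughness (MPa*sqrt(m))
a0 = 0.001              # initial crack size (m)

ac = (K_Ic / (Y * d_sigma)) ** 2 / np.pi       # critical size where K = K_Ic
a = np.linspace(a0, ac, 100_000)
dK = Y * d_sigma * np.sqrt(np.pi * a)          # stress-intensity range at size a
f = 1.0 / (C * dK ** m)                        # dN/da
N = np.sum((f[:-1] + f[1:]) / 2 * np.diff(a))  # trapezoidal N = ∫ da/(C dK^m)
print(f"critical crack size: {ac * 1e3:.1f} mm, cycles to failure: {N:,.0f}")
```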

Keywords: fracture mechanics, crack growth prediction, lifetime of a component, structural health monitoring

Procedia PDF Downloads 24
6738 Changing Misconceptions in Heat Transfer: A Problem Based Learning Approach for Engineering Students

Authors: Paola Utreras, Yazmina Olmos, Loreto Sanhueza

Abstract:

This work studies and incorporates problem-based learning (PBL) for engineering students through the analysis of several thermal images of dwellings located at different geographical points of the Región de los Ríos, Chile. The students analyze how heat is transferred in and out of the houses and how heat transfer relates to the climatic conditions that affect each zone. As a result of this activity, students acquire significant learning in the unit on heat and temperature and manage to reverse previous conceptual errors related to energy, temperature, and heat. In addition, students are able to generate prototype solutions for increasing thermal efficiency using low-cost materials. Students publish their results in a report using scientific writing standards and at a science fair open to the entire university community. The methodology used to measure previous conceptual errors was to apply diagnostic tests, with everyday questions involving the concepts of heat, temperature, work, and energy, before the unit. After the unit, the same evaluation is applied so that the students themselves can see the evolution in their construction of knowledge. We found that in the initial test 90% of the students showed deficiencies in the concepts mentioned above, while in the subsequent test 47% showed deficiencies; these percentages differ between students taking the course for the first time and those who had previously taken it in a traditional format. The methodology used to measure significant learning was to compare results in subsequent thermodynamics courses between students who received problem-based learning and those who received traditional training. We observed that learning becomes meaningful when applied to students' daily lives, promoting the internalization of knowledge and understanding through critical thinking.

Keywords: engineering students, heat flow, problem-based learning, thermal images

Procedia PDF Downloads 217
6737 Literature Review and Approach for the Use of Digital Factory Models in an Augmented Reality Application for Decision Making in Restructuring Processes

Authors: Rene Hellmuth, Jorg Frohnmayer

Abstract:

The requirements of factory planning and of the buildings concerned have changed in recent years. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring is gaining importance as a means of maintaining the competitiveness of a factory. Even today, the methods and process models used in factory planning are predominantly based on the classical planning principles of Schmigalla, Aggteleky, and Kettner, which, however, are not specifically designed for reorganization. In addition, they are designed for a largely static environmental situation, a manageable planning complexity, and medium- to long-term planning cycles with low variability of the factory. Existing approaches already regard factory planning as a continuous process that makes it possible to react quickly to adaptation requirements. However, digital factory models are not yet used as a source of information for building data, and approaches that consider building information modeling (BIM) or digital factory models in general either do not address factory conversions or do not yet go beyond a concept. This deficit can be further substantiated: a method for factory conversion planning using a current digital building model is lacking. A corresponding approach must take into account both the existing approaches to factory planning and the use of digital factory models in practice. A literature review is conducted first, examining approaches to classic factory planning and to conversion planning and investigating which approaches already incorporate digital factory models. In the second step, an approach is presented for using digital factory models based on building information modeling as the basis for an augmented reality tablet application. This application is suitable for construction sites and provides information on the costs and time required for conversion variants, thus supporting fast decision making. In summary, the paper provides an overview of existing factory planning approaches and critically examines the use of digital tools. Based on this preliminary work, an approach is presented that suggests the sensible use of digital factory models for decision support for conversion variants of the factory building. The augmented reality application is designed to summarize the most important information for decision-makers during a reconstruction process.

Keywords: augmented reality, digital factory model, factory planning, restructuring

Procedia PDF Downloads 126
6736 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study with fully dynamic rupture modeling, and a set of spontaneous source models was then generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations of the modeled rupture area S, average slip Dave, and slip asperity area Sa versus seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models; their peak ground velocities (PGV) agree well with GMPE values, and we obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed the parameters critical for ground motion simulation, i.e., the distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on the outer edge of large slip areas; (2) ruptures tend to initiate in small-Dc areas; and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.
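
For reference, one standard form of the rate-and-state friction law with the Dieterich aging law is shown below; the paper's exact heterogeneous formulation may differ:

```latex
\tau = \sigma_n \left[\mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c}\right],
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}
```

Here V is slip rate, θ the state variable, and Dc the friction weakening (characteristic slip) distance whose spatial variation is the initial heterogeneity parameter discussed above.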

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 134
6735 Organizational Innovativeness: Motivation in Employee’s Innovative Work Behaviors

Authors: P. T. Ngan

Abstract:

Purpose: The study aims to answer the question of which motivational conditions have the greatest influence on employees' innovative work behaviors, by investigating the case of SATAMANKULMA/Anya Productions Ky in Kuopio, Finland. Design/methodology: The main methodology was a qualitative single case study; the analysis was conducted with an adapted thematic content analysis procedure on empirical material collected through interviews, observation, and document review. Findings: The paper highlights the significance of combining relevant synergistic extrinsic and intrinsic motivations in the organizational motivation system. The findings show that intrinsic drives are essential for the initiation phases of innovative work behaviors, while extrinsic drives are more important for the implementation phases. The study also offers the IDEA motivation model (interpersonal relationships and networks, development opportunities, economic constituents, and application supports) as a tool to optimize business performance. Practical limitations/implications: The research was conducted only from the perspective of SATAMANKULMA/Anya Productions Ky, with five interviews, a few observations, and several reviewed documents. Further research is required to include other stakeholders, such as customers and partner companies. The study also does not offer statistical validity of the findings; an extensive case study or a qualitative multiple case study is suggested to compare the findings and establish whether the IDEA model is relevant to other types of firms. Originality/value: Neither the innovation nor the human resource management field provides a detailed overview of the specific motivational conditions that might be used to stimulate the innovative work behaviors of individual employees. This paper fills that void.

Keywords: employee innovative work behaviors, extrinsic motivation, intrinsic motivation, organizational innovativeness

Procedia PDF Downloads 253
6734 Total Synthesis of Natural Cyclic Depsipeptides by Convergent SPPS and Macrolactonization Strategy for Anti-TB Activity

Authors: Katharigatta N. Venugopala, Fernando Albericio, Bander E. Al-Dhubiab, T. Govender

Abstract:

Recent years have witnessed a renaissance in the field of peptides obtained from various natural sources, such as bacteria, fungi, plants, seaweeds, vertebrates, and invertebrates, which have been reported to have various pharmacological properties such as anti-TB, anticancer, antimalarial, anti-inflammatory, anti-HIV, antibacterial, antifungal, and antidiabetic activities. In view of the pharmacological significance of natural peptides, the research efforts of many scientific groups and pharmaceutical companies have consequently focused on exploring the possibility of developing their potential analogues as therapeutic agents. Solid-phase and solution-phase peptide synthesis are the two methodologies currently available for the synthesis of natural or synthetic linear or cyclic depsipeptides. From a synthetic point of view, the solid-phase methodology has clear advantages over the solution-phase methodology in terms of simplicity, compound purity, and the speed with which peptides can be synthesized. In the present study, the total synthesis, purification, and structural elucidation of analogues of natural anti-TB cyclic depsipeptides such as depsidomycin, the massetolides, and viscosin was attempted by the solid-phase method using standard Fmoc protocols, followed by off-resin cyclization in solution. In the case of depsidomycin, synthesis of the linear peptide entirely on solid phase could not be achieved because of two turn-inducing amino acids in the peptide sequence, but total synthesis was achieved by convergent solid-phase peptide synthesis followed by cyclization in solution. The title compounds were obtained in good yields and characterized by NMR and HRMS. Anti-TB testing revealed that the most potent title compound exhibited promising activity at 4 µg/mL against the H37Rv strain and 16 µg/mL against MDR strains of tuberculosis.

Keywords: total synthesis, cyclic depsipeptides, anti-TB activity, tuberculosis

Procedia PDF Downloads 609
6733 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia

Authors: Zeinu Ahmed Rabba, Derek D Stretch

Abstract:

Remote sensing contributes valuable information to streamflow estimation. Usually, streamflow is measured directly at ground-based hydrological monitoring stations. However, in many developing countries like Ethiopia, ground-based hydrological monitoring networks are either sparse or nonexistent, which limits water resources management and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means of acquiring this information. This paper discusses the application of remotely sensed rainfall data for streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of data from two satellite-based precipitation products (SBPPs), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows. The results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical measures: bias, R², Nash-Sutcliffe efficiency (NS), and RMSE. The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared with the bias-unadjusted ones. The SBPPs without bias adjustment tend to overestimate (high bias and high RMSE) extreme precipitation events and the corresponding simulated streamflow, particularly during the wet months (June-September), and to underestimate streamflow over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using SBPPs after bias adjustment. However, the overall results demonstrate that streamflow simulated using gauged rainfall remains superior to that obtained from remotely sensed rainfall products, including bias-adjusted ones.
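The four evaluation statistics named above are standard in hydrological model verification. As a minimal illustration (not taken from the paper), the following Python sketch computes them for a pair of observed and simulated daily flow series; the example arrays and the sign convention for bias (simulated minus observed) are assumptions made here for demonstration.

```python
# Minimal sketch of the four evaluation statistics used to compare
# simulated and observed streamflow; the example arrays are hypothetical.
import numpy as np

def bias(obs, sim):
    # Mean error; positive values indicate overestimation by the model.
    return np.mean(sim - obs)

def r_squared(obs, sim):
    # Squared Pearson correlation between observations and simulations.
    r = np.corrcoef(obs, sim)[0, 1]
    return r ** 2

def nash_sutcliffe(obs, sim):
    # NS = 1 is a perfect fit; NS <= 0 means the model is no better
    # than simply predicting the mean of the observations.
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def rmse(obs, sim):
    return np.sqrt(np.mean((sim - obs) ** 2))

# Hypothetical daily flows (m^3/s), for illustration only.
observed = np.array([12.0, 15.5, 20.1, 35.7, 28.3, 18.9])
simulated = np.array([13.1, 14.8, 22.4, 39.0, 26.5, 17.2])

for name, fn in [("Bias", bias), ("R2", r_squared),
                 ("NS", nash_sutcliffe), ("RMSE", rmse)]:
    print(f"{name}: {fn(observed, simulated):.3f}")
```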

Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), WaterBase

Procedia PDF Downloads 265
6732 Selection of Social and Sustainability Criteria for Public Investment Project Evaluation in Developing Countries

Authors: Pintip Vajarothai, Saad Al-Jibouri, Johannes I. M. Halman

Abstract:

Public investment projects are primarily aimed at implementing development strategies that increase national economies of scale and bring overall improvement to a country. However, experience shows that public projects, particularly in developing countries, struggle or fail to fulfill the immediate needs of local communities. In many cases, this is because projects are selected in a subjective manner, and a major part of the problem lies in the evaluation criteria and techniques used. The evaluation process is often based on broad strategic economic effects rather than on the real benefits of projects to society, or on the varying needs at different levels (e.g., national, regional, local) and under different conditions (e.g., long-term and short-term requirements). In this paper, an extensive literature review of the types of criteria used in the past by various researchers in the project evaluation and selection process is carried out, and the effectiveness of these criteria and techniques is discussed. The paper proposes alternative social and project sustainability criteria intended to improve the conditions of local people, and in particular of the disadvantaged groups within communities. Furthermore, it puts forward a way of modelling the interaction between the selected criteria and the achievement of the social goals of the affected community groups. The described work is part of developing a broader decision model for public investment project selection by integrating various aspects and techniques into a practical methodology. The paper uses Thailand as a case to review which evaluation techniques are currently used and how, and to show how the project evaluation and selection process related to social and sustainability issues in the country can be improved. The paper also uses an example to demonstrate how to test the feasibility of the various criteria and how to model the interaction between projects and communities, as sketched below. The proposed model could be applied in other developing and developed countries to improve the effectiveness of project evaluation and selection in the long run.
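The paper does not reproduce its decision model here, but the kind of criteria-to-goals interaction it describes can be illustrated with a simple weighted-sum scoring of candidate projects against social and sustainability criteria. The criteria names, weights, projects, and scores in the following Python sketch are entirely hypothetical.

```python
# Illustrative sketch: scoring candidate public projects against social
# and sustainability criteria with a weighted sum. All names, weights,
# and scores are hypothetical, not taken from the paper.
criteria_weights = {
    "benefit_to_disadvantaged_groups": 0.35,
    "local_employment": 0.25,
    "environmental_sustainability": 0.25,
    "alignment_with_local_needs": 0.15,
}

# Each project is scored 0-10 on every criterion (hypothetical values).
projects = {
    "rural_road_upgrade": {"benefit_to_disadvantaged_groups": 8,
                           "local_employment": 7,
                           "environmental_sustainability": 5,
                           "alignment_with_local_needs": 9},
    "urban_convention_centre": {"benefit_to_disadvantaged_groups": 3,
                                "local_employment": 6,
                                "environmental_sustainability": 4,
                                "alignment_with_local_needs": 4},
}

# Higher total means the project better serves the community goals.
for name, scores in projects.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{name}: {total:.2f}")
```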

Keywords: evaluation criteria, developing countries, public investment, project selection methodology

Procedia PDF Downloads 261
6731 Standardization of a Methodology for Quantification of Antimicrobials Used for the Treatment of Multi-Resistant Bacteria Using Two Types of Biosensors and Production of Anti-Antimicrobial Antibodies

Authors: Garzon V., Bustos R., Salvador J. P., Marco M. P., Pinacho D. G.

Abstract:

Bacterial resistance to antimicrobial treatment has increased significantly in recent years, making it a public health problem. Large numbers of bacteria are resistant to all or nearly all known antimicrobials, creating the need to develop new types of antimicrobials or to use “last line” antimicrobial therapies for the treatment of multi-resistant bacteria. Some of the chemical groups of antimicrobials most used in the clinic for treating infections caused by multiresistant bacteria are glycopeptides (vancomycin), polymyxins (colistin), lipopeptides (daptomycin), and carbapenems (meropenem), all molecules that require therapeutic drug monitoring (TDM). For this reason, a nanobiotechnology-based methodology using optical and electrochemical biosensors is being developed that allows the plasma levels of antimicrobials such as glycopeptides, polymyxins, lipopeptides, and carbapenems to be evaluated quickly, at low cost, and with high specificity and sensitivity, and that can be implemented in the future in public and private hospitals. The project was divided into four steps: i) design of specific anti-drug antibodies, produced in rabbits for each type of antimicrobial, with the results evaluated by immunoassay (ELISA); ii) quantification of the reference antimicrobials with an electrochemical biosensor offering high sensitivity and selectivity; iii) comparison of the antimicrobial quantification with an optical biosensor; and iv) validation of the biosensor methodologies by immunoassay. The results show that the optical and electrochemical biosensors can quantify antibiotics at average concentrations of 1,000 ng/mL, with antibodies that are sensitive and specific for each antibiotic molecule; these results were compared with immunoassays and HPLC. The work thus contributes to the safe use of these drugs, which are common in clinical practice, as well as of new antimicrobial drugs.
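Quantification by immunoassay typically relies on a sigmoidal calibration curve fitted to known standards. As a hedged illustration of that general step (not the study’s actual procedure), the Python sketch below fits a four-parameter logistic (4PL) curve to hypothetical standards around the reported 1,000 ng/mL range and inverts it to estimate an unknown sample’s concentration.

```python
# Sketch of a four-parameter logistic (4PL) calibration fit, the curve
# shape commonly used for competitive immunoassays; the data points and
# initial parameter guesses are hypothetical, not from the study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose, d: response at infinite dose,
    # c: inflection point (IC50), b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standards: concentrations (ng/mL) and instrument signal.
conc = np.array([10.0, 50.0, 100.0, 500.0, 1000.0, 5000.0])
signal = np.array([1.95, 1.80, 1.55, 0.95, 0.60, 0.25])

params, _ = curve_fit(four_pl, conc, signal, p0=[2.0, 1.0, 500.0, 0.1])
a, b, c, d = params

def concentration_from_signal(y):
    # Invert the fitted 4PL curve to estimate an unknown sample.
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(f"Estimated IC50: {c:.1f} ng/mL")
print(f"Sample at signal 1.0 -> {concentration_from_signal(1.0):.1f} ng/mL")
```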

Keywords: antibiotics, electrochemical biosensor, optical biosensor, therapeutic drug monitoring

Procedia PDF Downloads 65
6730 Black-Hole Dimension: A Distinct Methodology of Understanding Time, Space and Data in Architecture

Authors: Alp Arda

Abstract:

Inspired by Nolan's ‘Interstellar’, this paper delves into speculative architecture, asking, ‘What if an architect could traverse time to study a city?’ It unveils the ‘Black-Hole Dimension,’ a groundbreaking concept that redefines urban identities beyond traditional boundaries. Moving past linear time narratives, this approach draws from the gravitational dynamics of black holes to enrich our understanding of urban and architectural progress. By envisioning cities and structures as influenced by black hole-like forces, it enables an in-depth examination of their evolution through time and space. The Black-Hole Dimension promotes a temporal exploration of architecture, treating spaces as narratives of their current state interwoven with historical layers. It advocates for viewing architectural development as a continuous, interconnected journey molded by cultural, economic, and technological shifts. This approach not only deepens our understanding of urban evolution but also empowers architects and urban planners to create designs that are both adaptable and resilient. Echoing themes from popular culture and science fiction, this methodology integrates the captivating dynamics of time and space into architectural analysis, challenging established design conventions. The Black-Hole Dimension champions a philosophy that welcomes unpredictability and complexity, thereby fostering innovation in design. In essence, the Black-Hole Dimension revolutionizes architectural thought by emphasizing space-time as a fundamental dimension. It reimagines our built environments as vibrant, evolving entities shaped by the relentless forces of time, space, and data. This groundbreaking approach heralds a future in architecture where the complexity of reality is acknowledged and embraced, leading to the creation of spaces that are both responsive to their temporal context and resilient against the unfolding tapestry of time.

Keywords: black-hole, timeline, urbanism, space and time, speculative architecture

Procedia PDF Downloads 50
6729 A Mathematical Model to Select Shipbrokers

Authors: Y. Smirlis, G. Koronakos, S. Plitsos

Abstract:

Shipbrokers assist shipping companies in chartering vessels or in selling and buying them, acting as intermediaries between the companies and the market. They facilitate deals by providing their expertise, negotiating skills, and knowledge of ship market bargains. Their role is very important, as it affects the profitability and market position of a shipping company. Because of this significant contribution, shipping companies have to employ systematic procedures to evaluate shipbrokers’ services in order to select the best and, consequently, to achieve the best deals. To this end, in this paper we treat shipbrokers as financial service providers and formulate the problem of evaluating and selecting shipbrokers’ services as a multi-criteria decision making (MCDM) procedure. The proposed methodology comprises a first normalization step, which adjusts the different scales and orientations of the criteria, and a second step that includes the mathematical model for evaluating the performance of the shipbrokers’ services involved in the assessment. The criteria along which the shipbrokers are assessed may refer to their size and reputation, the potential efficiency of their services, the terms and conditions imposed, the expenses (e.g., commission/brokerage), the expected time to accomplish a chartering or selling/buying task, etc., and under our modelling approach these criteria may be assigned different degrees of importance. The mathematical programming model performs a comparative assessment and estimates, for each shipbroker involved in the evaluation, a relative score that ranks the shipbrokers in terms of their potential performance. To illustrate the proposed methodology, we present a case study in which a shipping company evaluates and selects the most suitable among a number of sale and purchase (S&P) brokers. Acknowledgment: This study is supported by the OptiShip project, implemented within the framework of the National Recovery Plan and Resilience “Greece 2.0” and funded by the European Union – NextGenerationEU programme.
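As a rough illustration of the two-step structure described above (not the authors’ actual formulation), the following Python sketch min-max normalizes criteria with different scales and orientations and then computes a weighted relative score for each shipbroker; the brokers, criteria, weights, and values are all hypothetical.

```python
# Hedged sketch of the two-step idea: (1) min-max normalize criteria
# with different scales and orientations, then (2) compute a weighted
# relative score per shipbroker. All inputs are hypothetical.
import numpy as np

# Rows: brokers; columns: reputation (benefit, higher is better),
# commission % (cost, lower is better), expected days to close (cost).
data = np.array([
    [8.0, 1.25, 30.0],   # broker A
    [6.5, 1.00, 45.0],   # broker B
    [9.0, 1.50, 25.0],   # broker C
])
benefit = np.array([True, False, False])  # criterion orientation
weights = np.array([0.5, 0.3, 0.2])       # assumed importance weights

# Min-max normalization; cost criteria are flipped so that 1 is best.
lo, hi = data.min(axis=0), data.max(axis=0)
norm = (data - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

# Weighted relative score; higher ranks the broker more favourably.
scores = norm @ weights
for name, s in zip(["A", "B", "C"], scores):
    print(f"broker {name}: {s:.3f}")
```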

Keywords: shipbrokers, multi-criteria decision making, mathematical programming, service-provider selection

Procedia PDF Downloads 67