Search results for: actual purchase
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2033

353 Intersections and Cultural Landscape Interpretation, in the Case of Ancient Messene in the Peloponnese

Authors: E. Maistrou, P. Themelis, D. Kosmopoulos, K. Boulougoura, A. M. Konidi, K. Moretti

Abstract:

InterArch is an ongoing research project, running since September 2020, that proposes a digital application for the enhancement of the cultural landscape, one that emphasizes the contribution of physical space and time to digital data organization. The research case study refers to Ancient Messene in the Peloponnese, one of the most important archaeological sites in Greece. The project integrates an interactive approach to the natural environment, aiming at a manifold sensory experience. It combines the physical space of the archaeological site with the digital space of archaeological and cultural data while, at the same time, embracing storytelling processes through an interdisciplinary approach that familiarizes the user with multiple semantic interpretations. The research project is co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: Τ2ΕΔΚ-01659). It involves mutual collaboration between academic and cultural institutions and the contribution of an IT applications development company. New technologies and the integration of digital data enable the implementation of non-linear narratives related to the representational characteristics of the art of collage. Various images (photographs, drawings, etc.) and sounds (narrations, music, soundscapes, audio signs, etc.) could, according to our proposal, be presented through new semiotics of augmented and virtual reality technologies applied on touch screens and smartphones. Despite the fragmentation of tangible or intangible references, material landscape formations, including archaeological remains, constitute the common ground that can inspire cultural narratives in a process that unfolds personal perceptions and collective imaginaries. It is in this context that cultural landscape may be considered an indication of spatial and historical continuity.
It is in this context that history could emerge, according to our proposal, not solely as a previous inscription but also as an actual happening: a rhythm of occurrences suggesting mnemonic references and, moreover, an evolving history projected onto the contemporary, ongoing cultural landscape.

Keywords: cultural heritage, digital data, landscape, archaeological sites, visitors’ itineraries

Procedia PDF Downloads 78
352 Flow Boiling Heat Transfer at Low Mass and Heat Fluxes: Heat Transfer Coefficient, Flow Pattern Analysis and Correlation Assessment

Authors: Ernest Gyan Bediako, Petra Dancova, Tomas Vit

Abstract:

Flow boiling heat transfer remains an important area of research due to its relevance to thermal management systems and other applications. Despite the enormous amount of work done in the field over the years to understand how flow parameters such as mass flux, heat flux, saturation conditions, and tube geometry influence the characteristics of flow boiling heat transfer, there are still many contradictions and little agreement on the actual mechanisms controlling heat transfer and on how flow parameters impact it. This work therefore experimentally investigates the heat transfer characteristics and flow patterns at the low mass fluxes, low heat fluxes, and low saturation pressures that receive less attention in the literature but are prevalent in refrigeration, air-conditioning, and heat pump applications. A flow boiling experiment was conducted with R134a working fluid in a 5 mm internal diameter stainless steel horizontal smooth tube, with mass flux ranging from 80 to 100 kg/m²·s, heat flux from 3.55 to 25.23 kW/m², and a saturation pressure of 460 kPa. Vapor quality ranged from 0 to 1. The well-known flow pattern map of Wojtan et al. was used to predict the flow patterns observed during the study. The experimental results were compared against well-known flow boiling heat transfer correlations from the literature. The findings show that the heat transfer coefficient was influenced by both mass flux and heat flux. For increasing heat flux, nucleate boiling was the dominant mechanism controlling heat transfer, especially in the low vapor quality region; for increasing mass flux, convective boiling was dominant, especially in the high vapor quality region.
The study also observed an unusually high heat transfer coefficient at low vapor qualities, which could be due to periodic wetting of the tube walls caused by slug and stratified-wavy flow patterns. The flow patterns predicted by the Wojtan et al. map were a mixture of slug and stratified-wavy, purely stratified-wavy, and dryout. Statistical assessment of the experimental data against various well-known correlations from the literature showed that none of them could predict the experimental data with sufficient accuracy.
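A statistical assessment of this kind typically reduces to comparing each correlation's predicted heat transfer coefficients against the measured ones. The following is a minimal sketch with invented data and a hypothetical ±30% acceptance band; the abstract does not state which error metrics were used:

```python
# Sketch of correlation assessment: mean absolute percentage error (MAPE)
# and the share of points predicted within a +/-30% band.
# All heat transfer coefficient values below are invented.

def mape(measured, predicted):
    """Mean absolute percentage error of predictions vs. measurements."""
    return 100.0 * sum(abs(p - m) / m for m, p in zip(measured, predicted)) / len(measured)

def within_band(measured, predicted, band=0.30):
    """Percentage of data points predicted within +/-band of measurement."""
    hits = sum(1 for m, p in zip(measured, predicted) if abs(p - m) / m <= band)
    return 100.0 * hits / len(measured)

h_exp  = [2100.0, 2400.0, 2900.0, 3500.0]   # measured HTC, W/(m^2*K), invented
h_pred = [1800.0, 2500.0, 1900.0, 3300.0]   # a correlation's predictions, invented

print(f"MAPE = {mape(h_exp, h_pred):.1f}%")          # -> MAPE = 14.7%
print(f"within +/-30%: {within_band(h_exp, h_pred):.0f}%")  # -> 75%
```

A correlation is usually judged adequate when most points fall inside the band; here one of four points misses it.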

Keywords: flow boiling, heat transfer coefficient, mass flux, heat flux

Procedia PDF Downloads 114
351 Emigration Improves Life Standard of Families Left Behind: An Evidence from Rural Area of Gujrat-Pakistan

Authors: Shoaib Rasool

Abstract:

Migration from the rural areas of Gujrat is increasing day by day, even among illiterate people, who see it as a source of attraction and the charm of a destination. It affects the life standard of the families left behind in both positive and negative ways, in terms of poverty, socio-economic status, and living standards. It also promotes material possessions as well as social indicators of living: housing conditions, children's schooling, health-seeking behavior, and, to some extent, the family environment. The present study analyzes the socio-economic conditions and life standard of emigrant families left behind in rural areas of Gujrat district, Pakistan. A survey design was used on 150 families selected from rural areas of the district through a purposive sampling technique, and a well-structured questionnaire was administered by the researcher for data collection. The measurement tool was pretested on 20 families to check its workability and reliability before the actual data collection, and statistical tests were applied to draw results and conclusions. The preliminary findings show that emigration has had deep socio-economic impacts on the life standards of the rural families left behind in Gujrat: they have improved their status and living standard through remittances. Emigration is one of the major sources of household economic development, and it alleviates poverty at the household level as well as at the community and country levels. The rationale behind migration varies individually and geographically; commonly cited attractions in Pakistan include securing higher status, improvement in health conditions, marriage as a route to acquiring nationality, the use of unfair means, and educational visas.
Emigrants not only send remittances but also return to their country of origin with newly acquired skills and valuable knowledge, having learned new ways of living and working. There are also women migrants who experience downward social mobility by engaging in jobs beneath their educational qualifications.

Keywords: emigration, life standard, families, left behind, rural area, Gujrat

Procedia PDF Downloads 438
350 Micelles Made of Pseudo-Proteins for Solubilization of Hydrophobic Biologicals

Authors: Sophio Kobauri, David Tugushi, Vladimir P. Torchilin, Ramaz Katsarava

Abstract:

Hydrophobically/hydrophilically modified functional polymers are of high interest in modern biomedicine due to their ability to solubilize water-insoluble or poorly soluble (hydrophobic) drugs. Among the many approaches being developed in this direction, one of the most effective is the use of polymeric micelles (PMs), micelles formed by amphiphilic block-copolymers, for the solubilization of hydrophobic biologicals. For therapeutic purposes, PMs are required to be stable and biodegradable, although quite a few amphiphilic block-copolymers have been described as capable of forming stable micelles with good solubilization properties. For obtaining micelle-forming block-copolymers, polyethylene glycol (PEG) derivatives are desirable as the hydrophilic shell, because PEG is the most popular biocompatible hydrophilic block and various hydrophobic blocks (polymers) can be attached to it. The construction of the hydrophobic core, however, owing to the complex requirements of micelle structure development, remains the main problem for nanobioengineers. Considering the above, our research goal was to obtain biodegradable micelles for the solubilization of hydrophobic drugs and biologicals. For this purpose, we used biodegradable polymers, pseudo-proteins (PPs), synthesized from naturally occurring amino acids and other non-toxic building blocks such as fatty diols and dicarboxylic acids, as the hydrophobic core, since these polymers show reasonable biodegradation rates and excellent biocompatibility. In the present study, we used the hydrophobic amino acid L-phenylalanine (MW 4000-8000 Da) instead of L-leucine, with amino-PEG (MW 2000 Da) as the hydrophilic fragment for constructing suitable micelles. The molecular weight of the PP (the hydrophobic core of the micelle) was regulated by varying the monomer ratios. Micelles were obtained by dissolving the synthesized amphiphilic polymer in water.
The micelle-forming properties were tested using dynamic light scattering (Malvern Zetasizer Nano ZS, ZEN3600). The study showed that the obtained amphiphilic block-copolymer forms stable neutral micelles 100 ± 7 nm in size at a concentration of 10 mg/mL, which is considered an optimal range for pharmaceutical micelles. These preliminary data allow us to conclude that the obtained micelles are suitable for the delivery of poorly water-soluble drugs and biologicals.

Keywords: L-phenylalanine, pseudo-proteins, amphiphilic block-copolymers, biodegradable micelles

Procedia PDF Downloads 132
349 Discovering Event Outliers for Drug as Commercial Products

Authors: Arunas Burinskas, Aurelija Burinskiene

Abstract:

On average, ten percent of drugs (commercial products) are not available in pharmacies due to shortages. A shortage event disrupts sales and requires a recovery period that is too long. A critical issue is that pharmacies do not record potential sales transactions during shortage and recovery periods, so the authors suggest estimating outliers during these periods. To shorten the recovery period, the authors suggest predicting average sales per sales day, which helps to protect the data from being biased downwards or upwards. The authors visualize outliers across different drugs and apply the Grubbs test to evaluate significance. The researched sample is 100 drugs over a one-month time frame. The authors found that products with high demand variability had outliers. Among the analyzed drugs: i) high demand variability drugs have a one-week shortage period, and the probability of facing a shortage is 69.23%; ii) mid demand variability drugs have a three-day shortage period, and the likelihood of falling into deficit is 34.62%. To avoid shortage events and minimize the recovery period, accurate data must be established. Even though outlier detection methods exist for drug data cleaning, they have not been used to minimize the recovery period once a shortage has occurred. The authors use Grubbs' test as a real-life data-cleaning method for outlier adjustment, applied in this paper with a confidence level of 99%. In practice, Grubbs' test has been used to detect outliers in cancer drug data with positive results. The test detects outliers that exceed the boundaries of a normal distribution; the result is a probability that indicates the core data of actual sales.
The test quantifies the difference between the sample mean and the most extreme data point relative to the standard deviation, detecting one outlier at a time, each with its own probability, from a data set with an assumed normal distribution. Based on approximation data, the authors constructed a framework for scaling potential sales and estimating outliers with the Grubbs' test method. The suggested framework is applicable during shortage events and recovery periods, has practical value, and could be used to minimize the recovery period required after a shortage event occurs.
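As a rough illustration of the iterative procedure (one suspect value tested per pass), the sketch below computes the Grubbs' statistic G = max|xᵢ − x̄|/s and strips values exceeding a critical value. The daily sales figures are invented, and the critical value shown is illustrative (it should come from a Grubbs table for the chosen sample size and confidence level):

```python
# Sketch of iterative outlier removal with Grubbs' test.
# Sales data and critical value are illustrative, not from the paper.
from statistics import mean, stdev

def grubbs_statistic(data):
    """Return (G, index): G = max |x_i - mean| / s, plus the suspect's position."""
    m, s = mean(data), stdev(data)
    idx = max(range(len(data)), key=lambda i: abs(data[i] - m))
    return abs(data[idx] - m) / s, idx

def remove_outliers(data, critical_value):
    """Strip one suspect per pass, as Grubbs' test requires, until G <= G_crit."""
    cleaned = list(data)
    while len(cleaned) > 2:
        g, idx = grubbs_statistic(cleaned)
        if g <= critical_value:
            break
        cleaned.pop(idx)
    return cleaned

daily_sales = [10, 12, 11, 13, 12, 95]   # invented; 95 is a suspect spike
g, idx = grubbs_statistic(daily_sales)
print(round(g, 3), idx)                  # statistic and suspect position
print(remove_outliers(daily_sales, critical_value=1.973))
```

After the spike is removed, the statistic for the remaining points falls below the critical value and the loop stops, leaving the core sales data.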

Keywords: drugs, Grubbs' test, outlier, shortage event

Procedia PDF Downloads 129
348 Finding the Right Regulatory Path for Islamic Banking

Authors: Meysam Saidi

Abstract:

While the specific externalities of, and required regulatory measures for, Islamic banking are fairly uncertain, the business is growing across the world. Unofficial data indicate that the Islamic finance market is growing at an annual rate of 15% and has reached $1.3 trillion in size. This trend is associated with the inherent systematic connection of Islamic financial institutions to other entities and different sectors of economies. Islamic banking has been the subject of market development policies in major economies, most notably the UK. This trend highlights the need to identify the distinct risk features of Islamic banking and to craft customized regulatory measures. So far there has not been a significant systemic crisis in this market, which can be attributed to its distinct nature. However, the significant growth and spread of its products worldwide necessitate an in-depth study of its nature so that congruent, customized regulatory measures can be crafted. In the post-financial-crisis era, some market analyses and reports suggested that Islamic banks weathered the crisis fairly well. As far as heavily blamed conventional financial products such as subprime mortgage-backed securities and speculative credit default swaps are concerned, the immunity claim can be considered true, as Islamic financial institutions were not directly exposed to such products. Nevertheless, as with the conventional banking industry, it may be only a matter of time before Islamic banks face failures specific to the nature of their business. Using the experience of conventional banking regulation, and identifying those peculiarities of Islamic banking that need a customized regulatory approach, can help prevent major failures. Frank Knight stated that "We perceive the world before we react to it, and we react not to what we perceive, but always to what we infer."
The debate over congruent Islamic banking regulations might not be an exception to Frank Knight's statement, but I will try to base my discussion on concrete evidence. This paper first analyzes both the theoretical and actual features of Islamic banking in order to ascertain its peculiarities in terms of market stability and other externalities. Next, it discusses the distinct features of Islamic financial transactions and banking which might require customized regulatory measures. Finally, it explores how a more transparent path for Islamic banking regulation can be drawn.

Keywords: Islamic banking, regulation, risks, capital requirements, customer protection, financial stability

Procedia PDF Downloads 404
347 Performance Evaluation of On-Site Sewage Treatment System (Johkasou)

Authors: Aashutosh Garg, Ankur Rajpal, A. A. Kazmi

Abstract:

The efficiency of an on-site wastewater treatment system named Johkasou was evaluated based on its pollutant removal efficiency over 10 months. The system was installed at IIT Roorkee and has a capacity of 7 m³/d of sewage, sufficient for a group of 30-50 people. It was fed with actual wastewater through an equalization tank to eliminate fluctuations throughout the day; methanol and ammonium chloride were added to this tank to increase the chemical oxygen demand (COD) and ammonia content of the influent. The outlet from the Johkasou is sent to a tertiary unit consisting of a pressure sand filter and an activated carbon filter for further treatment. Samples were collected on alternate days from Monday to Friday, and the following parameters were evaluated: COD, biochemical oxygen demand (BOD), total suspended solids (TSS), and total nitrogen (TN). The average removal efficiencies for COD, BOD, TSS, and TN were 89.6%, 97.7%, 96%, and 80%, respectively. The cost of treating the wastewater comes to Rs 23/m³, which includes electricity, cleaning and maintenance, chemicals, and desludging. Tests for coliforms were also performed, and the removal efficiency for total and fecal coliforms was 100%. The sludge generation rate is approximately 20% of the BOD removed, and the sludge needs to be removed twice a year. The system also showed a very good response to hydraulic shock loads. A vacation stress analysis was performed to evaluate the system's performance when there is no influent for 8 consecutive days; the results showed that the system needs a recovery time of about 48 hours to stabilize.
After about 2 days the system returns to its original condition, and all effluent parameters fall within the limits of the National Green Tribunal (NGT) standards. Another stress analysis was performed to save electricity, in which the main aeration blower was turned off for 2 to 12 hours a day; the results showed that the blower can be turned off for about 4-6 hours a day, reducing electricity costs by about 25%. It was concluded that the Johkasou system can reduce all of the tested physicochemical parameters sufficiently to satisfy the limits prescribed by the Indian Standards.
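The reported removal efficiencies follow the standard definition, efficiency = (influent − effluent)/influent. A minimal sketch, using invented influent and effluent concentrations chosen only to reproduce the reported percentages (the paper does not give the raw concentrations):

```python
# Removal-efficiency arithmetic behind the reported figures.
# Influent/effluent concentrations (mg/L) below are invented for illustration.

def removal_efficiency(influent_mg_l, effluent_mg_l):
    """Percentage of a pollutant removed between inlet and outlet."""
    return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l

samples = {            # parameter: (influent, effluent), invented
    "COD": (500.0, 52.0),
    "BOD": (250.0, 5.75),
    "TSS": (200.0, 8.0),
    "TN":  (50.0, 10.0),
}
for name, (c_in, c_out) in samples.items():
    print(f"{name}: {removal_efficiency(c_in, c_out):.1f}% removed")
```

With these illustrative inputs the loop prints 89.6%, 97.7%, 96.0%, and 80.0%, matching the averages reported above.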

Keywords: on-site treatment, domestic wastewater, Johkasou, nutrient removal, pathogens removal

Procedia PDF Downloads 110
346 Analysis and Optimized Design of a Packaged Liquid Chiller

Authors: Saeed Farivar, Mohsen Kahrom

Abstract:

The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are the inlet conditions and geometry; its outputs include system performance variables such as power consumption, coefficient of performance (COP), and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller was developed for performing detailed physical design analysis of actual industrial chillers; it can be used for design optimization and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model accounts for all chiller components, including the compressor, shell-and-tube condenser and evaporator heat exchangers, thermostatic expansion valve, and connection pipes and tubing, by thermo-hydraulic modeling of the heat transfer, fluid flow, and thermodynamic processes in each component. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller was used, and a laboratory test stand for bringing the chiller to its standard steady-state performance condition was built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller.
The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance and optimization analysis and simulation model, and a best-match condition between the physical design of the chiller heat exchangers and its compressor is found to exist. Manufacturers of chillers and research organizations interested in energy-efficient design and analysis of compression chillers can take advantage of the presented study and its results.
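Among the performance variables such a model outputs, COP is simply the ratio of cooling capacity to power input. A hedged sketch using the 7.5 USRT test chiller mentioned above and an invented power draw (the paper does not report one):

```python
# COP = cooling capacity / compressor power input.
# The 7.5 USRT capacity is from the paper; the 8.2 kW draw is invented.

USRT_TO_KW = 3.517  # 1 US refrigeration ton is approximately 3.517 kW of cooling

def cop(capacity_usrt, power_kw):
    """Coefficient of performance: cooling delivered per unit power consumed."""
    return capacity_usrt * USRT_TO_KW / power_kw

print(f"COP = {cop(7.5, 8.2):.2f}")   # 7.5 USRT chiller, assumed 8.2 kW input
```

In a full simulation both the capacity and the power draw would themselves be outputs of the component-by-component thermo-hydraulic model rather than fixed numbers.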

Keywords: optimization, packaged liquid chiller, performance, simulation

Procedia PDF Downloads 275
345 Observing the Observers: Journalism and the Gendered Newsroom

Authors: M. Silveirinha, P. Lobo

Abstract:

In the last few decades, many studies have documented a systematic under-representation of women in the news. Aside from being fewer in number than men, research has also shown that women are frequently portrayed according to traditional stereotypes that have been proven to be disadvantageous to them. It has often been argued that news content will become more gender-balanced as the number of female journalists increases; however, the recent so-called 'feminization' of media professions has shown that this assumption is too simplistic. To better grasp gender biases in news content, we need a deeper look into the processes of news production and into journalism culture itself, taking the study of newsmaking as a starting point and theoretical framework, with the purpose of examining the actual newsroom routines, professional values, structures, and news access that eventually lead to an unbalanced media representation of women. If journalists consider themselves observers of everyday social and political life, then, as a vast body of research shows, observing women journalists' beliefs, roles, and practices in a gendered newsroom is of specific importance. In order to better understand the professional and organizational context of news production, and the gender power relations in decision-making processes, we conducted participant observation in two television newsrooms. Our approach combined overt observation, formal and informal interviews, and the writing-up and analysis of our own diaries. Drawing on insights from organizational sociology, we took newsroom practices to be a result of professional routines and socialization, and focused on how women and men respond to newsroom dynamics and structures. We also analyzed the gendered organization of the newsmaking process and the subtle and/or obvious glass-ceiling obstacles often reported on.
In our paper we address two levels of research. First, we look at our results and establish an overview of the patterns of continuity between the gendering of organizations, working conditions, and journalists' professional beliefs. At this level, the study not only interrogates how journalists handle views on gender and the practice of the profession but also highlights the structural inequalities in journalism and the pervasiveness of family-work tensions for female journalists. Secondly, we reflect on our observation method and establish a critical assessment of the method itself.

Keywords: gender, journalism, participant observation, women

Procedia PDF Downloads 351
344 Optimizing Electric Vehicle Charging Networks with Dynamic Pricing and Demand Elasticity

Authors: Chiao-Yi Chen, Dung-Ying Lin

Abstract:

With the growing awareness of environmental protection and the implementation of government carbon reduction policies, the number of electric vehicles (EVs) has rapidly increased, leading to a surge in charging demand and imposing significant challenges on the existing power grid’s capacity. Traditional urban power grid planning has not adequately accounted for the additional load generated by EV charging, which often strains the infrastructure. This study aims to optimize grid operation and load management by dynamically adjusting EV charging prices based on real-time electricity supply and demand, leveraging consumer demand elasticity to enhance system efficiency. This study uniquely addresses the intricate interplay between urban traffic patterns and power grid dynamics in the context of electric vehicle (EV) adoption. By integrating Hsinchu City's road network with the IEEE 33-bus system, the research creates a comprehensive model that captures both the spatial and temporal aspects of EV charging demand. This approach allows for a nuanced analysis of how traffic flow directly influences the load distribution across the power grid. The strategic placement of charging stations at key nodes within the IEEE 33-bus system, informed by actual road traffic data, enables a realistic simulation of the dynamic relationship between vehicle movement and energy consumption. This integration of transportation and energy systems provides a holistic view of the challenges and opportunities in urban EV infrastructure planning, highlighting the critical need for solutions that can adapt to the ever-changing interplay between traffic patterns and grid capacity. The proposed dynamic pricing strategy effectively reduces peak charging loads, enhances the operational efficiency of charging stations, and maximizes operator profits, all while ensuring grid stability. 
These findings provide practical insights and a valuable framework for optimizing EV charging infrastructure and policies in future smart cities, contributing to more resilient and sustainable urban energy systems.
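One simple way to connect dynamic pricing with demand elasticity, not necessarily the authors' formulation, is a constant-elasticity demand model in which a relative price change scales demand by (p_new/p_base)^ε. A sketch with invented numbers:

```python
# Constant-elasticity demand response for EV charging (illustrative model,
# not the paper's actual optimization). All numbers below are invented.

def adjusted_demand(base_demand_kw, base_price, new_price, elasticity):
    """Demand after repricing; elasticity < 0 means demand falls as price rises."""
    return base_demand_kw * (new_price / base_price) ** elasticity

# Raising the peak-hour tariff by 20% with an assumed elasticity of -0.8
peak = adjusted_demand(base_demand_kw=500.0, base_price=5.0,
                       new_price=6.0, elasticity=-0.8)
print(f"peak load after repricing: {peak:.1f} kW")
```

A dynamic pricing optimizer would repeat this calculation per station and time slot, choosing prices that shave peak load while respecting grid constraints.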

Keywords: dynamic pricing, demand elasticity, EV charging, grid load balancing, optimization

Procedia PDF Downloads 11
343 Gabriel Marcel and Friedrich Nietzsche: Existence and Death of God

Authors: Paolo Scolari

Abstract:

Nietzschean thought flows like a current throughout Marcel's philosophy. Marcel is in constant dialogue with Nietzsche and pays homage to him, making him one of the most eminent representatives of existential thought. His enthusiasm is triggered by Nietzsche's phrase 'God is dead,' the fil rouge that ties together all the Nietzschean references scattered through the Marcelian texts. The death of God is the theme that emphasises both the greatness and, simultaneously, the tragedy of Nietzsche. Marcel wants to restore to the idea 'God is dead' its original meaning: a tragic existential character that imitators of Nietzsche seem to have blurred. This interpretation aims at a double target. On the one hand, Marcel removes the heavy metaphysical suit that interpreters, Heidegger especially, have made Nietzsche's aphorisms on the death of God wear. On the other hand, he removes a stratum of trivialisation that takes the aphorisms out of context and transforms them into advertising slogans; here Sartre becomes the target. In the lecture Nietzsche: l'homme devant la mort de dieu, Marcel hurls himself against Heidegger's metaphysical interpretation of the death of God: a hermeneutical proposal that is certainly original, but also a bit too abstract, an interpretation without bite that does not grasp the tragic existential weight of the original Nietzschean idea. 'We are probably on the wrong road,' he announces, 'when at all costs, like Heidegger, we want to make a metaphysic out of Nietzsche.' Marcel also criticizes Sartre, who lands in Geneva and reacts to the journalists by saying: 'Gentlemen, God is dead.' Marcel needs only this impromptu exclamation to understand how Sartre misinterprets the meaning of the death of God, losing its existential sense in favour of sensationalism and trivialisation.
Marcel then wipes the slate clean of these two limited interpretations of the declaration of the death of God, which is much more than a metaphysical quarrel and not at all comparable to an advertising slogan. Behind the cry 'God is dead' there is the existence of an anguished man who experiences in his solitude the actual death of God: a man who has killed God with his own hands, haunted by the chill of knowing that from now on he will have to live in a completely different way. The death of God, however, is not the end. Marcel spots a new beginning at the point where nihilism is overcome and the Übermensch is born. In dialogue with Nietzsche he recognizes that he is in the presence of a great spirit who has contributed to the renewal of a spiritual horizon. He descends to the most profound depths of Nietzsche's thought, aware that the way out lies far below, in the remotest areas of existence. The ambivalence of Nietzsche does not scare him; rather, such a thought, characterised by contradiction, is simultaneously infinitely dangerous and infinitely healthy.

Keywords: Nietzsche's Death of God, Gabriel Marcel, Heidegger, Sartre

Procedia PDF Downloads 231
342 Quantum Cum Synaptic-Neuronal Paradigm and Schema for Human Speech Output and Autism

Authors: Gobinathan Devathasan, Kezia Devathasan

Abstract:

Objective: To improve the current modified Broca-Wernicke-Lichtheim-Kussmaul speech schema and provide insight into autism. Methods: We reviewed the pertinent literature. Current findings, involving Brodmann areas 22, 46, 9, 44, 45, 6, and 4, are based on neuropathology and functional MRI studies. However, in primary autism there is no lucid explanation, and the changes described, whether in neuropathology or functional MRI, appear consequential. Findings: We put forward an enhanced model which may explain the enigma related to autism. Vowel output is subcortical and does not need cortical representation, whereas consonant speech is cortical in origin. Left lateralization is needed to commence the circuitry spin, as life has evolved with L-amino acids and the left spin of electrons. A fundamental species difference is that we are capable of three-consonant syllables and bi-syllable expression, whereas cetaceans and songbirds are confined to single or dual consonants. The four key sites for speech are the superior auditory cortex, Broca's two areas, and the supplementary motor cortex. Using the Argand diagram and the Riemann projection, we theorize that the Euclidean three-dimensional synaptic-neuronal circuits of speech are quantized to coherent waves, and that decoherence then takes place at area 6 (spherical representation). In this quantum state, complex three-consonant languages are instantaneously integrated, and multiple languages can be learned, verbalized, and differentiated. Conclusion: We postulate that evolutionary human speech, unlike that of cetaceans and birds, is elevated to quantum interaction to achieve three-consonant/bi-syllable speech. In classical primary autism, the sudden switching off and on of speech noted in several cases could now be explained not by any anatomical lesion but by a failure of coherence. Area 6 projects directly into the prefrontal saccadic area (8), which further explains the second primary feature of autism: lack of eye contact.
The third feature, repetitive finger gestures, controlled by areas adjacent to the speech/motor areas, represents actual attempts by the autistic child to communicate, akin to sign language for the deaf.

Keywords: quantum neuronal paradigm, cetaceans and human speech, autism and rapid magnetic stimulation, coherence and decoherence of speech

Procedia PDF Downloads 188
341 A Multi-Role Oriented Collaboration Platform for Distributed Disaster Reduction in China

Authors: Linyao Qiu, Zhiqiang Du

Abstract:

With rapid urbanization, economic development, and steady population growth in China, the widespread devastation, economic damage, and loss of human life caused by numerous forms of natural disaster are becoming increasingly serious every year. Disaster management requires the available and effective cooperation of different roles and organizations throughout the whole process, including mitigation, preparedness, response, and recovery. Because of the imbalance of regional development in China, the disaster management capabilities of national and provincial disaster reduction centers are uneven. When an undeveloped area suffers a disaster, the local reduction department can neither independently obtain first-hand information, such as high-resolution remote sensing images from satellites and aircraft, nor directly access data resources deployed elsewhere, since no sharing mechanism is provided. Most existing disaster management systems operate in a typical passive, data-centric mode and serve a single department, so resources cannot be fully shared. This impediment blocks local departments and groups from quick emergency response and decision-making. In this paper, we introduce a collaborative platform for distributed disaster reduction. To address the imbalance of shared data sources and technology in the disaster reduction process, we propose a multi-role oriented collaboration business mechanism, capable of scheduling and allocating multiple resources for optimum utilization, to link the various roles of collaborative reduction business in different places. The platform fully considers the differences in equipment conditions between provinces and provides several service modes to satisfy the technological needs of disaster reduction. An integrated collaboration system based on a focusing-services mechanism is designed and implemented for resource scheduling, functional integration, data processing, task management, collaborative mapping, and visualization. Actual applications illustrate that the platform supports data sharing and business collaboration between national and provincial departments well. It could significantly improve the capability of disaster reduction in China.

Keywords: business collaboration, data sharing, distributed disaster reduction, focusing service

Procedia PDF Downloads 293
340 Spatial Suitability Assessment of Onshore Wind Systems Using the Analytic Hierarchy Process

Authors: Ayat-Allah Bouramdane

Abstract:

Since 2010, there have been sustained decreases in the unit costs of onshore wind energy and large increases in its deployment, varying widely across regions. Onshore wind production is affected by air density, because cold air is denser and therefore more effective at producing wind power, and by wind speed, as wind turbines cannot operate in very low or extreme stormy winds. Wind speed is essentially affected by surface friction, or the roughness and other topographic features of the land, which slow winds down significantly over the continent. Hence, identifying the most appropriate locations for onshore wind systems is crucial to maximize their energy output and therefore minimize their Levelized Cost of Electricity (LCOE). This study focuses on a preliminary assessment of onshore wind energy potential in several areas of Morocco, with a particular focus on the city of Dakhla, by analyzing the diurnal and seasonal variability of wind speed for different hub heights, the frequency distribution of wind speed, the wind rose, and wind performance indicators such as wind power density, capacity factor, and LCOE. In addition to the climate criterion, other criteria (i.e., topography, location, environment) were selected from Geographic Referenced Information (GRI), reflecting different considerations. The impact of each criterion on the suitability map of onshore wind farms was identified using the Analytic Hierarchy Process (AHP). We find that the majority of suitable zones are located along the Atlantic Ocean and the Mediterranean Sea.
We discuss the sensitivity of onshore wind site suitability to different aspects, such as the methodology, by comparing the Multi-Criteria Decision-Making (MCDM)-AHP results to the Mean-Variance Portfolio optimization framework, and the potential impact of climate change on the suitability map, and we provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's actual onshore wind installations are located within areas deemed suitable. This analysis may serve as a decision-making framework for cost-effective investment in onshore wind power in Morocco and help shape the future sustainable development of the city of Dakhla.
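The AHP weighting step described above can be sketched as follows. The pairwise judgments for the four criteria named in the abstract (climate, topography, location, environment) are illustrative assumptions, not the study's actual values; the consistency ratio uses Saaty's random index.

```python
# Sketch of AHP criterion weighting via power iteration (pure Python).
# The pairwise comparison values below are assumed for illustration.

def ahp_weights(matrix, iters=200):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        # power iteration: multiply by the matrix, then renormalize
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # estimate the principal eigenvalue for the consistency check
    lam = sum(sum(matrix[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
    return w, ci / ri                      # weights, consistency ratio

# criteria order: climate, topography, location, environment (judgments assumed)
A = [[1,     3,     5, 4],
     [1 / 3, 1,     2, 2],
     [1 / 5, 1 / 2, 1, 1],
     [1 / 4, 1 / 2, 1, 1]]
weights, cr = ahp_weights(A)
# a CR below 0.10 indicates acceptably consistent judgments
```

With these assumed judgments, climate receives the largest weight, matching the emphasis the abstract places on wind climate over the other criteria.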

Keywords: analytic hierarchy process (AHP), Dakhla, geographic referenced information, Morocco, multi-criteria decision-making, onshore wind, site suitability

Procedia PDF Downloads 162
339 Mathematical Modeling for Continuous Reactive Extrusion of Poly Lactic Acid Formation by Ring Opening Polymerization Considering Metal/Organic Catalyst and Alternative Energies

Authors: Satya P. Dubey, Hrushikesh A. Abhyankar, Veronica Marchante, James L. Brighton, Björn Bergmann

Abstract:

Aims: To develop a mathematical model that simulates the ROP of PLA, taking into account the effect of alternative energy, to be implemented in a continuous reactive extrusion production process for PLA. Introduction: The production of large amounts of waste is one of the major challenges of the present time, and polymers represent 70% of global waste. PLA has emerged as a promising polymer, as it is a compostable, biodegradable thermoplastic made from renewable sources. However, the main limitation on the application of PLA is the traces of toxic metal catalyst in the final product. Thus, a safe and efficient production process needs to be developed to avoid the potential hazards and toxicity. It has been found that alternative energy sources (laser, ultrasound, microwaves) could be a prominent option to facilitate the ROP of PLA via continuous reactive extrusion. This process may result in complete extraction of the metal catalysts and facilitate less active organic catalysts. Methodology: Initial investigations were performed using the data available in the literature for the reaction mechanism of the ROP of PLA based on the conventional metal catalyst stannous octoate. A mathematical model has been developed considering significant parameters such as different initial concentration ratios of catalyst, co-catalyst, and impurity. The effects of temperature variation and alternative energies have been implemented in the model. Results: The mathematical model has been validated using data from the literature as well as actual experiments. Validation of the model including alternative energies is in progress, based on experimental data from partners of the InnoREX project consortium. Conclusion: The model developed accurately reproduces the polymerisation reaction when applying alternative energy. Alternative energies have a great positive effect, increasing the conversion and molecular weight of the PLA. This model could be a very useful tool to complement Ludovic® software in predicting the large-scale production process when using reactive extrusion.
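As a minimal sketch of the kind of kinetics underlying such a model, the fragment below integrates a first-order monomer-consumption law for ROP with an Arrhenius propagation rate. The rate parameters (`A`, `Ea`) and concentrations are assumed illustrative values, not the fitted model parameters, and alternative-energy input is represented here only as an effective increase in temperature.

```python
import math

# Simplified ROP monomer-consumption kinetics (not the study's full model):
# d[M]/dt = -kp * [I] * [M], with kp from an Arrhenius law.

def simulate_conversion(T_kelvin, I0, M0, t_end, dt=0.1,
                        A=1.0e9, Ea=70.3e3):      # A, Ea are assumed values
    R = 8.314                                      # gas constant, J/(mol K)
    kp = A * math.exp(-Ea / (R * T_kelvin))        # propagation rate constant
    M = M0
    t = 0.0
    while t < t_end:
        M += -kp * I0 * M * dt                     # explicit Euler step
        t += dt
    return 1.0 - M / M0                            # monomer conversion

# a higher effective temperature (e.g., from alternative-energy input)
# should raise conversion over the same extruder residence time
x_low = simulate_conversion(T_kelvin=433, I0=1e-3, M0=7.0, t_end=600)
x_high = simulate_conversion(T_kelvin=463, I0=1e-3, M0=7.0, t_end=600)
```

In a full reactive-extrusion model this rate law would be coupled to residence-time distribution and heat balance along the extruder, which is what dedicated tools such as Ludovic® handle.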

Keywords: polymer, poly-lactic acid (PLA), ring opening polymerization (ROP), metal-catalyst, bio-degradable, renewable source, alternative energy (AE)

Procedia PDF Downloads 358
338 Musically Yours: Impact of Social Media Advertisement Music per the Circadian Rhythm

Authors: Payal Bose

Abstract:

The impact of music on consumers' attention and emotions at different parts of the day has rarely been studied. Music has been widely studied along other parameters, such as in-store music and its atmospheric effects, to understand consumer arousal, in-store traffic, perceptions of visual stimuli, and actual time spent in the store. Other parameters, such as tempo, shopper's age, volume, music preference, and its usage as foreground or background music acting as a mediator of consumer behavior, are also well researched. However, no study has traversed the influence of music in social media advertisements and its impact on the consumer's mind. Most studies have catered to the influence of music on the consumer's conscious mind. A recent study found that playing pleasant music is more effective at enhancing supermarket sales on weekdays than on weekends. This leads to a more pertinent question: how does music heard at different parts of the day affect attention and emotion in the consumer's mind, given the high day-to-day consumption of social media advertisements in the recent past? This study would help brands on social media structure their advertisements and engage more consumers with their products. Prior literature has examined the effects of music on consumers largely in the retail, brick-and-mortar format; hence, most of the outcomes are favorable for physical retail environments. However, with the rise of Web 3.0 and social media marketing, it would be interesting to see how consumers' attention and emotion can be studied with the effects of music embedded in an advertisement during different parts of the day. A smartphone is considered a personal gadget, and viewing social media advertisements on it is mostly an intimate experience.
Hence in a social media advertisement, most of the viewing happens on a one-on-one basis between the consumer and the brand advertisement. To the best of our knowledge, little or no work has explored the influence of music on different parts of the day (per the circadian rhythm) in advertising research. Previous works on social media advertisement have explored the timing of social media posts, deploying Targeted Content Advertising, appropriate content, reallocation of time, and advertising expenditure. Hence, I propose studying advertisements embedded with music during different parts of the day and its influence on consumers' attention and emotions. To address the research objectives and knowledge gap, it is intended to use a neuroscientific approach using fMRI and eye-tracking. The influence of music embedded in social media advertisement during different parts of the day would be assessed.

Keywords: music, neuromarketing, circadian rhythm, social media, engagement

Procedia PDF Downloads 63
337 A Decision-Support Tool for Humanitarian Distribution Planners in the Face of Congestion at Security Checkpoints: A Real-World Case Study

Authors: Mohanad Rezeq, Tarik Aouam, Frederik Gailly

Abstract:

In times of armed conflict, various security checkpoints are placed by authorities to control the flow of merchandise into and within areas of conflict. The flow of humanitarian trucks added to the regular flow of commercial trucks, together with complex security procedures, creates congestion and long waiting times at the security checkpoints. This causes distribution costs to increase and shortages of relief aid to the affected people. Our research proposes a decision-support tool to assist planners and policymakers in building efficient plans for the distribution of relief aid, taking into account congestion at security checkpoints. The proposed tool, developed following a multi-phase design science methodology, is built around a multi-item humanitarian distribution planning model whose objective is to minimize distribution and backordering costs subject to capacity constraints that reflect congestion effects using nonlinear clearing functions. Using the 2014 Gaza War as a case study, we illustrate the application of the proposed tool, model the underlying relief-aid humanitarian supply chain, estimate clearing functions at different security checkpoints, and conduct computational experiments. The decision-support tool generated a shipment plan that was compared to two benchmarks in terms of total distribution cost, average lead time and work in progress (WIP) at security checkpoints, and average inventory and backorders at distribution centers. The first benchmark is the shipment plan generated by the fixed-capacity model, and the second is the actual shipment plan implemented by the planners during the armed conflict. According to our findings, modeling and optimizing supply chain flows reduces total distribution costs, average truck wait times at security checkpoints, and average backorders when compared to the executed plan and the fixed-capacity model.
Finally, scenario analysis concludes that increasing capacity at security checkpoints can lower total operations costs by reducing the average lead time.
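A nonlinear clearing function of the kind described above can be sketched as a saturating throughput curve, with average lead time recovered via Little's law. The capacity and curvature parameters below are hypothetical, not the estimated checkpoint values.

```python
# Saturating clearing function for a checkpoint (illustrative parameters):
# expected clearance rate rises with work-in-progress (WIP) but flattens
# out near the checkpoint's maximum capacity.

def clearing(wip, c_max=120.0, k=35.0):
    """Expected trucks cleared per day as a function of WIP."""
    return c_max * wip / (k + wip)

def avg_lead_time(wip):
    # Little's law: average lead time = WIP / throughput
    return wip / clearing(wip)

# under congestion, doubling the queue does not double the clearance rate,
# so the average lead time at the checkpoint grows with WIP
low = clearing(20)    # lightly loaded checkpoint
high = clearing(40)   # congested checkpoint
```

Embedding such a function as a capacity constraint is what distinguishes the congestion-aware model from the fixed-capacity benchmark, which assumes the clearance rate is constant regardless of queue length.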

Keywords: humanitarian distribution planning, relief-aid distribution, congestion, clearing functions

Procedia PDF Downloads 80
336 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model

Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung

Abstract:

The aim of the present study was to explore dermal exposure assessment models for chemicals that have been developed abroad and to evaluate the feasibility of a chemical dermal exposure assessment model for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: the UK Control of Substances Hazardous to Health (COSHH) model, the European Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), the Dutch Dose-Related Effect Assessment Model (DREAM), the Dutch Stoffenmanager (STOFFEN), the Nicaraguan Dermal Exposure Ranking Method (DERM), and the US/Canadian Public Health Engineering Department (PHED) tool. Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to understand the important evaluation indicators of a dermal exposure assessment model. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure results using a prediction model and verified the correlation via Pearson's test. Results show that COSHH was unable to demonstrate the strength of its decision factors, because the results evaluated in all industries belong to the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive, operation, near-field, and far-field concentrations, and the operating time and frequency, are positively correlated. In the DREAM model, there is a positive correlation between skin exposure, relative working time, and the working environment. In the RISKOFDERM model, the actual exposure situation and the exposure time are positively correlated.
We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models correlate poorly, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for use in the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industry should be evaluated in the future to reduce uncertainty and enhance applicability.
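The Pearson correlation used above to compare semi-quantitative model scores with quantitative exposure estimates can be sketched in a few lines; the paired values below are hypothetical, not the study's measurements.

```python
import math

# Pearson correlation between semi-quantitative model scores and
# quantitative exposure estimates; a coefficient near 1 indicates the
# semi-quantitative tool ranks exposures consistently with measurement.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical DERM scores vs. quantitative dermal-exposure estimates
derm_scores = [12, 18, 25, 31, 40]
quant_est = [0.8, 1.4, 2.1, 2.6, 3.5]
r = pearson_r(derm_scores, quant_est)
```

In practice the coefficient would be paired with a significance test against the sample size, as the abstract's p-values indicate.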

Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation

Procedia PDF Downloads 165
335 Development of a 3D Model of Real Estate Properties in Fort Bonifacio, Taguig City, Philippines Using Geographic Information Systems

Authors: Lyka Selene Magnayi, Marcos Vinas, Roseanne Ramos

Abstract:

As the real estate industry continually grows in the Philippines, Geographic Information Systems (GIS) provide advantages in generating spatial databases for the efficient delivery of information and services. The real estate sector not only provides qualitative data about real estate properties but also utilizes various spatial aspects of these properties for different applications, such as hazard mapping and assessment. In this study, a three-dimensional (3D) model and a spatial database of real estate properties in Fort Bonifacio, Taguig City, are developed using GIS and SketchUp. Spatial datasets include political boundaries, buildings, the road network, a digital terrain model (DTM) derived from an Interferometric Synthetic Aperture Radar (IFSAR) image, Google Earth satellite imagery, and hazard maps. Multiple model layers were created based on property listings by a partner real estate company, including existing and future property buildings. Actual building dimensions, building facades, and building floorplans are incorporated in these 3D models for geovisualization. Hazard model layers are determined through spatial overlays, and different hazard scenarios are also presented in the models. Animated maps and walkthrough videos were created for company presentation and evaluation. Model evaluation was conducted through client surveys requiring scores in terms of the appropriateness, information content, and design of the 3D models. Survey results show very satisfactory ratings, with the highest average evaluation score equivalent to 9.21 out of 10. The output maps and videos obtained passing rates based on the criteria and standards set by the intended users at the partner real estate company. The methodologies presented in this study were found useful and have remarkable advantages in the real estate industry.
This work may be extended to automated mapping, the creation of online spatial databases for better storage of and access to real property listings, and an interactive platform using web-based GIS.

Keywords: geovisualization, geographic information systems, GIS, real estate, spatial database, three-dimensional model

Procedia PDF Downloads 157
334 FEM and Experimental Modal Analysis of Computer Mount

Authors: Vishwajit Ghatge, David Looper

Abstract:

Over the last few decades, oilfield service rolling equipment has significantly increased in weight, primarily because of emissions regulations, which require larger, heavier engines, larger cooling systems, and, in some cases, emissions after-treatment systems. Larger engines cause more vibration and shock loads, leading to failure of electronics and control systems. If the vibrating frequency of the engine matches the system frequency, high resonance is observed in structural parts and mounts. One such existing automated control equipment system, comprising wire rope mounts used for mounting computers, was designed approximately 12 years ago. It includes an industrial-grade computer to control the system operation. The original computer had a smaller, lighter enclosure. After a few years, a newer computer version was introduced, which was 10 lbm heavier. Some failures of internal computer parts have been documented for cases in which the old mounts were used. Because of the added weight, there is a possibility of the two brackets impacting each other under off-road conditions, which causes a high shock input to the computer parts. This added failure mode requires validating the existing mount design to suit the new, heavier computer. This paper discusses the modal finite element method (FEM) analysis and the experimental modal analysis conducted to study the effects of vibration on the wire rope mounts and the computer. The existing mount was modeled in ANSYS software, and the resulting mode shapes and frequencies were obtained. The experimental modal analysis was conducted, and actual frequency responses were observed and recorded. The results clearly revealed that at the resonance frequency, the brackets were colliding and potentially causing damage to computer parts. To solve this issue, spring mounts of different stiffnesses were modeled in ANSYS software, and the resonant frequency was determined.
Increasing the stiffness of the system shifted the resonant frequency away from the frequency window in which the engine showed heavy vibration or resonance. After multiple iterations in ANSYS software, the stiffness of the spring mount was finalized and again experimentally validated.
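In the simplest single-degree-of-freedom view, the mount-tuning logic above reduces to the natural frequency f_n = (1/2π)·sqrt(k/m). The stiffness values, computer mass, and engine excitation band below are assumed for illustration and are not the actual mount data.

```python
import math

# Single-degree-of-freedom sketch of mount tuning: stiffening the mount
# raises the natural frequency above the engine's excitation band.

def natural_freq_hz(k_n_per_m, mass_kg):
    """Natural frequency of a mass on a spring, f_n = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k_n_per_m / mass_kg) / (2 * math.pi)

computer_mass = 25.0          # assumed enclosure + computer mass, kg
engine_band = (15.0, 30.0)    # assumed engine excitation band, Hz

f_soft = natural_freq_hz(2.5e5, computer_mass)   # original soft mount
f_stiff = natural_freq_hz(1.5e6, computer_mass)  # stiffer spring mount
# f_soft falls inside the excitation band (resonance risk);
# f_stiff lies above it, which is the goal of the redesign
```

A real wire-rope mount is nonlinear and damped, so the FEM and experimental modal analysis in the paper are needed for the actual design; this sketch only shows why higher stiffness moves the resonance.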

Keywords: experimental modal analysis, FEM Modal Analysis, frequency, modal analysis, resonance, vibration

Procedia PDF Downloads 319
333 Maintenance Wrench Time Improvement Project

Authors: Awadh O. Al-Anazi

Abstract:

As part of the organizational needs for successful maintenance activities, a proper management system needs to be put in place to ensure the effectiveness of those activities. The management system shall clearly describe the process of identifying, prioritizing, planning, scheduling, and executing all maintenance activities, and of providing valuable feedback on them. A complete and accurate system, properly implemented, provides the organization with a strong platform for effective maintenance activities that result in efficient outcomes and business success. The purpose of this research was to introduce a practical tool for measuring the maintenance efficiency level within Saudi organizations. A comprehensive study was launched among maintenance professionals across leading Saudi organizations. The study covered five main categories: work process, identification, planning and scheduling, execution, and performance monitoring. Each category was evaluated across many dimensions to determine its current effectiveness on a five-level scale from 'process is not there' to 'mature implementation'. Wide participation was received, the responses were analyzed, and the study concluded by highlighting major gaps and improvement opportunities within Saudi organizations. One effective implementation of the efficiency enhancement efforts was deployed in Saudi Kayan (one of the SABIC affiliates). The details below describe the project outcomes: SK overall maintenance wrench time was measured at 20% (on average) of the total daily working time. The assessment indicated several organizational gaps, such as a high amount of reactive work, poor coordination and teamwork, unclear roles and responsibilities, and underutilization of resources.
A multidisciplinary team was assigned to design and implement an appropriate work process capable of governing the execution process, improving maintenance workforce efficiency, and maximizing wrench time (targeting > 50%). The enhanced work process was introduced through brainstorming and wide benchmarking, incorporated with a proper change management plan and leadership sponsorship. The project was completed in 2018. Achieved results: SK wrench time was improved to 50%, which resulted in 1) reducing the average notification completion time, and 2) reducing maintenance expenses on overtime and manpower support (3.6 MSAR actual saving against budget within six months).
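The wrench-time figures quoted above (20% baseline, 50% achieved) amount to a simple fraction of the working day; the 10-hour working day assumed below is illustrative, not a figure from the study.

```python
# Wrench time as a percentage of the daily working time, using the
# abstract's 20% baseline and 50% achieved figures with an assumed
# 10-hour working day.

def wrench_time_pct(tool_time_hours, total_hours):
    """Percentage of the working day spent on productive tool time."""
    return 100.0 * tool_time_hours / total_hours

baseline = wrench_time_pct(2.0, 10.0)   # 2 productive hours of 10
achieved = wrench_time_pct(5.0, 10.0)   # 5 productive hours of 10
# extra productive hours recovered per technician per (assumed) 10-hour day
gain_hours_per_tech = 10.0 * (achieved - baseline) / 100.0
```

Tracked this way, the improvement from 20% to 50% corresponds to three additional productive hours per technician per assumed 10-hour day, which is what drives the overtime and manpower savings reported.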

Keywords: efficiency, enhancement, maintenance, work force, wrench time

Procedia PDF Downloads 143
332 Towards Sustainable Concrete: Maturity Method to Evaluate the Effect of Curing Conditions on the Strength Development in Concrete Structures under Kuwait Environmental Conditions

Authors: F. Al-Fahad, J. Chakkamalayath, A. Al-Aibani

Abstract:

Conventional methods of determining concrete strength under controlled laboratory conditions do not accurately represent the actual strength of concrete developed under site curing conditions. This difference in strength measurement is greater in the extreme environment of Kuwait, which is characterized by a hot marine climate, with normal summer temperatures exceeding 50°C, accompanied by dry wind in desert areas and salt-laden wind in marine and onshore areas. Test methods are therefore required to measure the in-place properties of concrete, both for quality assurance and for the development of durable concrete structures. The maturity method, which defines the strength of a given concrete mix as a function of its age and temperature history, is an approach to quality control for the production of sustainable and durable concrete structures. The unique harsh environmental conditions in Kuwait make it impractical to adopt experience and empirical equations developed from the maturity methods of other countries. Concrete curing, especially at an early age, plays an important role in developing and improving the strength of the structure. This paper investigates the use of the maturity method to assess the effectiveness of three different curing methods on the compressive and flexural strength development of one high-strength (60 MPa) concrete mix produced with silica fume. The maturity approach was used to accurately predict concrete compressive and flexural strength at later ages under different curing conditions. Maturity curves were developed for compressive and flexural strengths for a commonly used concrete mix in Kuwait, cured under three different conditions: water curing, external spray coating, and the use of an internal curing compound during concrete mixing. It was observed that the maturity curve developed for the same mix depends on the type of curing condition.
The curve can be used to predict concrete strength under different exposure and curing conditions. This study showed that the external spray curing method cannot be recommended, as it failed to help the concrete reach accepted strength values, especially for flexural strength. Using an internal curing compound led to accepted levels of strength compared with water curing. Utilization of the developed maturity curves will help contractors and engineers determine in-place concrete strength at any time and under different curing conditions. This will help in deciding the appropriate time to remove formwork. The resulting reduction in construction time and cost has positive impacts on sustainable construction.
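The maturity method referred to above is commonly based on the Nurse-Saul index, M(t) = Σ (Ta − T0) Δt, with Ta the average concrete temperature over each interval and T0 a datum temperature (often −10 °C). A minimal sketch follows; the temperature histories are assumed illustrations, not the study's measured data.

```python
# Nurse-Saul maturity index in degree-hours: M(t) = sum((Ta - T0) * dt),
# clipped at zero below the datum temperature T0 (commonly -10 C).

def nurse_saul_maturity(temps_c, dt_hours, t0=-10.0):
    """Maturity (degree-hours) from a history of average temperatures."""
    return sum(max(t - t0, 0.0) * dt_hours for t in temps_c)

# hourly average temperatures: a hot Kuwait site day vs. a 23 C lab cure
site_temps = [32, 35, 41, 47, 50, 46, 39, 34]   # assumed site record, C
lab_temps = [23] * 8
m_site = nurse_saul_maturity(site_temps, dt_hours=1.0)
m_lab = nurse_saul_maturity(lab_temps, dt_hours=1.0)
# same age, different maturities: hot-climate curing accumulates maturity
# (and thus early strength) much faster than the laboratory reference
```

Reading a measured maturity value into the strength-maturity curve developed for a mix is what lets site engineers estimate in-place strength, for example when deciding formwork removal.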

Keywords: curing, durability, maturity, strength

Procedia PDF Downloads 299
331 Validation of a Placebo Method with Potential for Blinding in Ultrasound-Guided Dry Needling

Authors: Johnson C. Y. Pang, Bo Peng, Kara K. L. Reeves, Allan C. L. Fud

Abstract:

Objective: Dry needling (DN) has long been used as a treatment method for various musculoskeletal pain conditions. However, the evidence level of existing studies is low due to limitations in methodology; lack of randomization and inappropriate blinding are potentially the main sources of bias. A method that can differentiate clinical results due to the targeted experimental procedure from its placebo effect is needed to enhance the validity of trials. Therefore, this study aimed to validate a placebo ultrasound (US)-guided DN method for patients with knee osteoarthritis (KOA). Design: This is a randomized controlled trial (RCT). Ninety subjects (25 males and 65 females) aged between 51 and 80 (61.26 ± 5.57) with radiological KOA were recruited and randomly assigned into three groups by a computer program. Group 1 (G1) received real US-guided DN, Group 2 (G2) received placebo US-guided DN, and Group 3 (G3) was the control group. Both G1 and G2 subjects received the same US-guided DN procedure, except that the US monitor was turned off in G2, blinding the G2 subjects to the incorporation of faux US guidance. This arrangement created the placebo effect intended to permit comparison of their results with those of subjects who received actual US-guided DN. Outcome measures, including the visual analog scale (VAS) and the Knee injury and Osteoarthritis Outcome Score (KOOS) subscales of pain, symptoms, and quality of life (QOL), were analyzed by repeated-measures analysis of covariance (ANCOVA) for time effects and group effects. The data regarding the perception of receiving real or placebo US-guided DN were analyzed by the chi-squared test. Missing data were to be analyzed with the intention-to-treat (ITT) approach if more than 5% of the data were missing. Results: The placebo US-guided DN (G2) subjects had the same perception of receiving real US guidance in the advancement of DN (p<0.128).
G1 had significantly higher pain reduction (VAS and KOOS-pain) than G2 and G3 at 8 weeks (both p<0.05) only. There was no significant difference between G2 and G3 at 8 weeks (both p>0.05). Conclusion: The method with the US monitor turned off during the application of DN is credible for blinding the participants and allowing researchers to incorporate faux US guidance. The validated placebo US-guided DN technique can aid in investigations of the effects of US-guided DN with short-term effects of pain reduction for patients with KOA. Acknowledgment: This work was supported by the Caritas Institute of Higher Education [grant number IDG200101].
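The chi-squared check of blinding credibility can be sketched on a 2×2 perception table, comparing how many subjects in each needling group believed they received real US guidance. The counts below are hypothetical, not the trial's data.

```python
# Pearson chi-squared statistic for a 2x2 table of blinding perceptions.
# rows: G1 (real US), G2 (placebo US); cols: perceived real, perceived placebo

def chi2_2x2(table):
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n            # expected count under independence
        stat += (obs - exp) ** 2 / exp
    return stat

observed = [[26, 4],                   # hypothetical G1 perceptions
            [23, 7]]                   # hypothetical G2 perceptions
stat = chi2_2x2(observed)
# stat below the df=1, alpha=0.05 critical value of 3.84 means the two
# groups' perceptions are statistically indistinguishable, i.e. the
# turned-off monitor plausibly maintained the blind
```

A statistic below the critical value is the quantitative counterpart of the abstract's claim that G2 subjects perceived the placebo as real guidance.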

Keywords: ultrasound-guided dry needling, dry needling, knee osteoarthritis, physiotherapy

Procedia PDF Downloads 116
330 The Preliminary Study of the Possible Relationship between Urban Open Space System and Residents' Health Outcome

Authors: Jia-Jin He, Tzu-Yuan Stessa Chao

Abstract:

It is generally accepted that community residents with abundant open space have better health status on average, and thus more and more cities around the world have begun to pursue the greatest possible amount of green space within urban areas through urban planning. Nevertheless, only a few studies have managed to provide empirical evidence regarding the actual relationship between 'providing' green space and 'improving' human health at the city level. There is also a lack of evidence of direct, positive improvement of health through increasing the amount of green space. For urban planning professionals, it is important to understand citizens' usage behaviour towards green space as critical evidence for future planning and design strategies. There is a research need to further investigate the amount of green space, the usage behaviour of green spaces, and the health outcomes of urban dwellers. To this end, we would like to find out the other important factors behind urban dwellers' usage behaviours of green spaces. 'Average green space per person' is one of the national well-being indicators in Taiwan, as in many other countries. In our preliminary research, we collected and analyzed official data on planned open space coverage, average life expectancy, exercise frequency, and obesity ratio in all cities of Taiwan. The results indicate an interesting finding: Kaohsiung city, the second largest city in Taiwan, tells a completely different story. Citizens in Kaohsiung city have more open space, provided through urban planning, than those in any other city, yet are in relatively unhealthy condition. This suggests that the amount of open space per person may not translate directly into health outcomes. Therefore, the pre-established view that open spaces must have positive effects on human health should be examined more prudently.
Hence, this paper intends to explore the relationship between the usage behaviour of open spaces and citizens' health conditions by critically analyzing past related literature and collecting selected data from the government health database for 2015. We also take Kaohsiung city as a case study area, conducting statistical analysis first, followed by a questionnaire survey, to gain a better understanding. Finally, we aim to feed our findings back into the current planning system in Taiwan for better health promotion in urbanized areas.

Keywords: open spaces, urban planning systems, healthy cities, health outcomes

Procedia PDF Downloads 163
329 The Situation in Afghanistan as a Step Forward in Putting an End to Impunity

Authors: Jelena Radmanovic

Abstract:

On 5 March 2020, the International Criminal Court decided to authorize the investigation into the crimes allegedly committed on the territory of Afghanistan after 1 May 2003. This determination has raised several controversies, including the recently imposed sanctions by the United States, furthering the United States' long-standing rejection of the authority of the International Criminal Court. The purpose of this research is to address the said investigation in light of its importance for the prevention of impunity in cases where the perpetrators are nationals of states that are not parties to the Rome Statute. The difficulties that the International Criminal Court has been facing in establishing its jurisdiction in instances where an involved state is not a party to the Rome Statute have become the most significant stumbling block undermining the importance, integrity, and influence of the Court. The Situation in Afghanistan raises even further concern, bearing in mind that the Prosecutor’s request for authorization of an investigation pursuant to Article 15, dated 20 November 2017, was initially rejected with the ‘interests of justice’ as the applied rationale. The first method used in the present research is the description of the actual events surrounding the aforementioned decisions and the subsequent reactions in the international community, while with the second method, conceptual analysis, the research addresses the decisions pertaining to the International Criminal Court’s jurisdiction and attempts to present the Decision of 5 March 2020 as an example of good practice and a precedent that should be followed in all similar situations. The research parses the reasoning used by the International Criminal Court, giving greater attention to the latter decision, which authorized the investigation, and to the points raised by officials of the United States.
It is a finding of this research that the International Criminal Court, together with other similar judicial bodies (the Nuremberg and Tokyo Tribunals, the International Criminal Tribunal for the former Yugoslavia, and the International Criminal Tribunal for Rwanda), has presented the world with the possibility of ending impunity, attempting to prosecute those responsible for the gravest crimes known to humanity and showing that such persons should not enjoy the benefits of their immunities, with its focus primarily on the victims of such crimes. Whilst this issue will most certainly be addressed further in the future, through the situations that will be brought before the International Criminal Court, the present research attempts to point to the significance of the Situation in Afghanistan for the International Criminal Court as such and for international criminal justice as a whole, for the purpose of putting an end to impunity.

Keywords: Afghanistan, impunity, international criminal court, sanctions, United States

Procedia PDF Downloads 121
328 New Knowledge Co-Creation in Mobile Learning: A Classroom Action Research with Multiple Case Studies Using Mobile Instant Messaging

Authors: Genevieve Lim, Arthur Shelley, Dongcheol Heo

Abstract:

Mobile technologies can enhance the learning process as they enable social engagement around concepts beyond the classroom and the curriculum. Early results in this ongoing research show that when learning interventions are designed specifically to generate new insights, mobile devices support regulated learning and encourage learners to collaborate, socialize, and co-create new knowledge. As students navigate across space and time boundaries, the fundamentally social nature of learning transforms into mobile computer-supported collaborative learning (mCSCL). The metacognitive interaction in mCSCL via mobile applications reflects the regulation of learning among the students. These metacognitive experiences, whether self-, co-, or shared-regulated, are significant to the learning outcomes. Despite some insightful empirical studies, there has not yet been significant research that investigates the actual practice and processes of new knowledge co-creation. This raises the question of whether mobile learning merely provides a new channel that leverages learning, or whether mobile interaction creates new types of learning experiences, and how these experiences co-create new knowledge. The purpose of this research is to explore these questions and seek evidence to support one view or the other. This paper addresses these questions from the students’ perspective to understand how students interact when constructing knowledge in mCSCL and how students’ self-regulated learning (SRL) strategies support the co-creation of new knowledge in mCSCL. A pilot study has been conducted among international undergraduates to understand students’ perspectives on mobile learning and, concurrently, to develop a definition in an appropriate context. Using classroom action research (CAR) with multiple case studies, this study is being carried out in a private university in Thailand to narrow the research gaps in mCSCL and SRL.
The findings will allow teachers to see the importance of social interaction for meaningful student engagement, to envisage learning outcomes from a knowledge management perspective, and to consider what role mobile devices can play in these. The findings will provide important indicators for academics to rethink what is to be learned and how it should be learned. Ultimately, the study will shed new light on the co-creation of new knowledge in a socially interactive learning environment and challenge teachers to embrace the 21st century of learning with mobile technologies to deepen and extend learning opportunities.

Keywords: mobile computer supported collaborative learning, mobile instant messaging, mobile learning, new knowledge co-creation, self-regulated learning

Procedia PDF Downloads 230
327 Comparison of Spiral Circular Coil and Helical Coil Structures for Wireless Power Transfer System

Authors: Zhang Kehan, Du Luona

Abstract:

Wireless power transfer (WPT) systems have been widely investigated for their advantages of convenience and safety compared to traditional plug-in charging systems. Research topics include impedance matching, circuit topology, and transfer distance, among others, all aimed at improving the efficiency of the WPT system, which is a decisive factor in practical applications. Moreover, coil structures such as the spiral circular coil and the helical coil with variable distance between two turns also have indispensable effects on the efficiency of WPT systems. This paper compares the efficiency of WPT systems utilizing spiral or helical coils with variable distance between two turns, and experimental results show that the efficiency of the spiral circular coil with an optimum distance between two turns is the highest. Based on the efficiency formula of a resonant WPT system with series-series topology, we introduce M²/R₁ to measure the efficiency of spiral circular coil and helical coil WPT systems. If the distance s between two turns is too small, proximity effect theory shows that the current induced in a conductor by the varying flux of the current flowing in the skin of the adjacent conductor is opposite in direction to the source current and has a non-negligible impact on the coil resistance. Thus, in both coil structures, s affects the coil resistance. At the same time, when the distance between the primary and secondary coils is fixed, s also influences M to some degree. The aforementioned analysis proves that s plays an indispensable role in changing M²/R₁ and can therefore be adjusted to find the optimum value at which the WPT system achieves the highest efficiency. In practical applications of WPT systems, especially in underwater vehicles, miniaturization is a vital issue in designing WPT system structures. Limited by system size, the largest external radius of the spiral circular coil is 100 mm, and the largest height of the helical coil is 40 mm.
In other words, the number of turns N changes with s. In both the spiral circular and helical structures, the distance between every two turns in the secondary coil is fixed at 1 mm to guarantee that R₂ does not vary. Based on the analysis above, we set up spiral circular coil and helical coil models in COMSOL to analyze the value of M²/R₁ as the distance sₚ between every two turns in the primary coil varies from 0 mm to 10 mm. In the two structure models, the distance between the primary and secondary coils is 50 mm, and the wire diameter is 1.5 mm. The number of turns in the secondary coil is 27 in the helical coil model and 20 in the spiral circular coil model. The best values of s in the helical coil structure and the spiral circular coil structure are 1 mm and 2 mm, respectively, at which the value of M²/R₁ is largest. The spiral circular coil is therefore the first choice when designing the WPT system, since its M²/R₁ is larger than that of the helical coil under the same conditions.
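The trade-off behind the figure of merit M²/R₁ (tighter spacing raises M but, via the proximity effect, also raises R₁) can be sketched as a simple sweep over the turn spacing s. The mutual inductance and resistance values below are illustrative placeholders, not the COMSOL results reported in the paper.

```python
# Hypothetical sweep of the figure of merit M^2/R1 over turn spacing s
# for a spiral circular coil. Values are invented for illustration.

def figure_of_merit(M: float, R1: float) -> float:
    """M^2 / R1, the quantity used here to rank coil designs."""
    return M**2 / R1

# turn spacing s (mm) -> (mutual inductance M in uH, primary resistance R1 in ohm)
spiral_sweep = {
    0.5: (4.0, 0.90),   # tight spacing: proximity effect inflates R1
    1.0: (3.9, 0.60),
    2.0: (3.7, 0.45),
    4.0: (3.2, 0.42),   # wide spacing: fewer turns fit, so M drops
}

best_s = max(spiral_sweep, key=lambda s: figure_of_merit(*spiral_sweep[s]))
print(f"optimum turn spacing: {best_s} mm")
```

With these placeholder numbers, the maximum of M²/R₁ falls at an intermediate spacing, reflecting the competition between proximity-effect losses and mutual coupling described above.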

Keywords: distance between two turns, helical coil, spiral circular coil, wireless power transfer

Procedia PDF Downloads 340
326 Second Time’s a Charm: The Intervention of the European Patent Office on the Strategic Use of Divisional Applications

Authors: Alissa Lefebre

Abstract:

It might seem intuitive to hope for a fast decision on the patent grant. After all, a granted patent provides you with a monopoly position, which allows you to obstruct others from using your technology. However, this does not take into account the strategic advantages one can obtain from keeping patent applications pending. First, there is the financial advantage of postponing certain fees, although many applicants would probably agree that this is not the main benefit. As the scope of the patent protection is only decided upon at grant, the pendency period introduces uncertainty amongst rivals. This uncertainty entails not knowing whether the patent will actually be granted and what the scope of protection will be. Consequently, rivals can only rely upon limited and uncertain information when deciding what technology is worth pursuing. One way to keep patent applications pending is the use of divisional applications. These applications can be filed out of a parent application as long as that parent application is still pending. This allows the applicant to pursue (part of) the content of the parent application in another application, as the divisional application cannot exceed the scope of the parent application. In a fast-moving and complex market such as tele- and digital communications, this may allow applicants to obtain an actual monopoly position, as competitors are discouraged from pursuing a certain technology. Nevertheless, this practice also has downsides. First of all, it has an impact on the workload of the examiners at the patent office. As the number of patent filings has been increasing over the last decades, strategies that increase this number even further are not desirable from the patent examiners' point of view. Secondly, a pending patent does not provide the protection of a granted patent, thus creating uncertainty not only for rivals but also for the applicant.
Consequently, the European Patent Office (EPO) has come up with a “raising the bar” initiative in which it has decided to tackle the strategic use of divisional applications. Over the past years, two rules have been implemented. The first rule, introduced in 2010, imposed a time limit under which divisional applications could only be filed within 24 months after the first communication with the patent office. However, after carrying out a user feedback survey, the EPO abolished the rule again in 2014 and replaced it with a fee mechanism. The fee mechanism is still in place today, which might be an indication of a better result compared to the first rule change. This study tests the impact of these rules on the strategic use of divisional applications in the tele- and digital communication industry and provides empirical evidence on their success. Using three different survival models, we find overall evidence that divisional applications prolong the pendency time and that only the second rule is able to tackle strategic patenting and thus decrease the pendency time.
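The kind of survival analysis used here can be illustrated with a minimal Kaplan-Meier sketch, where the "event" is the termination (e.g. grant) of an application and time is pendency in months. The filings below are hypothetical; the paper itself fits three different survival models to EPO data.

```python
# Minimal product-limit (Kaplan-Meier) estimator for right-censored
# pendency data. Hypothetical example, not the paper's actual sample.

def kaplan_meier(durations, observed):
    """Return (time, survival probability) pairs for right-censored data."""
    at_risk = len(durations)
    surv = 1.0
    curve = []
    # events before censorings at tied times (the usual KM convention)
    for t, event in sorted(zip(durations, observed),
                           key=lambda p: (p[0], not p[1])):
        if event:                        # termination observed at time t
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1                     # censored cases leave the risk set
    return curve

# pendency in months; observed=False means the application is still pending
pendency = [18, 24, 24, 36, 48, 60]
observed = [True, True, False, True, True, False]

for t, s in kaplan_meier(pendency, observed):
    print(f"after {t} months, {s:.0%} of applications are still pending")
```

In this framing, strategic use of divisional applications shifts the survival curve to the right (longer pendency), and an effective rule change would shift it back.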

Keywords: divisional applications, regulatory changes, strategic patenting, EPO

Procedia PDF Downloads 124
325 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information: for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data is used. Usually, however, the proportion of complete data is rather small, which leads to most information being neglected. Moreover, even the complete data might be strongly distorted. In addition, the reason that data is missing might itself contain information, which is ignored with that approach. An interesting issue is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all data in the model, the distortion of the first training set (the complete data) vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data.
After the optimal parameter set for each algorithm has been found, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Moreover, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
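The evaluation loop described above (randomly hide known values, impute them, compare the estimates to the actual data) can be sketched minimally as follows. Column-mean imputation stands in for the clustering, PCA, and neural-network imputers benchmarked in the study, and the search-subscription records are hypothetical.

```python
# Hypothetical sketch of the imputation benchmark: mask ~20% of known
# entries, impute with column means, and measure the error against the
# hidden ground truth. Not the study's actual data or algorithms.
import random

def mean_impute(rows):
    """Replace None entries with their column mean."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) / sum(v is not None for v in c)
             for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(r)] for r in rows]

random.seed(1)
# (rooms, max price in kCHF) per search subscription, fully observed
data = [[3, 900], [4, 1200], [2, 700], [5, 1500], [3, 1000]]

# randomly mask entries, remembering the ground truth for scoring
masked, truth = [r[:] for r in data], []
for i, r in enumerate(masked):
    for j in range(len(r)):
        if random.random() < 0.2:
            truth.append((i, j, data[i][j]))
            r[j] = None

imputed = mean_impute(masked)
err = [abs(imputed[i][j] - v) for i, j, v in truth]
print(f"masked {len(truth)} entries, mean absolute error {sum(err) / len(err):.1f}")
```

In the study, this scoring step is repeated over several parameter combinations per algorithm to select the optimal configuration before the final imputation.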

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 282
324 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis

Authors: Coriolano Salvini, Ambra Giovannelli

Abstract:

The use of renewable energy sources for electric power production leads to reduced CO2 emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose relevant problems in fulfilling the load demand safely and cost-efficiently over time. Significant benefits in terms of “grid system applications”, “end-use applications”, and “renewable applications” can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small-to-medium size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage systems. The present paper addresses the design and off-design analysis of the compression system of small size CAES plants suited to absorb electric power in the range of hundreds of kilowatts. The system of interest consists of an intercooled (in case aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large diameter steel pipe sections. A specific methodology for the preliminary sizing and off-design modeling of the system has been developed. Since, during the charging phase, the absorbed electric power has to change over time according to the peculiar CAES requirements and the pressure ratio increases continuously as the reservoir fills, the compressor has to work at a variable mass flow rate. In order to ensure an appropriately wide range of operations, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid.
The developed tool gives useful information to size the compression system appropriately and to manage it in the most effective way. Various cases characterized by different system requirements are analysed, and the results are presented and discussed in detail.
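The charging-phase behaviour described above can be sketched with a simple time-marching model, assuming an isothermal ideal-gas reservoir and ideal equal-ratio intercooled stages. The reservoir volume, stage count, and mass flow below are illustrative assumptions, not the sizing results of the paper.

```python
# Hedged sketch of a CAES charging phase: as air is pumped into a
# fixed-volume reservoir, its pressure rises, so the compressor pressure
# ratio and the absorbed power grow even at constant mass flow.
# All plant figures are illustrative assumptions.

R_AIR, T_AMB, CP, GAMMA = 287.0, 293.15, 1005.0, 1.4  # J/(kg K), K, J/(kg K), -
P_AMB = 101_325.0       # Pa, ambient (suction) pressure
V_RES = 50.0            # m^3, steel-pipe reservoir volume (assumed)
N_STAGES = 3            # intercooled reciprocating stages (assumed)
M_DOT = 0.15            # kg/s, charging mass flow (assumed constant here)

def specific_work(pr: float) -> float:
    """Ideal intercooled compression work per kg, equal ratio per stage."""
    exp = (GAMMA - 1.0) / (GAMMA * N_STAGES)
    return N_STAGES * CP * T_AMB * (pr**exp - 1.0)

# march the charge in 1-minute steps; stored air assumed to stay at T_AMB
p = P_AMB
mass = V_RES * P_AMB / (R_AIR * T_AMB)    # air initially in the reservoir
dt = 60.0
for step in range(180):                    # 3 hours of charging
    mass += M_DOT * dt
    p = mass * R_AIR * T_AMB / V_RES       # isothermal ideal-gas reservoir
    power_kw = M_DOT * specific_work(p / P_AMB) / 1e3

print(f"final pressure {p / 1e5:.1f} bar, final absorbed power {power_kw:.0f} kW")
```

Even this crude sketch shows why the compressor must operate over a wide range: the power absorbed at a fixed mass flow rises continuously with the reservoir pressure, which is what the paper's capacity control device and variable mass flow strategy are meant to manage.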

Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management

Procedia PDF Downloads 224