Search results for: grid visualization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1487

107 Manufacturing the Authenticity of Dokkaebi’s Visual Representation in Tourist Marketing

Authors: Mikyung Bak

Abstract:

The dokkaebi, a beloved icon of Korean culture, is represented as an elf, goblin, monster, dwarf, or similar creature across different media, such as animated shows, comics, soap operas, and movies. It is often described as a mythical creature with one or more horns and long teeth, wearing tiger-skin pants or a grass skirt and carrying a magic stick. Many Korean researchers agree on the similarity of the image of the Korean dokkaebi to that of the Japanese oni, a view that is regarded as negative from an anti-colonial or nationalistic standpoint; they cite this similarity between the two mythical creatures as evidence that Japanese colonialism persists in Korea. The debate on the originality of dokkaebi’s visual representation is thus an issue that must be addressed urgently. This research demonstrates, through a diagram, the plurality of interpretations of dokkaebi’s visual representations within what are considered ‘authentic’ images of dokkaebi in Korean art and culture. The diagram presents the opinions of the four major groups in the debate, namely, scholars of Korean literature and folklore, art historians, authors, and artists. It also shows the creation of new dokkaebi visual representations in popular media, including those influenced by the debate. The diagram further shows that dokkaebi’s representations have varied, ranging from the typical persons or invisible characters found in Korean literature, through original Korean folk characters in traditional art, to universal spirit characters. Dokkaebi are also visually represented by completely new creatures as well as by oni-based mythical beings and the actual oni itself. The earlier dokkaebi representations were driven by the creation of a national ideology or national cultural paradigm and were thus more uniform and protected. In contrast, the more recent representations are influenced by the Korean industrial strategy of ‘cultural economics,’ which is concerned with the international rather than the domestic market. 
This recent Korean cultural strategy emphasizes diversity and commonality with global culture rather than originality and locality; it employs traditional cultural resources to construct a global image. Consequently, dokkaebi’s recent representations have become more common and diverse, incorporating even oni’s characteristics, a development that has rendered the grounds of the debate irrelevant. The dokkaebi has recently been used for tourist marketing purposes, particularly in revitalizing interest in regions considered the cradle of various traditional dokkaebi tales. These campaign strategies include the Jeju-do Dokkaebi Park, Koksung Dokkaebi Land, and the Taebaek and Sokri-san Dokkaebi Festivals. Almost all dokkaebi characters in tourist marketing are identical to the Japanese oni. However, the pursuit of dokkaebi’s authentic visual representation is less interesting and fruitful than the appreciation of the entire spectrum of dokkaebi images that have been created. Scholars and stakeholders must therefore not exclude the variety of potentials within the visual culture, and the same sentiment applies to traditional art and craft. This study aims to contribute to a new visualization of the dokkaebi that embraces the possibilities of both folk craft and art, which continue to be uncovered by diverse and careful researchers in a still-developing field.

Keywords: Dokkaebi, post-colonial period, representation, tourist marketing

Procedia PDF Downloads 255
106 Increment of Panel Flutter Margin Using Adaptive Stiffeners

Authors: S. Raja, K. M. Parammasivam, V. Aghilesh

Abstract:

Fluid-structure interaction is a crucial consideration in the design of many engineering systems such as flight vehicles and bridges. Aircraft lifting surfaces and turbine blades can fail due to oscillations caused by fluid-structure interaction; hence, the present research focuses on this phenomenon. First, the effect of free vibration on the panel is studied. It is well known that the deformation of a panel and the flow-induced forces affect one another. The selected panel has a span of 300 mm, a chord of 300 mm, and a thickness of 2 mm. The effects of stiffener cross-sectional area and stiffener location are studied for this panel: the stiffener spacing is varied along both the chord-wise and span-wise directions, and for the optimal location, the ideal stiffener length is then identified. The effect of stiffener cross-section shape (T, I, Hat, Z) on flutter velocity has also been investigated. The flutter velocities of the selected panel with two rectangular stiffeners in a cantilever configuration are estimated using the MSC NASTRAN software package. As the flow passes over the panel, deformation takes place, which further changes the flow structure over it. With increasing velocity, the deformation keeps growing, but the stiffness of the system tries to damp the excitation and maintain equilibrium; beyond a critical velocity, however, the system damping suddenly becomes ineffective and the system loses its equilibrium. This critical velocity is estimated in NASTRAN using the PK method. The first 10 modal frequencies of a simple panel and a stiffened panel are estimated numerically and validated against the open literature. A grid-independence study is also carried out, and the modal frequency values remain unchanged for element lengths below 20 mm. The current investigation concludes that span-wise stiffener placement is more effective than chord-wise placement. 
The maximum flutter velocity achieved for chord-wise placement is 204 m/s, while for a span-wise arrangement it increases to 963 m/s with the stiffeners located at ¼ and ¾ of the chord from the panel edge (50% of the chord on either side of the mid-chord line). The flutter velocity is directly proportional to the stiffener cross-sectional area. A significant increment in flutter velocity, from 218 m/s to 1024 m/s, is observed as the stiffener length is varied from 50% to 60% of the span; a maximum flutter velocity above Mach 3 is thus achieved. It is also observed that for a stiffened panel, the full effect of the stiffener is obtained only when the stiffener end is clamped. Stiffeners with a Z cross-section increased the flutter velocity from 142 m/s (panel with no stiffener) to 328 m/s, which is 2.3 times that of the simple panel.
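As a quick sanity check, the increment factors quoted above can be reproduced by simple arithmetic. The velocities in this sketch are taken directly from the abstract; the script itself is purely illustrative and performs no flutter analysis.

```python
# Back-of-envelope check of the flutter-velocity increments reported above.
# All velocities (m/s) are quoted from the abstract; nothing is simulated here.

v_no_stiffener = 142.0   # simple panel, no stiffener
v_z_stiffener = 328.0    # panel with Z cross-section stiffeners

# Increment factor for the Z-stiffened panel relative to the simple panel
factor = v_z_stiffener / v_no_stiffener
print(f"Z-stiffener increment factor: {factor:.1f}x")  # ~2.3x, as stated

# Chord-wise vs span-wise stiffener placement
v_chordwise = 204.0
v_spanwise = 963.0
print(f"span-wise / chord-wise ratio: {v_spanwise / v_chordwise:.2f}")

# Effect of lengthening the stiffeners from 50% to 60% of the span
v_len_50, v_len_60 = 218.0, 1024.0
print(f"length 50%->60% increment: {v_len_60 / v_len_50:.2f}x")
```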

Keywords: stiffener placement, stiffener cross-sectional area, stiffener length, stiffener cross-section shape

Procedia PDF Downloads 270
105 Feasibility of Small Autonomous Solar-Powered Water Desalination Units for Arid Regions

Authors: Mohamed Ahmed M. Azab

Abstract:

The shortage of fresh water is a major problem in several areas of the world, such as arid regions and the coastal zones of several Arabian Gulf countries. Fortunately, arid regions are exposed to high levels of solar irradiation for most of the year, which makes the utilization of solar energy a promising solution to this problem with zero harmful emissions (a green system). The main objective of this work is to conduct a feasibility study of small autonomous water desalination units powered by photovoltaic modules, a green renewable energy resource, to be employed in isolated zones as a source of drinking water for scattered communities where the installation of large desalination stations is ruled out owing to the unavailability of an electric grid. Yanbu City is chosen as the case study because its Renewable Energy Center is equipped with all the sensors needed to assess the availability of solar energy throughout the year. The study covers two types of feed water: brackish well water and the seawater of coastal regions. In the case of well water, two versions of the desalination unit are considered: the first is based on daytime operation only, while the second also operates at night, which requires an energy storage system (batteries) to provide the necessary electric power. According to the feasibility study results, the utilization of small autonomous desalination units is applicable and economically acceptable in the case of brackish well water. In the case of seawater, however, the capital costs are extremely high, and the cost of desalinated water will not be economically feasible unless governmental subsidies are provided. 
In addition, the study indicated that, for the same water production, the energy storage (day-night) version adds capital cost for the batteries and extra running cost for their replacement. This makes its unit water price uncompetitive not only with the day-only unit but also with conventional units powered by a diesel generator (fossil fuel), owing to the low fuel prices in the kingdom. However, the cost analysis shows that the price per cubic meter of water from the day-night unit matches that of the day-only unit provided that the day-night unit operates for a 50% longer period.
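The cost comparison above can be illustrated with a levelized-cost-of-water calculation. All figures in the sketch below (capex, opex, output, discount rate) are hypothetical placeholders, not values from the study; they are chosen only to show how battery capex and a longer operating period trade off against each other.

```python
# Illustrative levelized-cost-of-water (LCOW) comparison between a day-only
# and a day-night (battery-equipped) PV desalination unit.
# All numbers are hypothetical, not taken from the feasibility study.

def lcow(capex, annual_opex, annual_m3, years=20, discount=0.06):
    """Levelized cost per cubic metre over the unit's lifetime."""
    # Present value of lifetime costs divided by present value of water output
    pv_cost = capex + sum(annual_opex / (1 + discount) ** t for t in range(1, years + 1))
    pv_water = sum(annual_m3 / (1 + discount) ** t for t in range(1, years + 1))
    return pv_cost / pv_water

# Day-only unit: smaller capex, shorter daily operating window
day_only = lcow(capex=50_000, annual_opex=2_000, annual_m3=3_000)

# Day-night unit: batteries add capex and replacement opex,
# but the ~50% longer operating period raises annual output accordingly
day_night = lcow(capex=75_000, annual_opex=3_500, annual_m3=4_500)

print(f"day-only:  {day_only:.2f} USD/m3")
print(f"day-night: {day_night:.2f} USD/m3")
```

With these placeholder inputs the two unit prices come out close, mirroring the abstract's conclusion that the day-night unit only matches the day-only unit when its extra output offsets the battery costs.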

Keywords: solar energy, water desalination, reverse osmosis, arid regions

Procedia PDF Downloads 422
104 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs

Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

Unmanned aerial vehicle (UAV) technology offers cost efficiency and data retrieval time advantages. Technologies such as UAV, GNSS, and LiDAR are combined here into one integrated system in which each covers the others' deficiencies. This integration aims to increase the accuracy of calculating the volume of the land stockpiles of PT. Garam (a salt company). UAV imagery is used to obtain geometric data and capture the textures that characterize the structure of objects. This study uses the Taror 650 Iron Man drone with four propellers, which can fly for 15 minutes. The LiDAR data can be classified based on the number of image acquisitions processed in the software, utilizing photogrammetry and Structure-from-Motion point cloud principles. LiDAR acquisition enables the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy. A drawback of LiDAR is that its coordinate data are in a local reference frame. Therefore, the researchers use GNSS, LiDAR, and drone multi-sensor technology to map the salt stockpiles on open land and in warehouses, a task PT. Garam carries out twice a year and which was previously done with terrestrial methods and manual counting of sacks. LiDAR needs to be combined with the UAV to overcome its acquisition limitation of covering only the right and left sides of an object, particularly when applied to a salt stockpile. The UAV is flown to provide wide coverage, with the 200-gram LiDAR system integrated so that the viewing angle can be kept optimal during the flight. Using LiDAR for low-cost mapping surveys will make it easier for surveyors and academics to obtain fairly accurate data at a more economical price: at around 999 USD, LiDAR counts as a low-priced survey tool, yet it can produce detailed data. 
Therefore, to minimize operational costs, surveyors can use the low-cost LiDAR, GNSS, and UAV combination at a price of around 638 USD. The data generated by this sensor take the form of a three-dimensional visualization of an object's shape. This study aims to combine low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS produces latitude and longitude coordinates, from which X, Y, and Z values are derived to georeference the detected objects; the LiDAR in turn detects objects, including the heights across the entire surveyed environment. The data obtained are calibrated with pitch, roll, and yaw to recover the vertical heights of the existing contours. An experimental run was conducted on the roof of a building over a radius of approximately 30 meters.
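The pitch/roll/yaw calibration mentioned above amounts to rotating each range measurement from the sensor's body frame into a local level frame before reading off the vertical component. The sketch below is a minimal, self-contained illustration of that step; the function and variable names are this sketch's own, not the study's code.

```python
import math

# Minimal sketch: rotate a body-frame LiDAR vector into the local level
# frame (Z up) using the platform's roll, pitch, and yaw, then read off
# the vertical component as the height.

def body_to_level(vector, roll, pitch, yaw):
    """Rotate a body-frame vector into the local level frame.
    Angles in radians; rotations applied as roll (X), pitch (Y), yaw (Z)."""
    x, y, z = vector
    # Roll about the X axis
    y, z = (y * math.cos(roll) - z * math.sin(roll),
            y * math.sin(roll) + z * math.cos(roll))
    # Pitch about the Y axis
    x, z = (x * math.cos(pitch) + z * math.sin(pitch),
            -x * math.sin(pitch) + z * math.cos(pitch))
    # Yaw about the Z axis (does not affect the vertical component)
    x, y = (x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw))
    return (x, y, z)

# A 30 m range measured straight down by a nadir-pointing sensor,
# while the UAV is tilted 5 degrees in pitch:
raw = (0.0, 0.0, -30.0)
x, y, z = body_to_level(raw, roll=0.0, pitch=math.radians(5), yaw=0.0)
print(f"vertical component: {abs(z):.2f} m")  # slightly less than 30 m
```

Without this correction, the raw 30 m range would be mistaken for the vertical height, biasing the reconstructed contours whenever the platform tilts.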

Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour

Procedia PDF Downloads 64
103 Sustainable Urbanism: Model for Social Equity through Sustainable Development

Authors: Ruchira Das

Abstract:

The major metropolises of India are the result of the colonial organization of production, consumption, and sustenance. These cities grew, survived, and sustained themselves on the whims of colonial power and administrative agendas; they were symbols of power, authority, and administration. Around them, some colonial towns remained as small towns within close vicinity of the major metropolises and functioned as self-sufficient units until tremendous pressure on the metropolises triggered peripheral development. After independence, a huge expansion of the judiciary and administration systems resulted in city-oriented employment. A large number of people started residing within the city, or within commutable distance of it, which accelerated the cities' expansion. Since then, budgetary and planning expenditure has brought a new pace to economic activities. Investment in the industrial and agricultural sectors generated employment opportunities, which further drove urbanization. After two decades of budgetary and planning economic activities in India, a new era of metropolitan expansion began: the four major metropolises started expanding rapidly towards their suburbs, the concept of the large metropolitan area developed, and cities became the nuclei of suburbs and rural areas. In most cases, such expansion was not favorable to the relationship between a city and its hinterland, owing to the absence of a vision of compact sustainable development. The search for solutions needs to weigh the choices between rural- and urban-based development initiatives, and policymakers need to focus on the areas that will give the greatest impact, so that the benefits of development initiatives spread to all. There is an assumption that development integrates economic, social, and environmental considerations with equal weighting; the traditional, narrower, almost exclusive focus on economic criteria as the determinant of the level of development is thus re-described and expanded. 
The social and environmental aspects are as important as the economic aspect in achieving sustainable development. The arrangement of opportunities and of public and semi-public facilities for citizens is highly relevant to development, and it is the administration's responsibility to provide for the basic requirements of its inhabitants. Development should be both industrial and agricultural, so as to maintain a balance between the city and its hinterland. The policy, therefore, is to shift the emphasis away from economic growth towards sustainable human development. Policymakers should aim to create environments in which people's capabilities can be enhanced through effective, dynamic, and adaptable policy. Poverty cannot be eradicated simply by increasing income; improving people's condition requires an expansion of basic human capabilities. In the current scenario, the suburbs and rural areas are treated as an environmental burden on the metropolises, and a new way of living has to be encouraged there. We tend to segregate agriculture from the city and city life, which leads to over-consumption; this urbanism model instead attempts to let the two co-exist, creating an interesting overlapping of production and consumption networks towards sustainable rurbanism.

Keywords: socio–economic progress, sustainability, social equity, urbanism

Procedia PDF Downloads 285
102 Possibilities and Challenges for District Heating

Authors: Louise Ödlund, Danica Djuric Ilic

Abstract:

From a system perspective, there are several benefits of district heating (DH). The possibility of utilizing excess heat from waste incineration and from biomass-based combined heat and power (CHP) production (i.e., excess heat from electricity production) are two examples. However, in a future sustainable society, the benefits of DH may be less obvious. Owing to climate change and the increased energy efficiency of buildings, the demand for space heating is expected to decrease. As society develops towards a circular economy, a larger share of waste will be materially recycled, reducing the scope for DH production through energy recovery by waste incineration. Furthermore, the benefits of biomass-based CHP production will be less obvious, since marginal electricity production will no longer be linked to high greenhouse gas emissions as the share of renewable capacity in the electricity system grows. The purpose of the study is (1) to provide an overview of possible developments in other sectors that may influence DH in the future and (2) to identify new business strategies that would enable DH to adapt to future conditions and remain competitive with alternative heat production. A system approach was applied in which DH is seen as part of an integrated system that also comprises other sectors. The possible future development of other sectors and possible business strategies for DH producers were surveyed through a systematic literature review. In order to remain competitive with alternative heat production in the future, DH producers need to develop new business strategies. While the demand for space heating is expected to decrease, the demand for space cooling will probably increase, owing to climate change but also to the better insulation of buildings in cases where home appliances act as heat sources. 
This opens up the possibility of DH-driven absorption cooling, which would increase the annual capacity utilization of DH plants. The benefits of DH related to energy recovery from waste incineration will persist, since there will always be a need to take care of materials and waste that cannot be recycled (e.g., waste containing organic toxins or bacteria, such as diapers and hospital waste). Furthermore, by operating centrally controlled heat pumps, CHP plants, and heat storage in response to the variation in intermittent electricity production, DH companies may enable a larger share of intermittent electricity production in the national electricity grid. DH producers can also enable the development of local biofuel supply chains and reduce biofuel production costs by integrating biofuel and DH production in local DH systems.

Keywords: district heating, sustainable business strategies, sustainable development, system approach

Procedia PDF Downloads 62
101 Creating Renewable Energy Investment Portfolio in Turkey between 2018-2023: An Approach on Multi-Objective Linear Programming Method

Authors: Berker Bayazit, Gulgun Kayakutlu

Abstract:

The World Energy Outlook shows that energy markets will change substantially within the forthcoming decades. First, the action plans agreed under COP21 and the goal of CO₂ emission reduction are already shaping national policies. Secondly, rapid technological developments in the field of renewable energy will influence the medium- and long-term energy generation and consumption behavior of countries. Furthermore, the share of electricity in global energy consumption is expected to reach as much as 40 percent by 2040. Electric vehicles, heat pumps, new electronic devices, and digital improvements will be the outstanding technologies, and such innovations will drive the market's transformation. In order to meet the sharply increasing electricity demand caused by these technologies, countries have to make new investments in electricity production, transmission, and distribution. The electricity generation mix, specifically, becomes vital both for preventing CO₂ emissions and for reducing power prices, and the majority of research and development investments are made in the field of electricity generation. Hence, the diversity of primary sources and the planning of electricity generation are crucial for improving citizens' quality of life. Approaches that consider CO₂ emissions and the total cost of generation are necessary but not sufficient to evaluate and construct the generation mix; employment and positive contributions to macroeconomic values are further factors that have to be taken into consideration. This study aims to plan new investments in renewable energies (solar, wind, geothermal, biogas, and hydropower) between 2018 and 2023 under four different goals. A multi-objective programming model is therefore proposed to optimize the goals of minimizing CO₂ emissions, investment amount, and electricity sales price while maximizing total employment and the positive contribution to the current deficit. 
In order to avoid imposing a user preference among the goals, Dinkelbach's algorithm and Guzel's approach have been combined. The results are discussed in comparison with current policies. Our study shows that new policies such as large capacity allotments may be debatable, although the local-production obligation is positive. Improvements in grid infrastructure and a redesign of the support for biogas and geothermal can be recommended.
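Dinkelbach's algorithm solves a fractional program max f(x)/g(x) by repeatedly solving the parametric problem max f(x) − λ·g(x) and updating λ until the parametric optimum reaches zero. The toy sketch below illustrates that iteration over a small discrete set of candidate portfolios; the data are invented for illustration, and the sketch does not reproduce the study's full multi-objective LP or Guzel's approach.

```python
# Minimal sketch of Dinkelbach's algorithm for a fractional objective
# max f(x)/g(x), over a small discrete set of candidate portfolios.
# Candidate data are invented (benefit f, cost g), with g > 0.

candidates = [(4.0, 2.0), (9.0, 3.0), (5.0, 4.0), (12.0, 5.0)]

def dinkelbach(candidates, tol=1e-9, max_iter=100):
    lam = 0.0  # current estimate of the optimal ratio f/g
    best = candidates[0]
    for _ in range(max_iter):
        # Solve the parametric subproblem: maximize f - lam * g
        best = max(candidates, key=lambda c: c[0] - lam * c[1])
        value = best[0] - lam * best[1]
        if abs(value) < tol:        # F(lam) = 0  =>  lam is the optimal ratio
            return lam, best
        lam = best[0] / best[1]     # update the ratio estimate
    return lam, best

ratio, portfolio = dinkelbach(candidates)
print(f"optimal ratio {ratio:.2f} at portfolio {portfolio}")
```

In the full model, the inner maximization would be a linear program over continuous investment variables rather than a scan over a finite list, but the λ-update loop is the same.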

Keywords: energy generation policies, multi-objective linear programming, portfolio planning, renewable energy

Procedia PDF Downloads 220
100 Unionisation, Participation and Democracy: Forms of Convergence and Divergence between Union Membership and Civil and Political Activism in European Countries

Authors: Silvia Lucciarini, Antonio Corasaniti

Abstract:

The issue of democracy in capitalist countries has once again become a focus of debate in recent years. A number of socio-economic and political tensions have triggered discussion of this topic from various perspectives and disciplines. Political developments, the rise of right-wing parties and populism, and the constant growth of inequalities in a context of welfare downsizing have led scholars to question whether European capitalist countries are really capable of creating and redistributing resources, and to look for elements that might make the democratic capital of European countries more dense. The aim of this work is to shed light on the trajectories, intensity, and convergence or divergence between political and associative participation, on one hand, and union organization, on the other, as these constitute two of the main points of connection between the norms, values, and actions that bind citizens to the state. Using the European Social Survey database, some studies have sought to analyse degrees of unionization by investigating the relationship between systems of industrial relations and vulnerable groups (in terms of value-oriented practices or political participation). This paper instead investigates the relationship between union participation and civil/political participation, comparing union members and non-members and then distinguishing between employees and self-employed professionals to better understand participatory behaviors among different kinds of workers. The first component of the research employs a multilinear logistic model on a sample of 10 countries selected according to a grid that combines the industrial relations models identified by Visser (2006) and the welfare state systems identified by Esping-Andersen (1990). On the basis of this sample, we compare the choices made by workers and their propensity to join trade unions, together with their level of social and political participation, from 2002 to 2016. 
In the second component, we verify whether workers within the same system of industrial relations and welfare show a similar propensity to engage in civil participation through political bodies and associations, or whether these tendencies instead take more specific and varied forms. The results will allow us to see: (1) whether political participation is higher among unionized workers than among the non-unionized; (2) what differences in unionization and civil/political participation exist between self-employed, temporary, and full-time employees; and (3) whether the trajectories within industrial relations and welfare models display greater inclusiveness and participation, thereby confirming or disproving the patterns that have been documented across European countries.

Keywords: union membership, participation, democracy, industrial relations, welfare systems

Procedia PDF Downloads 114
99 From Indigeneity to Urbanity: A Performative Study of Indian Saang (Folk Play) Tradition

Authors: Shiv Kumar

Abstract:

In the shifting scenario of a postmodern age that foregrounds the multiplicity of meanings and discourses, the present research article investigates various paradigm shifts in the contemporary performance of Haryanvi Saangs, folk plays performed widely across Haryana, a northern state of India. Folk arts cannot be studied adequately with the tools of literary criticism because they differ from literature in many respects, one of the most essential being that literary works invariably have an author, whereas folk works never do. The situation is quite clear: either we acknowledge folk art as a phenomenon in the social and cultural history of a people, or we do not acknowledge it and dismiss it as mere poetic or fictional art. This paper is an effort to understand the performative tradition of the Saang, traditionally known as Saang, Swang, or Svang, which became a popular source of instruction and entertainment in the region and neighbouring states. Scholars and critics have long debated the origin of the word swang/svang/saang and its relationship to the Sanskrit word ‘Sangit’, which means singing and music. In the cultural context of Haryana, however, the word Saang means ‘to impersonate’, ‘to imitate’, or ‘to copy someone or something’. The stories these plays portray are derived for the most part from the same myths, tales, and epics, and from the lives of Indian religious and folk heroes. The use of poetic diction, prose style, and elaborate figurative technique together shape the productivity of a performance. All use music and song as an integral part of the performance, so that it is also appropriate to call them folk opera. These folk plays are performed strictly by aboriginal people in the state, sometimes denominated Saangi, who possess a culture distinct from the rest of Indian folk performance. 
The form concerned is also known by various other names, such as Manch, Khayal, Opera, and Nautanki. Such folk plays are a dynamic activity, performed in the open space of the theatre. Nowadays, producers have contributed greatly to a rapidly growing musical outlet for budding new styles of folk presentation, giving rise to an electronically focused genre that employs many musicians and performers who have become precursors of the folk tradition in the region. The paper also examines the sources available on this subject, from which somewhat different conclusions may be drawn; being a spectator of ongoing performances, for instance, provides enough guidance to move forward along this route. In this connection, the paper focuses critically on the major performative aspects of the Haryanvi Saang in relation to several inquiries: the place of these plays in the Indian literary scenario, gender visualization and its dramatic representation, the song-music tradition in folk creativity, and the development of Haryanvi dramatic art against the contemporary socio-political background.

Keywords: folk play, indigenous, performance, Saang, tradition

Procedia PDF Downloads 125
98 Bridging Minds, Building Success Beyond Metrics: Uncovering Human Influence on Project Performance: Case Study of University of Salford

Authors: David Oyewumi Oyekunle, David Preston, Florence Ibeh

Abstract:

The paper provides an overview of the impact of the human dimension in project management and team management on projects, an impact that increasingly affects the performance of organizations. Recognizing its crucial significance, the research focuses on analyzing the psychological and interpersonal dynamics within project teams. A case study was conducted at the University of Salford to examine how human activity affects project management and performance. The study employed a mixed methodology to gain a deeper understanding of the real-world experiences of the subjects and project teams. The data analysis procedures included a deductive approach, which involves testing a clear hypothesis or theory, as well as descriptive analysis and visualization. The survey comprised a sample of 40 participants out of 110 project management professionals, including staff and final-year students in the Salford Business School, selected using purposeful sampling. To mitigate bias, the study ensured diversity in the sample by including both staff and final-year students, while the smaller sample size allowed for more in-depth analysis and a focused exploration of the research objective. Conflicts, for example, are intricate occurrences shaped by a multitude of psychological stimuli and social interactions, and may have either a detrimental or a positive effect on project performance and project management productivity. The study identified conflict elements, including culture, environment, personality, attitude, individual project knowledge, team relationships, leadership, and team dynamics among team members, as crucial human factors to address in order to minimize conflict. 
The findings are highly significant in the dynamic field of project management, as they address important gaps and offer insights that align with the constantly changing demands of the profession, giving project professionals valuable guidance for creating a collaborative and high-performing project environment. Uncovering human influence on project performance shows that effective communication, optimal team synergy, and a keen understanding of project scope are necessary for projects to attain exceptional performance and efficiency. The study further found that productive team dynamics and strong group cohesiveness are crucial for managing conflicts in a beneficial and forward-thinking manner. Addressing the identified human influences will contribute to a more sustainable project management approach and offers opportunities for further exploration and potential contributions to both academia and practical project management.

Keywords: human dimension, project management, team dynamics, conflict resolution

Procedia PDF Downloads 56
97 The Instrumentalization of Digital Media in the Context of Sexualized Violence

Authors: Katharina Kargel, Frederic Vobbe

Abstract:

Sexual online grooming is generally defined as digital interaction for the purpose of the sexual exploitation of children or minors, i.e., as a process of preparing and framing sexual child abuse. Owing to its conceptual history, sexual online grooming is often associated with perpetrators previously unknown to those affected. While the strategies of perpetrators and the perceptions of those affected are increasingly being investigated, the instrumentalisation of digital media has not yet been researched much. The present paper therefore contributes to this research gap by examining the ways in which perpetrators instrumentalise digital media. Our analyses draw on 46 case documentations and 18 interviews with those affected. The cases and the partly narrative interviews were collected by ten cooperating specialist centers working on sexualized violence in childhood and youth. For this purpose, we designed a documentation grid allowing for detailed case reconstruction, including information on the violence, the use of digital media, and those affected. Using Reflexive Grounded Theory, our analyses take account of a) the subjective benchmarks of professional practitioners as well as of those affected and b) the interpretative implications of our researchers’ subjective and emotional interaction with the data material. It should first be noted that sexualized online grooming can result in online as well as offline sexualized violence, and in hybrid forms. Furthermore, the perpetrators either come from the immediate social environment of those affected or are unknown to them. With regard to the instrumentalisation of digital media, the perpetrator-victim relationship plays a more important role than the question of the space (online vs. offline) in which the primary violence is committed. 
Perpetrators unknown to those affected instrumentalise digital media primarily to establish a sexualized system of norms, which is usually embedded in a supposed love relationship. In some cases, after an initial exchange of sexualized images or video recordings, a latent play on the position of power takes place. In the course of the grooming process, perpetrators from the immediate social environment increasingly instrumentalise digital media to establish an explicit relationship of power and dependence, which is directly determined by coercion, threats and blackmail. The knowledge of possible vulnerabilities is strategically used in the course of maintaining contact. The above explanations lead to the conclusion that the motive for the crime plays an essential role in the question of the instrumentalisation of digital media. It is therefore not surprising that it is mostly the near-field perpetrators without commercial motives who initiate a spiral of violence and stress by digitally distributing sexualized (violent) images and video recordings within the reference system of those affected.

Keywords: sexualized violence, children and youth, grooming, offender strategies, digital media

Procedia PDF Downloads 165
96 BiVO₄‑Decorated Graphite Felt as Highly Efficient Negative Electrode for All-Vanadium Redox Flow Batteries

Authors: Daniel Manaye Kabtamu, Anteneh Wodaje Bayeh

Abstract:

With the development and utilization of new energy technology, people’s demand for large-scale energy storage systems has become increasingly urgent. The vanadium redox flow battery (VRFB) is one of the most promising technologies for grid-scale energy storage applications because of numerous attractive features, such as long cycle life, high safety, and flexible design. However, the relatively low energy efficiency and high production cost of the VRFB still limit its practical implementation. Considerable attention is therefore devoted to enhancing its energy efficiency and reducing its cost. One of the main components of the VRFB that can significantly impact the efficiency and final cost is the electrode material, which provides the reaction sites for the redox couples (V²⁺/V³⁺ and VO²⁺/VO₂⁺). Graphite felt (GF) is a typical carbon-based material commonly employed as an electrode for the VRFB due to its low cost and good chemical and mechanical stability. However, pristine GF exhibits insufficient wettability, low specific surface area, and poor kinetic reversibility, leading to low energy efficiency of the battery. Therefore, it is crucial to further modify the GF electrode to improve its electrochemical performance towards the VRFB by employing active electrocatalysts, such as less expensive metal oxides. This study successfully fabricates a low-cost plate-like bismuth vanadate (BiVO₄) material through a simple one-step hydrothermal route, employed as an electrocatalyst to decorate the GF for use as the negative electrode in the VRFB. The experimental results show that BiVO₄-3h exhibits the optimal electrocatalytic activity and reversibility for the vanadium redox couples among all samples. The energy efficiency of the VRFB cell assembled with BiVO₄-decorated GF as the negative electrode is found to be 75.42% at 100 mA cm⁻², about 10.24% higher than that of the cell assembled with a heat-treated graphite felt (HT-GF) electrode.
The activity enhancement can be ascribed to the existence of oxygen vacancies in the BiVO₄ lattice structure and the relatively high surface area of BiVO₄, which provide more active sites for facilitating the vanadium redox reactions. Furthermore, the BiVO₄-GF electrode suppresses the competing irreversible hydrogen evolution reaction on the negative side of the cell, and it also has better wettability. Impressively, BiVO₄-GF as the negative electrode shows good stability over 100 cycles. Thus, BiVO₄-GF is a promising negative electrode candidate for practical VRFB applications.

Keywords: BiVO₄ electrocatalyst, electrochemical energy storage, graphite felt, vanadium redox flow battery

Procedia PDF Downloads 1544
95 Hygrothermal Interactions and Energy Consumption in Cold Climate Hospitals: Integrating Numerical Analysis and Case Studies to Investigate and Analyze the Impact of Air Leakage and Vapor Retarding

Authors: Amir E. Amirzadeh, Richard K. Strand

Abstract:

Moisture-induced problems are a significant concern for building owners, architects, construction managers, and building engineers, as they can have substantial impacts on building enclosures' durability and performance. Computational analyses, such as hygrothermal and thermal analysis, can provide valuable information and demonstrate the expected relative performance of building enclosure systems but are not grounded in absolute certainty. This paper evaluates the hygrothermal performance of common enclosure systems in hospitals in cold climates. The study aims to investigate the impact of exterior wall systems on hospitals, focusing on factors such as durability, construction deficiencies, and energy performance. The study primarily examines the impact of air leakage and vapor retarding layers relative to energy consumption. While these factors have been studied in residential and commercial buildings, there is a lack of information on their impact on hospitals in a holistic context. The study integrates various research studies and professional experience in hospital building design to achieve its objective. The methodology involves surveying and observing exterior wall assemblies, reviewing common exterior wall assemblies and details used in hospital construction, performing simulations and numerical analyses of various variables, validating the model and mechanism using available data from industry and academia, visualizing the outcomes of the analysis, and developing a mechanism to demonstrate the relative performance of exterior wall systems for hospitals under specific conditions. The data sources include case studies from real-world projects and peer-reviewed articles, industry standards, and practices. This research intends to integrate and analyze the in-situ and as-designed performance and durability of building enclosure assemblies with numerical analysis. 
The study's primary objective is to provide a clear and precise roadmap to better visualize and comprehend the correlation between the durability and performance of common exterior wall systems used in the construction of hospitals and the energy consumption of these buildings under certain static and dynamic conditions. As the construction of new hospitals and renovation of existing ones have grown over the last few years, it is crucial to understand the effect of poor detailing or construction deficiencies on building enclosure systems' performance and durability in healthcare buildings. This study aims to assist stakeholders involved in hospital design, construction, and maintenance in selecting durable and high-performing wall systems. It highlights the importance of early design evaluation, regular quality control during the construction of hospitals, and understanding the potential impacts of improper and inconsistent maintenance and operation practices on occupants, owners, building enclosure systems, and Heating, Ventilation, and Air Conditioning (HVAC) systems, even if they are designed to meet the project requirements.

Keywords: hygrothermal analysis, building enclosure, hospitals, energy efficiency, optimization and visualization, uncertainty and decision making

Procedia PDF Downloads 44
94 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make it more difficult for the US to stay globally competitive without a shift in how science is taught in US classrooms. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is needed to design resources which enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students’ representational competence, the visualization skills necessary to process these science representations first need to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students’ visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. This data was previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of scholars of STEM Education from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy.
Hemodynamic response data from fNIRsoft was exported as an Excel file, with 80 of both 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner’s TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed; 99.7% of its predictions were accurate. The ANN identified the strongest predictors of a successful mental rotation as the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, an area important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements with an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, and therefore science literacy.
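As an illustrative sketch only (the study used RapidMiner Studio; the synthetic data, feature layout, and label rule below are invented assumptions, not the Project LENS data), a Gradient Boosted Trees classifier with the reported settings of 140 trees and a maximum depth of 7 could be configured in Python with scikit-learn as follows:

```python
# Hypothetical stand-in for the fNIR dataset: 18 features per trial
# (problem number, response time, 16 optode channels) and a synthetic
# "successful mental rotation" label.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.integers(1, 81, n),         # problem number
    rng.normal(2.5, 0.8, n),        # response time (s)
    rng.normal(0.0, 1.0, (n, 16)),  # 16 fNIR optode channels
])
y = (X[:, 1] + X[:, 17] > 2.5).astype(int)  # invented label rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(n_estimators=140, max_depth=7, random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
ranked = np.argsort(model.feature_importances_)[::-1]  # strongest predictors first
```

`feature_importances_` is the analogue of the predictor ranking reported in the abstract (problem number, response time, optode #16).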

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 93
93 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry

Authors: Dhanuj M. Gandikota

Abstract:

Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist in current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its usage is limited in both coverage and cost, requiring manual deployment to map out large, forested areas. While aerial laser scanning (ALS) remains a reliable avenue of active LIDAR remote sensing, it is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability in research for the accurate construction of vegetation 3-D point clouds. It provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS demonstrate technical limitations in the capture of valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called Deep Learning (DL) that shows promise in recent research on 3-D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree species fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions.
We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3-D tree point clouds (the truth values are denoted by the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both through the measurement of error from the original TLS point clouds and through the error of extraction of key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain of using minimal locally sourced bio-inventory metric information as an input in ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3-D point clouds, as well as the demonstrated potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
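One simple way to perform the deconstruction step described above (a hedged sketch of a plausible approach, not the authors' code) is to rasterize each TLS cloud onto a horizontal grid and keep only points near the top of each cell, mimicking an airborne sensor that sees only the canopy surface:

```python
import numpy as np

def simulate_als(points: np.ndarray, cell: float = 0.5, depth: float = 0.3) -> np.ndarray:
    """Keep points within `depth` m of the highest point in each
    `cell` x `cell` m horizontal grid cell of an (N, 3) xyz cloud."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    key = (ij[:, 0] << 32) + ij[:, 1]            # one scalar key per grid cell
    _, inverse = np.unique(key, return_inverse=True)
    zmax = np.full(inverse.max() + 1, -np.inf)
    np.maximum.at(zmax, inverse, points[:, 2])   # per-cell maximum height
    return points[points[:, 2] >= zmax[inverse] - depth]

# Toy tree: a "trunk" column topped by a "canopy" blob
trunk = np.column_stack([np.zeros(50), np.zeros(50), np.linspace(0, 8, 50)])
canopy = np.random.default_rng(1).normal([0.0, 0.0, 10.0], [2.0, 2.0, 0.2], (500, 3))
cloud = np.vstack([trunk, canopy])
als_like = simulate_als(cloud)   # below-canopy trunk points are mostly removed
```

The cell size and depth parameters here are arbitrary illustrative values; varying them is one way to emulate photogrammetry of different resolutions.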

Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry

Procedia PDF Downloads 70
92 Synchrotron Based Techniques for the Characterization of Chemical Vapour Deposition Overgrowth Diamond Layers on High Pressure, High Temperature Substrates

Authors: T. N. Tran Thi, J. Morse, C. Detlefs, P. K. Cook, C. Yıldırım, A. C. Jakobsen, T. Zhou, J. Hartwig, V. Zurbig, D. Caliste, B. Fernandez, D. Eon, O. Loto, M. L. Hicks, A. Pakpour-Tabrizi, J. Baruchel

Abstract:

The ability to grow boron-doped diamond epilayers of high crystalline quality is a prerequisite for the fabrication of diamond power electronic devices, in particular high-voltage diodes and metal-oxide-semiconductor (MOS) transistors. Boron-doped and intrinsic diamond layers are homoepitaxially overgrown by microwave-assisted chemical vapour deposition (MWCVD) on single crystal high pressure, high temperature (HPHT) grown bulk diamond substrates. Various epilayer thicknesses were grown, with dopant concentrations ranging from 10²¹ atom/cm³ at nanometer thickness in the case of 'delta doping' down to 10¹⁶ atom/cm³ at up to 50 µm thickness for high electric field drift regions. The crystalline quality of these overgrown layers as regards defects, strain, and distortion is critical for the device performance through its relation to the final electrical properties (Hall mobility, breakdown voltage, etc.). In addition to the optimization of the epilayer growth conditions in the MWCVD reactor, other important questions related to the crystalline quality of the overgrown layer(s) are: 1) what is the dependence on the bulk quality and surface preparation methods of the HPHT diamond substrate? 2) how do defects already present in the substrate crystal propagate into the overgrown layer? 3) what types of new defects are created during overgrowth, what are their growth mechanisms, and how can these defects be avoided? 4) how can we relate in a quantitative manner parameters related to the measured crystalline quality of the boron-doped layer to the electronic properties of final processed devices? We describe synchrotron-based techniques developed to address these questions. These techniques allow the visualization of local defects and crystal distortion, complementing the data obtained by other well-established analysis methods such as AFM, SIMS, and Hall conductivity measurements.
We have used Grazing Incidence X-ray Diffraction (GIXRD) at the ID01 beamline of the ESRF to study lattice parameters and damage (strain, tilt and mosaic spread) both in diamond substrate near-surface layers and in thick (10–50 µm) overgrown boron-doped diamond epilayers. Micro- and nano-section topography has been carried out at both the BM05 and ID06 beamlines of the ESRF using rocking curve imaging techniques to study defects which have propagated from the substrate into the overgrown layer(s) and their influence on final electronic device performance. These studies were performed using various commercially sourced HPHT-grown diamond substrates, with the MWCVD overgrowth carried out at the Fraunhofer IAF, Germany. The synchrotron results are in good agreement with low-temperature (5 K) cathodoluminescence spectroscopy carried out on the grown samples using an Inspect F50 FESEM fitted with an IHR spectrometer.

Keywords: synchrotron X-ray diffraction, crystalline quality, defects, diamond overgrowth, rocking curve imaging

Procedia PDF Downloads 236
91 qPCR Method for Detection of Halal Food Adulteration

Authors: Gabriela Borilova, Monika Petrakova, Petr Kralik

Abstract:

Nowadays, European producers are increasingly interested in the production of halal meat products. Halal meat has been appearing increasingly in the EU's market network, and meat products from European producers are being exported to Islamic countries. Halal criteria relate mainly to the origin of the muscle used in production, and also to the way products are obtained and processed. Although the EU has legislatively addressed the question of food authenticity, the circumstances of previous years, when products with undeclared horse or poultry meat content appeared on EU markets, raised the question of the effectiveness of control mechanisms. Replacement of expensive or unavailable types of meat with low-priced meat has been practised on a global scale for a long time. Likewise, halal products may be contaminated (falsified) by pork or food components obtained from pigs. These components include collagen, offal, pork fat, mechanically separated pork, emulsifiers, blood, dried blood, dried blood plasma, gelatin, and others. These substances can influence the sensory properties of meat products - color, aroma, flavor, consistency and texture - or they are added for preservation and stabilization. Food manufacturers sometimes resort to these substances mainly because of their wide availability and low prices. However, the use of these substances is not always declared on the product packaging. Verification of the presence of declared ingredients, including the detection of undeclared ingredients, is among the basic control procedures for determining the authenticity of food. Molecular biology methods, based on DNA analysis, offer rapid and sensitive testing. The PCR method and its modifications can be successfully used to identify animal species in single- and multi-ingredient raw and processed foods, and qPCR is the first choice for food analysis. Like all PCR-based methods, it is simple to implement, and its greatest advantage is that it eliminates the need for post-PCR visualization by electrophoresis.
qPCR allows detection of trace amounts of nucleic acids, and by comparing an unknown sample with a calibration curve, it can also provide information on the absolute quantity of individual components in the sample. Our study addresses a problem arising from the fact that most molecular biological work on the identification and quantification of animal species is based on the construction of specific primers amplifying a selected section of the mitochondrial genome. In addition, the sections amplified in conventional PCR are relatively long (hundreds of bp) and unsuitable for use in qPCR, because when DNA is fragmented, amplification of long target sequences is quite limited. Our study focuses on finding a suitable genomic DNA target and optimizing qPCR to reduce the variability and distortion of results, which is necessary for the correct interpretation of quantification results. In halal products, the impact of falsifying meat products by adding components derived from pigs is all the greater in that it is not only an economic issue but above all a religious and social one. This work was supported by the Ministry of Agriculture of the Czech Republic (QJ1530107).
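As a hedged numerical sketch of the quantification principle (the Cq values below are invented for illustration and are not data from this study), absolute copy numbers are read off a standard curve fitted to a serial dilution:

```python
import numpy as np

# Standard curve from a hypothetical 10-fold serial dilution of a reference
# pork-DNA target: log10 copy number vs. observed quantification cycle (Cq).
log_copies = np.array([6.0, 5.0, 4.0, 3.0, 2.0])
cq = np.array([15.1, 18.4, 21.8, 25.1, 28.5])

slope, intercept = np.polyfit(log_copies, cq, 1)   # ideal slope is about -3.32
efficiency = 10 ** (-1 / slope) - 1                # 1.0 would mean perfect doubling

def copies_from_cq(sample_cq: float) -> float:
    """Absolute quantity of the target in an unknown sample via the curve."""
    return 10 ** ((sample_cq - intercept) / slope)

unknown = copies_from_cq(23.0)   # falls between the 10^3 and 10^4 standards
```

The slope-derived amplification efficiency is also the usual check that a primer/target pair is suitable for quantitative interpretation.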

Keywords: food fraud, halal food, pork, qPCR

Procedia PDF Downloads 226
90 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments

Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz

Abstract:

Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological predictions at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, in the north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) efficiency values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment’s characteristics. This aligns with the well-known “uniqueness of the place” paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited attention in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle; later, limited systematic hyperparameter tuning using grid search suggested reducing it to 270 days.
However, despite the significance of this hyperparameter in hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
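To make the role of this hyperparameter concrete, the sketch below (NumPy only, with invented toy data; the actual study trained MTS-LSTM models) shows how the input sequence length determines the shape of the training windows fed to an LSTM at an hourly time scale:

```python
import numpy as np

def make_windows(series: np.ndarray, seq_len: int):
    """Turn a (T, n_features) hourly series into (T - seq_len, seq_len,
    n_features) input windows plus the streamflow value one hour ahead."""
    X = np.stack([series[i:i + seq_len] for i in range(len(series) - seq_len)])
    y = series[seq_len:, 0]          # assume column 0 holds streamflow
    return X, y

hourly = np.random.default_rng(0).random((24 * 60, 3))  # toy 60-day record
for seq_len in (24 * 7, 24 * 30):   # candidate lengths in hours (7 vs. 30 days)
    X, y = make_windows(hourly, seq_len)
    # ...train the LSTM on (X, y) and keep the seq_len with the best NSE/KGE
```

In the study's setting the candidate grid would extend to much longer memories (e.g. 270 or 365 days); the toy values here are shortened only to keep the sketch light.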

Keywords: LSTMs, streamflow, hyperparameters, hydrology

Procedia PDF Downloads 29
89 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction

Authors: Yan Zhang

Abstract:

Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated from field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving to the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production and maintenance efficiency via any maintenance-related tasks. It covers a variety of topics, including but not limited to: failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine or device to transmit data all the way through to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices.
For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data into the Cloud via WiFi directly. The latter usually uses radio communication inherent to the network, and the data is stored in a staging data node before it can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures. By showing a step-by-step process of data labeling, feature engineering, model construction and evaluation, we share the following experiences: (1) what specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when training data contains inter-dependent records. Fourth, we review the tools available to build such a data pipeline that digests the data and produces insights. We show the tools we use, including data injection, streaming data processing, and machine learning model training, as well as the tool that coordinates and schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
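As a hedged sketch of the data-labeling step for a run-to-failure dataset (the column names are assumptions in the style of the public turbofan degradation data; this is not the authors' code), each engine's remaining useful life and a binary "fails soon" label can be derived as follows:

```python
import pandas as pd

def label_run_to_failure(df: pd.DataFrame, window: int = 30) -> pd.DataFrame:
    """df holds one row per (unit, cycle); each unit's last recorded cycle
    is treated as its failure point."""
    last_cycle = df.groupby("unit")["cycle"].transform("max")
    out = df.assign(rul=last_cycle - df["cycle"])           # remaining useful life
    out["fails_soon"] = (out["rul"] <= window).astype(int)  # within warning window
    return out

toy = pd.DataFrame({"unit": [1, 1, 1, 2, 2], "cycle": [1, 2, 3, 1, 2]})
labeled = label_run_to_failure(toy, window=1)
```

Because all records of one unit share a failure event, any train/test split must keep whole units together, which is the inter-dependent-records issue raised in point (2) above.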

Keywords: Internet of Things, machine learning, predictive maintenance, streaming data

Procedia PDF Downloads 363
88 Exposing The Invisible

Authors: Kimberley Adamek

Abstract:

According to the Council on Tall Buildings, there has been a rapid increase in the construction of tall or “megatall” buildings over the past two decades. Simultaneously, the New England Journal of Medicine has reported a steady increase in climate-related natural disasters since the 1970s, the eastern expansion of the USA's infamous Tornado Alley being just one of many current issues. In the future, this could mean that tall buildings, which already guide high-speed winds down to pedestrian levels, would have to withstand stronger forces and protect pedestrians in more extreme ways. Although many projects are required to be verified within wind tunnels, and a handful of cities such as San Francisco have included wind testing within building code standards, there are still many examples where wind is only considered for basic loading. This typically results in an increase in structural expense and unwanted mitigation strategies that are proposed late in a project. When building cities, architects rarely consider how each building alters the invisible patterns of wind and how these alterations affect other areas in different ways later on. It is not until these forces move, overpower and even destroy cities that people take notice. For example, towers have caused winds to blow objects into people (Walkie-Talkie Tower, Leeds, England), caused building parts to vibrate and produce loud humming noises (Beetham Tower, Manchester), and caused wind tunnels in streets, as well as many other issues. Alternatively, some towers have used their form to naturally draw in air and ventilate entire facilities in order to eliminate the need for costly HVAC systems (The Met, Thailand), or to increase wind speeds to generate electricity (Bahrain Tower, Dubai). Wind and weather exist and affect all parts of the world in ways that touch science, health, war, infrastructure, catastrophes, tourism, shopping, media and materials.
Working in partnership with a leading wind engineering company, RWDI, a series of tests, images and animations documenting discovered interactions of different building forms with wind will be collected to emphasize the possibilities of wind use to architects. A site within San Francisco (chosen for its increasing tower development, consistent wind conditions and existing strict wind comfort criteria) will host a final design. Iterations of this design will be tested within wind tunnel and computational fluid dynamics systems, which will expose, utilize and manipulate wind flows to create new forms, technologies and experiences. Ultimately, this thesis aims to question the degree to which the environment is allowed to permeate building enclosures, uncover new programmatic possibilities for wind in buildings, and push the boundaries of working with the wind to ensure the development and safety of future cities. This investigation will improve and expand upon the traditional understanding of wind in order to give architects, wind engineers, as well as the general public the ability to broaden their scope in order to productively utilize this living phenomenon that everyone constantly feels but cannot see.

Keywords: wind engineering, climate, visualization, architectural aerodynamics

Procedia PDF Downloads 340
87 Political Communication in Twitter Interactions between Government, News Media and Citizens in Mexico

Authors: Jorge Cortés, Alejandra Martínez, Carlos Pérez, Anaid Simón

Abstract:

The presence of government, news media, and the general citizenry in social media allows considering interactions between them as a form of political communication (i.e. the public exchange of contradictory discourses about politics). Twitter’s asymmetrical following model (users can follow, mention or reply to other users that do not follow them) could foster alternative democratic practices and have an impact on Mexican political culture, which has been marked by a lack of direct communication channels between these actors. The research aim is to assess Twitter’s role in political communication practices through the analysis of interaction dynamics between government, news media, and citizens by extracting and visualizing data from Twitter’s API to observe general behavior patterns. The hypothesis is that, regardless of the fact that Twitter’s features enable direct and horizontal interactions between actors, users repeat traditional dynamics of interaction without taking full advantage of the possibilities of this medium. Through an interdisciplinary team including Communication Strategies, Information Design, and Interaction Systems, the activity on Twitter generated by the controversy over the presence of Uber in Mexico City was analysed; an issue of public interest involving aspects such as public opinion, economic interests and a legal dimension. This research includes techniques from social network analysis (SNA), a methodological approach focused on the comprehension of the relationships between actors through the visual representation and measurement of network characteristics. The analysis of the Uber event comprised data extraction, data categorization, corpus construction, corpus visualization and analysis. In the data-recovery stage, TAGS, a Google Sheets template, was used to extract tweets that included the hashtags #UberSeQueda and #UberSeVa, posts containing the string Uber, and tweets directed to @uber_mx.
Using scripts written in Python, the data was filtered, discarding tweets with no interaction (replies, retweets, or mentions) and tweets from locations outside of Mexico. Bots and anecdotal posts were also identified and omitted. The analysis of visualizations generated with programs such as Gephi and NodeXL confirmed the utility of graphs for observing interactions of political communication in general. However, some aspects require improvement to obtain more useful visual representations for this type of research. For example, link crossings complicate following the direction of an interaction, forcing users to manipulate the graph to see it clearly. It was concluded that some practices prevalent in political communication in Mexico are replicated on Twitter. Media actors tend to group together instead of interacting with others, and the political system tends to tweet as an advertising strategy rather than to generate dialogue. However, some actors were identified as bridges establishing communication between the three spheres, generating a more democratic exercise and taking advantage of Twitter's possibilities. Although interactions on Twitter could become an alternative form of political communication, this potential depends on the intentions of the participants and on the extent to which they aim for collaborative and direct communication. Further research is needed to gain a deeper understanding of the political behavior of Twitter users and of the possibilities of SNA for its analysis.
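The filtering and graph-construction stage described above can be sketched in Python; the record fields used here ("user", "text", "reply_to") are illustrative assumptions, not the actual TAGS export columns:

```python
# Hedged sketch of the Python filtering step: keep only tweets that
# interact (mention, retweet, or reply), then build a directed graph.
import re
from collections import defaultdict

tweets = [
    {"user": "citizen1", "text": "@uber_mx #UberSeQueda great service", "reply_to": "uber_mx"},
    {"user": "media1", "text": "Taxi unions protest #UberSeVa", "reply_to": None},
    {"user": "gov1", "text": "RT @media1: Taxi unions protest #UberSeVa", "reply_to": None},
]

graph = defaultdict(set)  # directed adjacency list: source -> set of targets
for t in tweets:
    # Mentions and retweets appear as @handles in the text; replies carry a target
    targets = set(re.findall(r"@(\w+)", t["text"]))
    if t["reply_to"]:
        targets.add(t["reply_to"])
    if not targets:
        continue  # discard tweets with no interaction
    for target in targets:
        graph[t["user"]].add(target)
```

Keeping the edges directed (source to target) preserves who initiated each interaction, which is what the Gephi and NodeXL visualizations display.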

Keywords: interaction, political communication, social network analysis, Twitter

Procedia PDF Downloads 201
86 Gender Policies and Political Culture: An Examination of the Canadian Context

Authors: Chantal Maille

Abstract:

This paper is about gender-based analysis plus (GBA+), an intersectional gender policy tool used in Canada to assess the impact of policies and programs on men and women of different origins. It looks at Canada's political culture to explain the nature of its gender policies. GBA+ is defined as an analysis method that makes it possible to assess the potential effects of policies, programs, services, and other initiatives on women and men of different backgrounds, because it takes account of gender and other identity factors. The 'plus' in the name emphasizes that GBA+ goes beyond gender to include a wide range of other identity factors, such as age, education, language, geography, culture, and income. The point of departure for GBA+ is that women and men are not homogeneous populations and that gender is never the only factor defining a person's identity; rather, it interacts with factors such as ethnic origin, age, disability, place of residence, and other aspects of individual and social identity. GBA+ takes account of these factors and thus challenges notions of similarity or homogeneity within populations of women and men. Comparative analysis based on sex and gender may serve as a gateway to studying a given question, but women, men, girls, and boys do not form homogeneous populations. In the 1990s, intersectionality emerged as a new feminist framework. The popularity of the notion corresponds to a time when, in hindsight, the damage done to minoritized groups by state disengagement policies, in concert with the global intensification of neoliberalism, could be measured. Although GBA+ constitutes a form of intersectionalization of GBA, it must be understood that the two frameworks do not spring from the same logic.
Intersectionality first emerged as a dynamic analysis of differences between women, oriented toward change and social justice, whereas GBA is a technique developed by state feminists for analyzing governmental policies with the aim of promoting equality between men and women. It can nevertheless be assumed that there might be interest in a policy and program analysis grid that is decentred from gender and flexible enough to take account of a set of inequalities. In terms of methodology, the research is supported by a qualitative analysis of governmental documents about GBA+ in Canada. The findings identify links between Canada's gender policies and its political culture. In Canada, diversity has been taken into account as a basis of gendered analysis of public policies since 1995. The GBA+ adopted by the government of Canada conveys an openness to intersectionality and a sensitivity to multiculturalism. The Canadian Multiculturalism Act, adopted in 1988, recognizes multiculturalism as a fundamental characteristic of Canadian identity and heritage and an invaluable resource for the future of the country. In conclusion, Canada's distinct political culture can be associated with the specific nature of its gender policies.

Keywords: Canada, gender-based analysis, gender policies, political culture

Procedia PDF Downloads 205
85 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, ensuring the stability of the dam and downstream areas during floods. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. The two-dimensional unsteady RANS equations were solved numerically using the finite volume method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. Three types of computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. To simulate the flow, the k-ε (standard, RNG, realizable) and k-ω (standard and SST) models were used. To find the best wall treatment, two wall functions, the standard and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the morning-glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet.
The results show that the fine computational grid, a velocity-inlet condition at the flow inlet boundary, and a pressure-outlet condition at the boundaries in contact with air provide the best possible results. The standard wall function was chosen for the wall treatment, and the standard k-ε turbulence model gave the results most consistent with the experiments. As the jet approaches the end of the basin, the difference between the numerical and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results agree well with the experimental data, but as the water level increases, so does the difference between the numerical and experimental discharge. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
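The jet trajectory used above as the comparison criterion can be approximated analytically under a simple free-fall assumption (horizontal exit velocity, gravity only, air resistance neglected); a minimal sketch, not the study's actual validation data:

```python
# Hedged sketch: vertical drop of a free water jet as a function of
# horizontal distance, assuming projectile motion with a purely
# horizontal exit velocity v0. All values are illustrative.
def nappe_drop(v0, x):
    """Vertical drop y (m) of the jet at horizontal distance x (m),
    for exit velocity v0 (m/s), under gravity alone."""
    g = 9.81          # gravitational acceleration, m/s^2
    t = x / v0        # time to travel the horizontal distance x
    return 0.5 * g * t ** 2

# Example: drop after 2 m of travel for a 3 m/s exit velocity
y = nappe_drop(3.0, 2.0)
```

Comparing such an idealized trajectory with the CFD nappe profile is one simple way to gauge where viscous and pressure effects make the simulated jet deviate from free fall.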

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 57
84 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives

Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes

Abstract:

This article describes a regulatory impact analysis (RIA) of the contracting efficiency of Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply electrical energy demand. An inefficient contracting of transmission capacity therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. To promote this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666 of July 23rd, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming for more efficient and rational contracting, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, IIE, applies when the contracted demand exceeds the established regulatory limit; it applies to consumer units, generators, and distribution companies. The second, IIOC, applies when distributors over-contract their demand. Thus, the inefficiency installments IIE and IIOC are intended to prevent agents from contracting either less capacity than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying this normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the contracting efficiency of transmission system usage, using real data from before and after the homologation of the resolution in 2015.
For this, the contracting efficiency indicator (ECI), the excess demand indicator (EDI), and the over-contracting demand indicator (ODI) were used. The ECI analysis demonstrated a decrease in contracting efficiency, a trend already present before the 2015 resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for distributors and a small reduction for generators; moreover, the ODI decreased notably, which optimizes the usage of transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to agents that their contracted values are not adequate to maintain service provision for their users. The IIOC is also relevant, in that it shows distributors that their contracted values are overestimated.
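As a rough illustration of how the two installments push in opposite directions, the sketch below charges for usage above, or contracting below, a symmetric tolerance band around the contracted amount. The tolerance and unit rate are hypothetical placeholders, not the limits and tariffs fixed by NR 666/2015:

```python
# Hedged sketch of the two economic incentives described above.
# The 5% tolerance and unit charging rate are assumptions for
# illustration only; ANEEL NR 666/2015 defines the actual values.
def inefficiency_installments(contracted, verified, tolerance=0.05, rate=1.0):
    """Return (IIE, IIOC) for one agent.

    IIE:  charged on verified usage exceeding the contracted amount
          beyond the tolerance band (excess).
    IIOC: charged on contracted capacity exceeding verified usage
          beyond the tolerance band (over-contracting).
    """
    upper = contracted * (1 + tolerance)
    lower = contracted * (1 - tolerance)
    iie = max(0.0, verified - upper) * rate
    iioc = max(0.0, lower - verified) * rate
    return iie, iioc

# A unit using 120 MW against 100 MW contracted pays IIE on the excess
iie, iioc = inefficiency_installments(contracted=100.0, verified=120.0)
```

Under this toy model an agent minimizes charges by keeping its contracted amount close to its verified usage, which is exactly the behavioral incentive the resolution is described as creating.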

Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system

Procedia PDF Downloads 93
83 Towards the Development of Uncertainties Resilient Business Model for Driving the Solar Panel Industry in Nigeria Power Sector

Authors: Balarabe Z. Ahmad, Anne-Lorène Vernay

Abstract:

The emergence of electricity in Nigeria dates back to 1896. The country's power plants have the potential to generate 12,522 MW of electric power, whereas current dispatch is about 4,000 MW; access to electrification is about 60%, with consumption at 0.14 MWh/capita. The government embarked on energy reforms to mitigate energy poverty, targeting the provision of electricity access to 75% of the population by 2020 and 90% by 2030. Growth of total electricity demand by a factor of five by 2035 has been projected, meaning that Nigeria will require almost 530 TWh of electricity, which can be delivered through generators with a capacity of 65 GW. Moreover, Nigeria's geographical location places it in an advantageous position as a source of solar energy, with a high-sunshine belt across the country. The far North, where energy poverty is high, receives about twice the solar radiation of southern Nigeria. Hence, solar electricity generation is estimated to be 66% feasible, at 11,850 × 10³ GWh per year, about one hundred times the current electricity consumption rate of the country. Harvesting this huge potential may remain a mirage if entrepreneurs in the solar panel business are left with conventional business models that are not resilient to uncertainty. Currently, business entities in renewable energy (RE) in Nigeria face uncertainty about access to the national grid, the purchasing potential of cooperating organizations, currency fluctuations, and interest rate increases. Uncertainties such as project security and government policy are issues entrepreneurs must navigate to remain sustainable in the solar panel industry in Nigeria. The aim of this paper is to identify how entrepreneurial firms consider uncertainties in developing workable business models for commercializing solar energy projects in Nigeria.
In an attempt to develop a novel business model, the paper investigated how entrepreneurial firms assess and navigate uncertainties. The roles of key stakeholders in helping entrepreneurs manage uncertainties in the Nigerian RE sector were probed in the ongoing study, which explored the empirical uncertainties peculiar to RE entrepreneurs in Nigeria. A mixed-mode research design was adopted, using qualitative data from face-to-face interviews with solar energy entrepreneurs and experts drawn from key stakeholders. Content analysis of the interviews was done using ATLAS.ti 9, a qualitative data analysis tool. The results suggest that all stakeholders need to synergize in developing an uncertainty-resilient business model. It was also found that RE entrepreneurs need modifications to the business recommendations encapsulated in Nigeria's energy policy to strengthen their capability to deliver solar energy solutions to Nigerians.

Keywords: uncertainties, entrepreneurial, business model, solar-panel

Procedia PDF Downloads 124
82 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, with some slight deviations arising from scale differences. However, insufficient parameters or poor surface mesh quality is likely if these small deviations are carried over to a future civil aircraft of a very different size, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study examines the geometric similarity of airfoil parameters and surface mesh quality in CFD calculations, to assess how different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales (the wing root and wingtip of a conventional civil aircraft and the wing root of the giant hybrid wing), each parameterized by three methods to compare calculation differences between airfoil sizes. The study holds constant the NACA 0012 profile, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. The study also uses different numbers of edge mesh divisions with the same bias factor in the CFD simulation. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to maintain the accuracy of the wing's aerodynamic performance.
As the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to maintain the accuracy of the airfoil's aerodynamic performance, which can exceed available computing capacity. When using the B-spline curve method, the number of control points and the number of mesh divisions should be set appropriately to obtain higher accuracy; however, this quantitative balance cannot be defined directly and must be found iteratively by adding and subtracting points. Lastly, when using the CST method, a limited number of control points is enough to accurately parameterize the larger-sized wing; a higher degree of accuracy and stability can be obtained even on a lower-performance computer.
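The CST method mentioned above combines a class function with a Bernstein-polynomial shape function; a minimal sketch follows, with the weight values chosen arbitrarily for illustration:

```python
# Hedged sketch of the class/shape function transformation (CST).
# The surface ordinate at chordwise station psi in [0, 1] is
#   class(psi) * shape(psi),
# where class(psi) = psi^n1 * (1 - psi)^n2 (n1=0.5, n2=1.0 gives a
# round nose and sharp trailing edge) and shape(psi) is a Bernstein
# polynomial weighted by the control coefficients.
from math import comb

def cst_ordinate(psi, weights, n1=0.5, n2=1.0):
    n = len(weights) - 1
    class_fn = psi ** n1 * (1 - psi) ** n2
    shape_fn = sum(
        w * comb(n, i) * psi ** i * (1 - psi) ** (n - i)
        for i, w in enumerate(weights)
    )
    return class_fn * shape_fn

# A handful of weights is often enough to describe a smooth surface
upper = [cst_ordinate(x / 10, [0.17, 0.16, 0.15, 0.14]) for x in range(11)]
```

Because the whole surface is encoded in a few weights, the CST representation stays compact as the airfoil dimension grows, which is consistent with the finding that it parameterizes the larger wing with limited control points.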

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 196
81 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The progressing cavity pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed experimentally and with a computational fluid dynamics (CFD) approach, based on the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate, with a flow control valve installed at the pump outlet. The flow rate was measured with a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency drive. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations provide detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a mean squared error (MSE) under 21%, and the grid convergence index (GCI) calculated to validate the mesh gave a value of 2.5%. Three rotational speeds were evaluated (200, 300, and 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate.
The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed: the viscous oils showed the lowest pressure rise and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the productivity index (PI) remained approximately constant across the speeds evaluated; however, it diminished between fluids as viscosity increased.
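The GCI reported above can be computed, following Roache's formulation, from solutions on two grids. The refinement ratio, observed order, and sample values below are illustrative, not the study's data:

```python
# Hedged sketch of Roache's Grid Convergence Index for a two-grid pair.
def grid_convergence_index(f_fine, f_coarse, r=2.0, p=2.0, fs=1.25):
    """GCI as a fraction of the fine-grid solution.

    f_fine, f_coarse: solution values on the fine and coarse grids
    r:  grid refinement ratio
    p:  observed order of accuracy
    fs: safety factor (1.25 is common for three-grid studies)
    """
    eps = abs((f_coarse - f_fine) / f_fine)  # relative difference
    return fs * eps / (r ** p - 1.0)

# Example with hypothetical pressure-rise values on two grids
gci = grid_convergence_index(f_fine=101.3, f_coarse=104.2)
```

A GCI of a few percent, as reported (2.5%), indicates that further mesh refinement would change the solution only marginally.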

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 107
80 SkyCar Rapid Transit System: An Integrated Approach of Modern Transportation Solutions in the New Queen Elizabeth Quay, Perth, Western Australia

Authors: Arfanara Najnin, Michael W. Roach, Jr., Dr. Jianhong Cecilia Xia

Abstract:

The SkyCar Rapid Transit (SRT) system is an innovative intelligent transport system for sustainable urban transport. It will increase urban network connectivity and decrease urban traffic congestion. The SRT is designed as a suspended personal rapid transit (PRT) system that travels under a guideway 5 m above the ground. Driverless passenger pod-cars hang from slender beams supported by columns that replace existing lamp posts. The beams are set up in a series of interconnecting loops providing non-stop travel from origin to destination, assuring journey times. Forward movement of the SRT is effected by magnetic motors built into the guideway. Passenger stops are either at line level, 5 m above the ground, or at ground level via a spur guideway that curves off the main thoroughfare. The main objective of this paper is to propose an integrated automated transit network (ATN) technology for the future intelligent transport system in the urban built environment. To fulfil this objective, a 4D simulated model in the urban built environment is proposed using the SRT-ATN concept. The methodology for the design, construction, and testing parameters of a technology demonstrator (TD) for proof of concept and a simulator (S) is demonstrated. The completed TD and S will provide an excellent proving ground for the next development stage: the SRT prototype (PT) and pilot system (PS). The 4D simulated model in the virtual built environment effectively shows how the SRT-ATN system works. OpenSim software was used to develop the model in a virtual environment, and the scenario was simulated to understand and visualize the proposed SkyCar Rapid Transit Network model. The SkyCar system will be fabricated in a modular form that is easily transported.
The system could be installed in increasingly congested city centres throughout the world, as well as in airports, tourist resorts, race tracks, and other special-purpose sites serving the urban community. This paper shares the lessons learnt from the proposed innovation and provides recommendations on how to improve future transport systems in the urban built environment. Safety and security of passengers are prime considerations for this transit system. Design requirements to meet safety needs will be part of the research and development phase of the project, and operational safety aspects will also be developed during this period. The vehicles, the track and beam systems, and the stations are the main components that need to be examined in detail for the safety and security of patrons. Measures will also be required to protect columns adjoining intersections from errant vehicles in traffic collisions. The SkyCar Rapid Transit takes advantage of current disruptive technologies (batteries, sensors, 4G/5G communication, and solar energy), which will continue to reduce costs and make the system more profitable. SkyCar's energy consumption is extremely low compared with other transport systems.

Keywords: SkyCar, rapid transit, Intelligent Transport System (ITS), Automated Transit Network (ATN), urban built environment, 4D Visualization, smart city

Procedia PDF Downloads 194
79 A Hybrid LES-RANS Approach to Analyse Coupled Heat Transfer and Vortex Structures in Separated and Reattached Turbulent Flows

Authors: C. D. Ellis, H. Xia, X. Chen

Abstract:

Experimental and computational studies investigating heat transfer in separated flows have been of increasing importance over the last 60 years, as efforts are made to understand and improve the efficiency of components such as combustors, turbines, heat exchangers, nuclear reactors, and cooling channels. Understanding not only the time-mean heat transfer properties but also the unsteady properties is vital for the design of these components. As computational power increases, more sophisticated methods of modelling these flows become available. Here, a hybrid LES-RANS approach is applied to a blunt-leading-edge flat plate, using a structured grid at a moderate Reynolds number of 20,300 based on the plate thickness. In the region close to the wall, the RANS method is implemented with two turbulence models: the one-equation Spalart-Allmaras model and Menter's two-equation SST k-ω model. The LES region occupies the flow away from the wall and is formulated without any explicit subgrid-scale LES modelling. Hybridisation between the two methods is achieved by blending based on the nearest wall distance. The flow was validated by assessing the mean velocity profiles against similar studies. Vortex structures were identified using the λ2 criterion to locate vortex cores. The qualitative structure of the flow compared well with experiments at similar Reynolds numbers. This identified the 2D roll-up of the shear layer, breaking down via the Kelvin-Helmholtz instability. Through this instability the flow progressed into hairpin-like structures, elongating as they advanced downstream. Proper orthogonal decomposition (POD) analysis was performed on the full flow field and on the surface temperature of the plate. As expected, the breakdown of POD modes for the full field revealed a relatively slow decay compared with the surface temperature field.
Both POD fields identified that the most energetic fluctuations occur in the separated and recirculating region of the flow. Later modes of the surface temperature field showed these fluctuations to dominate the time-mean region of maximum heat transfer and flow reattachment. Further work will track the movement of the vortex cores and the location and magnitude of temperature hot spots upon the plate. This information will support the POD and statistical analysis performed, to further identify qualitative relationships between the vortex dynamics and the response of the surface heat transfer.
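The snapshot POD used in the study can be sketched via the singular value decomposition; the snapshot data below is synthetic, standing in for the velocity or surface-temperature fields of the study:

```python
# Hedged sketch of snapshot POD via the SVD. Each column of the
# snapshot matrix is one time instant; the data here is random,
# purely to illustrate the mechanics.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 200, 50
snapshots = rng.standard_normal((n_points, n_snapshots))

# Subtract the temporal mean, then decompose the fluctuations
mean_field = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_field

# Columns of `modes` are the spatial POD modes, ordered by energy
modes, sing_vals, _ = np.linalg.svd(fluct, full_matrices=False)

# Fractional energy content of each mode (squared singular values)
energy = sing_vals ** 2 / np.sum(sing_vals ** 2)
```

The "slow decay" noted for the full-field modes corresponds to this `energy` spectrum falling off gradually, so that many modes are needed to capture most of the fluctuation energy, whereas the surface-temperature spectrum falls off quickly.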

Keywords: heat transfer, hybrid LES-RANS, separated and reattached flow, vortex dynamics

Procedia PDF Downloads 204
78 CO₂ Recovery from Biogas and Successful Upgrading to Food-Grade Quality: A Case Study

Authors: Elisa Esposito, Johannes C. Jansen, Loredana Dellamuzia, Ugo Moretti, Lidietta Giorno

Abstract:

The reduction of CO₂ emission into the atmosphere as a result of human activity is one of the most important environmental challenges of the coming decades. CO₂ emission related to the use of fossil fuels is believed to be one of the main causes of global warming and climate change. In this scenario, the production of biomethane from organic waste, as a renewable energy source, is one of the most promising strategies to reduce fossil fuel consumption and greenhouse gas emission. Unfortunately, biogas upgrading still produces the greenhouse gas CO₂ as a waste product. Therefore, this work presents a case study on biogas upgrading aimed at the simultaneous purification of methane and CO₂ via several steps, including CO₂/methane separation by polymeric membranes. The original objective of the project was biogas upgrading to distribution-grid-quality methane, but the innovative aspect of this case study is the further purification of the captured CO₂, transforming it from a useless by-product into a pure gas of food-grade quality, suitable for commercial application in the food and beverage industry. The study was performed on a pilot plant constructed by Tecno Project Industriale Srl (TPI), Italy, a model of one of the largest biogas production and purification plants. The full-scale anaerobic digestion plant (Montello Spa, northern Italy) has a digestive capacity of 400,000 tons of biomass per year and can treat 6,250 m³/hour of biogas from FORSU (the organic fraction of municipal solid waste). The entire upgrading process consists of a number of purification steps: 1. dehydration of the raw biogas by condensation; 2. removal of trace impurities such as H₂S via absorption; 3. separation of CO₂ and methane via a membrane separation process; 4. removal of trace impurities from the CO₂. The gas separation with polymeric membranes guarantees complete simultaneous removal of microorganisms.
The chemical purity of the different process streams was analysed by a certified laboratory and compared with the guidelines of the European Industrial Gases Association and the International Society of Beverage Technologists (EIGA/ISBT) for CO₂ used in the food industry. The microbiological purity was compared with the limit values defined in the European Collaborative Action. With a purity of 96-99 vol%, the purified methane meets the legal requirements for the domestic distribution grid. At the same time, the CO₂ reaches a purity of >98.1% before, and 99.9% after, the final distillation process. According to the EIGA/ISBT guidelines, the CO₂ proves to be chemically and microbiologically sufficiently pure to be suitable for food-grade applications.

Keywords: biogas, CO₂ separation, CO₂ utilization, CO₂ food grade

Procedia PDF Downloads 187