Search results for: stefan problem
5739 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques
Authors: Stefan K. Behfar
Abstract:
The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. 
Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing
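As a concrete illustration of two of the ingredients named above, probabilistic sampling of the transaction graph and a graph convolution layer, here is a minimal NumPy sketch. The uniform sampling scheme, the layer sizes, and the symmetric normalization are assumptions made for illustration; they are not details taken from the abstract.

```python
import numpy as np

def sample_subgraph(adj, features, rate=0.1, rng=None):
    # Keep a uniformly random fraction of transaction nodes and the induced edges.
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    keep = np.sort(rng.choice(n, size=max(1, int(rate * n)), replace=False))
    return adj[np.ix_(keep, keep)], features[keep]

def gcn_layer(adj, H, W):
    # H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), the standard Kipf and Welling propagation.
    A_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

# e.g., 1000 transactions with 8 node features and one 16-unit layer:
rng = np.random.default_rng(0)
A = (rng.random((1000, 1000)) < 0.01).astype(float)
A = np.maximum(A, A.T)          # symmetrize the temporal links for this sketch
A_s, X_s = sample_subgraph(A, rng.normal(size=(1000, 8)), rate=0.1, rng=rng)
H1 = gcn_layer(A_s, X_s, rng.normal(size=(8, 16)))
```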
Procedia PDF Downloads 76
5738 Legal Study on the Construction of Olympic and Paralympic Soft Law about Manipulation of Sports Competition
Authors: Clemence Collon, Didier Poracchia
Abstract:
The manipulation of sports competitions is a new type of sports integrity problem. While the fight against doping has become an organized, institutionalized struggle, the fight against the manipulation of sports competitions is still gradually taking shape. This study aims to describe and understand how the soft Olympic and Paralympic law on the subject was gradually built. It also summarizes the legal tools for prevention, detection, and sanction developed by the international Olympic movement. It then analyzes the impact of this soft law on the law of states, in particular French law. The study is mainly based on an analysis of existing legal literature and non-binding law in the international Olympic and Paralympic movement and in the French National Olympic Committee. Interviews were carried out with experts from the Olympic movement or experts working on combating the manipulation of sports competitions; their answers are also used in this article. The International Olympic Committee has created a supranational legal base to fight against the manipulation of sports competitions, and this legal basis must be respected by sports organizations. The Olympic Charter, the Olympic Code of Ethics, the Olympic Movement Code on the prevention of the manipulation of sports competitions, the rules of standards, the basic universal principles, the manuals, and the declarations have been published in this perspective. This sports soft law has influences or repercussions in each state. Many states take this new form of integrity problem into account by creating state laws or measures in favor of the fight against sports manipulation. France so far has a legal basis only for manipulation related to betting on sports competitions, through the offense of sports corruption included in the penal code, and has also created a national platform with various actors to combat this cheating. This legal study highlights the progressive construction of the Olympic movement's sports law rules in the fight against the manipulation of sports competitions linked to sports betting, and their impact on the law of states.
Keywords: integrity, law and ethics, manipulation of sports competitions, Olympic, sports law
Procedia PDF Downloads 154
5737 Numerical Modeling of Film Cooling of the Surface at Non-Uniform Heat Flux Distributions on the Wall
Authors: M. V. Bartashevich
Abstract:
The problem of heat transfer in a thin laminar liquid film is solved numerically. A thin film of liquid flows down an inclined surface under conditions of variable heat flux on the wall. The use of thin liquid films makes it possible to create effective technologies for cooling surfaces. However, it is important to investigate the most suitable cooling regimes from a safety point of view, in order, for example, to avoid overheating caused by ruptures of the liquid film, and also to study the most effective cooling regimes depending on the character of the heat flux distribution on the wall, as well as the character of the blowing of the film surface, i.e., the external shear stress on its surface. In the statement of the problem, the heat transfer coefficient between the liquid and the gas is set on the film surface, as well as a variable external shear stress, the intensity of blowing. It is shown that the combination of these factors, the degree of uniformity of the heat flux distribution on the wall and the intensity of blowing, affects the efficiency of heat transfer. With an increase in the intensity of blowing, the cooling efficiency increases, reaches a maximum, and then decreases. It is also shown that the more uniform the heating of the wall, the more efficient the heat sink. A separate study was made of the flow regime along a horizontal surface, when the liquid film moves solely due to the external stress. For this regime, an analytical solution is used for the temperature in the entrance region, feeding the further numerical calculations downstream; the influence of the degree of uniformity of the heat flux distribution on the wall and of the intensity of blowing of the film surface on the heat transfer efficiency was studied as well. This work was carried out at the Kutateladze Institute of Thermophysics SB RAS (Russia) and supported by FASO Russia.
Keywords: heat flux, heat transfer enhancement, external blowing, thin liquid film
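To make the setup concrete, one plausible formulation of the boundary conditions described in the abstract is given below. The notation is assumed here, since the abstract defines none: T is temperature, u the streamwise velocity, lambda the liquid thermal conductivity, q_w(x) the wall heat flux, alpha the film-gas heat transfer coefficient, T_g the gas temperature, mu the dynamic viscosity, tau(x) the external shear stress (blowing intensity), and h the local film thickness.

```latex
-\lambda \left.\frac{\partial T}{\partial y}\right|_{y=0} = q_w(x), \qquad
-\lambda \left.\frac{\partial T}{\partial y}\right|_{y=h} = \alpha \left( T|_{y=h} - T_g \right), \qquad
\mu \left.\frac{\partial u}{\partial y}\right|_{y=h} = \tau(x)
```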
Procedia PDF Downloads 149
5736 Topology Optimization of Heat and Mass Transfer for Two Fluids under Steady State Laminar Regime: Application on Heat Exchangers
Authors: Rony Tawk, Boutros Ghannam, Maroun Nemer
Abstract:
The topology optimization technique is a potential tool for the design and optimization of structures involved in mass and heat transfer. The method starts from an initial intermediate domain and should be able to progressively distribute the solid and the two fluids exchanging heat. The multi-objective function of the problem takes into account the minimization of total pressure loss and the maximization of heat transfer between the solid and fluid subdomains. Existing methods account for the presence of only one fluid, while the present work extends the optimization to the distribution of a solid and two different fluids. This requires separating the channels of the two fluids and ensuring a minimum solid thickness between them, which is done by adding a third objective function to the multi-objective optimization problem. This article uses a density approach in which each cell holds two local design parameters ranging from 0 to 1, where the combination of their extremes defines the presence of solid, cold fluid, or hot fluid in the cell. The finite volume method is used for the direct solver, coupled with a discrete adjoint approach for sensitivity analysis and the method of moving asymptotes for numerical optimization. Several examples are presented to show the ability of the method to find a trade-off between the minimization of power dissipation and the maximization of heat transfer while ensuring the separation and continuity of each fluid's channel, without crossing or mixing the fluids. The main conclusion is the possibility of finding an optimal bi-fluid domain using topology optimization, defining a fluid-to-fluid heat exchanger device.
Keywords: topology optimization, density approach, bi-fluid domain, laminar steady state regime, fluid-to-fluid heat exchanger
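To illustrate the density approach with two design parameters per cell, here is a hedged sketch of how the corner combinations of (a, b) could be mapped to local material properties, a Brinkman inverse permeability and a thermal conductivity. The interpolation functions and the penalization constants are assumptions for illustration; the paper's actual interpolation is not given in the abstract.

```python
import numpy as np

K_SOLID, K_FLUID = 1e7, 0.0            # assumed Brinkman inverse permeabilities
COND_SOLID, COND_FLUID = 10.0, 1.0     # assumed thermal conductivities
P = 3.0                                # SIMP-like penalization exponent (assumed)

def cell_properties(a, b):
    """a in [0,1]: 0 = solid, 1 = fluid. b in [0,1]: 0 = cold fluid, 1 = hot fluid."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    inv_perm = K_FLUID + (K_SOLID - K_FLUID) * (1.0 - a) ** P  # blocks flow in solid
    cond = COND_FLUID + (COND_SOLID - COND_FLUID) * (1.0 - a) ** P
    hot_frac = a * b               # which fluid occupies the cell, where it is fluid
    cold_frac = a * (1.0 - b)
    return inv_perm, cond, hot_frac, cold_frac

# The optimizer (the method of moving asymptotes in the paper) would then update
# the two design fields per cell under objectives penalizing pressure loss,
# rewarding solid-fluid heat transfer, and enforcing channel separation.
```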
Procedia PDF Downloads 399
5735 Quoting Jobshops Due Dates Subject to Exogenous Factors in Developing Nations
Authors: Idris M. Olatunde, Kareem B.
Abstract:
In manufacturing systems, especially job shops, service performance is a key factor that determines customer satisfaction. Service performance depends not only on the quality of the output but on the delivery lead times as well. Besides product quality enhancement, delivery lead time must be minimized for optimal patronage. Quoting accurate due dates is a sine qua non for a job shop's operational survival in a globally competitive environment. Quoting accurate due dates in job shops has been a herculean task that has defied solutions from many of the methods employed, owing to the complex job routing nature of the system. This class of NP-hard problems possesses no rigid algorithm that can give an optimal solution. The job shop operational problem is more complex in developing nations due to some peculiar factors: operational complexity in job shops emanates from political instability, a poor economy, limited technological know-how, and an unpromising socio-political environment. These exogenous factors were hardly considered in previous studies on scheduling problems related to due date determination in job shops. This study fills the gap created by past studies by developing a dynamic model that incorporates the exogenous factors for the accurate determination of due dates for varying job complexities. Real data from six job shops selected from different parts of Nigeria were used to test the efficacy of the model, and the outcomes were analyzed statistically. The results of the analyses showed that the model is more promising in determining accurate due dates than the traditional models deployed by many job shops, in terms of patronage and lead time minimization.
Keywords: due dates prediction, improved performance, customer satisfaction, dynamic model, exogenous factors, job shops
Procedia PDF Downloads 412
5734 The Work and Life Ethics at the Beginning of the 21st Century and the Vulnerability of Long-Term Unemployed over 45 Years Old in Spain since the Economic Crisis of 2008
Authors: Maria Del Mar Maira Vidal, Alvaro Briales
Abstract:
In this paper, we analyze the results of the R&D&I research project “New types of socio-existential vulnerability, support and care in Spain” (VULSOCU) (2016-20). This project had the objective of analyzing the new types of vulnerability that result from the combination of several factors, such as the economic crisis, unemployment, the transformations of the Welfare State, individualization, etc. We have, therefore, analyzed the way Spanish long-term unemployed over 45 experience vulnerability and its consequences for their lives. We have focused on long-term unemployed over 45 who had previously developed stable career paths and have been looking for a job for two years or more. In order to carry out this analysis, we try to break the dichotomy between the social and the individual, between the socio-historical and subjectivity, to overcome some of the limits of research on unemployment. The fieldwork consisted of more than ten focus groups and fifty in-depth interviews. The work and life ethics changed completely at the turn of the nineteenth and twentieth centuries. In the nineteenth century, companies had trouble retaining their staff, but in the twenty-first century, unemployed workers feel that they are useless people. Workers value themselves only if they have a job. This unveils labor as a comprehensive social relationship in capitalist societies. In general, unemployed workers are not able to analyze their unemployment as a social problem; they analyze it as an individual one. They blame themselves for their unemployment; instead of taking into account that there are millions of unemployed, they talk about themselves as if they were on their own. The problems caused by unemployment are explained as psychological problems and are medicalized. In any case, it is important to highlight that this is the result of an ideology and of a social relationship that is part of our historical time.
Keywords: life ethics, work ethics, unemployment, unemployed over 45 years old
Procedia PDF Downloads 145
5733 Optimal Seismic Design of Reinforced Concrete Shear Wall-Frame Structure
Authors: H. Nikzad, S. Yoshitomi
Abstract:
In this paper, the optimal seismic design of reinforced concrete shear wall-frame building structures was carried out using structural optimization. The optimal section sizes were generated through structural optimization based on linear static analysis conforming to the American Concrete Institute building design code (ACI 318-14). An analytical procedure was followed to validate the accuracy of the proposed method by comparing stresses on structural members in the output files of MATLAB and ETABS. In order to account for the difference between the stresses computed by ETABS and MATLAB, and to avoid over-stressed members in ETABS, a MATLAB-to-ETABS stress constraint ratio was introduced and adjusted for the most critical load combinations and structural members. Moreover, the seismic design of the structure was done following the International Building Code (IBC 2012), the American Concrete Institute building code (ACI 318-14), and the American Society of Civil Engineers (ASCE 7-10) standards. Typical reinforcement requirements for the structural wall, beam, and column were discussed and presented using the ETABS structural analysis software. The placement and detailing of reinforcement of structural members were also explained and discussed. The outcomes of this study show that the modification of section sizes plays a vital role in finding an optimal combination of practical section sizes. In contrast, the optimization problem with size constraints has a higher cost than that without size constraints. Moreover, the comparison of the optimization results with those of the ETABS program proved satisfactory and consistent with the ACI 318-14 building design code criteria.
Keywords: structural optimization, seismic design, linear static analysis, ETABS, MATLAB, RC shear wall-frame structures
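As a purely hypothetical illustration of the stress constraint ratio idea, the sketch below scales the allowable stress used inside the optimizer by a per-member MATLAB-to-ETABS ratio, so that members accepted by the optimizer are not over-stressed when re-checked in ETABS. The function names and the 0.9 floor are invented for this sketch; the abstract does not give the actual adjustment rule.

```python
# All names and the 0.9 floor below are hypothetical.
def calibrated_limit(sigma_allow, sigma_matlab, sigma_etabs, floor=0.9):
    # Ratio < 1 when ETABS reports higher stress than MATLAB for the same member
    # and load combination; shrink the allowable stress accordingly.
    ratio = sigma_matlab / sigma_etabs
    return sigma_allow * max(floor, min(1.0, ratio))

def member_ok(sigma_matlab, sigma_etabs, sigma_allow):
    return sigma_matlab <= calibrated_limit(sigma_allow, sigma_matlab, sigma_etabs)
```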
Procedia PDF Downloads 173
5732 Assessing Undergraduate Students' Awareness and Utilization of University Mental Health Services and Programs for Depression: A Case Study
Authors: Calvin Odhiambo
Abstract:
Depression among young adults is a common health problem and a growing public health concern. Of the young adult population, college students are particularly vulnerable to depression as they find themselves grappling with the stress and anxiety of college life while at the same time navigating the demands of separation and independence from familial ties. To deal with the resulting mental health challenges affecting this population, most colleges offer counseling services to their student population. What is not known, however, is the extent to which students are aware of, or even utilize, such mental health services. Our study set out to assess the level of student awareness and utilization of counseling services and programs at a southeastern public university in the United States. Data were collected through self-administered questionnaires given to a convenience sample of 508 undergraduate students voluntarily recruited from 38 classes representing five colleges. Data analysis was done using the Statistical Package for the Social Sciences (SPSS) version 25. Results showed that even though a majority of students were aware of the mental health services offered by the university, an overwhelming majority of these students did not utilize any of these services or participate in any mental health programs offered by the university. Significant gender and racial differences were observed. Reasons for the lack of awareness and utilization of mental health services are explored. Recommendations are made on how to increase student awareness and utilization of mental health services, and the implications of the findings are discussed. The findings of this study help to fill an academic lacuna on this issue and provide an important basis for developing policies to help mitigate the growing problem of depression and attendant mental health problems among undergraduate students.
Keywords: depression, counseling services, undergraduate college students, utilization of mental health services, perceptions and awareness
Procedia PDF Downloads 87
5731 Slums in Casablanca: A Conceptive Approach for Better Implementation of VSB Program, Case Study: ER-Hamna Slum
Authors: Sakina Boufarsi, Mehmet Emre Aysu, Behiye Isik Aksulu
Abstract:
Morocco appears to be on its way to eradicating all of the country's slums by assuring the resettlement and improvement of all affected households' living circumstances through the VSB (“Villes Sans Bidonvilles”, cities without slums) program established in 2004. Although many attempts have been made to curb their growth, none has proven to be a permanent accomplishment. In Morocco, resettlement projects through satellite towns are perceived as the answer to the problem of the slums. However, while the new satellite towns embody the good intentions of the VSB program, they are environmentally unsustainable, socially isolated, and culturally inappropriate; such conditions have imposed continuous readjustments of the slum upgrading program. Although slum research is ongoing, it has primarily concentrated on two constructs: exploring socio-economic and policy problems, and analyzing physical characteristics. Considering that these two constructs are crucial, this study will demonstrate that a more systematic approach is needed to eradicate slums efficiently. The slums of Casablanca are a solution that the poor devise for themselves in the face of government bureaucracy and failing housing policies; they reflect governments' incapacity to respond to urban development's requirement of decent housing for the vulnerable population. This issue will be addressed by exploring the previous strategies and analyzing in detail the strengths and shortcomings of the recent VSB program, in addition to a comprehensive overview of the slums' situation that combines social and physical characteristics through the Er-Hamna case study in the Sidi Moumen district, for a deeper understanding and, therefore, for directing improved and valuable recommendations to address the slum problem at all levels.
Keywords: Casablanca slums, resettlement projects, eradication of slums, satellite town, VSB program
Procedia PDF Downloads 175
5730 Nursing System Development in Patients Undergoing Operation in 3C Ward: Early Ambulation in Patients with Head and Neck Cancer
Authors: Artitaya Sabangbal, Darawan Augsornwan, Palakorn Surakunprapha, Lalida Petphai
Abstract:
Background: Srinagarind Hospital Ward 3C treats about 180 patients with head and neck cancer per year. Almost all of these patients suffer from pain, fatigue, low self-image, and swallowing problems, and when the tumor grows larger, they develop breathing problems. Many of them have complications after the operation, such as pressure sores, pneumonia, and deep vein thrombosis. Nursing activity is very important for preventing these complications, especially by promoting patients' early ambulation. The objective of this study was to develop an early ambulation protocol for patients with head and neck cancer undergoing an operation. Method: This study is one part of nursing system development for patients undergoing operations in Ward 3C. It is participatory action research divided into three phases. Phase 1, situation review: we reviewed the clinical outcomes and the process of care from documents such as nurses' notes and interviewed nurses, patients, and families about early ambulation. Phase 2: searching for nursing interventions about early ambulation from previous studies and then establishing the protocol; in this phase, a picture package of early ambulation was produced. Phase 3: implementation and evaluation. Results: 100% of patients with head and neck cancer were able to follow the early ambulation protocol after the operation; 85% of patients could follow the protocol within 2 days after the operation, and 100% within 3 days. No complications occurred. Patient satisfaction was at a very good level for 58% of patients and at a good level for 42%. Length of hospital stay was 6 days for patients with wide excision and 16 days for patients with flap coverage. Conclusion: The early ambulation protocol is appropriate for patients with head and neck cancer who undergo an operation. It can restore physical health, reduce complications, and increase patient satisfaction.
Keywords: nursing system, early ambulation, head and neck cancer, operation
Procedia PDF Downloads 229
5729 ACO-TS: an ACO-based Algorithm for Optimizing Cloud Task Scheduling
Authors: Fahad Y. Al-dawish
Abstract:
There is a current trend among a large number of organizations and individuals to use cloud computing, and many consider it a significant shift in the field of computing. Cloud computing systems are distributed and parallel systems consisting of a collection of interconnected physical and virtual machines. With the increasing demand for and profitability of cloud computing infrastructure, diverse computing processes can be executed in cloud environments. Many organizations and individuals around the world depend on cloud computing environments to carry their applications, platforms, and infrastructure. One of the major and essential issues in this environment is allocating incoming tasks to suitable virtual machines (cloud task scheduling). Cloud task scheduling is classified as an optimization problem, and several meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling technique to the changing environment and to the types of incoming task sets. In this research project, a cloud task scheduling methodology based on the ant colony optimization (ACO) algorithm, which we call ACO-TS (Ant Colony Optimization for Task Scheduling), is proposed and compared with different scheduling algorithms (Random, First Come First Serve (FCFS), and Fastest Processor to the Largest Task First (FPLTF)). ACO is a randomized optimization search method that is used here for assigning incoming tasks to available virtual machines (VMs). The main role of the proposed algorithm is to minimize the makespan of a given task set and to maximize resource utilization by balancing the load among the virtual machines. The proposed scheduling algorithm was evaluated using the CloudSim toolkit framework. Finally, after analyzing and evaluating the experimental results, we find that the proposed ACO-TS algorithm performs better than the Random, FCFS, and FPLTF algorithms in terms of both makespan and resource utilization.
Keywords: cloud task scheduling, ant colony optimization (ACO), CloudSim, cloud computing
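The following is a minimal Python sketch of how ACO-style task-to-VM assignment can be organized: pheromone trails bias each ant's probabilistic choice of VM per task, and trails are reinforced in proportion to the inverse makespan of the best assignment found. The parameter values (alpha, beta, rho, Q, colony size) are illustrative assumptions, not the paper's settings.

```python
import random

def makespan(assign, exec_time):
    # exec_time[t][v]: run time of task t on VM v; assign[t]: VM chosen for task t
    loads = {}
    for t, v in enumerate(assign):
        loads[v] = loads.get(v, 0.0) + exec_time[t][v]
    return max(loads.values())

def aco_ts(exec_time, n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.1, Q=1.0):
    n_tasks, n_vms = len(exec_time), len(exec_time[0])
    tau = [[1.0] * n_vms for _ in range(n_tasks)]     # pheromone trails
    best, best_ms = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            assign = []
            for t in range(n_tasks):
                # desirability = pheromone^alpha * (1 / exec_time)^beta
                w = [tau[t][v] ** alpha * (1.0 / exec_time[t][v]) ** beta
                     for v in range(n_vms)]
                assign.append(random.choices(range(n_vms), weights=w)[0])
            ms = makespan(assign, exec_time)
            if ms < best_ms:
                best, best_ms = assign, ms
        for t in range(n_tasks):                      # evaporation
            for v in range(n_vms):
                tau[t][v] *= (1.0 - rho)
        for t, v in enumerate(best):                  # deposit on the best-so-far
            tau[t][v] += Q / best_ms
    return best, best_ms
```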
Procedia PDF Downloads 421
5728 Influence of Convective Boundary Condition on Chemically Reacting Micropolar Fluid Flow over a Truncated Cone Embedded in Porous Medium
Authors: Pradeepa Teegala, Ramreddy Chitteti
Abstract:
This article analyzes the mixed convection flow of a chemically reacting micropolar fluid over a truncated cone embedded in a non-Darcy porous medium with a convective boundary condition. In addition, heat generation/absorption and Joule heating effects are taken into consideration. A similarity solution does not exist for this complex fluid flow problem, and hence non-similarity transformations are used to convert the governing fluid flow equations, along with the related boundary conditions, into a set of nondimensional partial differential equations. Many authors have applied the spectral quasi-linearization method (SQLM) to solve ordinary differential equations, but here the resulting nonlinear partial differential equations are solved for a non-similarity solution using this recently developed method. Comparison with previously published work on special cases of the problem is performed and found to be in excellent agreement. The effects of pertinent parameters, namely the Biot number, the mixed convection parameter, heat generation/absorption, Joule heating, the Forchheimer number, the chemical reaction, the micropolar parameter, and the magnetic field, on the physical quantities of the flow are displayed through graphs, and their salient features are explored in detail. Further, the results are analyzed by comparison with two special cases, namely the vertical plate and the full cone, wherever possible.
Keywords: chemical reaction, convective boundary condition, Joule heating, micropolar fluid, mixed convection, spectral quasi-linearization method
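For context, the quasi-linearization step at the core of SQLM can be written, for a generic nonlinear equation y'' = f(y, y'), as the Newton-type iteration below; this is a standard textbook form rather than a formula reproduced from the paper, and each linearized problem is then solved by Chebyshev spectral collocation:

```latex
y''_{r+1} = f(y_r, y'_r)
          + (y_{r+1} - y_r)\,\left.\frac{\partial f}{\partial y}\right|_{r}
          + (y'_{r+1} - y'_r)\,\left.\frac{\partial f}{\partial y'}\right|_{r},
\qquad r = 0, 1, 2, \dots
```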
Procedia PDF Downloads 277
5727 Diminishing Constitutional Hyper-Rigidity by Means of Digital Technologies: A Case Study on E-Consultations in Canada
Authors: Amy Buckley
Abstract:
The purpose of this article is to assess the problem of constitutional hyper-rigidity and to consider how it, and the associated tensions with democratic constitutionalism, can be diminished by means of digital democratic technologies. In other words, this article examines how digital technologies can assist us in ensuring fidelity to the will of the constituent power without paying the price of hyper-rigidity. In doing so, it is impossible to ignore that digital strategies can also harm democracy through, for example, manipulation, hacking, ‘fake news,’ and the like. This article considers the tension between constitutional hyper-rigidity and democratic constitutionalism and the relevant strengths and weaknesses of digital democratic strategies before undertaking a case study on Canadian e-consultations and drawing its conclusions. This article observes democratic constitutionalism through the lens of the theory of deliberative democracy to suggest that the application of digital strategies can, notwithstanding their pitfalls, improve a constituency's amendment culture and, thus, diminish constitutional hyper-rigidity. Constitutional hyper-rigidity is not a new or underexplored concept. At a high level, a constitution can be said to be ‘hyper-rigid’ when its formal amendment procedure is so difficult to enact that it does not take place or is limited in its application. This article claims that hyper-rigidity is one problem with ordinary constitutionalism that fails to satisfy the principled requirements of democratic constitutionalism. Given the rise and development of technology that has taken place since the Digital Revolution, there has been a significant expansion in the possibility for digital democratic strategies to overcome the democratic constitutionalism failures resulting from constitutional hyper-rigidity. Typically, these strategies have included, inter alia, e-consultations, e-voting systems, and online polling forums, all of which significantly improve the ability of politicians and judges to directly obtain the opinion of constituents on any number of matters. This article expands on the application of these strategies through its Canadian e-consultation case study and presents them as a solution to poor amendment culture and, consequently, constitutional hyper-rigidity. Hyper-rigidity is a common descriptor of many written and unwritten constitutions, including the United States, Australian, and Canadian constitutions, to name a few. This article undertakes a case study on Canada in particular, as it is a jurisdiction less commonly cited in the academic literature concerned with hyper-rigidity, and because Canada has, to some extent, championed the use of e-consultations. In Part I of this article, I identify the problem, namely that the consequences of constitutional hyper-rigidity are in tension with the principles of democratic constitutionalism. In Part II, I identify and explore a potential solution: the implementation of digital democratic strategies as a means of reducing constitutional hyper-rigidity. In Part III, I explore Canada's e-consultations as a case study for assessing whether digital democratic strategies do, in fact, improve a constituency's amendment culture, thus reducing constitutional hyper-rigidity and the associated tension with the principles of democratic constitutionalism. The idea is to run a case study and then assess whether the conclusions can be generalised.
Keywords: constitutional hyper-rigidity, digital democracy, deliberative democracy, democratic constitutionalism
Procedia PDF Downloads 76
5726 Evolutionary Methods in Cryptography
Authors: Wafa Slaibi Alsharafat
Abstract:
Genetic algorithms (GA) are randomized algorithms, as random numbers generated during the operation of the algorithm determine what happens. This means that if a GA is applied twice to optimize exactly the same problem, it might produce two different answers. In this project, we propose an evolutionary algorithm, a genetic algorithm (GA), to be implemented in symmetric encryption and decryption. Here, the user's message and the user's secret information (key), which represent the plain text, are transformed into cipher text.
Keywords: GA, encryption, decryption, crossover
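To show how the GA machinery (selection, one-point crossover, mutation) can appear in a symmetric-cipher context, here is a toy Python sketch that evolves a candidate repeating XOR key against a known plaintext-ciphertext pair. This is an illustration of the GA operators only; it is not the authors' scheme, whose details the abstract does not give.

```python
import random

def xor(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def fitness(key, plain, cipher):
    # Count positions where encrypting with this candidate key matches the cipher.
    return sum(a == b for a, b in zip(xor(plain, key), cipher))

def ga_key_search(plain, cipher, key_len=4, pop=50, gens=200, mut=0.1):
    population = [bytes(random.randrange(256) for _ in range(key_len))
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda k: -fitness(k, plain, cipher))
        if fitness(population[0], plain, cipher) == len(plain):
            break                                   # exact key recovered
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, key_len)       # one-point crossover
            child = bytearray(a[:cut] + b[cut:])
            if random.random() < mut:                # point mutation
                child[random.randrange(key_len)] = random.randrange(256)
            children.append(bytes(child))
        population = parents + children
    return population[0]
```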
Procedia PDF Downloads 446
5725 The Problem of Child Exploitation on Twitter: A Socio-Anthropological Perspective on Content Filtering Gaps
Authors: Samig Ibayev
Abstract:
This research addresses, from a socio-anthropological perspective, the problem of illegal child abuse content on the Twitter platform bypassing filtering systems and appearing before users. Although the wide access opportunities provided by social media platforms to their users are beneficial in many ways, these platforms contain gaps that pave the way for the spread of harmful and illegal content. The aim of the study is to examine the inadequacies of the current content filtering mechanisms of the Twitter platform, and to understand the psychological effects on young users who unintentionally encounter such content, as well as the social dimensions of this situation. The research followed a qualitative approach and was conducted using digital ethnography, content analysis, and user experiences on the Twitter platform. Digital ethnography was used to observe the frequency of child abuse content on the platform and how this content is presented. The content analysis method was used to reveal the gaps in Twitter's current filtering mechanisms. In addition, through interviews with young users exposed to such content, detailed information was collected on the extent of the psychological effects and on how the perception of trust in social media changed. The main contributions of the research are to highlight the weaknesses in the content moderation and filtering mechanisms of social media platforms, to reveal the negative effects of illegal content on users, and to offer suggestions for preventing the spread of such content. In conclusion, it is suggested that platforms such as Twitter should improve their content filtering policies in order to increase user security and fulfill their social responsibilities. This research aims to create significant awareness about social media content management and ethical responsibilities on digital platforms.
Keywords: Twitter, child exploitation, content filtering, digital ethnography, social anthropology
Procedia PDF Downloads 12
5724 Moral Rights: Judicial Evidence Insufficiency in the Determination of the Truth and Reasoning in Brazilian Morally Charged Cases
Authors: Rainner Roweder
Abstract:
Theme: The present paper aims to analyze the specificity of judicial evidence linked to the subjects of dignity and personality rights, otherwise known as moral rights, in the determination of the truth and the formation of judicial reasoning in cases concerning these areas. This research is about the way courts in Brazilian domestic law search for truth and handle evidence in cases involving moral rights, which are abundant and important in Brazil. The main object of the paper is to analyze the effectiveness of evidence in the formation of judicial conviction in matters related to morally controverted rights, based on the Brazilian legal system and, as a comparison, other Latin American legal systems. In short, the rights of dignity and personality are moral. However, the evidential legal system expects a rational demonstration of moral rights that generates judicial conviction or persuasion. Morality, in turn, tends to be difficult or impossible to demonstrate in court, generating the problem considered in this paper, that is, the study of the problem of demonstrating morality as proof in court. In this sense, the more closely linked to morality a right is, the more difficult it is to demonstrate in court, expanding the field of judicial discretion and generating legal uncertainty. More specifically, the new personality rights, such as gender and its possibility of alteration, further amplify the problem, being essentially an intimate matter that has no place in an objective, rational evidential system, as normally applies to other categories, such as contracts. Therefore, evidencing this legal category in court, with the level of security required by the law, is a herculean task. It becomes virtually impossible to use the same evidentiary system when judging the rights researched here; this generates the need for a new design of the evidential task regarding the rights of the personality, a central effort of the present paper. Methodology: The method used in the investigation phase was inductive, with the use of the comparative law method; in the data treatment phase, the inductive method was also used. Doctrinal, legislative, and jurisprudential comparison was the research technique used. Results: In addition to the peculiar characteristics of personality rights that are not found in other rights, part of them are essentially linked to morality and are not objectively verifiable by design, and it is necessary to use specific argumentative theories for their secure confirmation, such as interdisciplinary support. The traditional pragmatic theory of proof, having an evidently objective character, aggravates decisionism and generates legal insecurity when applied to rights linked to morality; its reconstruction is necessary for morally charged cases, with the possible use of the “predictive theory” (and predictive facts) through algorithms in data collection and treatment.
Keywords: moral rights, proof, pragmatic proof theory, insufficiency, Brazil
Procedia PDF Downloads 109
5723 The Nexus between Migration and Human Security: The Case of Ethiopian Female Migration to Sudan
Authors: Anwar Hassen Tsega
Abstract:
International labor migration is an integral part of the modern globalized world. However, the phenomenon has its roots in earlier periods of human history. This paper discusses the relatively new phenomenon of female labor migration in Africa. In the past, African women migrants were only spouses or dependent family members. But as modernity swept most African societies, with rising unemployment rates, there is evidence everywhere in Africa that women's labor migration is a growing phenomenon that deserves to be understood in the context of human security research. This work explores these issues further, focusing on the experience of Ethiopian women labor migrants to Sudan. The migration of Ethiopian people to Sudan is historical; nevertheless, labor migration mainly started with the discovery and subsequent exploitation of oil in Sudan. While the paper is concerned with the human security aspect of the migrant workers, we need to be certain that the migration process provides a decent wage, good working conditions, the necessary social security coverage, and labor protection as a whole. However, migration to Sudan is not always safe, and female migrants become subject to violence at the hands of brokers, employers, and migration officials. For this reason, the paper argues that identifying the vulnerable stages and the major problems facing female migrant workers at the various stages of migration is a prerequisite to combating the problem and securing the lives of the migrant workers. The major problems female migrants face include heightened gender-based violence, underpayment, various forms of abuse (verbal, physical, and sexual), and other forms of torture, including beatings and slaps. This peculiar situation could be attributed to the fact that most of these women are irregular migrants and fall under the category of unskilled and/or illiterate migrants.
Keywords: Ethiopia, human security, labor migration, Sudan
Procedia PDF Downloads 251
5722 Hardware-In-The-Loop Relative Motion Control: Theory, Simulation and Experimentation
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper presents a Guidance and Control (G&C) strategy to address the spacecraft maneuvering problem for future Rendezvous and Docking (RVD) missions. The proposed strategy allows safe and propellant-efficient trajectories for space servicing missions, including tasks such as approaching, inspecting, and capturing. This work provides the validation test results of the G&C laws using a Hardware-In-the-Loop (HIL) setup with two robotic mockups representing the chaser and the target spacecraft. In this paper, the challenges of relative motion control in space are first summarized, in particular the constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Second, the proposed algorithm is introduced by presenting the formulation of constrained Model Predictive Control (MPC) to optimize the fuel consumption and explicitly handle the physical and geometric constraints in the system, e.g., thruster or Line-Of-Sight (LOS) constraints. Additionally, the coupling between translational motion and rotational motion is addressed via a dual-quaternion-based kinematic description and explained accordingly. The resulting convex optimization problem allows real-time implementation, supported by a detailed discussion of the computational time requirements and the obtained results with respect to the onboard computer and future trends in space processor capabilities. Finally, the performance of the algorithm is presented in the scope of a potential future mission and of the available equipment. The results also cover a comparison of the proposed algorithm with a Linear-Quadratic Regulator (LQR) based control law to highlight the clear advantages of the MPC formulation.
Keywords: autonomous vehicles, embedded optimization, real-time experiment, rendezvous and docking, space robotics
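As an illustration of the kind of constrained MPC formulation mentioned above, the sketch below solves a finite-horizon convex problem over in-plane Clohessy-Wiltshire relative dynamics with a thruster saturation bound, using cvxpy. The mean motion, discretization, horizon, weights, and bounds are assumptions for illustration; the paper's actual formulation also covers LOS constraints and the dual-quaternion coupling, which are omitted here.

```python
import numpy as np
import cvxpy as cp

n = 0.0011      # assumed mean motion of the target orbit [rad/s]
dt = 10.0       # assumed discretization step [s]

# Continuous-time in-plane Clohessy-Wiltshire matrices (state: x, y, xdot, ydot)
Ac = np.array([[0, 0, 1, 0],
               [0, 0, 0, 1],
               [3 * n**2, 0, 0, 2 * n],
               [0, 0, -2 * n, 0]])
Bc = np.array([[0, 0], [0, 0], [1, 0], [0, 1]])

A = np.eye(4) + dt * Ac    # simple forward-Euler discretization (fine for a sketch)
B = dt * Bc

def mpc_step(x0, N=30, u_max=0.01):
    X = cp.Variable((4, N + 1))
    U = cp.Variable((2, N))
    cost, cons = 0, [X[:, 0] == x0]
    for k in range(N):
        # state regulation plus a heavy control penalty as a fuel proxy
        cost += cp.sum_squares(X[:, k]) + 100 * cp.sum_squares(U[:, k])
        cons += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k],
                 cp.norm(U[:, k], "inf") <= u_max]      # thruster saturation
    cp.Problem(cp.Minimize(cost), cons).solve()
    return U.value[:, 0]   # apply the first control, then re-solve (receding horizon)
```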
Procedia PDF Downloads 124
5721 An Observation Approach of Reading Order for Single Column and Two Column Layout Template
Authors: In-Tsang Lin, Chiching Wei
Abstract:
Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. From a survey of the literature, we find that state-of-the-art algorithms cannot produce an accurate reading order in portable document format (PDF) files with rich formats and diverse layout arrangements. In recent years, most studies on the analysis of reading order have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to the problem of detecting reading order relationships between logical components, such as cross-references. Over three years of development, the company Foxit has demonstrated the layout recognition (LR) engine in revision 20601, striving for reading-order accuracy. The bounding box of each paragraph can be obtained correctly by the Foxit LR engine, but the reading-order result is not always correct for single-column and two-column layout formats, due to the table issue, the formula issue, the multiple mini separated bounding box issue, and the footer issue. Thus, an algorithm is developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a creative observation method (here called the MESH method) is provided to open a new direction in reading-order research. Two important parameters are introduced: one is the number of bounding boxes on the right side of the present bounding box (NRight), and the other is the number of bounding boxes under the present bounding box (Nunder). The normalized x-value (x divided by the whole width), the normalized y-value (y divided by the whole height), and the x- and y-positions of each bounding box were also taken into consideration. Initial experimental results for the single-column layout format demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using our proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, the preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) using our proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple mini separated bounding box issue can be solved by using the MESH method. However, three issues still cannot be solved: the table issue, the formula issue, and the random multiple mini separated bounding boxes. The detection of the table position and the recognition of the table structure are out of the scope of this paper and require separate research. In the future, the chosen tasks are how to detect the table position in the page and how to extract the content of the table.
Keywords: document processing, reading order, observation method, layout recognition
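A hedged sketch of the two MESH parameters is given below: for each paragraph bounding box we count the boxes to its right (NRight) and below it (Nunder), together with the normalized positions, and then apply a simple column-aware sort. The overlap rules and the sort heuristic are assumptions, since the abstract does not spell them out.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x0: float
    y0: float
    x1: float
    y1: float   # top-left origin; y grows downward

def v_overlap(a, b):
    return not (b.y1 <= a.y0 or b.y0 >= a.y1)

def h_overlap(a, b):
    return not (b.x1 <= a.x0 or b.x0 >= a.x1)

def mesh_features(boxes, page_w, page_h):
    feats = []
    for a in boxes:
        n_right = sum(1 for b in boxes
                      if b is not a and b.x0 >= a.x1 and v_overlap(a, b))
        n_under = sum(1 for b in boxes
                      if b is not a and b.y0 >= a.y1 and h_overlap(a, b))
        feats.append((n_right, n_under, a.x0 / page_w, a.y0 / page_h))
    return feats

def reading_order(boxes, page_w, page_h):
    feats = mesh_features(boxes, page_w, page_h)
    # Heuristic: boxes with more boxes to their right (a left column) come first,
    # then top to bottom, then left to right.
    return sorted(range(len(boxes)),
                  key=lambda i: (-feats[i][0], feats[i][3], feats[i][2]))
```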
Procedia PDF Downloads 181
5720 Weight Estimation Using the K-Means Method in Steelmaking’s Overhead Cranes in Order to Reduce Swing Error
Authors: Seyedamir Makinejadsanij
Abstract:
One of the most important factors in the production of quality steel is knowing the exact weight of the steel in the steelmaking area. In this study, a calculation method is presented to estimate the exact weight of the melt as well as of the objects transported by the overhead crane. Iran Alloy Steel Company's steelmaking area has three 90-ton cranes, which are responsible for transferring the ladles and ladle caps between 34 areas in the melt shop. Each crane is equipped with a Disomat Tersus weighing system that calculates and displays the weight in real time. A moving object has a variable measured weight due to swinging, and the weighing system has an error of about ±5%. This means that when an object weighing about 80 tons is moved by a crane, the device (Disomat Tersus system) reads about 4 tons more or 4 tons less, and this is the biggest problem in calculating the real weight. The k-means algorithm, an unsupervised clustering method, was used here. The best result was obtained by considering 3 centers. Compared to the ordinary average (one center) or two, four, five, and six centers, the best answer is obtained with 3 centers, which is logically due to the elimination of noise above and below the real weight. Every day, a standard weight is moved by the working cranes to test and calibrate them. The results show that the error is about 40 kilograms per 60 tons (the standard weight). As a result, with this method, the accuracy of the moving weight is calculated as 99.95%. K-means is used to calculate the exact mean for each object. The stopping criterion of the algorithm is either 1000 iterations or no points moving between the clusters. As a result of the implementation of this system, the crane operator does not stop while moving objects and continues his activity regardless of weight calculations. Also, production speed increased, and human error decreased.
Keywords: k-means, overhead crane, melt weight, weight estimation, swing problem
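The idea of the 3-center estimate can be sketched as follows: cluster the noisy weight readings from one move into k = 3 groups (roughly low-swing, near-true, and high-swing readings) and report the center of the most populated cluster. The decaying-swing test signal and the initialization below are illustrative assumptions.

```python
import numpy as np

def kmeans_1d(samples, k=3, max_iter=1000):
    x = np.asarray(samples, dtype=float)
    centers = np.percentile(x, np.linspace(10, 90, k))  # spread the initial centers
    labels = None
    for _ in range(max_iter):  # stop criterion: 1000 iterations or stable assignment
        new_labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break
        labels = new_labels
        centers = np.array([x[labels == j].mean() if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, labels

def estimate_weight(samples):
    centers, labels = kmeans_1d(samples, k=3)
    densest = np.argmax(np.bincount(labels, minlength=3))
    return centers[densest]   # the most populated cluster tracks the true weight

# Synthetic test: an 80 t load with a decaying swing and sensor noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 40.0, 800)
samples = 80_000 + 4_000 * np.exp(-t / 8) * np.sin(2.5 * t) + rng.normal(0, 100, t.size)
print(round(estimate_weight(samples)))   # expected: close to 80000 kg
```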
Procedia PDF Downloads 90
5719 Hypersonic Flow of CO2-N2 Mixture around a Spacecraft during the Atmospheric Reentry
Authors: Zineddine Bouyahiaoui, Rabah Haoui
Abstract:
The aim of this work is to analyze the flow around an axisymmetric blunt body, taking into account chemical and vibrational nonequilibrium. This work concerns the entry of a spacecraft into the atmosphere of the planet Mars. Since the equations involved are nonlinear partial differential equations, the finite volume method is the only practical way to solve this problem. The choice of the mesh and of the CFL number is a condition for convergence to the stationary solution.
Keywords: blunt body, finite volume, hypersonic flow, viscous flow
Procedia PDF Downloads 234
5718 Genetic Algorithm to Construct and Enumerate 4×4 Pan-Magic Squares
Authors: Younis R. Elhaddad, Mohamed A. Alshaari
Abstract:
Since 2700 B.C., the problem of constructing magic squares has attracted many researchers; magic squares are one of the most difficult challenges for mathematicians. In this work, we describe how to construct and enumerate pan-magic squares using a genetic algorithm with a new chromosome encoding technique. The results were promising and obtained within a reasonable time.
Keywords: genetic algorithm, magic square, pan-magic square, computational intelligence
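For orientation, a toy version of such a search is sketched below. The magic constant for a 4×4 square over 1..16 is 34, and a pan-magic square must reach 34 on all 4 rows, 4 columns, and 8 (broken) diagonals; the cost function counts the total deviation. The flat-permutation encoding with swap mutation is an assumption for illustration; the paper's new chromosome encoding is not described in the abstract.

```python
import random

MAGIC = 34

def lines(sq):   # sq: flat list of 16 ints, row-major
    g = [sq[4 * r: 4 * r + 4] for r in range(4)]
    for r in range(4):                      # rows
        yield g[r]
    for c in range(4):                      # columns
        yield [g[r][c] for r in range(4)]
    for d in range(4):                      # broken diagonals, both directions
        yield [g[r][(d + r) % 4] for r in range(4)]
        yield [g[r][(d - r) % 4] for r in range(4)]

def cost(sq):    # 0 means pan-magic
    return sum(abs(sum(line) - MAGIC) for line in lines(sq))

def search(pop_size=200, gens=5000, seed=None):
    rng = random.Random(seed)
    pop = [rng.sample(range(1, 17), 16) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        if cost(pop[0]) == 0:
            return pop[0]
        survivors = pop[: pop_size // 4]
        children = []
        while len(children) < pop_size - len(survivors):
            child = rng.choice(survivors)[:]
            i, j = rng.sample(range(16), 2)   # swap mutation keeps the permutation valid
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)
```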
Procedia PDF Downloads 576
5717 Design Thinking and Requirements Engineering in Application Development: Case Studies in Brazil
Authors: V. Prodocimo, A. Malucelli, S. Reinehr
Abstract:
Organizations, driven by business digitization, have in software their main core of value generation and their main channel of communication with clients. Software, besides responding to momentary market needs, spans an extensive product family, ranging from mobile applications to multilateral platforms. Thus, the software specification needs to represent solutions focused on consumer problems and market needs. However, requirements engineering, whose approach is strongly linked to technology, becomes deficient and ineffective when the problem is not well defined or when an innovative solution is sought, and thus needs a complementary approach. Research has cited the combination of design thinking and requirements engineering, often positioning design thinking as a support technique for the elicitation step; however, little is known about the real benefits and challenges that this combination can bring. From the point of view of the development process, there is little empirical evidence of how design thinking interacts with requirements engineering. Given this scenario, this paper aims to understand how design thinking practices are applied in each of the requirements engineering stages in software projects. To elucidate these interactions, a qualitative and exploratory study was carried out through the application of the case study method in IT organizations in Brazil that work in the development of software projects. The results indicate that design thinking has aided requirements engineering, both in projects that adopt agile methods and in those that adopt the waterfall process, bringing a complementary mode of thought that seeks to build the best software solution design for business problems. It was also possible to conclude that organizations choose to use design thinking not based on a specific software family (e.g., mobile or desktop applications), but given the characteristics of the software projects, such as the vague nature of the problem, complex problems, and/or the need for innovative solutions.
Keywords: software engineering, requirements engineering, design thinking, innovative solutions
Procedia PDF Downloads 125
5716 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning
Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho
Abstract:
Image super-resolution is one of the most common computer vision problems, with many important applications. Generative adversarial networks (GANs) have enabled remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR models (mainly the generators) lead to performance degradation and increased energy consumption, making it difficult to deploy them on resource-constrained devices. To relieve this problem, in this paper we introduce an optimized and highly efficient architecture for the SR-GAN generator model by utilizing model compression techniques, namely knowledge distillation and pruning, which work together to reduce the storage requirements of the model and also improve its performance. Our method begins with distilling the knowledge from a large pre-trained model to a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for ×4 super-resolution tasks.
Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning
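A hedged PyTorch sketch of the two compression steps is shown below: a distillation step that mixes a pixel loss against the ground truth with an imitation loss against the frozen teacher, and one round of global magnitude pruning. The loss weights, the L1 choice, and the pruning mechanics are illustrative assumptions; the paper's exact loss functions and schedules are not given in the abstract.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, lr_batch, hr_batch, optimizer, alpha=0.5):
    teacher.eval()
    with torch.no_grad():
        sr_teacher = teacher(lr_batch)                # soft targets from the teacher
    sr_student = student(lr_batch)
    pixel_loss = F.l1_loss(sr_student, hr_batch)      # fidelity to the ground truth
    distill_loss = F.l1_loss(sr_student, sr_teacher)  # imitation of the teacher
    loss = (1.0 - alpha) * pixel_loss + alpha * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def magnitude_prune(model, ratio=0.18):
    # Zero out the globally smallest-magnitude weights (the tensors stay dense).
    weights = [p for name, p in model.named_parameters() if name.endswith("weight")]
    all_vals = torch.cat([p.detach().abs().flatten() for p in weights])
    k = max(1, int(ratio * all_vals.numel()))
    threshold = all_vals.kthvalue(k).values           # k-th smallest magnitude
    with torch.no_grad():
        for p in weights:
            p.mul_((p.abs() > threshold).float())
```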
Procedia PDF Downloads 96
5715 Design of an Automatic Bovine Feeding Machine
Authors: Huseyin A. Yavasoglu, Yusuf Ziya Tengiz, Ali Göksenli
Abstract:
In this study, an automatic feeding machine for different types and classes of bovine animals is designed. The daily nutrition of a bovine consists of grass, corn, straw, silage, oats, wheat, and different vitamins and minerals. The amount and mixture of each nutrient depend on different parameters of the animal: age, sex, weight, and maternity, as well as the outside temperature. The problem on a farm is to constitute the correct mixture and amount of nutrition for each animal; faulty nutrition will cause insufficient feeding of the animal, resulting in an unhealthy bovine. To solve this problem, a new automatic feeding machine is designed. The machine travels on four tires and is pulled by a tractor. The carrier consists of eight bins, each of which carries one nutrient type. The capacity of each unit is 250 kg. At the bottom of each chamber is a sensor measuring the weight of the food inside. A funnel at the bottom of each chamber is opened and closed by a valve. Each animal carries an RFID tag with its ID on its ear. A receiver on the feeding machine reads this ID, and from the information previously entered by the operator (veterinarian), the system determines the amount of each nutrient to be given to the selected animal for feeding. In the system, each bin opens its exit gate with the help of its valve, under the control of a PLC (Programmable Logic Controller). The amount of each nutrient type is controlled by measuring the open/close time. The exit canals of the bins are collected in a reservoir. To achieve a homogeneous mixture, the collected feed is mixed by a worm gear. The mixture is then transported with the help of a funnel to the feeding unit of the animal. The feeding process can be performed in 100 seconds. After feeding an animal, the tractor pulls the traveling machine to the next animal. With the help of this system, animals can be fed the right amount and mixture of nutrition.
Keywords: bovine, feeding, nutrition, transportation, automatic
Procedia PDF Downloads 342
5714 Discrimination and Classification of Vestibular Neuritis Using Combined Fisher and Support Vector Machine Model
Authors: Amine Ben Slama, Aymen Mouelhi, Sondes Manoubi, Chiraz Mbarek, Hedi Trabelsi, Mounir Sayadi, Farhat Fnaiech
Abstract:
Vertigo is a sensation of feeling off balance; the cause of this symptom is very difficult to interpret and needs a complementary examination. Generally, vertigo is caused by an ear problem. Some of the most common causes include benign paroxysmal positional vertigo (BPPV), Meniere's disease, and vestibular neuritis (VN). In clinical practice, different tests of the videonystagmography (VNG) technique are used to detect the presence of vestibular neuritis. The topographical diagnosis of this disease presents a large diversity of characteristics, which poses a mixture of problems for the usual etiological analysis methods. In this study, a vestibular neuritis analysis method is proposed for videonystagmography applications, using an estimation of pupil movements in the case of uncontrolled motion, to obtain efficient and reliable diagnosis results. First, an estimation of the pupil displacement vectors using the Hough Transform (HT) is performed to approximate the location of the pupil region. Then, temporal and frequency features are computed from the rotation angle variation of the pupil motion. Finally, optimized features are selected using Fisher criterion evaluation for the discrimination and classification of the VN disease. Experimental results are analyzed using two categories: normal and pathologic. By classifying the reduced features using the Support Vector Machine (SVM), a classification accuracy of 94% is achieved. Compared to recent studies, the proposed expert system is extremely helpful and highly effective in resolving the problem of VNG analysis and providing an accurate diagnosis for medical devices.
Keywords: nystagmus, vestibular neuritis, videonystagmographic system, VNG, Fisher criterion, support vector machine, SVM
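The Fisher-plus-SVM pipeline described above can be sketched as follows with scikit-learn: rank features by the two-class Fisher score, keep the top-ranked ones, and classify with an RBF SVM. The number of retained features, the kernel, and the split are illustrative assumptions; the actual feature matrix would come from the VNG temporal and frequency features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def fisher_scores(X, y):
    # For two classes (0 = normal, 1 = pathologic): (m1 - m2)^2 / (s1^2 + s2^2)
    c0, c1 = X[y == 0], X[y == 1]
    num = (c0.mean(axis=0) - c1.mean(axis=0)) ** 2
    den = c0.var(axis=0) + c1.var(axis=0) + 1e-12
    return num / den

def classify(X, y, n_keep=10):
    keep = np.argsort(fisher_scores(X, y))[::-1][:n_keep]  # top-ranked features
    Xr = X[:, keep]
    Xtr, Xte, ytr, yte = train_test_split(Xr, y, test_size=0.3, random_state=0,
                                          stratify=y)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr, ytr)
    return accuracy_score(yte, clf.predict(Xte)), keep
```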
Procedia PDF Downloads 136
5713 Role of Family in Child Behavior Problems: A General Overview of Dissertations and Thesis at Turkey
Authors: Selen Demirtas Zorbaz, Ozlem Ulas
Abstract:
Examining the reasons for child behaviour problems has long been one of the focuses of psychology and related disciplines. It can be said that there are many reasons for child behaviour problems, and familial factors might be the leading ones. Taking into account the prevalence of children with behaviour problems in Turkey, it is important to carry out studies that set out the reasons for these problems. From this point of view, the aim of this study is to examine dissertations and theses that address the relationship between the problem behaviours of children (12 years old and younger) and teenagers (12-18 years old) and familial factors. For that purpose, 46 dissertations, chosen according to the study criteria out of 141 dissertations retrieved using the keywords 'behaviour problems' and 'behaviour disorder' at the Higher Education Thesis Centre between the years 1989 and 2016, were taken into the scope of the study. A 'Thesis Examination Draft Form' was prepared to be used as the data collection tool. For the analysis of the data, percentage and frequency analysis methods were used. When the results of these studies are evaluated as a whole, it is seen that all the dissertations and theses are descriptive studies, and no studies designed as experimental were encountered. Looking at the distribution of dissertations by year, the first thesis was completed in 1989, and the largest numbers of dissertations were completed in 2014 and 2016. Looking at the departments in which the dissertations were written, dissertations and theses were produced in many different fields, ranging from psychology to special education. In addition, when the groups within the scope of the dissertation and thesis research are examined, the children most often studied are below the age of 12, and most of the studies are master's theses. When the dissertations and theses are examined by topic, the most-studied topics are demographic variables such as gender, whether the family is fragmented or not, the education level of the family, and the parents' attitudes. The obtained findings are examined in the light of the literature.
Keywords: family, child behaviour problem, dissertations, thesis
Procedia PDF Downloads 233
5712 A Multicriteria Analysis of Energy Poverty Index: A Case Study of Non-interconnected Zones in Colombia
Authors: Angelica Gonzalez O, Leonardo Rivera Cadavid, Diego Fernando Manotas
Abstract:
Energy poverty concerns a population that does not have access to modern energy services. In particular, an area of a country that is not connected to the national electricity grid is known as a Non-Interconnected Zone (NIZ). Access to electricity has a significant impact on the welfare and development opportunities of the population. Different studies have shown that most health problems have an empirical cause-and-effect relationship with multidimensional energy poverty. Likewise, research reviewing the consequences of lacking access to electricity has found statistically significant relationships between energy poverty and sources of drinking water, access to clean water, risk of mosquito bites, obesity, sterilization, marital status, occupation, and residence. Therefore, extensive research has been conducted on the construction of index-based energy poverty measures; examples include the Multidimensional Energy Poverty Index (MEPI), the Composite Energy Poverty Index (CEPI), and the Low Income High Costs (LIHC) indicator. For this purpose, this study analyzes the energy poverty index using a multicriteria analysis that determines the set of feasible alternatives - with Colombia's NIZ as a case study - and the set of relevant criteria characterizing the NIZ, from which a prioritization is obtained to determine how well each alternative performs on each criterion. Additionally, this study considers the installation of Micro-Grids (MG). An MG is a local electrical grid able to operate both connected to the main grid and in island mode, which makes it a straightforward solution to this problem. Drawing on these insights, this study compares an energy poverty index that accounts for MG installation and calculates the impact of different criteria on an energy poverty index in the NIZ.
Keywords: multicriteria, energy poverty, rural, microgrids, non-interconnected zones
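The abstract does not name the multicriteria method used, so as an illustration only, the sketch below ranks hypothetical NIZ alternatives with TOPSIS, one common MCDA technique. The zone names, criteria, weights, and scores are all invented for the example.

```python
# Illustrative multicriteria prioritization of hypothetical NIZ alternatives
# using TOPSIS (the paper does not specify its MCDA method). All criteria
# names, weights, and decision-matrix values below are invented.
import numpy as np

alternatives = ["Zone A", "Zone B", "Zone C"]
criteria = ["electricity access", "energy cost", "grid distance"]  # hypothetical
benefit = np.array([True, False, False])    # only "access" is better when higher
weights = np.array([0.5, 0.3, 0.2])

# Decision matrix: one row per alternative, one column per criterion.
X = np.array([[0.60, 120.0, 35.0],
              [0.35, 180.0, 80.0],
              [0.80,  90.0, 10.0]])

norm = X / np.linalg.norm(X, axis=0)         # vector normalization per criterion
V = norm * weights                           # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to the ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)     # distance to the anti-ideal solution
closeness = d_neg / (d_pos + d_neg)          # higher closeness = higher priority

for name, score in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```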
Procedia PDF Downloads 1175711 Space Telemetry Anomaly Detection Based On Statistical PCA Algorithm
Authors: Bassem Nassar, Wessam Hussein, Medhat Mokhtar
Abstract:
The crucial concern of satellite operations is to ensure the health and safety of satellites. The worst case from this perspective is probably the loss of a mission, but the more common interruption of satellite functionality can also result in compromised mission objectives. All the data acquired from a spacecraft are known as Telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e., a status or a measurement) to be checked. Consequently, TM monitoring systems are continuously being improved in order to reduce the time required to respond to changes in a satellite's state of health. A fast assessment of the current state of the satellite is thus very important in order to respond to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle this problem coherently. Extracting information from such rich data sources with advanced statistical methodologies is a challenging task due to the massive volume of data. To solve this problem, this paper presents an unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to an actual remote sensing spacecraft: data from its Attitude Determination and Control System (ADCS) were acquired under two operating conditions, normal and faulty. Models were built and tested under these conditions, and the results show that the algorithm can successfully differentiate between them. Furthermore, the algorithm provides competent predictive information as well as added insight and physical interpretation of the ADCS operation.
Keywords: space telemetry monitoring, multivariate analysis, PCA algorithm, space operations
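A common way to realize this scheme is to fit PCA on nominal telemetry and flag samples whose reconstruction residual (the Q, or squared prediction error, statistic) exceeds a control limit learned from the nominal data. The sketch below shows that pattern under stated assumptions: the telemetry matrices are synthetic stand-ins, and the 99th-percentile threshold is an illustrative choice, not the paper's.

```python
# Minimal sketch of PCA-based telemetry anomaly detection (not the authors'
# implementation). The telemetry data and the 99th-percentile control limit
# are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
normal = rng.normal(size=(500, 12))    # stand-in for nominal ADCS telemetry (12 parameters)
faulty = normal[:100] + rng.normal(3.0, 1.0, (100, 12)) * (rng.random((100, 12)) < 0.2)

scaler = StandardScaler().fit(normal)
pca = PCA(n_components=0.95).fit(scaler.transform(normal))  # retain 95% of variance

def spe(X):
    """Squared prediction error (Q statistic): residual after PCA reconstruction."""
    Z = scaler.transform(X)
    recon = pca.inverse_transform(pca.transform(Z))
    return np.sum((Z - recon) ** 2, axis=1)

threshold = np.percentile(spe(normal), 99)   # empirical control limit from nominal data
flags = spe(faulty) > threshold
print(f"flagged {flags.sum()} of {len(faulty)} faulty samples")
```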
Procedia PDF Downloads 4155710 Environmental and Formal Conditions for the Development of Blue-green Infrastructure (BGI) in the Cities of Central Europe on the Example of Poland
Authors: Magdalena Biela, Marta Weber-Siwirska, Edyta Sierka
Abstract:
The current trend in Central European countries, as in other regions of the world, is for people to migrate to cities; as a result, the urban population is projected to reach 70% of the total by 2050. Owing to this tendency, and given high real estate prices and limited reserves of city green areas, greenery and agricultural soil adjacent to cities are being devoted to housing projects, while city centres are expected to undergo partial depopulation. Urban heat islands and phenomena such as torrential rains may cause serious damage and may even endanger the life and health of the inhabitants. Because of these tangible effects of climate change, residents expect local governments to take action to develop green infrastructure (GI). The main purpose of our research has been to assess the degree of readiness on the part of local government in Poland to develop BGI. A questionnaire using the CAWI method was prepared, and a survey was carried out. The target group were town hall employees in all 380 powiat cities and towns (county centres) in Poland. The form contained 14 questions covering, among others, actions taken to support the development of GI and ways of motivating residents to take such actions. 224 respondents replied to the questions. The results show that 52% of the cities and towns have taken or intend to take measures favouring the development of green spaces. Currently, the installation of green roofs and living walls is carried out by only 6 Polish cities, and a few more are at the stage of preparing appropriate regulations. The problem of rainwater retention is much more widespread: among the municipalities declaring any activities for the benefit of GI, approximately 42% have decided to work on this problem. Over 19% of the respondents are planning an increase in the area occupied by green spaces, 14% the installation of green roofs, and 12% the redevelopment of city greenery. It is encouraging that 67% of the respondents are willing to acquire knowledge about BGI by taking part in educational activities at both the national and international levels. There are many ways to support GI development; the most common type of support in the cities and towns surveyed is co-financing (35%), followed by full financing of projects (11%), while about 15% of the cities declare only advisory support. Thus, the problem of GI in Central European cities is at an initial stage of development and requires advanced measures and the implementation of solutions proven in other European and world countries, following the concept of Nature-based Solutions.
Keywords: city/town, blue-green infrastructure, green roofs, climate change adaptation
Procedia PDF Downloads 212