Search results for: embedded network
2067 Adversary Emulation: Implementation of Automated Countermeasure in CALDERA Framework
Authors: Yinan Cao, Francine Herrmann
Abstract:
Adversary emulation is a very effective, concrete way to evaluate the defenses of an information system or network. It consists of building an emulator which, depending on the vulnerabilities of a target system, can detect and execute a set of identified attacks. However, emulating an adversary is very costly in terms of time and resources. Verifying the information of each technique and building up the countermeasures in the middle of the test must also be accomplished manually. In this article, a synthesis of previous MITRE research on the creation of the ATT&CK matrix serves as the knowledge base of known techniques, and CALDERA, a well-designed adversary emulation software based on the ATT&CK matrix, is used as our platform. Inspired and guided by the previous study, a CALDERA plugin called Tinker is implemented, which aims to help the tester obtain more information, as well as the mitigation, for each technique used in the previous operation. Furthermore, optional countermeasures for some techniques are implemented and preset in Tinker in order to facilitate and speed up the improvement of the tested system's defenses.
Keywords: automation, adversary emulation, CALDERA, countermeasures, MITRE ATT&CK
Procedia PDF Downloads 208
2066 Utilizing Grid Computing to Enhance Power Systems Performance
Authors: Rafid A. Al-Khannak, Fawzi M. Al-Naima
Abstract:
Power load is one of the most important controlling quantities: it determines power demand and illustrates power usage, shaping the power market. Hence, power load forecasting is the parameter that facilitates understanding and analyzing all these aspects. In this paper, power load forecasting is solved in the MATLAB environment by constructing a neural network for the power load to find an accurate simulated solution with minimum error. The aim of this paper is a developed algorithm that makes the load forecasting application faster. The algorithm enables the MATLAB power application to be executed by multiple machines in a Grid computing system, accomplishing the task in much less time, at lower cost, and with high accuracy and quality. Grid computing, the modern distributed computing technology, has been used to enhance the performance of power applications by utilizing idle and willing Grid contributor(s) that share computational power resources.
Keywords: DeskGrid, Grid Server, idle contributor(s), grid computing, load forecasting
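The abstract's forecasting setup can be sketched outside MATLAB as well. Below is a minimal, illustrative feed-forward network (one hidden layer, trained by gradient descent) that predicts the next hour's load from the previous three hours. The synthetic sinusoidal "load" series, the network size, and the learning rate are all assumptions for this sketch, not details from the paper:

```python
import math
import random

# Synthetic hourly "load": daily sinusoid plus a slow trend (a hypothetical
# stand-in for real demand data, scaled to roughly [-0.2, 0.4]).
def load(hour):
    return (100 + 20 * math.sin(2 * math.pi * hour / 24) + 0.1 * hour) / 100 - 1

class TinyNet:
    """One-hidden-layer network: next hour's load from the last 3 hours."""
    def __init__(self, n_in=3, n_hid=5, seed=0):
        rnd = random.Random(seed)
        self.w1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [rnd.uniform(-0.5, 0.5) for _ in range(n_hid)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, self.h)) + self.b2

    def train_step(self, x, target, lr=0.05):
        err = self.forward(x) - target          # dLoss/dy for 0.5 * err^2
        for j, h in enumerate(self.h):          # backprop: output, then tanh layer
            grad_h = err * self.w2[j] * (1 - h * h)
            self.w2[j] -= lr * err * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * grad_h * xi
            self.b1[j] -= lr * grad_h
        self.b2 -= lr * err
        return err * err

def epoch_mse(net, hours=range(3, 200)):
    errs = [net.train_step([load(t - 3), load(t - 2), load(t - 1)], load(t))
            for t in hours]
    return sum(errs) / len(errs)

net = TinyNet()
first = epoch_mse(net)      # mean squared error over the first training pass
for _ in range(20):
    last = epoch_mse(net)   # error after 21 passes in total
```

The paper distributes such a computation across Grid contributors; this sketch only shows the per-machine forecasting core.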
Procedia PDF Downloads 475
2065 Wastewater Treatment Using Sodom Apple Tree in Arid Regions
Authors: D. Oulhaci, M. Zehah, S. Meguellati
Abstract:
Collected by the sewerage network, wastewater contains many polluting elements arising from domestic, commercial, industrial, and agricultural activities. These waters, if discharged directly into the natural environment, pollute it; hence the need to transport them to a treatment plant to undergo several treatment phases before discharge. The objective of this study is to highlight the purification performance of the Sodom apple tree, a very common shrub in the regions of Djanet and Illizi in Algeria. As material, we used small buckets filled with sand over a gravel substrate, in which we sowed seeds and let them grow for a few weeks. Water is supplied under a horizontal subsurface flow regime. The urban wastewater used has undergone preliminary treatment. The water obtained after purification is collected, using a tap, in a container placed under the bucket. The comparison between the inlet and outlet waters showed that the presence of the Sodom apple tree contributes to reducing the pollutant parameters at significant rates: 81% for COD, 84% for BOD, 95% for SM, 82% for NO₂⁻, and 85% for NO₃⁻, so the water can be released into the environment without risk of pollution.
Keywords: arid zone, pollution, purification, re-use, wastewater
Procedia PDF Downloads 80
2064 Expression of Ki-67 in Multiple Myeloma: A Clinicopathological Study
Authors: Kangana Sengar, Sanjay Deb, Ramesh Dawar
Abstract:
Introduction: Ki-67 can be a useful marker in determining proliferative activity in patients with multiple myeloma (MM). However, using Ki-67 alone results in the erroneous inclusion of non-myeloma cells, leading to falsely high counts. We have used dual IHC (immunohistochemistry) staining with Ki-67 and CD138 to enhance specificity in assessing the proliferative activity of bone marrow plasma cells. Aims and objectives: To estimate the proportion of proliferating (Ki-67 expressing) plasma cells in patients with MM and to correlate Ki-67 with other known prognostic parameters. Materials and Methods: Fifty FFPE (formalin-fixed, paraffin-embedded) blocks of trephine biopsies of cases diagnosed as MM from 2010 to 2015 are subjected to H&E staining and dual IHC staining for CD138 and Ki-67. H&E staining is done to evaluate various histological parameters, such as the percentage of plasma cells, the pattern of infiltration (nodular, interstitial, mixed, and diffuse), and routine parameters of marrow cellularity and hematopoiesis. Clinical data are collected from patient records from the Medical Record Department. Each CD138-expressing cell (cytoplasmic, red) is scored as a proliferating plasma cell (containing a brown Ki-67 nucleus) or a non-proliferating plasma cell (containing a blue, counter-stained, Ki-67-negative nucleus). Ki-67 is measured as percentage positivity, with a maximum score of one hundred percent and a minimum of zero percent. The intensity of staining is not relevant. Results: Statistically significant correlations of Ki-67 with D-S stage (Durie & Salmon stage) I vs. III (p=0.026), ISS (International Staging System) stage I vs. III (p=0.019), β2m (p=0.029), and the percentage of plasma cells (p < 0.001) are seen. No statistically significant correlation is seen between Ki-67 and hemoglobin, platelet count, total leukocyte count, total protein, albumin, serum calcium, serum creatinine, serum LDH, blood urea, or the pattern of infiltration.
Conclusion: The Ki-67 index correlated with other known prognostic parameters. However, it is not determined routinely in patients with MM due to the little information available regarding its relevance and the paucity of studies correlating it with other known prognostic factors in MM patients. To the best of our knowledge, this is the first study in India using dual IHC staining for Ki-67 and CD138 in MM patients. Routine determination of Ki-67 will help to identify patients who may benefit from more aggressive therapy. Recommendation: In this study, follow-up of patients is not included, and the sample size is small. A study with a larger sample size and long follow-up is advocated to establish Ki-67 as a prognostic marker of survival in patients with multiple myeloma.
Keywords: bone marrow, dual IHC, Ki-67, multiple myeloma
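As a concrete illustration of the scoring rule described above (percentage positivity over all CD138-positive cells, with staining intensity ignored), the arithmetic reduces to a simple proportion; the cell counts below are hypothetical:

```python
def ki67_index(brown_nuclei: int, blue_nuclei: int) -> float:
    """Ki-67 index: Ki-67-positive (brown) plasma cell nuclei as a
    percentage of all CD138+ plasma cells (brown + blue counter-stained).
    Staining intensity is deliberately ignored, as in the study."""
    total = brown_nuclei + blue_nuclei
    if total == 0:
        raise ValueError("no CD138+ plasma cells counted")
    return 100.0 * brown_nuclei / total

# Hypothetical field: 12 proliferating and 88 non-proliferating plasma cells.
index = ki67_index(12, 88)  # 12.0 percent
```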
Procedia PDF Downloads 155
2063 Glossematics and Textual Structure
Authors: Abdelhadi Nadjer
Abstract:
The structure of the text according to the systemic school (glossematics, Hjelmslev). At the beginning of the note we take a cursory look at the concepts of general linguistics, the science that conducts the scientific study of human language based on the description and observation of facts, away from prescriptive approaches. We then give a detailed overview of the founder of the systemic school, its most important concepts, and its methods, curriculum, theory, and analysis, which extend to all the humanities, each practical action being offset by a theoretical one, and whose procedure can be analyzed through the elements it posits. As another approach, we discuss its links with other schools of linguistics, where it rests on a sharp criticism of earlier linguistics and takes into consideration both the field of language and what lies outside it, the language network and its participation in (non-linguistic) actions. After that we begin our glossematic analysis of the structure of the text: the text is divided into types, which in turn are divided into classes, and a class should not carry a contradiction and should be inclusive. It is on these materials that the relationships combining within language are described, and the analysis seeks to describe these relations and identify them.
Keywords: text, language schools, linguistics, human language
Procedia PDF Downloads 459
2062 Sub-Optimum Safety Performance of a Construction Project: A Multilevel Exploration
Authors: Tas Yong Koh, Steve Rowlinson, Yuzhong Shen
Abstract:
In construction safety management, safety climate has long been linked to workers' safety behaviors and performance. For this reason, the safety climate concept and its tools have been used as heuristics to diagnose a range of safety-related issues by some progressive contractors in Hong Kong and elsewhere. However, as a diagnostic tool, safety climate tends to treat the different components of the climate construct in a linear fashion. Safety management in construction projects, in reality, is a multi-faceted and multilevel phenomenon that resembles a complex system. Hence, understanding safety management in construction projects requires not only an understanding of safety climate but also of the organizational-systemic nature of the phenomenon. Our involvement, diagnoses, and interpretations of a range of safety climate-related issues in an infrastructure construction project, which culminated in the project's sub-optimum safety performance, brought about this revelation. In this study, a range of data types was collected from various hierarchies of the project site organization, including frontline workers and supervisors from the main and sub-contractors, as well as client supervisory personnel. Data collection was performed through the administration of a safety climate questionnaire, interviews, observation, and document study. The findings collectively indicate that what emerged in parallel with the seemingly linear climate-based exploration is an exposition of the organizational-systemic nature of the phenomenon. The results indicate that mismatched climate perceptions, insufficient work planning and risk management, mixed safety leadership, negative workforce attributes, lapsed safety enforcement, and resource shortages collectively give rise to the project's sub-optimum safety performance.
From the dynamic causation and multilevel perspectives, the analyses show that individual-, group-, and organizational-level issues are interrelated, and these interrelationships are linked to a negative safety climate. Hence, the adoption of both perspectives has enabled a fuller understanding of the phenomenon of safety management, which points to the need for an organizational-systemic intervention strategy. The core message is that intervention at the individual level will meet with only limited success if the risks embedded at the higher levels of the group and project organization are not addressed. The findings can be used to guide the effective development of safety infrastructure by linking different levels of systems in a construction project organization.
Keywords: construction safety management, dynamic causation, multilevel analysis, safety climate
Procedia PDF Downloads 175
2061 Towards a Security Model against Denial of Service Attacks for SIP Traffic
Authors: Arellano Karina, Diego Avila-Pesántez, Leticia Vaca-Cárdenas, Alberto Arellano, Carmen Mantilla
Abstract:
Nowadays, security threats in Voice over IP (VoIP) systems are an essential and latent concern for people in charge of security in a corporate network because, every day, new Denial-of-Service (DoS) attacks are developed. These attacks affect the business continuity of an organization with regard to the confidentiality, availability, and integrity of services, causing frequent losses of both information and money. The purpose of this study is to establish the measures necessary to mitigate DoS threats, which affect the availability of VoIP systems based on the Session Initiation Protocol (SIP). A security model called MS-DoS-SIP is proposed, based on two approaches: the first analyzes the recommendations of international security standards, and the second takes into account weaknesses and threats. The implementation of this model in a simulated VoIP system made it possible to mitigate 92% of the present vulnerabilities and to increase the availability time of the VoIP service in an organization.
Keywords: Denial-of-Service SIP attacks, MS-DoS-SIP, security model, VoIP-SIP vulnerabilities
Procedia PDF Downloads 203
2060 Particle Dust Layer Density and the Optical Wavelength Absorption Relationship in Photovoltaic Module
Authors: M. Mesrouk, A. Hadj Arab
Abstract:
This work highlights the effect of dust on the absorption of the optical spectrum in a photovoltaic module. The effect of the presence of dust particles on photovoltaic modules has been studied at a microscopic scale through simulation with the COMSOL Multiphysics software. In this paper, we model the dust layer as a diffraction grating, a repetitive optical structure characterized by the spacing 'd' between particles, and simulate the structure (air - dust particle - glass). In this study we can observe the relationship between the wavelength and the particle spacing: the simulation shows that the maximum wavelength transmission occurs at λ0 = 400 nm, which corresponds to the spacing between the dust particles, d = 400 nm. In fact, we observe that as the dust layer density increases, the transmitted wavelength value decreases; there is a relationship between the density and the wavelength that can be absorbed by a dusty photovoltaic panel.
Keywords: dust effect, photovoltaic module, spectral absorption, wavelength transmission
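The grating analogy can be made quantitative. The relation below is the standard grating equation, not stated in the abstract but the usual way to connect a periodic spacing d to the wavelengths it selects:

```latex
d \sin\theta_m = m\lambda, \qquad m = 0, 1, 2, \dots
```

Since |sin θ_m| ≤ 1, the first-order (m = 1) condition can only be met for λ ≤ d; with d = 400 nm the longest wavelength admitting a transmission maximum is λ = 400 nm, which is consistent with the simulated λ0. Likewise, a denser dust layer implies a smaller spacing d and hence a shorter admissible wavelength, matching the reported density-wavelength trend.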
Procedia PDF Downloads 463
2059 Study into the Interactions of Primary Limbal Epithelial Stem Cells and HTCEPI Using Tissue Engineered Cornea
Authors: Masoud Sakhinia, Sajjad Ahmad
Abstract:
Introduction: Though knowledge of the compositional makeup and structure of the limbal niche has progressed exponentially during the past decade, much is yet to be understood. Identifying the precise profile and role of the stromal makeup which spans the ocular surface may inform researchers of the optimum conditions needed to effectively expand LESCs in vitro while preserving their differentiation status and phenotype. Limbal fibroblasts, as opposed to corneal fibroblasts, are thought to form an important component of the microenvironment where LESCs reside. Methods: The corneal stroma was tissue engineered in vitro using both limbal and corneal fibroblasts embedded within a tissue-engineered 3D collagen matrix. The effect of these two different fibroblast types on LESCs and on the hTCEpi corneal epithelial cell line was then determined using phase contrast microscopy, histological analysis, and PCR for specific stem cell markers. The study aimed to develop an in vitro model which could be used to determine whether limbal, as opposed to corneal, fibroblasts maintained the stem cell phenotype of LESCs and the hTCEpi cell line. Results: Tissue culture analysis was inconclusive and requires further quantitative analysis before remarks can be made on cell proliferation within the varying stroma. Histological analysis of the tissue-engineered cornea showed a structure comparable to that of the human cornea, though with limited epithelial stratification. PCR results for epithelial cell markers of cells cultured on limbal fibroblasts showed reduced expression of CK3, a negative marker for LESCs, while also exhibiting a relatively low expression level of p63, a marker for undifferentiated LESCs.
Conclusion: We have shown the potential for the construction of a tissue-engineered human cornea using a 3D collagen matrix and described some preliminary results in the analysis of the effects of varying stroma, consisting of limbal and corneal fibroblasts respectively, on the proliferation and stem cell phenotype of primary LESCs and hTCEpi corneal epithelial cells. Although no definitive marker exists to conclusively demonstrate the presence of LESCs, the combination of positive and negative stem cell markers in our study was inconclusive. Though it is less translational to the human corneal model, the use of conditioned medium from limbal and corneal fibroblasts may provide a simpler avenue. Moreover, combinations of extracellular matrices could be used as a surrogate in these culture models.
Keywords: cornea, limbal stem cells, tissue engineering, PCR
Procedia PDF Downloads 278
2058 Human Capital Divergence and Team Performance: A Study of Major League Baseball Teams
Authors: Yu-Chen Wei
Abstract:
The relationship between organizational human capital and organizational effectiveness has been a common topic of interest to organization researchers. Much of this research has concluded that higher human capital predicts greater organizational outcomes. Whereas human capital research has traditionally focused on organizations, the current study turns to team-level human capital. In addition, there are no known empirical studies assessing the effect of human capital divergence on team performance. Team human capital refers to the sum of the knowledge, ability, and experience embedded in team members. Team human capital divergence is defined as the variation of human capital within a team. This study is among the first to assess the role of human capital divergence as a moderator of the effect of team human capital on team performance. From the traditional perspective, team human capital represents the collective ability of all team members to solve problems and reduce operational risk. Hence, the higher the team human capital, the higher the team performance. This study further employs social learning theory to explain the relationship between team human capital and team performance. According to this theory, individuals seek progress by learning from the teammates in their teams. They expect to gain higher human capital and, in turn, achieve high productivity, obtain great rewards, and eventually attain career success. Therefore, individuals have more chances to improve their capabilities by learning from peers if the team members have higher average human capital. As a consequence, all team members can develop a quick and effective learning path in their work environment, which enhances their knowledge, skill, and experience and leads to higher team performance. This is the first argument of this study. Furthermore, the current study argues that human capital divergence is negative for team development.
Individuals with lower human capital in the team constantly feel pressure from their outstanding colleagues. Under this pressure, they cannot give full play to their own jobs and lose more and more confidence. The most capable members, for their part, are reluctant to work alongside teammates who are not as capable as they are. Besides, they may have lower motivation to move forward because they are already prominent compared with their teammates. Therefore, human capital divergence will moderate the relationship between team human capital and team performance. These two arguments were tested in 510 team-seasons drawn from Major League Baseball (1998–2014). Results demonstrate a positive relationship between team human capital and team performance, which is consistent with previous research. In addition, the variation of human capital within a team weakens this relationship. That is to say, individuals working with teammates comparable to themselves produce better performance than those working with people whose human capital is far above or far below their own.
Keywords: human capital divergence, team human capital, team performance, team-level research
Procedia PDF Downloads 240
2057 Practical Challenges of Tunable Parameters in Matlab/Simulink Code Generation
Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo VáZquez, Jonas Funkquist, Sotirios Thanopoulos
Abstract:
One of the important requirements in many code generation projects is defining some of the model parameters as tunable. This makes it possible to update the model parameters without performing the code generation again. This paper studies the concept of embedded code generation with the MATLAB/Simulink coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, defining parameters as tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make the parameters tunable in generated runtime modules. Three techniques are proposed for this purpose: normal tunable parameters, callback functions, and mask subsystems. Moreover, some test Simulink models are developed and used to evaluate the proposed approaches. A brief summary of the study results follows. First of all, parameters defined as tunable and used in defining the values of other Simulink elements (e.g., the gain value of a gain block) can be changed after the code generation, and this update will affect the values of all elements defined based on the tunable parameter. For instance, if parameter K=1 is defined as a tunable parameter in the code generation process and this parameter is used as the gain of a gain block in Simulink, the gain value of the block is equal to 1 in the TwinCAT environment after the code generation. But the value of K can be changed to a new value (e.g., K=2) in TwinCAT (without doing any new code generation in MATLAB), and the gain value of the block will then change to 2. Secondly, adding a callback function in the form of a "pre-load function," "post-load function," or "start function" will not help to make the parameters tunable without performing a new code generation.
This means that such MATLAB files are simply run before the code generation is performed; the parameters defined or calculated in these files are used as fixed values in the generated code. Thus, adding these files as callback functions to the Simulink model will not make their parameters flexible, since the MATLAB files are not attached to the generated code. Therefore, to change the parameters defined or calculated in these files, the code generation has to be done again. However, adding these files as callback functions does force MATLAB to run them before the code generation, so there is no need to define the parameters mentioned in these files separately. Finally, using a tunable parameter to define or calculate the values of other parameters through a mask is an efficient method for changing the values of the latter parameters after the code generation. For instance, if tunable parameter K is used in calculating the values of two other parameters K1 and K2, and, after the code generation, the value of K is updated in the TwinCAT environment, the values of K1 and K2 will also be updated (without any new code generation).
Keywords: code generation, MATLAB, tunable parameters, TwinCAT
Procedia PDF Downloads 228
2056 An Experimental Testbed Using Virtual Containers for Distributed Systems
Authors: Parth Patel, Ying Zhu
Abstract:
Distributed systems have become ubiquitous, and they continue their growth through a range of services. With advances in resource virtualization technology such as Virtual Machines (VM) and software containers, developers no longer require high-end servers to test and develop distributed software. Even in commercial production, virtualization has streamlined the process of rapid deployment and service management. This paper introduces a distributed systems testbed that utilizes virtualization to enable distributed systems development on commodity computers. The testbed can be used to develop new services, implement theoretical distributed systems concepts for understanding, and experiment with virtual network topologies. We show its versatility through two case studies that utilize the testbed for implementing a theoretical algorithm and developing our own methodology to find high-risk edges. The results of using the testbed for these use cases have proven the effectiveness and versatility of this testbed across a range of scenarios.
Keywords: distributed systems, experimental testbed, peer-to-peer networks, virtual container technology
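The abstract does not define what counts as a "high-risk edge"; one simple, hypothetical stand-in is a bridge, an edge whose removal disconnects the network and which a testbed like this could identify in a virtual topology and then stress-test. A minimal sketch using Tarjan's low-link method (the graph and the bridge criterion are our assumptions, not the paper's methodology):

```python
def find_bridges(adj):
    """Bridges (cut edges): removing one disconnects the graph, making
    them natural 'high-risk' candidates. Tarjan's DFS low-link method."""
    disc, low, bridges = {}, {}, []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                    # back edge to an ancestor
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:         # v's subtree cannot reach above u
                    bridges.append((u, v))

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return bridges

# Two triangles joined by a single edge (2, 3): that edge is the only bridge.
topology = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
            3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
risky = find_bridges(topology)  # [(2, 3)]
```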
Procedia PDF Downloads 146
2055 The Development of Community Leadership Strategies for Career Development of the Benjarong Pottery Products in Eight Upper Central Provinces
Authors: Thanaporn Chaimongkol
Abstract:
The objective of this research was to examine the factors that influence the development of community leadership strategies for further developing careers related to Benjarong pottery products in the eight upper central provinces of Thailand. The sample included (1) 1,200 Benjarong pottery operators, (2) 30 involved representatives at both the policy and support levels, and (3) an OTOP network of 24 people. In this quantitative study, data collection was conducted on an individual-session basis. The research instruments used included questionnaires and interviews. The results showed that the components of the development of community leadership strategies for career development of the Benjarong pottery products in the eight upper central provinces were high overall; the Five Competitive Forces had the highest average, followed by the bargaining power of suppliers and the McKinsey 7S framework, respectively, where the highest average was strategy.
Keywords: community leadership, strategy development, Benjarong pottery, 8 upper central provinces
Procedia PDF Downloads 325
2054 Product Modularity, Collaboration and the Impact on Innovation Performance in Intra-Organizational R&D Networks
Authors: Daniel Martinez, Tim de Leeuw, Stefan Haefliger
Abstract:
The challenges of managing a large and geographically dispersed R&D organization have increased further during the past years, centering on how to leverage a geographically dispersed body of knowledge in an efficient and effective manner. In order to reduce complexity and improve performance, firms introduce product modularity as one key element that allows global R&D network teams to develop their products and projects in collaboration. However, empirical studies on the effects of product modularity on innovation performance are scant. Furthermore, some researchers have suggested that product modularity promotes innovation performance, while others argue that it inhibits it. This research fills this gap by investigating the impact of product modularity on various dimensions of innovation performance, i.e., effectiveness and efficiency. In constructing the theoretical framework, this study suggests that there is an inverted U-shaped relationship between product modularity and innovation performance. Moreover, this research suggests that, at a given level of product modularity, the optimum of innovation performance efficiency will be at a higher level than that of innovation performance effectiveness.
Keywords: modularity, innovation performance, networks, R&D, collaboration
Procedia PDF Downloads 520
2053 Commodifying Things Past: Comparative Study of Heritage Tourism Practices in Montenegro and Serbia
Authors: Jovana Vukcevic, Sanja Pekovic, Djurdjica Perovic, Tatjana Stanovcic
Abstract:
This paper presents a critical inquiry into the role of uncomfortable heritage in nation branding, with a particular focus on the specificities of the politics of memory, forgetting, and revisionism in post-communist former Yugoslavia. It addresses legacies of unwanted, ambivalent, or unacknowledged pasts and the different strategies employed by the former Yugoslav states and private actors in "rebranding" their heritage: ensuring its preservation while re-contextualizing the narrative of the past through contemporary tourism practices. It questions the interplay between nostalgia, heritage, and the market, and the role of heritage in polishing the history of totalitarian and authoritarian regimes in the Balkans. It argues that in post-socialist Yugoslavia, the necessity to limit associations with the former ideology, and the use of the commercial brush in shaping a marketable version of the past, instigated the emergence of profit-oriented heritage practices. Building on that argument, the paper addresses these issues as the "commodification" and "disneyfication" of the Balkans' ambivalent heritage, contributing to the analysis of changing forms of memorialization and heritagization practices in Europe. It questions the process of 'coming to terms with the past' through marketable forms of heritage tourism, blurring the boundary between market-driven nostalgia and state-imposed heritage policies. In order to analyze the plurality of ways of dealing with the controversial, ambivalent, and unwanted heritage of dictatorships in the Balkans, the paper considers two prominent examples of heritage commodification in Serbia and Montenegro and the re-appropriation of those narratives for nation-branding purposes.
The first is the story of Tito's Blue Train, a landmark of the socialist past and a symbol of Yugoslavia, which is nowadays used for birthday parties and marriage celebrations, while the second highlights the unusual business arrangement turning the fortress of Mamula, a concentration camp during the Second World War, into a luxurious Mediterranean resort. Questioning how the 'uneasy' past was acknowledged and embedded into official heritage institutions and tourism practices, the study examines the changing relation towards the legacies of dictatorships, inviting us to rethink the economic models of things past. Analysis of these processes should contribute to a better understanding of the new mnemonic strategies and (converging?) ways of 'doing' the past in Europe.
Keywords: commodification, heritage tourism, totalitarianism, Serbia, Montenegro
Procedia PDF Downloads 252
2052 Multi-Scale Urban Spatial Evolution Analysis Based on Space Syntax: A Case Study in Modern Yangzhou, China
Authors: Dai Zhimei, Hua Chen
Abstract:
The exploration of urban spatial evolution is an important part of urban development research. The evolving urban spatial texture of modern Yangzhou was therefore taken as the research object and, using space syntax as the main research tool, this paper explores the laws of Yangzhou's spatial evolution and its driving factors at the urban street network, district, and street scales. The study concludes that, at the urban scale, Yangzhou's spatial evolution is the result of a variety of causes, including physical and geographical conditions, policy and planning factors, and traffic conditions, and that the evolution of space in turn has an impact on social, economic, environmental, and cultural factors. At the district and street scales, changes in space have a profound influence on the history of the city and the activities of its people. At the end of the article, the matters needing attention during the evolution of urban space are summarized.
Keywords: block, space syntax and methodology, street, urban space, Yangzhou
Procedia PDF Downloads 181
2051 Determination of Frequency Relay Setting during Distributed Generators Islanding
Authors: Tarek Kandil, Ameen Ali
Abstract:
Distributed generation (DG) has recently gained a lot of momentum in the power industry due to market deregulation and environmental concerns. One of the most significant technical challenges facing DGs is the islanding of distributed generators. Current industry practice is to disconnect all distributed generators immediately after the occurrence of an island, within 200 to 350 ms after loss of main supply. To achieve this goal, each DG must be equipped with an islanding detection device. Frequency relays are one of the most commonly used loss-of-mains detection methods. However, distribution utilities may be faced with concerns related to false operation of these frequency relays due to improper settings, as the commercially available frequency relays come with standard tight settings. This paper investigates some factors related to the relays' internal algorithms that contribute to their different operating responses. Further, relay operation in the presence of multiple distributed generators on the same network is analyzed. Finally, the relay setting can be accurately determined based on this investigation and analysis.
Keywords: frequency relay, distributed generation, islanding detection, relay setting
Procedia PDF Downloads 534
2050 Whale Optimization Algorithm for Optimal Reactive Power Dispatch Solution Under Various Contingency Conditions
Authors: Medani Khaled Ben Oualid
Abstract:
Most researchers have solved and analyzed the optimal reactive power dispatch (ORPD) problem under normal conditions. However, network collapses occur under contingency conditions. In this paper, ORPD under several contingencies is solved using the proposed Whale Optimization Algorithm (WOA). To ensure the viability of the power system under contingency conditions, several critical cases are simulated in order to prepare the power system to face such situations. The study is carried out on the IEEE 30-bus test system, where the ORPD problem involves the control of bus voltages, transformer tap positions, and reactive power sources. Moreover, the proposed technique is compared with Particle Swarm Optimization with Time-Varying Acceleration Coefficients (PSO-TVAC). Simulation results indicate that the proposed WOA yields remarkably effective solutions in the case of outages.
Keywords: optimal reactive power dispatch, metaheuristic techniques, whale optimization algorithm, real power loss minimization, contingency conditions
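The abstract does not give algorithmic detail, but the standard WOA update rules (shrinking encircling of the best solution plus a logarithmic spiral around it) can be sketched on a toy continuous problem. The sphere objective, bounds, and parameters below are illustrative assumptions, not the paper's ORPD formulation:

```python
import math
import random

def woa_minimize(f, dim=2, pop=30, iters=300, lo=-10.0, hi=10.0, seed=1):
    """Minimal Whale Optimization Algorithm for box-constrained minimization."""
    rng = random.Random(seed)
    whales = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(whales, key=f)[:]
    for t in range(iters):
        a = 2.0 * (1 - t / iters)          # 'a' decreases linearly from 2 to 0
        for w in whales:
            A = 2 * a * rng.random() - a   # exploration/exploitation coefficient
            C = 2 * rng.random()
            if rng.random() < 0.5:
                # Encircling prey (|A| < 1) or random search agent (|A| >= 1).
                ref = best if abs(A) < 1 else rng.choice(whales)
                for j in range(dim):
                    D = abs(C * ref[j] - w[j])
                    w[j] = ref[j] - A * D
            else:
                # Logarithmic spiral around the best whale.
                l = rng.uniform(-1, 1)
                for j in range(dim):
                    D = abs(best[j] - w[j])
                    w[j] = D * math.exp(l) * math.cos(2 * math.pi * l) + best[j]
            for j in range(dim):           # keep whales inside the box
                w[j] = min(hi, max(lo, w[j]))
        cand = min(whales, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

sphere = lambda x: sum(v * v for v in x)
best = woa_minimize(sphere)
print(sphere(best))
```

Applying WOA to ORPD would replace the sphere objective with a power-flow-based loss function over bus voltages, tap positions, and reactive sources, with constraint handling added.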
Procedia PDF Downloads 90
2049 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data
Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton
Abstract:
The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. In this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully transform a manufacturing facility digitally, its processes must first be digitized: converted from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide Industry 4.0 paradigm. The formal methodology of a systematic mapping study was used to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data, and research papers were classified according to the type of research and type of contribution to the area. Upon analyzing the 54 papers identified, it was noted that 23 originated in Germany. This is unsurprising, as Industry 4.0 is originally a German strategy supported by strong policy instruments for its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap: an unresolved divide between data science experts and manufacturing process experts in industry. Data analytics expertise is of little use unless the manufacturing process information is utilized.
A legitimate understanding of the data is crucial to performing accurate analytics and gaining true, valuable insights into the manufacturing process. A gap lies between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test whether this gap exists, the researchers initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory; only three contributed a methodology. This provides further evidence of the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.
Keywords: analytics, digitization, industry 4.0, manufacturing
Procedia PDF Downloads 111
2048 Chaos Cryptography in Cloud Architectures with Lower Latency
Authors: Mohammad A. Alia
Abstract:
With the rapid evolution of internet applications, cloud computing has become one of today's most active research areas due to its ability to reduce the costs associated with computing. The cloud thereby increases the flexibility and scalability of computing services on the internet. Cloud computing is internet-based computing in which shared resources and information are dynamically delivered to consumers. Because cloud computing shares resources over an open network, cloud outsourcing is vulnerable to attack. This paper therefore explores the data security of cloud computing by implementing chaotic cryptography. The proposed scenario develops a problem transformation technique that enables customers to secretly transform their information. In this work, chaotic cryptographic algorithms are applied to enhance the security of cloud computing access. The proposed scenario is a secure, simple, and straightforward process, with chaotic encryption and digital signature systems ensuring its security. However, the choice of key size is crucial to prevent a brute-force attack.
Keywords: chaos, cloud computing, security, cryptography
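As an illustration of the kind of chaotic cryptography the paper applies (the exact scheme is not specified in the abstract), a logistic-map keystream can drive a simple symmetric XOR cipher. The map parameter, quantization, and key handling are assumptions for demonstration only; a toy like this is not production-grade cryptography:

```python
def logistic_keystream(x0, n, r=3.99):
    """Generate n keystream bytes from the chaotic logistic map x -> r*x*(1-x)."""
    x = x0
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)   # quantize the chaotic state to a byte
    return out

def chaos_xor(data: bytes, key: float) -> bytes:
    """Symmetric cipher: XOR with the keystream seeded by the shared key x0."""
    ks = logistic_keystream(key, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"cloud record: name=Alice"
ct = chaos_xor(msg, key=0.3141592653)
pt = chaos_xor(ct, key=0.3141592653)    # the same key recovers the plaintext
print(pt == msg, ct != msg)             # True True
```

The key here is the map's initial condition; sensitivity to that initial condition is what makes the keystream hard to reproduce without it, which is the property chaotic ciphers rely on.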
Procedia PDF Downloads 345
2047 Evaluation of Urban Parks Based on POI Data: Taking Futian District of Shenzhen as an Example
Authors: Juanling Lin
Abstract:
The construction of urban parks is an important part of eco-city construction, and the use of big data provides a more scientific and rational platform for their assessment by identifying and correcting irrationalities in urban park planning at the macroscopic level, thereby promoting more rational planning. This study builds an urban park assessment system based on urban road network data and POI data, taking Futian District of Shenzhen as the research object, and uses a GIS geographic information system to assess the district's park system in five respects: spatial distribution, accessibility, service capacity, demand, and the supply-demand relationship. The assessment system effectively reflects the current state of urban park construction and offers a useful exploration toward rational and equitable urban park planning.
Keywords: urban parks, assessment system, POI, supply and demand
Procedia PDF Downloads 42
2046 Delving into the Concept of Social Capital in the Smart City Research
Authors: Atefe Malekkhani, Lee Beattie, Mohsen Mohammadzadeh
Abstract:
The unprecedented growth of megacities and urban areas around the world has resulted in numerous risks, concerns, and problems across the environmental, social, and economic dimensions of urban life, including climate change and spatial and social inequalities. In this situation, the ever-increasing progress of technology has given urban authorities hope that the negative effects of various socio-economic and environmental crises can be mitigated with information and communication technologies (ICTs). The concept of the 'smart city' represents an emerging, ICT-based solution to the urban challenges arising from increased urbanization. However, smart cities are often perceived primarily as technological initiatives and are implemented without considering the social and cultural contexts of cities and the needs of their residents; smart city projects and initiatives can therefore (un)intentionally exacerbate pre-existing social, spatial, and cultural segregation. Investigating the impact of the smart city on the social capital of the people who use smart city systems, and on governance as policymaking, is worth exploring, since the importance of inhabitants to the existence and development of smart cities cannot be overlooked. Social capital has been treated from different perspectives in smart city studies. A review of the literature on social capital and the smart city shows that social capital plays three different roles in smart city development. Some research treats social capital as a component of a smart city, embedded in its dimensions, definitions, or strategies, while other work sees it as a social outcome of smart city development, suggesting that the move to smart cities improves social capital; in most cases, however, this remains an unproven hypothesis.
Other studies show that social capital can enhance the functions of smart cities and that the consideration of social capital in planning smart cities should be promoted. Despite this theoretical and practical knowledge, there is a significant research gap: the knowledge domain of smart city studies has not been reviewed through the lens of social capital. To shed light on this issue, this study aims to explore the domain of existing smart city research through that lens. The research will use the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method to review relevant literature, focusing on the key concepts of 'smart city' and 'social capital'. Studies will be selected from the Web of Science Core Collection, using a selection process that involves identifying literature sources and screening and filtering studies based on titles, abstracts, and full-text reading.
Keywords: smart city, urban digitalisation, ICT, social capital
Procedia PDF Downloads 14
2045 Estimation of State of Charge, State of Health and Power Status for the Li-Ion Battery On-Board Vehicle
Authors: S. Sabatino, V. Calderaro, V. Galdi, G. Graber, L. Ippolito
Abstract:
Climate change is a rapidly growing global threat caused mainly by increased emissions of carbon dioxide (CO₂) into the atmosphere. These emissions come from multiple sources, including industry, power generation, and the transport sector. The need to tackle climate change and reduce CO₂ emissions is indisputable, and a crucial step toward decarbonizing the transport sector is the adoption of electric vehicles (EVs). These vehicles use lithium-ion (Li-Ion) batteries as their energy source, making them extremely efficient and low in direct emissions. However, Li-Ion batteries are not without problems, including the risk of overheating and performance degradation. To ensure their safety and longevity, it is essential to use a battery management system (BMS). The BMS constantly monitors battery status and adjusts temperature and cell balance, ensuring optimal performance and preventing dangerous situations; based on this monitoring, it can also manage the battery optimally to extend its life. Among the parameters monitored by the BMS, the main ones are the State of Charge (SoC), State of Health (SoH), and State of Power (SoP). These parameters can be evaluated in two ways: offline, using benchtop batteries tested in the laboratory, or online, using batteries installed in moving vehicles. Online estimation is the preferred approach, as it relies on capturing real-time data from batteries operating in real-life situations, such as everyday EV use. Actual battery usage conditions are highly variable: moving vehicles are exposed to a wide range of factors, including temperature variations, different driving styles, and complex charge/discharge cycles. This variability is difficult to replicate in a controlled laboratory environment and can greatly affect battery performance and life. Online estimation captures this variety of conditions, providing a more accurate assessment of battery behavior in real-world situations.
In this article, a hybrid approach based on a neural network and a statistical method is proposed for the real-time estimation of the SoC, SoH, and SoP parameters of interest. These parameters are estimated from the analysis of a one-day driving profile of an electric vehicle, assumed to be divided into four phases: (i) partial discharge (SoC 100% to SoC 50%), (ii) partial charge (SoC 50% to SoC 80%), (iii) deep discharge (SoC 80% to SoC 30%), and (iv) full charge (SoC 30% to SoC 100%). The neural network predicts the values of ohmic resistance and incremental capacity, while the statistical method is used to estimate the parameters of interest. This reduces the complexity of the model and improves its prediction accuracy. The effectiveness of the proposed model is evaluated by analyzing its performance in terms of root mean square error (RMSE) and mean absolute percentage error (MAPE) and by comparing it with the reference method from the literature.
Keywords: electric vehicle, Li-Ion battery, BMS, state-of-charge, state-of-health, state-of-power, artificial neural networks
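The error metrics used above, and the Coulomb-counting baseline against which model-based SoC estimators are usually judged, can be sketched as follows. The pack capacity, current profile, and time step are illustrative assumptions, not the vehicle profile used in the paper:

```python
def coulomb_count_soc(soc0, currents_a, dt_h, capacity_ah):
    """Integrate current over time: SoC(k+1) = SoC(k) - I*dt/C (discharge I > 0)."""
    soc, trace = soc0, [soc0]
    for i in currents_a:
        soc -= i * dt_h / capacity_ah
        trace.append(soc)
    return trace

def rmse(est, true):
    """Root mean square error between an estimated and a reference series."""
    return (sum((e - t) ** 2 for e, t in zip(est, true)) / len(true)) ** 0.5

def mape(est, true):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((e - t) / t) for e, t in zip(est, true)) / len(true)

# 50 Ah pack, constant 10 A discharge, 0.5 h steps: SoC falls 10% per step.
trace = coulomb_count_soc(1.0, [10.0] * 5, dt_h=0.5, capacity_ah=50.0)
print([round(s, 2) for s in trace])   # [1.0, 0.9, 0.8, 0.7, 0.6, 0.5]
print(rmse(trace, trace))             # 0.0
```

Pure Coulomb counting drifts with sensor bias and capacity fade, which is why hybrid schemes like the one proposed correct it with model-based predictions of resistance and capacity.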
Procedia PDF Downloads 67
2044 An Economic Way to Toughen Poly Acrylic Acid Superabsorbent Polymer Using Hyper Branched Polymer
Authors: Nazila Dehbari, Javad Tavakoli, Yakani Kambu, Youhong Tang
Abstract:
Superabsorbent hydrogels (SAPs), as environment-sensitive materials, have been widely used in industrial and biomedical applications due to their unique structure and capabilities. The poor mechanical properties of SAPs, which are closely related to their large volume change, are a great weakness for high-tech applications. Improving the mechanical properties of SAPs via toughening methods, by mixing different types of cross-linked polymer or introducing energy-dissipating mechanisms, has therefore attracted strong interest. In this work, to change the intrinsically brittle character of commercial poly(acrylic acid) (here the SAP) to semi-ductile, a commercially available, highly branched, tree-like dendritic polymer with numerous -OH end groups, known as a hyper-branched (HB) polymer, was added to the PAA-SAP system in a single-step, cost-effective, and environmentally friendly solvent casting method. Samples were characterized by FTIR, SEM, and TEM, and their physico-chemical characterization, including swelling capability, hydraulic permeability, surface tension, and thermal properties, was performed. Toughness energy, stiffness, elongation at the breaking point, viscoelastic properties, and sample extensibility were the mechanical properties characterized as a function of the samples' lateral crack length at different HB concentrations. The addition of HB to PAA-SAP significantly improved the mechanical and surface properties. The SAP-HB samples showed an equilibrium swelling ratio about 25% higher than that of the SAPs; however, the swelling kinetics remained unchanged, as neither the initial rate of water uptake nor the equilibrium time was affected. Thermal stability analysis showed that HB participates in hybrid network formation while improving the mechanical properties. TEM characterization showed that the aggregated HB polymer binds into nanospheres with diameters in the range of 10-200 nm.
Good dispersion in the SAP matrix was thus achieved, as expected, since the hydrophilic character of the numerous hydroxyl groups at the ends of HB enhances its compatibility with PAA-SAP. As the profuse -OH groups in HB can react with the -COOH groups of PAA-SAP during the curing process, the formation of a 2D structure in SAP-HB can be attributed to the strong interfacial adhesion between HB and the PAA-SAP matrix, which hinders the activity of the PAA chains (SEM analysis). The FTIR spectra showed new peaks at 1041 and 1121 cm⁻¹, attributed to the C-O(-OH) stretching hydroxyl and O-C stretching ester groups of the HB polymer binder, indicating its incorporation into the SAP structure. The HB polymer has a significant effect on the final mechanical properties: the brittleness of the PAA hydrogels decreased with the introduction of HB, as the fracture energy of the hydrogels increased from 8.67 to 26.67, and the stretchability of PAA-HB was enhanced about 10-fold, decreasing as a function of notch depth.
Keywords: superabsorbent polymer, toughening, viscoelastic properties, hydrogel network
Procedia PDF Downloads 323
2043 Examination of Relationship between Internet Addiction and Cyber Bullying in Adolescents
Authors: Adem Peker, Yüksel Eroğlu, İsmail Ay
Abstract:
As information and communication technologies have become embedded in the everyday life of adolescents, both their possible benefits and their risks to adolescents are being identified. Information and communication technologies provide opportunities for adolescents to connect with peers and to access information. However, as with other social connections, users of information and communication devices have the potential to meet and interact with others in harmful ways. One emerging example of such interaction is cyber bullying, which occurs when someone uses information and communication technologies to harass or embarrass another person. Cyber bullying can take the form of malicious text messages and e-mails, spreading rumours, and excluding people from online groups, and it has been linked to psychological problems for both cyber bullies and victims. It is therefore important to determine how internet addiction contributes to cyber bullying. Building on this question, this study takes a closer look at the relationship between internet addiction and cyber bullying. For this purpose, based on a descriptive relational model, it was hypothesized that loss of control, excessive desire to stay online, and negativity in social relationships, which are dimensions of internet addiction, would be positively associated with cyber bullying and victimization. Participants were 383 high school students (176 girls and 207 boys; mean age 15.7 years). Internet addiction was measured using the Internet Addiction Scale, and the Cyber Victim and Bullying Scale was used to measure cyber bullying and victimization. The scales were administered to the students in groups in their classrooms. Stepwise regression analyses were used to examine the relationships between the dimensions of internet addiction and cyber bullying and victimization; before applying the analysis, the assumptions of regression were verified.
According to the stepwise regression analysis, cyber bullying was predicted by loss of control (β=.26, p<.001) and negativity in social relationships (β=.13, p<.001). These variables accounted for 9% of the total variance, with loss of control explaining the larger share (8%). Cyber victimization, in turn, was predicted by loss of control (β=.19, p<.001) and negativity in social relationships (β=.12, p<.001). These variables together accounted for 8% of the variance in cyber victimization, with loss of control again the best predictor (7% of the total variance). The results demonstrated that, as expected, loss of control and negativity in social relationships positively predicted cyber bullying and victimization. However, excessive desire to stay online did not emerge as a significant predictor of either cyber bullying or victimization. Consequently, this study enhances our understanding of the predictors of cyber bullying and victimization, since the results suggest that internet addiction is related to both.
Keywords: cyber bullying, internet addiction, adolescents, regression
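The stepwise procedure reported above can be illustrated with a minimal forward-selection sketch: pick the predictor with the highest simple-regression R², then regress the residuals on the remaining candidates. The synthetic data and variable names are assumptions for demonstration, not the study's survey data:

```python
def fit_simple(x, y):
    """Least-squares slope, intercept, R^2, and residuals for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    syy = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - sum(e * e for e in resid) / syy
    return b, a, r2, resid

def forward_select(candidates, y):
    """Greedy forward selection: at each step, regress the current residuals
    on each remaining predictor and keep the best one by R^2."""
    order, resid = [], y
    remaining = dict(candidates)
    while remaining:
        name, (_, _, r2, new_resid) = max(
            ((k, fit_simple(v, resid)) for k, v in remaining.items()),
            key=lambda kv: kv[1][2])
        order.append((name, round(r2, 3)))
        resid = new_resid
        del remaining[name]
    return order

x1 = list(range(10))                  # e.g. 'loss of control' scores
x2 = [(-1) ** i for i in range(10)]   # e.g. 'negativity in relationships'
y = [2 * a + 3 * b for a, b in zip(x1, x2)]
print(forward_select({"x1": x1, "x2": x2}, y))
```

Real stepwise regression also tests each entering variable for significance (the step at which 'excessive desire to stay online' would be dropped); this sketch shows only the variance-explained ordering.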
Procedia PDF Downloads 310
2042 A Blockchain-Based Privacy-Preserving Physical Delivery System
Authors: Shahin Zanbaghi, Saeed Samet
Abstract:
The internet has transformed the way we shop. Previously, most of our purchases took the form of shopping trips to a nearby store; now, it is as easy as clicking a mouse. But with great convenience comes great responsibility: we have to be constantly vigilant about our personal information. In this work, our proposed approach is to encrypt the information printed on physical packages, which includes personal information in plain text, using a symmetric encryption algorithm, and then to store that encrypted information in a blockchain network rather than in the centralized databases of companies or corporations. We present, implement, and assess a blockchain-based system using Ethereum smart contracts, giving detailed algorithms that explain our smart contract, together with a security, cost, and performance analysis of the proposed method. Our work indicates that the proposed solution is economically attainable and provides data integrity, security, transparency, and data traceability.
Keywords: blockchain, Ethereum, smart contract, commit-reveal scheme
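The commit-reveal scheme named in the keywords can be sketched with a hash commitment: the sender publishes H(salt ‖ data) on-chain, then later reveals the salt and data so anyone can verify them. This Python sketch stands in for the Solidity logic; the function names and label format are illustrative assumptions:

```python
import hashlib
import secrets

def commit(data: bytes) -> tuple[bytes, bytes]:
    """Return (salt, commitment). Only the commitment goes on-chain."""
    salt = secrets.token_bytes(16)          # random salt blinds the value
    digest = hashlib.sha256(salt + data).digest()
    return salt, digest

def reveal_ok(commitment: bytes, salt: bytes, data: bytes) -> bool:
    """Anyone can verify a reveal against the stored commitment."""
    return hashlib.sha256(salt + data).digest() == commitment

address = b"recipient: 12 Main St, Unit 4"
salt, c = commit(address)
print(reveal_ok(c, salt, address))              # True
print(reveal_ok(c, salt, b"tampered address"))  # False
```

Without the salt, the on-chain commitment reveals nothing about the delivery address, yet any later tampering with the revealed data is detectable, which is the integrity and traceability property the abstract claims.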
Procedia PDF Downloads 150
2041 Policy Views of Sustainable Integrated Solution for Increased Synergy between Light Railways and Electrical Distribution Network
Authors: Mansoureh Zangiabadi, Shamil Velji, Rajendra Kelkar, Neal Wade, Volker Pickert
Abstract:
The EU has set itself the long-term goal of reducing greenhouse gas emissions by 80-95% relative to 1990 levels by 2050, as set out in the Energy Roadmap 2050. This paper reports on the EU H2020-funded E-Lobster project, which demonstrates tools and technologies, software and hardware, for integrating grid distribution and railway power systems using power electronics technologies (the Smart Soft Open Point, sSOP) and local energy storage. In this context, the paper describes the existing policies and regulatory frameworks of the energy market at the European level, with a special focus at the national level on the countries where the members of the consortium are located and where the demonstration activities will be implemented. Given the disciplinary approach of E-Lobster, the main policy areas investigated include electricity, the energy market, energy efficiency, transport, and smart cities. Energy storage will play a key role in enabling the EU to develop a low-carbon electricity system. In recent years, energy storage systems (ESSs) have been gaining importance due to emerging applications, especially the electrification of the transport sector and the grid integration of volatile renewables. The need for storage systems has led to improvements in ESS performance and a significant price decline, opening a new market in which ESSs can be a reliable and economical solution. One such emerging market for ESSs is R+G management, which will be investigated and demonstrated within the E-Lobster project. The surplus of energy in one type of power system (e.g., due to metro braking) might be transferred directly to the other power system (or vice versa); however, this would usually happen at unfavourable instants, when the recipient does not need additional power. The role of the ESS is thus to enhance the advantages of interconnecting railway power systems and distribution grids by offering an additional energy buffer.
Consequently, the surplus or deficit of energy in, e.g., railway power systems need not be transferred immediately to or from the distribution grid; it can be stored and used when it is really needed. This assures better management of the energy exchange between railway power systems and distribution grids and leads to more efficient loss reduction. In this framework, identifying the existing policies and regulatory frameworks is crucial for the project activities and for the future development of business models for the E-Lobster solutions. The projections carried out by the European Commission, the Member States, and stakeholders, and their analysis, indicate trends, challenges, opportunities, and the structural changes needed to design policy measures that provide an appropriate framework for investors. This study will be used as a reference for the discussions in the workshops with stakeholders (DSOs and transport managers) envisaged in the E-Lobster project.
Keywords: light railway, electrical distribution network, electrical energy storage, policy
Procedia PDF Downloads 135
2040 Investigated Optimization of Davidson Path Loss Model for Digital Terrestrial Television (DTTV) Propagation in Urban Area
Authors: Pitak Keawbunsong, Sathaporn Promwong
Abstract:
This paper investigates the efficiency of an optimized Davidson path loss model in order to find a path loss model suitable for designing and planning DTTV propagation in small and medium urban areas in southern Thailand. Hadyai City in Songkla Province is chosen as the case study for collecting analytical data on electric field strength. The optimization is conducted using the least squares method, and the efficiency index is the statistical relative error (RE). The least squares method yields the offset and slope of the frequency term used in the optimization process. The statistical results show that the RE of the old Davidson model is the lowest when compared with the optimized Davidson and Hata models. Thus, the old Davidson path loss model is the most accurate and becomes the basis for further optimization in planning the propagation network design.
Keywords: DTTV propagation, path loss model, Davidson model, least square method
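The least squares step described above, recovering an offset and slope, can be sketched for the log-distance form common to path loss models, PL(d) = PL0 + 10·n·log10(d/d0). The synthetic measurements below are illustrative assumptions, not the Hadyai drive-test data:

```python
import math

def fit_path_loss(distances_m, pl_db, d0=1.0):
    """Least-squares fit of PL = PL0 + 10*n*log10(d/d0) to measured data."""
    x = [math.log10(d / d0) for d in distances_m]
    n_pts = len(x)
    mx, my = sum(x) / n_pts, sum(pl_db) / n_pts
    slope = (sum(xi * yi for xi, yi in zip(x, pl_db)) - n_pts * mx * my) / \
            (sum(xi * xi for xi in x) - n_pts * mx * mx)
    offset = my - slope * mx          # PL0 in dB (the fitted offset)
    return offset, slope / 10.0       # (PL0, path loss exponent n)

# Synthetic measurements generated with PL0 = 40 dB and n = 3.5.
d = [10, 50, 100, 500, 1000, 2000]
pl = [40.0 + 35.0 * math.log10(di) for di in d]
pl0, n = fit_path_loss(d, pl)
print(round(pl0, 6), round(n, 6))   # 40.0 3.5
```

With real field-strength measurements, the residuals of this fit feed directly into an error statistic such as the relative error used in the paper to compare candidate models.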
Procedia PDF Downloads 338
2039 A Modified NSGA-II Algorithm for Solving Multi-Objective Flexible Job Shop Scheduling Problem
Authors: Aydin Teymourifar, Gurkan Ozturk, Ozan Bahadir
Abstract:
NSGA-II is one of the best-known and most widely used evolutionary algorithms. In addition to its newer versions, such as NSGA-III, several modified variants of the algorithm exist in the literature. In this paper, a hybrid NSGA-II algorithm is proposed for solving the multi-objective flexible job shop scheduling problem. For a better search, new neighborhood-based crossover and mutation operators are defined. To create new generations, neighbors of the individuals selected by tournament selection are constructed. Also, at the end of each iteration, before sorting, neighbors of a certain number of good solutions are derived, except for solutions protected by elitism. The neighbors are generated using a constraint-based neural network that employs various constructs. The non-dominated sorting and crowding distance operators are the same as in the classic NSGA-II. A comparison based on several multi-objective benchmarks from the literature shows the efficiency of the algorithm.
Keywords: flexible job shop scheduling problem, multi-objective optimization, NSGA-II algorithm, neighborhood structures
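The crowding distance operator retained from classic NSGA-II can be sketched directly from its definition: boundary solutions of a front get infinite distance, and interior ones the sum of normalized gaps between their neighbors in each objective. This is a generic sketch, not the authors' implementation:

```python
def crowding_distance(front):
    """front: list of objective tuples; returns one distance per solution."""
    n = len(front)
    dist = [0.0] * n
    m = len(front[0])
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")   # keep the extremes
        if hi == lo:
            continue            # degenerate objective: no interior spread
        for k in range(1, n - 1):
            gap = front[order[k + 1]][obj] - front[order[k - 1]][obj]
            dist[order[k]] += gap / (hi - lo)             # normalized gap
    return dist

# A five-point front on two objectives (e.g. makespan vs. total workload).
front = [(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)]
print(crowding_distance(front))   # [inf, 1.0, 1.0, 1.0, inf]
```

In selection, solutions of equal rank are ordered by descending crowding distance, so sparse regions of the front are preserved and diversity is maintained.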
Procedia PDF Downloads 229
2038 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm
Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam
Abstract:
The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, uncertainty about future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and the nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply it to a case study of a manufacturing company. In order to characterize the complexity of customer behavior, we define a "customer space" in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect. In the direct strategy, goods are sent to the customer straight from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order, goods are shipped by train to an external warehouse near the customer and then "last-mile" shipped by truck when orders are placed. Each strategy applies to an area of the customer space, with an indeterminate boundary between them whose location is generally determined by specific company policies. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and temporary storage in a set of specified external warehouses.
Customer spaces give an aggregate view of customer behaviors and characteristics, allowing policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimizing over existing facilities, we use customer logistics and the k-means algorithm to propose additional warehouse locations. We apply these methods to a medium-sized American manufacturing company with a particular logistics network consisting of multiple production facilities, external warehouses, and customers, along with three types of shipment method (box truck, bulk truck, and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction
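The k-means step used above to propose warehouse locations can be sketched with plain Lloyd iterations over customer coordinates; the toy customer positions and the choice of initial centroids are assumptions for demonstration, not the company's data:

```python
def kmeans(points, init_centroids, iters=20):
    """Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    centroids = [list(c) for c in init_centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:   # empty clusters keep their old centroid
                centroids[j] = [sum(v) / len(cl) for v in zip(*cl)]
    return centroids

# Two groups of customers; candidate warehouses start on one point of each.
customers = [(0, 0), (1, 0), (0, 1), (1, 1),
             (10, 10), (11, 10), (10, 11), (11, 11)]
warehouses = kmeans(customers, init_centroids=[customers[0], customers[4]])
print(warehouses)   # [[0.5, 0.5], [10.5, 10.5]]
```

In practice, the clustering would weight customers by demand frequency and use route distances rather than Euclidean ones, so the returned centroids are only a starting point for siting warehouses.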
Procedia PDF Downloads 139