Search results for: gas dynamic virtual nozzle (GDVN) principle
920 Symbolic Status of Architectural Identity: Example of Famagusta Walled City
Authors: Rafooneh Mokhtarshahi Sani
Abstract:
This study explores how the residents of a conserved urban area have used goods and ideas as resources to maintain an enviable architectural identity. Whereas conserved urban quarters are seen as role models for maintaining architectural identity, the article describes how their residents try to give their homes a contemporary, modern image. It is argued that despite the efforts of authorities and decision makers to keep and preserve the traditional architectural identity in conserved urban areas, people have already moved on and have adjusted their homes to their preferred architectural taste. Going through such a conflict of interests has put the future of architectural identity in such places at risk. The thesis is that, on the one hand, such a struggle over a desirable symbolic status in identity formation is taking place, and, on the other, it is continuously widening the gap between the real and the ideal identity in the built environment. The study then analytically connects the concept of symbolic status to current identity debates. As empirical research, this study uses systematic social and physical observation methods to describe and categorize the characteristics of settlements in the Walled City of Famagusta that symbolically represent the modern house. The Walled City is a cultural heritage site, most of whose urban context has been conserved. Traditional houses in this area demonstrate the identity of North Cyprus architecture. The conserved residential buildings, however, have either been abandoned or been altered by their users to present the ideal image of contemporary life. In the concluding section, the article discusses the differences between the symbolic status of people and authorities in defining a culturally valuable contemporary home, and raises the question of whether we can talk at all about architectural identity in terms of conserving the traditional style, and how we may do so on the basis of the dynamic nature of identity and the necessity of its acceptance by the users.
Keywords: symbolic status, architectural identity, conservation, facades, Famagusta walled city
Procedia PDF Downloads 356
919 Determination of Slope of Hilly Terrain by Using Proposed Method of Resolution of Forces
Authors: Reshma Raskar-Phule, Makarand Landge, Saurabh Singh, Vijay Singh, Jash Saparia, Shivam Tripathi
Abstract:
For any construction project, slope calculations are necessary in order to evaluate constructability on the site, for example the slope of parking lots, sidewalks, and ramps, the slope of sanitary sewer lines, and the slope of roads and highways. When slopes and grades are to be determined, designers are concerned with establishing proper slopes and grades for their projects to assess cut and fill volume calculations and determine inverts of pipes. There are several established instruments commonly used to determine slopes, such as the dumpy level, Abney level or hand level, inclinometer, tacheometer, Henry method, etc., and surveyors are very familiar with the use of these instruments to calculate slopes. However, these instruments have drawbacks that cannot be neglected in major surveying works. Firstly, they require expert surveyors and skilled staff. Accessibility, visibility, and accommodation in remote hilly terrain are difficult for these instruments and surveying teams. Also, determining gentle slopes for road and sewer drainage construction in congested urban places with these instruments is not easy. This paper aims to develop a method that requires minimal field work, minimal instruments, no high-end technology or software, and low cost. Using basic and handy surveying accessories, such as a plane table with a fixed weighing machine, standard weights, an alidade, a tripod, and ranging rods, it should be able to determine the terrain slope in congested areas as well as in remote hilly terrain. Also, the method being simple and easy to understand and perform, local people in rural areas can easily be trained in it. The idea for the proposed method is based on the principle of resolution of weight components. When an object of standard weight 'W' is placed on an inclined surface with a weighing machine below it, the weighing machine measures only the cosine component of that weight. The slope can therefore be determined from the relation between the true or actual weight and the apparent weight. A proper procedure is to be followed, which includes site location, centering and sighting work, fixing the whole set at the identified station, and finally taking the readings. A set of experiments for slope determination, covering mild and moderate slopes, was carried out with the proposed method and with a theodolite, both in a controlled environment on the college campus and in an uncontrolled environment on an actual site. The slopes determined by the proposed method were compared with those determined by the established instruments. For example, it was observed that, for the same distances on a mild slope, the difference between the slope obtained by the proposed method and by the established method ranges from 4′ for a distance of 8 m to 2°15′20″ for a distance of 16 m in an uncontrolled environment. Thus, for mild slopes, the proposed method is suitable for distances of 8 m to 10 m. The proposed and established methods show a good correlation of 0.91 to 0.99 for the various combinations of mild and moderate slopes with controlled and uncontrolled environments.
Keywords: surveying, plane table, weight component, slope determination, hilly terrain, construction
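To make the weight-resolution principle concrete, the following minimal sketch (an illustration, not code from the study) recovers the slope angle from a standard weight and the reading of a weighing machine lying flush on the incline; the sample readings are assumed.

```python
import math

def slope_from_weights(true_weight, apparent_weight):
    """Slope angle of an incline from the resolution of weight components.

    A weighing machine lying flush on the incline reads only the component
    of the weight normal to its platform, W_apparent = W_true * cos(theta),
    so theta = arccos(W_apparent / W_true).
    """
    ratio = apparent_weight / true_weight
    if not 0.0 < ratio <= 1.0:
        raise ValueError("apparent weight must be positive and not exceed the true weight")
    return math.degrees(math.acos(ratio))

# Assumed example readings: a 10.000 kg standard weight reads 9.945 kg on the incline.
theta = slope_from_weights(10.000, 9.945)
print(f"slope angle ≈ {theta:.2f} degrees, gradient ≈ {math.tan(math.radians(theta)) * 100:.1f} %")
```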
Procedia PDF Downloads 96
918 Strategic Alliances and Creative Synergy within European Union: A Theoretical Perspective
Authors: Maha Tichetti, Barzi Redouane, Selim Kanat
Abstract:
In the European Union (EU), where economic, political, and cultural ties converge, strategic alliances play a pivotal role in shaping the collaborative landscape. This paper embarks on a journey into the EuroSphere, offering a comprehensive review that unravels the dynamics of these alliances within the European context. The focus is specifically directed towards understanding their profound impact on creative synergy and innovation among teams. In our analysis, we provide theoretical explanations for key terms such as "creative synergy" and "strategic alliances." We outline various types of competitive strategies, delve into the motivations prompting the formation of strategic alliances, and critically examine the success and failure factors in these kinds of collaboration. Additionally, we explore the goals achievable through strategic alliances, especially in the context of external growth. A central focus of this paper is how strategic alliances can significantly impact creative synergy within the European landscape. Through a theoretical lens, we explore the interplay between collaborative strategies and the enhancement of creative thinking within teams engaged in strategic alliances. The article goes beyond theoretical frameworks to present a tangible example of a strategic alliance emerging in the European market. This case study illuminates how such alliances have empowered European companies to enhance their competitive positions on the global stage while concurrently fostering creative synergy among their teams. This comprehensive review not only contributes to the theoretical understanding of strategic alliances and creative synergy but also offers practical insights for businesses navigating the collaborative landscape within the EuroSphere. As we unravel the complexities of these alliances, we uncover valuable lessons and opportunities for future research, providing a roadmap for those seeking to harness the full potential of strategic collaborations in the dynamic European context.
Keywords: European Union, strategic alliances, creative synergy, competitiveness
Procedia PDF Downloads 66
917 The Role of Transport Investment and Enhanced Railway Accessibility in Regional Efficiency Improvement in Saudi Arabia: Data Envelopment Analysis
Authors: Saleh Alotaibi, Mohammed Quddus, Craig Morton, Jobair Bin Alam
Abstract:
This paper explores the role of large-scale investment in transport sectors and the impact of increased railway accessibility on the efficiency of regional economic productivity in the Kingdom of Saudi Arabia (KSA). There are considerable differences among the KSA regions in terms of their levels of investment and productivity due to their geographical scale and location, which in turn greatly affect their relative efficiency. The study used a non-parametric linear programming technique - Data Envelopment Analysis (DEA) - to measure the regional efficiency change over time and determine the drivers of inefficiency and their scope of improvement. In addition, a window DEA analysis was carried out to compare the efficiency performance change for various time periods. The Malmquist index (MI) was also analyzed to identify the sources of productivity change between two subsequent years. The analysis involves spatial and temporal panel data collected from 1999 to 2018 for the 13 regions of the country. Outcomes reveal that transport investment and improved railway accessibility, in general, have significantly contributed to regional economic development. Moreover, the endowment of the new railway stations has spill-over effects. The window DEA analysis confirmed the dynamic improvement in the average regional efficiency over the study periods. The MI showed that technical efficiency change was the main source of regional productivity improvement. However, there is evidence of an investment allocation discrepancy among regions, which could limit the achievement of development goals in the long term. These relevant findings will assist the Saudi government in developing better strategic decisions for future transport investments and their allocation at the regional level.
Keywords: data envelopment analysis, transport investment, railway accessibility, efficiency
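For readers unfamiliar with DEA, the sketch below illustrates an input-oriented CCR envelopment model solved with a standard linear-programming routine; the two-input/one-output data are invented for illustration and are not the Saudi regional dataset.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of decision-making unit (DMU) `o`.

    X: (m, n) inputs, Y: (s, n) outputs for n DMUs. Solves
    min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0,
    with decision vector z = [theta, lam_1, ..., lam_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    A_in = np.c_[-X[:, [o]], X]                  # X @ lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]          # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Invented example: 2 inputs (investment, labour) and 1 output (regional product) for 4 regions.
X = np.array([[4.0, 7.0, 8.0, 4.0],
              [3.0, 3.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```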
Procedia PDF Downloads 149
916 Viscoelastic Properties of Sn-15%Pb Measured in an Oscillation Test
Authors: Gerardo Sanjuan Sanjuan, Ángel Enrique Chavéz Castellanos
Abstract:
Knowledge of the rheological behavior of partially solidified metal alloys is an important issue in the modeling and simulation of die filling in semi-solid processes. Many experiments, such as steady-state tests, step changes in shear rate, and shear stress ramps, have been carried out, showing that semi-solid alloys exhibit shear thinning, thixotropic behavior, and a yield stress. More advanced investigations give evidence that some viscoelastic features can also be observed. The viscoelastic properties of materials are determined by transient or dynamic methods; unfortunately, sparse information exists about oscillation experiments. The aim of the present work is to use small-amplitude oscillatory tests to determine properties such as G′ and G″. These properties provide information about the material's structure. For this purpose, we investigated a tin-lead alloy (Sn-15%Pb), which exhibits a microstructure similar to that of aluminum alloys and is the classic alloy for semi-solid thixotropic studies. The experiments were performed with a parallel-plate rheometer (AR-G2). Initially, the liquid alloy is cooled down to the semi-solid range, to a specific temperature that guarantees a constant solid fraction. Oscillation was performed within the linear viscoelastic regime with a strain sweep, so the loss modulus G″, the storage modulus G′, and the loss angle (δ) were monitored. In addition, a frequency sweep at a strain below the critical strain was performed to characterize the structure. This provides more information about the interactions among solid particles in a liquid matrix. After testing, the sample was removed, then cooled, sectioned, and examined metallographically. These experiments demonstrate that the viscoelasticity is sensitive to the solid fraction and is strongly influenced by the shape and size of the solid particles.
Keywords: rheology, semisolid alloys, thixotropic, viscoelasticity
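As a purely illustrative sketch of how G′, G″, and the loss angle are extracted from a small-amplitude oscillation, the following snippet projects a simulated stress signal onto the in-phase and out-of-phase components of the imposed strain; all numbers are assumptions, not measurements from the Sn-15%Pb tests.

```python
import numpy as np

def dynamic_moduli(t, strain, stress, omega):
    """Storage modulus G', loss modulus G'' and loss angle from oscillatory data.

    Assumes strain = gamma0 * sin(omega * t) and a steady-state stress response;
    Fourier projections give the in-phase and out-of-phase stress amplitudes.
    """
    span = t[-1] - t[0]
    gamma0 = 2.0 * np.trapz(strain * np.sin(omega * t), t) / span
    sigma_in = 2.0 * np.trapz(stress * np.sin(omega * t), t) / span
    sigma_out = 2.0 * np.trapz(stress * np.cos(omega * t), t) / span
    G_storage = sigma_in / gamma0
    G_loss = sigma_out / gamma0
    delta = np.degrees(np.arctan2(G_loss, G_storage))
    return G_storage, G_loss, delta

# Synthetic example: G' = 5 kPa, G'' = 2 kPa at omega = 10 rad/s, strain amplitude 0.5 %.
omega, gamma0 = 10.0, 0.005
t = np.linspace(0.0, 10 * 2 * np.pi / omega, 5000)   # ten full cycles
strain = gamma0 * np.sin(omega * t)
stress = 5000.0 * strain + 2000.0 * gamma0 * np.cos(omega * t)
print(dynamic_moduli(t, strain, stress, omega))       # ≈ (5000, 2000, 21.8 degrees)
```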
Procedia PDF Downloads 376
915 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the sustainable development goal, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications, as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a deep learning model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that of the machine learning model was sourced from the comparatively lower resolution Sentinel-2 10 m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model – 0.69-0.79. This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data from which key markers of poverty and slums – roofing and road quality – are discernible. It is important to note, however, that the human readers did not receive any training before the ratings, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall relating to limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship – eXplainable Artificial Intelligence – through a collaborative rather than a comparative framework.
Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
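The comparison in this abstract rests on rank correlations between welfare ratings and ground-truth wealth quintiles; the snippet below shows how such a coefficient is typically computed, using invented ratings rather than the Tanzanian DHS data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Invented data for 20 survey clusters: ground-truth wealth quintiles (1-5)
# and welfare scores assigned by a human reader and by a model.
truth = rng.integers(1, 6, size=20)
human_rating = truth + rng.normal(0.0, 2.0, size=20)   # noisier, hence a lower correlation
model_score = truth + rng.normal(0.0, 0.7, size=20)    # closer to the quintiles

rho_human, p_human = spearmanr(truth, human_rating)
rho_model, p_model = spearmanr(truth, model_score)
print(f"human reader: rho = {rho_human:.2f} (p = {p_human:.3f})")
print(f"model:        rho = {rho_model:.2f} (p = {p_model:.3f})")
```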
Procedia PDF Downloads 106
914 The Greek Revolution Through the Foreign Press. The Case of the Newspaper "The London Times" In the Period 1821-1828
Authors: Euripides Antoniades
Abstract:
In 1821, the Greek revolutionary movement, under the political influence that arose from the French Revolution and the corresponding movements in Italy, Germany, and America, demanded the liberation of the nation and the establishment of an independent national state. Published topics in the British press regarding the Greek Revolution focused on: a) the right of the Greeks to claim their freedom from Turkish domination in order to establish an independent state based on the principle of national autonomy, b) criticism of Turkish rule as illegal and of the power of the Ottoman Sultan as arbitrary, c) the recognition of the Greek identity and its distinction from the Turkish one, and d) the endorsement of the Greeks as the descendants of the ancient Greeks. The advantage of the newspaper as a medium is that it shares information and ideas and deals with issues in greater depth and detail than other media, such as radio or television. The London Times is a print publication that presents, in chronological or thematic order, the news, opinions, or announcements about the most important events that have occurred in a place during a specified period of time. This paper employs the rich archive of The London Times, quoting extracts from publications of that period to convey the British public perspective on the Greek Revolution from its beginning until the London Protocol of 1828. Furthermore, it analyses the publications of the British newspaper in terms of the number of references to the Greek Revolution, front-page and editorial references, as well as the size of the publications on the revolution during the period 1821-1828. A combination of qualitative and quantitative content analysis was applied. An attempt was made to record Greek Revolution references along with the usage of specific words and expressions that contribute to the representation of the historical events and their exposure to the reading public. Key findings of this research reveal that a) The London Times frequently carried passionate daily articles concerning the events in Greece, their length, and their context, b) British public opinion was influenced by this particular newspaper, and c) the newspaper published various news about the revolution, adopting the role of animator of the Greek struggle. For instance, war events and the battles in Wallachia and Moldavia, Hydra, Crete, Psara, Missolonghi, and the Peloponnese were presented not only to inform the readers but also to promote the essential need for freedom and the establishment of an independent Greek state. In fact, this type of news was the main substance of The London Times' coverage, establishing a positive image of the Greek Revolution and contributing to European diplomatic developments such as the standpoint of France, which did not wish to be detached from the conclusions regarding the English loans, and the death of Alexander I of Russia and his succession by the ambitious Nicholas. These factors brought about a change in the attitude of the British and the Russians, respectively, who assumed a positive approach towards Greece. The Great Powers maintained a neutral position in the Greek-Ottoman conflict while, at the same time, strengthening the Greek side by offering aid.
Keywords: Greece, revolution, newspaper, The London Times, London, Great Britain, mass media
Procedia PDF Downloads 90
913 Paradigmatic Approach University Management from the Perspective of Strategic Management: A Research in the Marmara Region in Turkey
Authors: Recep Yücel, Cihat Kartal, Mustafa Kara
Abstract:
On the basis of strategic management, a number of innovations are believed to be necessary in the postmodern management approach to the management of universities in our country. In this sense, some of these requirements are the integration of public and private universities, international integration, and R&D status, while the increasing young population will create a dynamic structure. According to the postmodern management approach, universities in our country, despite being autonomous, are governed by the classical approach; academically, they are expected to be solid, non-hierarchical, and creative. In fact, studies require a multidisciplinary academic environment in universities and close cooperation between formal and non-formal sub-units. Moreover, in terms of postmodern management approaches, meeting the requirements specified for solving the problems of an increasing number of universities in our country is considered to be more difficult. Therefore, considering the psychological impact of the university organizational structure on academic personnel, the study aims to propose an appropriate model of university organization. In this context, the study sought to answer the question of how innovation and international integration affect academic achievement within the classical organizational structure. Finally, the study finds that, due to the adoption of the classical organizational structure by universities, establishing and maintaining academic cooperation between universities at the international level is considered difficult. In addition, it was understood that this organizational structure blocks efforts towards academic motivation, development, and innovation. For these purposes, a study was conducted with qualitative research methods on the basis of the existing organization and management structure of the universities in the Marmara Region in Turkey. The data were analyzed using content analysis, and the assessment was based on the results obtained.
Keywords: university, strategic management, postmodern management approaches, multidisciplinary studies
Procedia PDF Downloads 395
912 Optimal Design of Tuned Inerter Damper-Based System for the Control of Wind-Induced Vibration in Tall Buildings through Cultural Algorithm
Authors: Luis Lara-Valencia, Mateo Ramirez-Acevedo, Daniel Caicedo, Jose Brito, Yosef Farbiarz
Abstract:
Controlling wind-induced vibrations as well as aerodynamic forces is an essential part of the structural design of tall buildings in order to guarantee the serviceability limit state of the structure. This paper presents a numerical investigation on the optimal design parameters of a Tuned Inerter Damper (TID) based system for the control of wind-induced vibration in tall buildings. The control system is based on the conventional TID, with the main difference that its location is changed from the ground level to the last two story levels of the structural system. The TID tuning procedure is based on an evolutionary cultural algorithm in which the optimum design variables, defined as the frequency and damping ratios, were searched according to the optimization criterion of minimizing the root mean square (RMS) displacement response at the nth story of the structure. A Monte Carlo simulation was used to represent the dynamic action of the wind in the time domain, in which a time series derived from the Davenport spectrum was reproduced using eleven harmonic functions with randomly chosen phase angles. The above-mentioned methodology was applied to a case study derived from a 37-story, 144 m tall prestressed concrete building in which the wind action governs over the seismic action. The results showed that the optimally tuned TID is effective in reducing the RMS displacement response by up to 25%, which demonstrates the feasibility of the system for the control of wind-induced vibrations in tall buildings.
Keywords: evolutionary cultural algorithm, Monte Carlo simulation, tuned inerter damper, wind-induced vibrations
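The wind loading described here is synthesized by superposing harmonics with amplitudes drawn from the Davenport spectrum and random phases. The sketch below illustrates that spectral-representation step with assumed parameters; the mean wind speed, drag coefficient, and frequency band are not taken from the paper.

```python
import numpy as np

def davenport_spectrum(f, v10=30.0, kappa=0.005):
    """One-sided Davenport spectrum of along-wind turbulence [(m/s)^2 / Hz].

    f: frequency in Hz, v10: mean wind speed at 10 m (m/s), kappa: surface drag coefficient.
    """
    x = 1200.0 * f / v10
    return 4.0 * kappa * v10**2 * x**2 / (f * (1.0 + x**2) ** (4.0 / 3.0))

def wind_fluctuation(t, n_harmonics=11, f_min=0.01, f_max=1.0, v10=30.0, seed=0):
    """Zero-mean wind-speed fluctuation as a sum of harmonics with random phases."""
    rng = np.random.default_rng(seed)
    freqs = np.linspace(f_min, f_max, n_harmonics)
    df = freqs[1] - freqs[0]
    amps = np.sqrt(2.0 * davenport_spectrum(freqs, v10) * df)
    phases = rng.uniform(0.0, 2.0 * np.pi, n_harmonics)
    return sum(a * np.cos(2.0 * np.pi * f * t + p) for a, f, p in zip(amps, freqs, phases))

t = np.linspace(0.0, 600.0, 6001)      # 10 minutes sampled at 10 Hz
u = wind_fluctuation(t)                 # fluctuating component, added to the mean speed in practice
print(f"std of simulated fluctuation ≈ {u.std():.2f} m/s")
```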
Procedia PDF Downloads 135
911 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry
Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard
Abstract:
Wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, bad or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns and to device defectivity. This issue is more and more important with transistor size shrinkage and concerns mainly high aspect ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While the low-frequency acoustic reflectometry principle is a well-known method for non-destructive testing applications, we have recently shown that it is also well suited for nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a confrontation of both experimental and modeling results. The proposed acoustic method is based on the evaluation of the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers have been fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The studied DTI structures, manufactured on the wafer frontside, are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the Si wafer frontside. In that case, the acoustic signal reflection occurs at the bottom and at the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed. The model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo obtained through the reflection at the Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water/ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. In the untreated surface case, acoustic reflection coefficient values with water show that liquid imbibition is partial. In the treated surface case, the acoustic reflection is total with water (no liquid in the DTI). The impalement of the liquid occurs for a specific surface tension, but it is still partial for pure ethanol. The DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. The sensitivity of this high-frequency acoustic method, coupled with an FDTD propagative model, thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low surface tension liquids are then detectable with this method.
Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor
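The contrast exploited here is the change in acoustic reflection at the silicon/liquid interface when liquid replaces air at the trench bottom. The sketch below computes normal-incidence reflection coefficients from acoustic impedances, using approximate textbook material values that are assumptions rather than figures from the paper.

```python
def reflection_coefficient(z1, z2):
    """Normal-incidence pressure reflection coefficient at an interface from medium 1 to medium 2."""
    return (z2 - z1) / (z2 + z1)

# Approximate longitudinal acoustic impedances Z = rho * c in MRayl (assumed textbook values).
Z_SI = 2.33e3 * 8433.0 / 1e6      # silicon  ≈ 19.6 MRayl
Z_AIR = 1.2 * 343.0 / 1e6         # air      ≈ 0.0004 MRayl
Z_WATER = 1.0e3 * 1480.0 / 1e6    # water    ≈ 1.48 MRayl
Z_ETOH = 789.0 * 1144.0 / 1e6     # ethanol  ≈ 0.90 MRayl

for name, z in [("air (dry trench)", Z_AIR), ("water", Z_WATER), ("ethanol", Z_ETOH)]:
    r = reflection_coefficient(Z_SI, z)
    print(f"Si -> {name:17s}: |R| = {abs(r):.3f}")
# A drop of |R| below ~1 signals that liquid has reached the bottom of the trench.
```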
Procedia PDF Downloads 327
910 Potentiometric Determination of Moxifloxacin in Some Pharmaceutical Formulation Using PVC Membrane Sensors
Authors: M. M. Hefnawy, A. M. A. Homoda, M. A. Abounassif, A. M. Alanazia, A. Al-Majed, Gamal A. E. Mostafa
Abstract:
PVC membrane sensors based on different approaches, e.g., ion-pair, ionophore, and Schiff-base, have been used as sensing membranes. Analytical applications of membrane sensors for the direct measurement of a variety of different ions in complex biological and environmental samples are reported. The most important element of such a PVC membrane sensor is the active sensing material. Potentiometric sensors have some outstanding advantages, including simple design and operation, a wide linear dynamic range, a relatively fast response time, and selectivity. The analytical applications of these techniques to pharmaceutical compounds in dosage forms are also discussed. The construction and electrochemical response characteristics of poly(vinyl chloride) membrane sensors for moxifloxacin HCl (MOX) are described. The sensing membranes incorporate ion-association complexes of the moxifloxacin cation and sodium tetraphenyl borate (NaTPB) (sensor 1), phosphomolybdic acid (PMA) (sensor 2), or phosphotungstic acid (PTA) (sensor 3) as electroactive materials. The sensors display a fast, stable, and near-Nernstian response over a relatively wide moxifloxacin concentration range (1×10⁻²–4.0×10⁻⁶, 1×10⁻²–5.0×10⁻⁶, and 1×10⁻²–5.0×10⁻⁶ M), with detection limits of 3×10⁻⁶, 4×10⁻⁶, and 4.0×10⁻⁶ M for sensors 1, 2, and 3, respectively, over a pH range of 6.0-9.0. The sensors show good discrimination of moxifloxacin from several inorganic and organic compounds. The direct determination of 400 µg/ml of moxifloxacin shows an average recovery of 98.5, 99.1, and 98.6% and a mean relative standard deviation of 1.8, 1.6, and 1.8% for sensors 1, 2, and 3, respectively. The proposed sensors have been applied to the direct determination of moxifloxacin in some pharmaceutical preparations. The results obtained by the determination of moxifloxacin in tablets using the proposed sensors compare favorably with those obtained using the US Pharmacopeia method. The sensors have been used as indicator electrodes for the potentiometric titration of moxifloxacin.
Keywords: potentiometry, PVC, membrane sensors, ion-pair, ionophore, Schiff-base, moxifloxacin HCl, sodium tetraphenyl borate, phosphomolybdic acid, phosphotungstic acid
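A near-Nernstian response means the electrode potential varies linearly with the logarithm of concentration, with a slope close to 59.2/z mV per decade at 25 °C. The snippet below fits such a calibration slope by least squares on invented potential readings (not data from these sensors).

```python
import numpy as np

# Invented calibration data: moxifloxacin concentrations (M) and measured potentials (mV).
conc = np.array([1e-6, 1e-5, 1e-4, 1e-3, 1e-2])
emf = np.array([112.0, 170.5, 229.8, 288.7, 347.9])

slope, intercept = np.polyfit(np.log10(conc), emf, 1)
print(f"calibration slope = {slope:.1f} mV/decade (ideal Nernstian for a monovalent cation: 59.2)")

def concentration_from_emf(e_mv):
    """Invert the linear calibration E = slope * log10(C) + intercept."""
    return 10 ** ((e_mv - intercept) / slope)

print(f"sample reading of 300 mV -> C ≈ {concentration_from_emf(300.0):.2e} M")
```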
Procedia PDF Downloads 439
909 Democratic Action as Insurgency: On Claude Lefort's Concept of the Political Regime
Authors: Lorenzo Buti
Abstract:
This paper investigates the nature of democratic action through a critical reading of Claude Lefort’s notion of the democratic ‘regime’. Lefort provides one of the most innovative accounts of the essential features of a democratic regime. According to him, democracy is a political regime that acknowledges the indeterminacy of a society and stages it as a contestation between competing political actors. As such, democracy provides the symbolic markers of society’s openness towards the future. However, despite their democratic features, the recent decades in late capitalist societies attest to a sense of the future becoming fixed and predetermined. This suggests that Lefort’s conception of democracy harbours a misunderstanding of the character and experience of democratic action. This paper examines this underlying tension in Lefort’s work. It claims that Lefort underestimates how a democratic regime, next to its symbolic function, also takes a materially constituted form with its particular dynamics of power relations. Lefort’s systematic dismissal of this material dimension for democratic action can lead to the contemporary paradoxical situation where democracy’s symbolic markers are upheld (free elections, public debate, dynamic between government and opposition in parliament, …) but the room for political decision-making is constrained due to a myriad of material constraints (e.g., market pressures, institutional inertias). The paper draws out the implications for the notion of democratic action. Contra Lefort, it argues that democratic action necessarily targets the material conditions that impede the capacity for decision-making on the basis of equality and liberty. This analysis shapes our understanding of democratic action in two ways. First, democratic action takes an asymmetrical, insurgent form, as a contestation of material power relations from below. Second, it reveals an ambivalent position vis-à-vis the political regime: democratic action is symbolically made possible by the democratic dispositive, but it contests the constituted form that the democratic regime takes.
Keywords: Claude Lefort, democratic action, material constitution, political regime
Procedia PDF Downloads 141
908 Data Mining Spatial: Unsupervised Classification of Geographic Data
Authors: Chahrazed Zouaoui
Abstract:
In recent years, the volume of geospatial information has been increasing due to the evolution of information and communication technologies; this information is often presented by geographic information systems (GIS) and stored in spatial databases (BDS). Classical data mining has revealed a weakness in knowledge extraction from these enormous amounts of data due to the particularity of spatial entities, which are characterized by the interdependence between them (first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data; among the methods of this process, we distinguish the monothematic and the thematic. Geo-clustering is one of the main tasks of spatial data mining and belongs to the monothematic methods. It groups similar geospatial entities into the same class and assigns more dissimilar ones to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geospatial data. Two approaches to geo-clustering exist: the dynamic processing of data, which involves applying algorithms designed for the direct treatment of spatial data, and the approach based on spatial data pre-processing, which consists of applying classic clustering algorithms to pre-processed data (with spatial relationships integrated). This approach (based on pre-treatment) is quite complex in many cases, so the search for approximate solutions involves the use of approximation algorithms, including the algorithms we are interested in: dedicated approaches (partitioning and density-based clustering methods) and the bees approach (a biomimetic approach). Our study proposes a design for this problem that uses different algorithms for automatically detecting geospatial neighborhoods in order to implement geo-clustering by pre-treatment, and applies the bees algorithm to this problem for the first time in the geospatial field.
Keywords: mining, GIS, geo-clustering, neighborhood
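Density-based clustering is one of the dedicated approaches mentioned above. The following sketch applies scikit-learn's DBSCAN to a handful of invented longitude/latitude points, purely to illustrate the kind of geo-clustering step the abstract describes.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Invented point coordinates (longitude, latitude) forming two loose groups plus one outlier.
points = np.array([
    [2.350, 48.857], [2.352, 48.858], [2.349, 48.856], [2.351, 48.859],
    [2.295, 48.874], [2.296, 48.873], [2.294, 48.875],
    [2.450, 48.700],
])

# eps is a neighborhood radius in degrees here; for real data a projected CRS or the
# haversine metric on radians would be more appropriate.
labels = DBSCAN(eps=0.005, min_samples=3).fit_predict(points)
for pt, lab in zip(points, labels):
    tag = f"cluster {lab}" if lab >= 0 else "noise"
    print(f"{pt} -> {tag}")
```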
Procedia PDF Downloads 375
907 A Methodology for Seismic Performance Enhancement of RC Structures Equipped with Friction Energy Dissipation Devices
Authors: Neda Nabid
Abstract:
Friction-based supplemental devices have been extensively used for seismic protection and strengthening of structures; however, the conventional use of these dampers may not necessarily lead to an efficient structural performance. Conventionally designed friction dampers follow a uniform height-wise distribution pattern of slip load values for practical simplicity. This can localize structural damage in certain story levels, while the other stories accommodate a negligible amount of relative displacement demand. A practical performance-based optimization methodology is developed to tackle structural damage localization in RC frame buildings with friction energy dissipation devices under severe earthquakes. The proposed methodology is based on the concept of the uniform damage distribution theory. According to this theory, the slip load values of the friction dampers are redistributed and shifted from stories with lower relative displacement demand to stories with higher inter-story drifts, in order to narrow the discrepancy between the structural damage levels in different stories. In this study, the efficacy of the proposed design methodology is evaluated through the seismic performance of five different low- to high-rise RC frames equipped with friction wall dampers under six real spectrum-compatible design earthquakes. The results indicate that, compared to the conventional design, using the suggested methodology to design friction wall systems can lead to, on average, up to a 40% reduction in maximum inter-story drift and a considerably more uniform height-wise distribution of relative displacement demands under the design earthquakes.
Keywords: friction damper, nonlinear dynamic analysis, RC structures, seismic performance, structural damage
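The uniform-damage idea can be pictured as an iterative update in which each story's slip load is nudged in proportion to how far its drift departs from the average. The sketch below is a generic illustration of such a redistribution loop; the update exponent, the toy drift model, and the iteration count are assumptions, not the paper's algorithm.

```python
import numpy as np

def redistribute_slip_loads(drifts, slip_loads, alpha=0.5):
    """One uniform-damage update: shift slip load toward stories with larger drifts.

    Each story's slip load is scaled by (drift / mean drift)**alpha and the total
    slip load is kept constant; alpha is an assumed convergence parameter.
    """
    drifts = np.asarray(drifts, dtype=float)
    slip_loads = np.asarray(slip_loads, dtype=float)
    scaled = slip_loads * (drifts / drifts.mean()) ** alpha
    return scaled * slip_loads.sum() / scaled.sum()

def toy_drift_model(slip_loads, demands):
    """Stand-in for a nonlinear time-history analysis: a larger slip load reduces the drift."""
    return demands / (1.0 + 0.02 * np.asarray(slip_loads))

demands = np.array([40.0, 55.0, 70.0, 60.0, 35.0])     # assumed story demands (arbitrary units)
slip = np.full(5, 100.0)                               # uniform initial slip loads
for _ in range(20):
    slip = redistribute_slip_loads(toy_drift_model(slip, demands), slip)
drifts = toy_drift_model(slip, demands)
print("slip loads:", np.round(slip, 1))
print("coefficient of variation of drifts:", round(float(drifts.std() / drifts.mean()), 3))
```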
Procedia PDF Downloads 226
906 A Novel Method for Face Detection
Authors: H. Abas Nejad, A. R. Teymoori
Abstract:
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage and thereby excluding those frames from emotion classification would save computational power. In this work, we propose a light-weight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on a textural statistical model. Robustness to dynamic shifts of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
Keywords: neutral vs. emotion classification, Constrained Local Model, procrustes analysis, Local Binary Pattern Histogram, statistical model
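One of the textural descriptors listed in the keywords is the Local Binary Pattern histogram. The snippet below shows a typical way to compute and compare such histograms with scikit-image on synthetic patches; it is a generic illustration, not the authors' pipeline.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, n_points=8, radius=1):
    """Uniform LBP histogram of a grayscale patch, normalized to sum to 1."""
    lbp = local_binary_pattern(patch, n_points, radius, method="uniform")
    n_bins = n_points + 2
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms (smaller means more similar texture)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(0)
reference = (np.add.outer(np.arange(32), np.arange(32)) * 4 % 256).astype(np.uint8)  # smooth gradient patch
similar = np.clip(reference.astype(int) + rng.integers(-3, 4, (32, 32)), 0, 255).astype(np.uint8)
unrelated = rng.integers(0, 256, (32, 32)).astype(np.uint8)                          # pure noise patch

h_ref = lbp_histogram(reference)
print("distance to slightly perturbed patch:", round(chi_square_distance(h_ref, lbp_histogram(similar)), 4))
print("distance to unrelated noise patch:   ", round(chi_square_distance(h_ref, lbp_histogram(unrelated)), 4))
```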
Procedia PDF Downloads 338
905 The Crossroads of Corruption and Terrorism in the Global South
Authors: Stephen M. Magu
Abstract:
The 9/11 and Christmas bombing attacks in the United States are mostly associated with the inability of intelligence agencies to connect dots based on intelligence that was already available. The 1998, 2002, 2013, and several 2014 terrorist attacks in Kenya, on the other hand, are probably driven by a completely different dynamic: the invisible hand of corruption. The World Bank and Transparency International annually compute the Worldwide Governance Indicators and the Corruption Perceptions Index, respectively. What perhaps is not adequately captured in the corruption metrics is the impact of corruption on terrorism. The World Bank data includes variables such as the control of corruption, (estimates of) government effectiveness, political stability and absence of violence/terrorism, regulatory quality, rule of law, and voice and accountability. TI's CPI does not include measures related to terrorism, but it is plausible that there is an expectation of some terrorism impact arising from corruption. This paper, by examining the incidence, frequency, and total number of terrorist attacks that have occurred especially since 1990, and further examining the specific cases of Kenya and Nigeria, argues that in addition to having major effects on governance, corruption has an even more frightening impact: that of facilitating and/or violating security mechanisms to the extent that foreign nationals can easily obtain identification that enables them to perpetrate major attacks targeting powerful countries' interests in countries with weak corruption-fighting mechanisms. The paper aims to model interactions that demonstrate the cost/benefit analysis and agents' rational calculations as being non-rational calculations, given the ultimate impact. It argues that the eradication of corruption is not just a matter of a better business environment, but that it is implicit in national security, and that for anti-corruption crusaders, this is an argument more potent than the economic cost / cost of doing business argument.
Keywords: corruption, global south, identification, passports, terrorism
Procedia PDF Downloads 422
904 The Effect of Carbon Nanotubes in Copolyamide Nonwovens on the Properties of CFRP Laminates
Authors: Kamil Dydek, Anna Boczkowska, Paulina Latko-Duralek, Rafal Kozera, Michal Salacinski
Abstract:
In recent years, there has been increasing interest in Carbon Fibre Reinforced Polymers (CFRP) in many industries, such as the aviation, automotive, and military industries. This is because of the excellent properties of CFRP, which are characterized by very high strength and stiffness in relation to their mass, low density (almost half that of aluminum and more than five times lower than that of steel), and corrosion resistance. However, they do not have sufficient electrical conductivity, which is required in some applications. Therefore, work is underway to improve their electrical conductivity, for example, by incorporating carbon nanotubes (CNTs) into the CFRP structure. CNTs possess excellent properties, such as high electrical conductivity, high aspect ratio, high Young's modulus, and high tensile strength. An idea developed by our team is a modification of CFRP by the use of thermoplastic nonwovens containing CNTs. Nanocomposite fibers were made from three different masterbatches differing in the content of multi-wall carbon nanotubes, and then nonwovens that differed in areal weight were produced using a thermo-press. The out-of-autoclave method was used to fabricate the laminates from a commercial carbon-epoxy prepreg dedicated to aviation applications: one laminate without the nonwovens (reference) and five containing nonwovens placed between each prepreg layer. The volume electrical conductivity of the manufactured laminates was measured in three directions. In order to investigate the adhesion between carbon fibers and nonwovens, the microstructure of the produced laminates was observed. The mechanical properties of the CFRP composites were measured in a short-beam shear test. In addition, the influence of thermoplastic nonwovens on the thermo-mechanical properties of the laminates was analyzed by Dynamic Mechanical Analysis. The studies were carried out within grant no. DOB-1-3/1/PS/2014, financed by the National Centre for Research and Development in Poland.
Keywords: CFRP, thermoplastic nonwovens, carbon nanotubes, electrical conductivity
Procedia PDF Downloads 134
903 The Early Pleistocene Mustelidae and Hyaena Record of the Yuanmou Basin
Authors: Arya Farjand
Abstract:
This study delves into the Early Pleistocene fauna of the Yuanmou Basin, highlighting two significant findings. The first is the discovery of exceptionally well-preserved canid coprolites, which provide a rare glimpse into the diet and ecological niche of these ancient carnivores. The analysis of these coprolites has revealed a diet rich in diverse prey species, suggesting a complex food web and a dynamic ecological environment. This discovery not only sheds light on the dietary habits of these canids but also offers broader insights into the region's ecological dynamics during the Early Pleistocene. Additionally, the preservation of these coprolites allows for detailed study of the carnivore's role in the ecosystem, including their interactions with other species and the overall health of the environment. The second major finding is the identification of a mustelid species, Eirictis yuanmouensis, from the same fossil horizon as the coprolites. This discovery is crucial for understanding the diversity and evolution of Mustelidae in the region. The detailed analysis of cranial and dental morphology of Eirictis yuanmouensis indicates unique adaptations that suggest a specialized ecological niche. This finding, in conjunction with the coprolite analysis, provides a comprehensive view of the ecological niches occupied by both mustelids and hyenas, enhancing our understanding of their adaptations and interactions within this paleoenvironment. The study's significance is further amplified by the analysis of pollen data from the same horizon, which indicates a paleoenvironment characterized by rapid climatic changes and a dominant semiarid climate. This combination of faunal and floral data paints a detailed picture of the Early Pleistocene environment in the Yuanmou Basin, offering valuable insights into the interactions between different carnivore species and their adaptation strategies in response to changing environmental conditions.
Keywords: Yuanmou Basin, coprolite, Hyaena, eirictis yuanmouensis, early pleistocene
Procedia PDF Downloads 33
902 Impact of Geomagnetic Variation over Sub-Auroral Ionospheric Region during High Solar Activity Year 2014
Authors: Arun Kumar Singh, Rupesh M. Das, Shailendra Saini
Abstract:
The present work is an attempt to evaluate the sub-auroral ionospheric behavior under changing space weather conditions, especially during the high solar activity year 2014. In view of this, GPS TEC data along with ionosonde data over the Indian permanent scientific base 'Maitri', Antarctica (70°46′00″ S, 11°43′56″ E) have been utilized. The results suggest that the nature of the ionospheric responses to geomagnetic disturbances mainly depends upon the status of high-latitude electrodynamic processes along with the season of occurrence. Fortunately, in this study, both negative and positive ionospheric responses to geomagnetic disturbances have been observed in a single year, but in different seasons. The study reveals that the combination of equatorward plasma transportation and ionospheric compositional changes causes a negative ionospheric impact during the summer and equinox seasons. However, the combination of poleward contraction of the oval region and particle precipitation may lead to a positive ionospheric response during the winter season. In addition, new ionosonde-based experimental evidence also provided clear evidence of particle precipitation reaching down to low ionospheric altitudes, i.e., to the E-layer, through the sudden and strong appearance of an E-layer at 100 km altitude. The sudden appearance of the E-layer, along with a decrease in F-layer electron density, suggested the dominance of NO⁺ over O⁺ in the considered region under geomagnetically disturbed conditions. The strengthening of the E-layer is responsible for the modification of the auroral electrojet and the field-aligned current system. The present study provides good scientific insight into the sub-auroral ionospheric response to changing space weather conditions.
Keywords: high latitude ionosphere, space weather, geomagnetic storms, sub-storm
Procedia PDF Downloads 169
901 Overcoming the Challenges of Subjective Truths in the Post-Truth Age Through a Critical-Ethical English Pedagogy
Authors: Farah Vierra
Abstract:
Following the 2016 US presidential election and the advancement of the Brexit referendum, the concept of "post-truth", defined by the Oxford Dictionary as "relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief", came into prominent use in public, political, and educational circles. What this essentially entails is that in this age, individuals are increasingly confronted with subjective perpetuations of truth in their discourse spheres that are informed by beliefs and opinions as opposed to any form of coherence to the reality of those whom these truth claims concern. In principle, a subjective delineation of truth is progressive and liberating – especially considering its potential in providing marginalised groups in the diverse communities of our globalised world with the voice to articulate truths that are representative of themselves and their experiences. However, any form of human flourishing that seems to be promised here collapses, as the tenets of subjective truths initially in place to liberate have been distorted through post-truth to allow individuals to purport selective and individualistic truth claims that further oppress and silence certain groups within society without due accountability. Evidence of this is prevalent in the conception of terms such as "alternative facts" and "fake news" that we observe individuals declare when their problematic truth claims are questioned. Considering the pervasiveness of post-truth and the ethical issues that accompany it, educators and scholars alike have increasingly noted the need to adapt educational practices and pedagogies to account for the diminishing objectivity of truth in the twenty-first century, especially because students, as digital natives, find themselves in the firing line of post-truth, engulfed in digital societies that proliferate post-truth through the surge of truth claims allowed in various media sites. In an attempt to equip students with the vital skills to navigate the post-truth age and oppose its proliferation of social injustices, English educators find themselves having to devise instructional strategies that not only teach students the ways they can critically and ethically scrutinise truth claims but also teach them to mediate the subjectivity of truth in a manner that does not undermine the voices of diverse communities. In hopes of providing educators with the roadmap to do so, this paper will first examine the challenges that confront students as a result of post-truth. Following this, the paper will elucidate the role English education can play in helping students overcome the complex ramifications of post-truth. Scholars have consistently touted the affordances of literary texts in providing students with imagined spaces to explore societal issues through a critical discernment of language and an ethical engagement with its narrative developments. Therefore, this paper will explain and demonstrate how literary texts, when used alongside a critical-ethical post-truth pedagogy that equips students with interpretive strategies informed by literary traditions such as literary and ethical criticism, can be effective in helping students develop the pertinent skills to comprehensively examine truth claims and overcome the challenges of the post-truth age.
Keywords: post-truth, pedagogy, ethics, English, education
Procedia PDF Downloads 71
900 Simulation of Bird Strike on Airplane Wings by Using SPH Methodology
Authors: Tuğçe Kiper Elibol, İbrahim Uslan, Mehmet Ali Guler, Murat Buyuk, Uğur Yolum
Abstract:
According to the FAA report, 142,603 bird strikes were reported over a period of 24 years, between 1990 and 2013. Bird strikes on aerospace structures not only threaten flight safety but also cause financial loss and put lives in danger. The statistics show that most bird strikes occur on the nose and the leading edge of the wings. Also, a substantial number of bird strikes are absorbed by the jet engines and cause damage to the blades and the engine body. Crash-proof designs are required to overcome the possibility of catastrophic failure of the airplane. Using computational methods for bird strike analysis during the product development phase has considerable importance in terms of cost saving. Clearly, using simulation techniques to reduce the number of reference tests can dramatically affect the total cost of an aircraft, since for bird strike, full-scale tests are often considered. Therefore, the development of validated numerical models is required that can replace preliminary tests and accelerate the design cycle. In this study, to verify the simulation parameters for a bird strike analysis, several different numerical options are studied for an impact case against a primitive structure. Then, a representative bird model is generated with the verified parameters and collided against the leading edge of a training aircraft wing, where each structural member of the wing was explicitly modeled. A nonlinear explicit dynamics finite element code, LS-DYNA, was used for the bird impact simulations. SPH methodology was used to model the behavior of the bird. The dynamic behavior of the wing superstructure was observed and will be used for further design optimization purposes.
Keywords: bird impact, bird strike, finite element modeling, smoothed particle hydrodynamics
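In SPH, field quantities are interpolated over neighboring particles with a smoothing kernel. The snippet below implements the widely used cubic-spline kernel in 3D and a summation density estimate as a generic illustration of that building block; the kernel choice, smoothing length, and material density are assumptions, not details reported for the LS-DYNA model.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """3D cubic-spline SPH smoothing kernel W(r, h); r is the inter-particle distance."""
    sigma = 1.0 / (np.pi * h**3)          # 3D normalization constant
    q = np.asarray(r, dtype=float) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Summation density estimate rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    positions = np.asarray(positions, dtype=float)
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# Toy example: a small cube of particles standing in for a fragment of the bird model.
grid = np.linspace(0.0, 0.04, 5)                       # 5 x 5 x 5 particles over 4 cm
pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
m = np.full(len(pos), 950.0 * 0.01**3)                 # assumed gelatine-like density of ~950 kg/m^3
rho = sph_density(pos, m, h=0.012)
print(f"estimated density at the cube centre ≈ {rho[len(rho) // 2]:.0f} kg/m^3")
```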
Procedia PDF Downloads 327
899 Design of a Surveillance Drone with Computer Aided Durability
Authors: Maram Shahad Dana Anfal
Abstract:
This research paper presents the design of a surveillance drone with computer-aided durability and model analyses, providing a cost-effective and efficient solution for various applications. The quadcopter's design is based on a lightweight and strong structure made of materials such as aluminum and titanium, which gives the quadcopter a durable frame. The structure of this product and the computer-aided durability system are both designed to reduce the need for frequent repairs or replacements, which will save time and money in the long run. Moreover, the study discusses the drone's ability to track, investigate, and deliver objects more quickly than traditional methods, which makes it a highly efficient and cost-effective technology. In this paper, a comprehensive analysis of the quadcopter's operating dynamics and limitations is presented. In both simulation and experimental data, the computer-aided durability system and the drone's design demonstrate their effectiveness, highlighting the potential for a variety of applications, such as search and rescue missions, infrastructure monitoring, and agricultural operations. Also, the findings provide insights into possible areas for improvement in the design and operation of the drone. Ultimately, this paper presents a reliable and cost-effective solution for surveillance applications by designing a drone with computer-aided durability and modeling. With its potential to save time and money, increase reliability, and enhance safety, it is a promising technology for the future of surveillance drones. The operating dynamic equations have been evaluated successfully for different flight conditions of the quadcopter. CAE modeling techniques have also been applied for modal risk assessment at operating conditions. Stress analyses have been performed under the loadings of the worst-case combined-motion flight conditions.
Keywords: drone, material, solidwork, hypermesh
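As a generic illustration of the kind of operating-dynamics relations evaluated for a quadcopter, the sketch below maps four rotor speeds to total thrust and body torques for an X-configuration frame; the geometry and coefficient values are assumptions, not parameters of the drone in this study.

```python
import numpy as np

def rotor_speeds_to_wrench(omega, k_f=3.0e-6, k_m=1.0e-7, arm=0.20):
    """Total thrust [N] and body torques [N*m] from rotor speeds [rad/s].

    X-configuration quadcopter: thrust_i = k_f * omega_i^2, drag torque_i = k_m * omega_i^2.
    Rotors are numbered front-left, front-right, rear-right, rear-left; spin directions
    alternate so that the yaw (drag) torques can cancel at hover.
    """
    f = k_f * np.asarray(omega, dtype=float) ** 2
    thrust = f.sum()
    l = arm / np.sqrt(2.0)                      # moment arm about each body axis
    roll = l * (f[0] - f[1] - f[2] + f[3])      # left rotors vs right rotors
    pitch = l * (f[0] + f[1] - f[2] - f[3])     # front rotors vs rear rotors
    yaw = k_m / k_f * (-f[0] + f[1] - f[2] + f[3])
    return thrust, np.array([roll, pitch, yaw])

# Hover check for an assumed 1.6 kg airframe: each rotor carries a quarter of the weight.
mass, g = 1.6, 9.81
omega_hover = np.sqrt(mass * g / (4 * 3.0e-6))
thrust, torques = rotor_speeds_to_wrench([omega_hover] * 4)
print(f"hover rotor speed ≈ {omega_hover:.0f} rad/s, total thrust ≈ {thrust:.2f} N, torques = {torques}")
```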
Procedia PDF Downloads 144
898 Studies on Climatic and Soil Site Suitability of Major Grapes-Growing Soils of Eastern and Southern Dry Zones of Karnataka
Authors: Harsha B. R., Anil Kumar K. S.
Abstract:
Climate and soils are the two most dynamic entities among the factors affecting grape growth and productivity. Studying the climate prevailing over the years in a region provides sufficient information on the management practices to be carried out in vineyards. Evaluating the suitability of vineyard soils under different climatic conditions serves as the yardstick for analysing the performance of grapevines. This study was formulated to analyse the climate and evaluate the site suitability of soils in the vineyards of southern Karnataka, which has registered its superiority in quality wine production. Ten soil profiles were excavated for the suitability evaluation of soils, and six taluks were studied for climatic analysis. In almost all the regions studied, recharge starts at the end of May or in June, peaking in either September or October. Soils start drying from the middle of December in the taluks studied. Bangalore North (Rajanukunte) soils were highly suited for grape cultivation, with no or slight limitations. Bangalore North (GKVK Farm) was moderately suited, with slight to moderate limitations of slope and available nitrogen content. Moderate suitability was observed in the rest of the profiles studied in the Eastern dry zone, with slight to moderate limitations of either organic carbon or available nitrogen, or both. Magadi (Southern dry zone) soils were moderately suitable, with slight to moderate limitations of gravelliness, available nitrogen, organic carbon, and exchangeable sodium percentage. Sustainable performance of vineyards in terms of yield can be achieved in these taluks by managing the constraints existing in the soils.
Keywords: climatic analysis, dry zone, water recharge, growing period, suitability, sustainability
Procedia PDF Downloads 124
897 Numerical Investigation of Turbulent Inflow Strategy in Wind Energy Applications
Authors: Arijit Saha, Hassan Kassem, Leo Hoening
Abstract:
Ongoing climate change demands the increasing use of renewable energies. Wind energy plays an important role in this context since it can be applied almost everywhere in the world. To reduce the costs of wind turbines and to make them more competitive, simulations are very important, since experiments are often too costly if possible at all. A wind turbine in open terrain experiences turbulence generated by the atmosphere, so it was of central interest in this research to reproduce that turbulence in the computational simulation domain through inlet turbulence generation methods such as the precursor cyclic method and the Kaimal Spectrum Exponential Coherence (KSEC) method. To validate computational fluid dynamics simulations of wind turbines against experimental data, it is crucial to set up the conditions in the simulation as close to reality as possible. The present work therefore investigates the turbulent inflow strategy and boundary conditions of KSEC and provides a comparative analysis alongside the precursor cyclic method for Large Eddy Simulation in the context of wind energy applications. For the generation of the turbulent box through the KSEC method, constrained data were first collected from an auxiliary channel flow and then processed with the open-source tool PyconTurb, whereas for the precursor cyclic method, the data from the auxiliary channel alone were sufficient. The functionality of these methods was studied through statistical properties such as variance and turbulence intensity at different bulk Reynolds numbers, and a conclusion was drawn on the feasibility of the KSEC method. Furthermore, it was found necessary to verify the obtained data against a DNS case setup before applying it to real-field CFD simulations.Keywords: Inlet Turbulence Generation, CFD, precursor cyclic, KSEC, large Eddy simulation, PyconTurb
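As a hedged sketch of the inflow statistics this abstract refers to (variance and turbulence intensity), the quantities can be computed from a streamwise velocity record as below; the synthetic time series and mean wind speed are placeholders, and this is not the authors' PyconTurb/KSEC workflow:

```python
import numpy as np

# Basic inflow-turbulence statistics from a synthetic streamwise velocity record.
rng = np.random.default_rng(0)
u_mean_target = 8.0                                       # assumed mean wind speed [m/s]
u = u_mean_target + 0.8 * rng.standard_normal(10_000)     # placeholder time series

u_mean = u.mean()
u_var = u.var(ddof=1)                    # variance of streamwise fluctuations
ti = np.sqrt(u_var) / u_mean             # turbulence intensity sigma_u / U

print(f"mean = {u_mean:.2f} m/s, variance = {u_var:.3f} m^2/s^2, TI = {ti:.3%}")
```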
Procedia PDF Downloads 96896 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters
Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev
Abstract:
Humanity faces different social disasters more and more often, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events that are occurring or may occur and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet are developed. Information in Romanian is of special interest to us. To obtain the mentioned tools, we follow several steps, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters, amounting to more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the Petri net formalism has been used. We deal with the problem of inhabitants' evacuation in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique will be used to determine dynamic properties of the modelled systems. To perform a case study of the properties of the evacuation system extended with time, the PIPE analysis modules Generalized Stochastic Petri Nets (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis have been used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters
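As a hedged illustration of the Petri net reachability analysis mentioned above (a toy two-room evacuation net, not the authors' PIPE model), markings and transition firings can be enumerated like this:

```python
# Toy Petri net: two rooms and an exit; each transition moves one person (token).
places = ["room_a", "room_b", "exit"]
# transitions as (consume, produce) dicts mapping place -> token count
transitions = {
    "a_to_b":    ({"room_a": 1}, {"room_b": 1}),
    "b_to_exit": ({"room_b": 1}, {"exit": 1}),
}

def enabled(marking, consume):
    return all(marking[p] >= n for p, n in consume.items())

def fire(marking, consume, produce):
    new = dict(marking)
    for p, n in consume.items():
        new[p] -= n
    for p, n in produce.items():
        new[p] += n
    return new

def reachability(initial):
    """Enumerate all reachable markings by exhaustive search."""
    seen = {tuple(initial[p] for p in places)}
    frontier = [initial]
    while frontier:
        m = frontier.pop()
        for consume, produce in transitions.values():
            if enabled(m, consume):
                nxt = fire(m, consume, produce)
                key = tuple(nxt[p] for p in places)
                if key not in seen:
                    seen.add(key)
                    frontier.append(nxt)
    return seen

initial = {"room_a": 2, "room_b": 1, "exit": 0}
print(len(reachability(initial)), "reachable markings")
```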
Procedia PDF Downloads 197895 Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level
Authors: James Semmens, James Boyd, Anna Ferrante, Katrina Spilsbury, Sean Randall, Adrian Brown
Abstract:
‘Big data’ is a relatively new concept that describes data so large and complex that it exceeds the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources such as administrative records, electronic health records, health insurance claims, and even smartphone health applications. Health data is viewed in Australia and internationally as highly sensitive. Strict ethical requirements must be met for the use of health data to support health research. These requirements differ markedly from those imposed on data use from industry or other government sectors and may reduce the capacity of health data to be incorporated into the real-time demands of the big data environment. This ‘big data revolution’ is increasingly supported by national governments, who have invested significant funds in initiatives designed to develop and capitalize on big data and on methods for data integration using record linkage. The benefits to health following research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion this initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data. The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides essential record linkage infrastructure necessary for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best-practice ‘separation principle’ to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, to overcome important legal and privacy constraints. This paper will present the findings of the first ‘Proof of Concept’ project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. This project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research project. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for the planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations.Keywords: data integration, data linkage, health planning, health services research
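As a hedged illustration of the idea behind privacy-preserving record linkage (matching on keyed hashes of identifiers rather than names), a minimal sketch follows; the key, field choices, and matching rule are assumptions for the example and do not represent the Centre for Data Linkage's actual protocol, which in practice uses more sophisticated encodings:

```python
import hashlib
import hmac

# Match records across two sources on keyed hashes of normalised identifiers,
# so that names are never exchanged. Illustrative only.
SECRET_KEY = b"shared-linkage-key"        # assumed key held by the linkage unit

def hash_identifier(name: str, dob: str) -> str:
    """Keyed hash of normalised identifying fields."""
    message = f"{name.strip().lower()}|{dob}".encode("utf-8")
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

hospital_a = [{"name": "Jane Doe", "dob": "1980-02-01", "event": "admission"}]
hospital_b = [{"name": "jane doe", "dob": "1980-02-01", "event": "death"}]

index_a = {hash_identifier(r["name"], r["dob"]): r for r in hospital_a}
links = [(index_a[h], r) for r in hospital_b
         if (h := hash_identifier(r["name"], r["dob"])) in index_a]
print(f"{len(links)} cross-jurisdictional link(s) found without exchanging names")
```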
Procedia PDF Downloads 216894 Ensuring Quality in DevOps Culture
Authors: Sagar Jitendra Mahendrakar
Abstract:
Integrating quality assurance (QA) practices into DevOps culture has become increasingly important in modern software development environments. Collaboration, automation, and continuous feedback characterize the seamless integration of development and operations teams in DevOps to achieve rapid and reliable software delivery. In this context, quality assurance plays a key role in ensuring that software products meet the highest standards of quality, performance, and reliability throughout the development life cycle. This abstract explores key principles, challenges, and best practices related to quality assurance in a DevOps culture. It emphasizes the importance of embedding quality throughout the development process, with quality control integrated into every step of the DevOps pipeline. Automation is the cornerstone of DevOps quality assurance, enabling continuous testing, integration, and deployment and providing rapid feedback for early problem identification and resolution. In addition, the abstract addresses the cultural and organizational challenges of implementing quality assurance in DevOps, emphasizing the need to foster collaboration, break down silos, and promote a culture of continuous improvement. It also discusses the importance of toolchain integration and skills development to support effective QA practices in DevOps environments. Overall, this work sits at the intersection of QA and DevOps culture, providing insights into how organizations can use DevOps principles to improve software quality, accelerate delivery, and meet the changing demands of today's dynamic software landscape.Keywords: quality engineer, devops, automation, tool
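As a hedged, generic illustration of the continuous-testing idea above (not a prescription from the abstract), a unit test of the following kind would run automatically on every commit in a CI pipeline and fail the build on regression; the function and values are illustrative only:

```python
# Minimal sketch of an automated quality-gate check a CI pipeline could run.
def apply_discount(price: float, percent: float) -> float:
    """Example business rule under test (hypothetical)."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100 percent")
    return round(price * (1 - percent / 100), 2)

def test_discount_applied():
    assert apply_discount(200.0, 25) == 150.0

def test_invalid_discount_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for out-of-range discount")

if __name__ == "__main__":
    # In CI this would typically be invoked via a test runner such as pytest.
    test_discount_applied()
    test_invalid_discount_rejected()
    print("all quality-gate checks passed")
```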
Procedia PDF Downloads 58893 Control of a Quadcopter Using Genetic Algorithm Methods
Authors: Mostafa Mjahed
Abstract:
This paper concerns the control of a nonlinear system using two different methods, a reference model and a genetic algorithm. The quadcopter is a nonlinear, unstable system belonging to the family of aerial robots. It consists of four rotors placed at the ends of a cross, with the control circuit occupying the center of the cross. Its motion is governed by six degrees of freedom: three rotations around the axes (roll, pitch, and yaw) and three spatial translations. The control of such a system is complex because of the nonlinearity of its dynamic representation and the number of parameters it involves. Numerous studies have been developed to model and stabilize such systems. The classical PID and LQ correction methods are widely used. While these have the advantage of simplicity because they are linear, they have the drawback of requiring a linear model for synthesis. This also makes the resulting control laws more complex, because they must be extended over the entire flight envelope of the quadcopter. Note that, while classical design methods are widely used to control aeronautical systems, artificial intelligence methods such as genetic algorithms receive little attention. In this paper, we compare two PID design methods. Firstly, the parameters of the PID are calculated according to a reference model. In a second phase, these parameters are established using genetic algorithms. By reference model, we mean that the corrected system behaves according to a reference system imposed by specifications such as settling time and zero overshoot. Inspired by Darwin's theory of natural evolution advocating the survival of the fittest, John Holland developed this evolutionary algorithm. The genetic algorithm (GA) possesses three basic operators: selection, crossover, and mutation. We start the iterations with an initial population, and each member of this population is evaluated through a fitness function. Our purpose is to correct the behavior of the quadcopter around the three axes (roll, pitch, and yaw) with three PD controllers; for the altitude, we adopt a PID controller.Keywords: quadcopter, genetic algorithm, PID, fitness, model, control, nonlinear system
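As a hedged sketch of GA-based PID tuning of the kind outlined above, the following example evolves (Kp, Ki, Kd) for a generic second-order plant with selection, crossover, and mutation; the plant, bounds, and fitness criterion are assumptions for the sketch, not the paper's quadcopter model:

```python
import numpy as np

rng = np.random.default_rng(1)
DT, STEPS = 0.01, 500

def step_response_cost(gains):
    """Fitness: integral of absolute error of a unit-step response (smaller is better)."""
    kp, ki, kd = gains
    y, y_dot, integ, prev_err = 0.0, 0.0, 0.0, 1.0
    cost = 0.0
    for _ in range(STEPS):
        err = 1.0 - y
        integ += err * DT
        deriv = (err - prev_err) / DT
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        # assumed plant: y'' + 2*y' + y = u  (placeholder dynamics)
        y_ddot = u - 2.0 * y_dot - y
        y_dot += y_ddot * DT
        y += y_dot * DT
        cost += abs(err) * DT
    return cost

def ga_tune(pop_size=30, generations=40, bounds=(0.0, 20.0)):
    pop = rng.uniform(*bounds, size=(pop_size, 3))
    for _ in range(generations):
        fitness = np.array([step_response_cost(g) for g in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]    # selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            alpha = rng.random()
            child = alpha * a + (1 - alpha) * b                 # arithmetic crossover
            child += rng.normal(0.0, 0.5, size=3)               # Gaussian mutation
            children.append(np.clip(child, *bounds))
        pop = np.vstack([parents, children])
    return pop[np.argmin([step_response_cost(g) for g in pop])]

print("tuned (Kp, Ki, Kd):", ga_tune())
```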
Procedia PDF Downloads 431892 Creep Behaviour of Asphalt Modified by Waste Polystyrene and Its Hybrids with Crumb Rubber and Low-Density Polyethylene
Authors: Soheil Heydari, Ailar Hajimohammadi, Nasser Khalili
Abstract:
Polystyrene, made from a monomer called styrene, is a rigid and easy-to-mould polymer that is widely used for many applications, from foam packaging to disposable containers. Considering that the degradation of waste polystyrene takes up to 500 years, there is an urgent need for a sustainable application for it. This study evaluates the application of waste polystyrene as an asphalt modifier. The inclusion of waste plastics in asphalt is practised by either the dry process or the wet process: in the dry process, plastics are added straight into the asphalt mixture, and in the wet process, they are mixed and digested into the bitumen. In this article, polystyrene was used as an asphalt modifier in a dry process; however, the mixing process is precisely designed to ensure that the polymer melts and modifies the binder. It was expected that, due to the rigidity of polystyrene, it would have a positive effect on the permanent deformation of the asphalt mixture. Therefore, mixtures with different contents of polystyrene were manufactured, Marshall specimens were prepared, and dynamic creep tests were conducted to evaluate the permanent deformation of the modified mixtures. The dynamic creep test is a common repeated-loading test conducted at different stress levels and temperatures: loading cycles are applied to the asphalt concrete specimen until failure occurs, and with the deformation continuously recorded, the cumulative permanent strain is determined and reported as a function of the number of cycles. Also, to the best of our knowledge, hybrid mixes of polystyrene with crumb rubber and with low-density polyethylene were made and compared with the polystyrene-modified mixture. The test results of this study showed that the hybrid mix of polystyrene and low-density polyethylene has the highest resistance against permanent deformation. However, the polystyrene-modified mixture outperformed the hybrid mix of polystyrene and crumb rubber, and both demonstrated considerably lower permanent deformation than the unmodified specimen.Keywords: permanent deformation, waste plastics, polystyrene, hybrid plastics, hybrid mix, hybrid modification, dry process
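As a hedged sketch of how a dynamic creep record is typically reduced to a cumulative permanent strain curve and a flow number (the specimen height, synthetic deformation curve, and flow-number rule below are assumptions for the illustration, not the paper's test data or protocol):

```python
import numpy as np

SPECIMEN_HEIGHT_MM = 63.5                 # assumed Marshall specimen height

cycles = np.arange(1, 2001)
# placeholder deformation record [mm]: primary + secondary + tertiary creep terms
deformation = 0.4 * cycles**0.3 + 0.0004 * cycles + 1e-8 * cycles**2.5

permanent_strain = deformation / SPECIMEN_HEIGHT_MM          # strain [mm/mm]
strain_rate = np.gradient(permanent_strain, cycles)          # per-cycle rate
flow_number = cycles[np.argmin(strain_rate)]                 # onset of tertiary creep

print(f"strain at 2000 cycles: {permanent_strain[-1]:.4f}")
print(f"estimated flow number: {flow_number} cycles")
```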
Procedia PDF Downloads 106891 Tax System Reform in Nepal: Analysis of Contemporary Issues, Challenges, and Ways Forward
Authors: Dilliram Paudyal
Abstract:
The history of taxation in Nepal dates back to antiquity. However, the modern tax system gained momentum after the establishment of democracy in 1951, initially focusing only on land tax and tariffs on foreign trade. In due time, several taxes were introduced, such as direct taxes, indirect taxes, and non-tax revenues. The tax structure in Nepal, however, is heavily dominated by indirect taxes, which contribute more than 60% of total revenue. The government has been mobilizing revenues through a series of tax reforms during the Tenth Five-Year Plan (2002–2007) and the successive Three-Year Interim Development Plans by introducing several tax measures. However, these reforms are regressive in nature and lead the overall economy neither towards short-run stability nor long-run development. Based on a literature review and discussions with government officials and a few taxpayers, individually and in groups, this paper aims to identify the major issues and challenges that hinder effective tax reform in Nepal. Additionally, it identifies potential ways and processes of tax reform in Nepal. The results of the study indicate that transparency is a major problem in the Nepalese tax system: serious structural constraints and administrative and procedural complexities are embedded in the Income Tax Act, and taxpayers are often unaware of the specific amount of tax with which they must comply. Other issues include high tax rates, a limited tax base, leakages in tax collection, a rigid and complex Income Tax Act, inefficient and corrupt tax administration, the limited potential of direct taxes, and the poor responsiveness of land tax coupled with high administrative costs. In this context, the tax structure should be rectified and additional resources mobilized on a greater scale by establishing an effective, dynamic, and empowered Autonomous Revenue Board.Keywords: corrupt, development, inefficient, taxation
Procedia PDF Downloads 179