4052 Analyzing the Ancient Islamic Architectural Theories: Role of Geometric Proportionality as a Principle of Islamic Design
Authors: Vamsi G.
Abstract:
The majority of modern-day structures have limited aesthetic value, meeting only the minimum requirements set by foreign tribes. Numerous elements of traditional architecture can be incorporated into modern designs, using appropriate principles, to improve and enhance the functionality, aesthetics, and usability of any space. This paper reviews the diminishing values of traditional Islamic architecture. By introducing them into modern-day structures such as commercial, residential, and recreational spaces, at least in the Islamic states, the functionality of those spaces can be improved. For this, aspects like space planning, aesthetics, scale, hierarchy, value, and patterns are to be experimented with in modern-day structures. Case studies of a few ancient Islamic architectural marvels are presented to elaborate the whole. A brief analysis of materials and execution strategies is also part of this paper. The analysis is formulated so that spaces can be designed or redesigned using traditional Islamic principles and elements of design, improving the quality of modern-day architecture through the study of ancient Islamic architectural theories. For this, sources on the history and evolution of this architecture have been studied, and elements and principles of design from case studies of various mosques, forts, tombs, and palaces have been tabulated. All this accumulated data will help revive the elements shaped by ancient principles in functional and aesthetic ways. In this way, one of the most astonishing architectural styles can be conserved, reinstated in modern-day buildings, and remembered.
Keywords: ancient architecture, architectural history, Islamic architecture, principles and elements
Procedia PDF Downloads 213
4051 Rethinking Urban Voids: An Investigation beneath the Kathipara Flyover, Chennai into a Transit Hub by Adaptive Utilization of Space
Authors: V. Jayanthi
Abstract:
Urbanization and the pace of urbanization have increased tremendously in the last few decades. More towns are now being converted into cities. The urbanization trend is seen all over the world but is most dominant in Asia. Today, the scale of urbanization in India is so large that Indian cities are among the fastest-growing in the world, including Bangalore, Hyderabad, Pune, Chennai, Delhi, and Mumbai. Urbanization remains the single predominant factor continuously linked to the destruction of urban green spaces. With reference to Chennai as a case study, a city suffering from rapid deterioration of its green spaces, this paper seeks to fill this gap by exploring key factors, aside from urbanization, that are responsible for the destruction of green spaces. The paper relied on triangulated data-collection techniques such as interviews, focus group discussions, personal observation, and retrieval of archival data. It was observed that, apart from urbanization, problems of ownership of green-space land, low priority given to green spaces, poor maintenance, weak enforcement of development controls, wastage of underpass spaces, and uncooperative attitudes of the general public play a critical role in the destruction of urban green spaces. The paper therefore concludes that, for a city to have properly sustainable urban green spaces, broader city development plans are essential. Though rapid urbanization is an indicator of positive development, it is also accompanied by a host of challenges. Chennai lost a great deal of greenery as the city urbanized rapidly, leading to a steep fall in vegetation cover. Environmental deterioration will be the high price we pay if Chennai continues to grow at the expense of greenery.
Soaring skyscrapers, multistoried complexes, gated communities, and villas frame the iconic skyline of today’s Chennai, revealing that we overlook the importance of our green cover, which is vital to balancing our urban and lung spaces. Chennai, with a clumped landscape at the center of the city, is predicted to convert 36% of its total area into urban areas by 2026. One major issue is that a city designed and planned in isolation creates underused, neglected spaces all around it. These urban voids are dead, underused, or unused spaces in cities, formed by inefficient decision-making, poor land management, and poor coordination. Urban voids have huge potential to strengthen the urban fabric: they can be exploited as public gathering spaces, pocket parks, or plazas, or simply enhance the public realm, rather than serving as sites for dumped debris and encroachment. Flyovers need to justify their existence by being more than just traffic and transport solutions. The vast, unused space below the Kathipara flyover is a case in point. This flyover connects three major routes: Tambaram, Koyambedu, and Adyar. This research focuses on the concept of urban voids: how the voids under flyovers can be used in the placemaking process, and how these neglected spaces beneath flyovers can become part of the urban realm through urban design and landscaping.
Keywords: landscape design, flyovers, public spaces, reclaiming lost spaces, urban voids
Procedia PDF Downloads 282
4050 Emotion Recognition in Video and Images in the Wild
Authors: Faizan Tariq, Moayid Ali Zaidi
Abstract:
Facial emotion recognition algorithms are expanding rapidly nowadays, and researchers are combining different algorithms to generate the best results. Six basic emotions are studied in this area. The authors attempted to recognize facial expressions using object-detection algorithms instead of traditional approaches. Two object-detection algorithms were chosen: Faster R-CNN and YOLO. For pre-processing, image rotation and batch normalization were used. The dataset chosen for the experiments is Static Facial Expressions in the Wild (SFEW). The approach worked well, but there is still considerable room for improvement, which will be a future direction.
Keywords: face recognition, emotion recognition, deep learning, CNN
Procedia PDF Downloads 187
4049 Subpixel Corner Detection for Monocular Camera Linear Model Research
Authors: Guorong Sui, Xingwei Jia, Fei Tong, Xiumin Gao
Abstract:
Camera calibration is a fundamental issue in high-precision non-contact measurement, and it is necessary to analyze the reliability and application range of the linear model often used in camera calibration. According to the imaging features of monocular cameras, a camera model based on image pixel coordinates and three-dimensional space coordinates is built. Using a custom template, the image pixel coordinates are obtained by the subpixel corner detection method. Without considering the aberration of the optical system, the feature extraction and linearity analysis of the line segments in the template are performed. The experiment is repeated 11 times while constantly varying the measuring distance, and the linearity of the camera is obtained by fitting the 11 groups of data. The measurement results show that the relative error does not exceed 1%, and the repeated measurement error is no more than 0.1 mm in magnitude. Meanwhile, it is found that the model shows some measurement differences across regions and object distances. The experimental results show that this linear model is simple and practical and has good linearity within a certain object distance. These results provide a powerful basis for establishing the linear model of a camera and will have potential value for actual engineering measurement.
Keywords: camera linear model, geometric imaging relationship, image pixel coordinates, three dimensional space coordinates, sub-pixel corner detection
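The linear (pinhole) model the abstract analyzes maps three-dimensional space coordinates to image pixel coordinates. A minimal sketch follows; the intrinsic values (fx, fy, cx, cy) are illustrative assumptions, not the calibration results reported in the paper:

```python
def project_point(X, Y, Z, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a 3D point (camera frame, Z > 0) to pixel coordinates
    using the distortion-free linear model:
        u = fx * X / Z + cx,   v = fy * Y / Z + cy
    fx, fy are focal lengths in pixels; (cx, cy) is the principal point."""
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# a point 2 m in front of the camera, slightly off-axis
u, v = project_point(0.1, 0.05, 2.0)
```

Calibration then amounts to estimating fx, fy, cx, cy (and the camera pose) from known template points, which is where the subpixel corner coordinates enter.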
Procedia PDF Downloads 277
4048 Derivation of Fragility Functions of Marine Drilling Risers Under Ocean Environment
Authors: Pranjal Srivastava, Piyali Sengupta
Abstract:
The performance of marine drilling risers is crucial in the offshore oil and gas industry to ensure safe drilling operations with minimum downtime. Experimental investigations of marine drilling risers are limited in the literature owing to the expensive and exhaustive test setup required to replicate a realistic riser model and ocean environment in the laboratory. Therefore, this study presents an analytical model of a marine drilling riser for determining its fragility under ocean environmental loading. The riser is idealized as a continuous beam with a concentric circular cross-section, and the hydrodynamic loading acting on it is determined by Morison's equation. By considering the equilibrium of forces on the riser for the connected and normal drilling conditions, the governing partial differential equations in terms of the independent variables z (depth) and t (time) are derived. Subsequently, the Runge-Kutta method and the finite difference method are employed to solve the partial differential equations arising from the analytical model. The proposed analytical approach is successfully validated against experimental results from the literature. From the dynamic analysis results, the critical design parameters of marine drilling risers are determined: peak displacements, upper and lower flex-joint rotations, and von Mises stresses. An extensive parametric study is conducted to explore the effects of top tension, drilling depth, ocean current speed, and platform drift on these critical design parameters. Thereafter, incremental dynamic analysis is performed to derive the fragility functions of shallow-water and deep-water marine drilling risers under ocean environmental loading.
The proposed methodology can also be adopted for downtime estimation of marine drilling risers, incorporating the ranges of uncertainties associated with the ocean environment, especially in deep and ultra-deep water.
Keywords: drilling riser, marine, analytical model, fragility
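The hydrodynamic load model named in the abstract, Morison's equation, gives the in-line force per unit length on a cylindrical riser section as the sum of a drag term and an inertia term. A hedged sketch of its textbook form follows; the diameter, seawater density, coefficients, and flow kinematics are illustrative values, not the paper's inputs:

```python
import math

def morison_force_per_length(u, du_dt, D=0.5, rho=1025.0, Cd=1.0, Cm=2.0):
    """In-line force per unit length [N/m] on a cylinder of diameter D:
        drag    = 0.5 * rho * Cd * D * u * |u|
        inertia = rho * Cm * (pi * D^2 / 4) * du/dt
    where u is the water-particle velocity normal to the axis [m/s]
    and du/dt its acceleration [m/s^2]."""
    drag = 0.5 * rho * Cd * D * u * abs(u)
    inertia = rho * Cm * math.pi * D**2 / 4.0 * du_dt
    return drag + inertia

# illustrative current + wave kinematics at one depth z
f = morison_force_per_length(u=1.2, du_dt=0.3)
```

Evaluating this force over depth z and time t is what couples the ocean environment to the riser's governing partial differential equations.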
Procedia PDF Downloads 147
4047 Control Power in Doubly Fed Induction Generator Wind Turbine with SVM Control Inverter
Authors: Zerzouri Nora, Benalia Nadia, Bensiali Nadia
Abstract:
This paper presents a grid-connected wind power generation scheme using a Doubly Fed Induction Generator (DFIG), which can supply power at constant voltage and constant frequency while the rotor speed varies. This makes it suitable for variable-speed wind energy applications. The DFIG system consists of a wind turbine, an asynchronous wound-rotor induction generator, and an inverter with a Space Vector Modulation (SVM) controller. The stator is connected directly to the grid, while the rotor winding interfaces with the grid through a rotor-side and a grid-side converter. The use of a back-to-back SVM converter in the rotor circuit enables low-distortion current, reactive power control, and variable-speed operation. Mathematical modeling of the DFIG is carried out in order to analyze the performance of the system, which is simulated using MATLAB. The simulation results show that the system can operate at variable speed with low harmonic current distortion. The objective is to track and extract maximum power from the wind energy system and transfer it to the grid for useful work.
Keywords: Doubly Fed Induction Generator, Wind Energy Conversion Systems, Space Vector Modulation, distortion harmonics
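At the core of an SVM controller is the computation of dwell times for the two active vectors adjacent to the reference voltage vector, plus the zero vectors, in each switching period. The sketch below is the standard textbook formulation, not the paper's specific controller; the DC-link voltage, reference magnitude, and switching period are illustrative assumptions:

```python
import math

def svm_dwell_times(v_ref, theta, v_dc, Ts=1e-4):
    """Dwell times (T1, T2, T0) within one switching period Ts for the
    two adjacent active vectors and the zero vectors.
    theta is the reference-vector angle measured inside the current
    60-degree sector, in radians; valid in the linear region (m <= 1)."""
    m = math.sqrt(3) * v_ref / v_dc          # modulation index
    T1 = Ts * m * math.sin(math.pi / 3 - theta)  # first active vector
    T2 = Ts * m * math.sin(theta)                # second active vector
    T0 = Ts - T1 - T2                            # shared by zero vectors
    return T1, T2, T0

# mid-sector reference: the two active vectors share the duty equally
T1, T2, T0 = svm_dwell_times(v_ref=300.0, theta=math.pi / 6, v_dc=600.0)
```

The three times always sum to Ts, and the symmetric placement of the zero-vector time T0 is what gives SVM its low harmonic distortion relative to simple sinusoidal PWM.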
Procedia PDF Downloads 484
4046 Inequality for Doubly Warped Product Manifolds
Authors: Morteza Faghfouri
Abstract:
In this paper we establish a general inequality involving the Laplacian of the warping functions and the squared mean curvature of any doubly warped product isometrically immersed in a Riemannian manifold.
Keywords: integral submanifolds, S-space forms, doubly warped product, inequality
Procedia PDF Downloads 288
4045 Correlation Between Forbush-Decrease Amplitude Detected by Mountain Chacaltaya Neutron Monitor and Solar Wind Electric Field
Authors: Sebwato Nasurudiin, Akimasa Yoshikawa, Ahmed Elsaid, Ayman Mahrous
Abstract:
This study examines the correlation between the amplitude of Forbush Decreases (FDs) detected by the Mountain Chacaltaya neutron monitor and the solar wind electric field (E). Forbush Decreases, characterized by sudden drops in cosmic ray intensity, are typically associated with interplanetary coronal mass ejections (ICMEs) and high-speed solar wind streams. The Mountain Chacaltaya neutron monitor, located at a high altitude in Bolivia, offers an optimal setting for observing cosmic ray variations. The solar wind electric field, influenced by the solar wind velocity and interplanetary magnetic field, significantly impacts cosmic ray transport in the heliosphere. By analyzing neutron monitor data alongside solar wind parameters, we found a high correlation between E and FD amplitudes with a correlation factor of nearly 87%. The findings enhance our understanding of space weather processes, cosmic ray modulation, and solar-terrestrial interactions, providing valuable insights for predicting space weather events and mitigating their technological impacts. This study contributes to the broader astrophysics field by offering empirical data on cosmic ray modulation mechanisms.
Keywords: cosmic rays, Forbush decrease, solar wind, neutron monitor
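The correlation factor reported above is a Pearson correlation between two event-level series. A minimal sketch of that computation follows; the five event values are made-up illustrative numbers, not the Chacaltaya data set:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

E  = [2.1, 3.5, 5.0, 6.2, 8.4]   # illustrative event-mean E (mV/m)
FD = [1.0, 2.2, 2.9, 4.1, 5.6]   # illustrative FD amplitudes (%)
r = pearson_r(E, FD)
```

A coefficient near 0.87, as reported, would indicate that stronger solar wind electric fields tend to accompany deeper cosmic-ray depressions.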
Procedia PDF Downloads 46
4044 The Explanation for Dark Matter and Dark Energy
Authors: Richard Lewis
Abstract:
The following assumptions of the Big Bang theory are challenged and found to be false: the cosmological principle, the assumption that all matter formed at the same time and the assumption regarding the cause of the cosmic microwave background radiation. The evolution of the universe is described based on the conclusion that the universe is finite with a space boundary. This conclusion is reached by ruling out the possibility of an infinite universe or a universe which is finite with no boundary. In a finite universe, the centre of the universe can be located with reference to our home galaxy (The Milky Way) using the speed relative to the Cosmic Microwave Background (CMB) rest frame and Hubble's law. This places our home galaxy at a distance of approximately 26 million light years from the centre of the universe. Because we are making observations from a point relatively close to the centre of the universe, the universe appears to be isotropic and homogeneous but this is not the case. The CMB is coming from a source located within the event horizon of the universe. There is sufficient mass in the universe to create an event horizon at the Schwarzschild radius. Galaxies form over time due to the energy released by the expansion of space. Conservation of energy must consider total energy which is mass (+ve) plus energy (+ve) plus spacetime curvature (-ve) so that the total energy of the universe is always zero. The predominant position of galaxy formation moves over time from the centre of the universe towards the boundary so that today the majority of new galaxy formation is taking place beyond our horizon of observation at 14 billion light years.
Keywords: cosmology, dark energy, dark matter, evolution of the universe
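The arithmetic behind the abstract's distance figure can be sketched as a Hubble-law conversion of the Milky Way's speed relative to the CMB rest frame. The velocity and H0 below are standard published values, but applying Hubble's law this way, to locate a "centre of the universe", is the paper's own non-mainstream premise, not an accepted result:

```python
# Hubble's law: d = v / H0, converted to millions of light years.
H0 = 70.0            # Hubble constant, km/s per Mpc (assumed value)
v_cmb = 620.0        # km/s, Local Group speed relative to the CMB
MLY_PER_MPC = 3.262  # million light years per megaparsec

d_mpc = v_cmb / H0           # distance in megaparsecs
d_mly = d_mpc * MLY_PER_MPC  # ~29 Mly, the same order as the
                             # abstract's ~26 Mly figure
```

The result depends directly on the assumed H0 and velocity, which is presumably why the abstract gives only an approximate distance.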
Procedia PDF Downloads 141
4043 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis
Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante
Abstract:
The systems that record patient care information, known as Electronic Medical Records (EMR), and those that monitor patients' vital signs, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Several studies have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis in patients under vital-sign monitoring. Sepsis is an organ dysfunction caused by a dysregulated patient response to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Preceding works usually combined medical, statistical, mathematical, and computational models to develop detection methods for early prediction, seeking higher accuracy with the smallest number of variables. Among other techniques, we found studies using survival analysis, expert systems, machine learning, and deep learning that reached great results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated using the median of all patients' variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector), and the second derivative (acceleration vector) of the variables to evaluate their behavior, and we construct a prediction model based on a Long Short-Term Memory (LSTM) network, including these derivatives as explanatory variables. The accuracy of the prediction 6 hours before the time of sepsis, considering only the vital signs, reached 83.24%; by including the position, velocity, and acceleration vectors, we obtained 94.96%.
The data are collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
Keywords: dynamic analysis, long short-term memory, prediction, sepsis
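The dynamic features described above, hourly position in vital-sign space plus its first (velocity) and second (acceleration) finite differences, can be sketched for one dimension of that space. The heart-rate series below is invented for illustration, not MIMIC data:

```python
def differences(series):
    """First-order finite differences of an hourly series."""
    return [b - a for a, b in zip(series, series[1:])]

# one dimension (heart rate, bpm) of the n-dimensional position vector
heart_rate = [80.0, 84.0, 90.0, 99.0, 111.0]

velocity = differences(heart_rate)       # hourly change: [4, 6, 9, 12]
acceleration = differences(velocity)     # change of change: [2, 3, 3]
```

Repeating this per vital sign yields the velocity and acceleration vectors that the study feeds to the LSTM alongside the raw positions.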
Procedia PDF Downloads 125
4042 Study on the Factors Influencing the Built Environment of Residential Areas on the Lifestyle Walking Trips of the Elderly
Authors: Daming Xu, Yuanyuan Wang
Abstract:
Under the trend of rapid urbanization, cities have become increasingly motorized and the walkability of urban space has been seriously affected. Since walking is the main mode of daily travel for the elderly, building walkable space has become ever more important in the current context of serious population aging. The settlement is the most basic living unit of residents, and daily shopping, medical care, and other routine trips are closely related to the daily life of the elderly. Therefore, exploring the impact of the built environment on elderly people's daily walking trips at the settlement level is of great practical significance for the construction of pedestrian-friendly settlements for the elderly. The study takes three typical settlements from three different periods in Harbin's Daoli District as examples and obtains data on elderly people's walking trips and built-environment characteristics through field research, questionnaires, and internet data acquisition. Finally, correlation analysis and a multinomial logistic regression model are applied to analyze the influence mechanism of the built environment on elderly people's walking, controlling for personal attribute variables, in order to provide reference and guidance for building walkable environments for the elderly in the future.
Keywords: built environment, elderly, walkability, multinomial logistic regression model
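The multinomial logistic regression named above assigns each outcome category a probability via the softmax of linear predictors. A hedged sketch follows; the three categories and the linear-predictor scores are invented illustrations, not the study's fitted model:

```python
import math

def softmax(scores):
    """Convert linear predictors (beta . x per category) to probabilities."""
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical linear predictors for one elderly respondent across
# three outcome categories (e.g. frequent / occasional / rare walking)
scores = [1.2, 0.4, -0.3]
probs = softmax(scores)
```

Fitting the model means estimating the coefficients inside each linear predictor from the survey data, with one category held as the reference.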
Procedia PDF Downloads 76
4041 Thermodynamic Performance of a Low-Cost House Coated with Transparent Infrared Reflective Paint
Authors: Ochuko K. Overen, Edson L. Meyer
Abstract:
Uncontrolled heat transfer between the inner and outer space of low-cost housing through the thermal envelope results in indoor thermal discomfort. As a result, an excessive amount of energy is consumed for space heating and cooling. Thermo-optical properties determine the ability of paints to reduce the rate of heat transfer through the thermal envelope. The aim of this study is to analyze the thermal performance of a low-cost house whose inner wall surfaces are coated with transparent infrared reflective paint. The thermo-optical properties of the paint were analyzed using Scanning Electron Microscopy/Energy Dispersive X-ray spectroscopy (SEM/EDX), Fourier Transform Infrared (FTIR) spectroscopy, and a thermal photographic technique. Meteorological indoor and ambient parameters, such as air temperature, relative humidity, solar radiation, wind speed, and wind direction, of a low-cost house in the Golf-course settlement, South Africa, were monitored. The monitoring period covers both winter and summer, before and after coating. The thermal performance of the coated walls was evaluated using time lag and decrement factor. The SEM image shows that the coat is transparent to light, and the presence of Al as Al2O and other elements was revealed by the EDX spectrum. Before coating, the average decrement factor of the walls in summer was found to be 0.773, with a corresponding time lag of 1.3 hours; in winter, the average decrement factor and corresponding time lag were 0.467 and 1.6 hours, respectively. After coating, the average decrement factor and corresponding time lag in summer were 0.533 and 2.3 hours, respectively, while in winter an average decrement factor of 1.120 and a corresponding time lag of 3 hours were observed. The findings show that the performance of the coats is influenced by the seasons.
With a 74% reduction in decrement factor and a 1.4-hour increase in time lag in winter, the coatings have more ability to retain heat within the inner space of the house than to prevent heat flow into the house. In conclusion, the results show that transparent infrared reflective paint can reduce the propagation of heat flux through building walls. Hence, it can serve as a remedy for the poor thermal performance of low-cost housing in South Africa.
Keywords: energy efficiency, decrement factor, low-cost housing, paints, rural development, thermal comfort, time lag
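The two performance indicators used above can be sketched from hourly wall-surface temperatures: the decrement factor as the ratio of the indoor to outdoor temperature swing, and the time lag as the hours between the outdoor and indoor peaks. The temperature series below are invented, not the Golf-course measurements:

```python
def swing(series):
    """Half the peak-to-peak amplitude of a temperature series."""
    return (max(series) - min(series)) / 2.0

def decrement_factor(t_out, t_in):
    """Indoor swing over outdoor swing (dimensionless, usually < 1)."""
    return swing(t_in) / swing(t_out)

def time_lag(t_out, t_in):
    """Hours between the outdoor and indoor temperature peaks."""
    return t_in.index(max(t_in)) - t_out.index(max(t_out))

t_out = [18, 20, 24, 29, 33, 31, 27, 22]  # outdoor surface, hourly (C)
t_in  = [21, 21, 22, 24, 26, 27, 26, 24]  # indoor surface, hourly (C)

df  = decrement_factor(t_out, t_in)   # here 0.4: the wall damps the swing
lag = time_lag(t_out, t_in)           # here 1 hour: the peak is delayed
```

A lower decrement factor and a longer time lag both indicate a thermally better envelope, which is how the before/after coating figures in the abstract should be read.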
Procedia PDF Downloads 284
4040 Self-Calibration of Fish-Eye Camera for Advanced Driver Assistance Systems
Authors: Atef Alaaeddine Sarraj, Brendan Jackman, Frank Walsh
Abstract:
Tomorrow’s car will be more automated and increasingly connected. Innovative and intuitive interfaces are essential to accompany this functional enrichment. For that, today the automotive companies are competing to offer an advanced driver assistance system (ADAS) which will be able to provide enhanced navigation, collision avoidance, intersection support and lane keeping. These vision-based functions require an accurately calibrated camera. To achieve such differentiation in ADAS requires sophisticated sensors and efficient algorithms. This paper explores the different calibration methods applicable to vehicle-mounted fish-eye cameras with arbitrary fields of view and defines the first steps towards a self-calibration method that adequately addresses ADAS requirements. In particular, we present a self-calibration method after comparing different camera calibration algorithms in the context of ADAS requirements. Our method gathers data from unknown scenes while the car is moving, estimates the camera intrinsic and extrinsic parameters and corrects the wide-angle distortion. Our solution enables continuous and real-time detection of objects, pedestrians, road markings and other cars. In contrast, other camera calibration algorithms for ADAS need pre-calibration, while the presented method calibrates the camera without prior knowledge of the scene and in real-time.
Keywords: advanced driver assistance system (ADAS), fish-eye, real-time, self-calibration
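The wide-angle distortion the abstract refers to can be illustrated by comparing projection models: a pinhole lens maps an incidence angle theta to image radius r = f·tan(theta), while a common (equidistant) fish-eye model maps it to r = f·theta. The focal length below is an illustrative assumption, and the equidistant model is only one of several fish-eye models a self-calibration might estimate:

```python
import math

f = 400.0  # focal length in pixels (assumed)

def r_pinhole(theta):
    """Pinhole image radius for incidence angle theta (radians)."""
    return f * math.tan(theta)

def r_equidistant_fisheye(theta):
    """Equidistant fish-eye image radius for the same angle."""
    return f * theta

theta = math.radians(60)  # a wide-angle ray
gap = r_pinhole(theta) - r_equidistant_fisheye(theta)
# the gap grows rapidly with theta: this divergence is the wide-angle
# distortion that the self-calibration must model and correct
```

Near the optical axis the two models agree (tan theta is approximately theta), which is why distortion correction matters most at the image periphery where ADAS cameras observe adjacent lanes and crossing pedestrians.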
Procedia PDF Downloads 252
4039 Material Supply Mechanisms for Contemporary Assembly Systems
Authors: Rajiv Kumar Srivastava
Abstract:
Manufacturing of complex products such as automobiles and computers requires a very large number of parts and sub-assemblies. The design of mechanisms for delivering these materials to the point of assembly is an important manufacturing-system and supply-chain challenge. Different approaches to this problem have evolved for assembly lines designed to make large volumes of standardized products. However, contemporary assembly systems are required to concurrently produce a variety of products using approaches such as mixed-model production and, at times, even mass customization. In this paper, we examine material supply approaches for variety production in moderate to large volumes. The conventional approach to material delivery for high-volume assembly lines is to supply and stock materials line-side. However, for certain materials, especially when the same or similar items are used along the line, it is more convenient to supply materials in kits. Kitting becomes preferable when lines concurrently produce multiple products in mixed-model mode, since space requirements can increase as product/part variety increases. At times such kits may travel along with the product, while in some situations it may be better to have delivery- and station-specific kits rather than product-based kits. Further, in some mass customization situations it may even be better to have a single delivery and assembly station, to which an entire kit is delivered for fitment, rather than a normal assembly line. Finally, in low-to-moderate-volume assembly, such as engineered machinery, it may be logistically more economical to gather materials in an order-specific kit prior to launching final assembly. We have studied material supply mechanisms supporting assembly systems observed in case studies of firms with different combinations of volume and variety/customization.
It is found that the appropriate approach tends to be a hybrid between direct line supply and different kitting modes, with the best mix being a function of the manufacturing and supply chain environment, as well as space and handling considerations. In our continuing work we are studying these scenarios further, through the use of descriptive models and progressing towards prescriptive models to help achieve the optimal approach, capturing the trade-offs between inventory, material handling, space, and efficient line supply.
Keywords: assembly systems, kitting, material supply, variety production
Procedia PDF Downloads 226
4038 Formulation of Aggregates Based on Dredged Sand and Sediments
Authors: Nor-Edine Abriak, Ilyas Ennahal, Abdeljalil Zri, Mahfoud Benzerzour
Abstract:
Nord Pas de Calais is one of the French regions that records a large volume of dredged sediment in its harbors and waterways. To ensure navigation within ports and waterways, harbor and river managers are forced to find solutions for removing sediment whose contamination levels exceed those established by regulations. This non-submersible sediment must therefore be managed on land and is subject to waste regulation. In this paper, some examples of concrete achievements and experiments in reusing dredged sediment in the civil engineering sector are illustrated. These achievements are alternative solutions to sediment landfilling and guarantee the reuse of this material in a logic of circular economy and ecological transition. This preserves increasingly scarce natural resources and resolves issues related to the accumulation of sediment in harbor basins, rivers, dams, lakes, etc. The examples of beneficial use of dredged material illustrated in this paper are the result of different projects reusing harbor and waterway sediments in several applications. These projects were funded under the national SEDIMATERIAUX approach. The technical and environmental feasibility of reusing dredged sediment is thus demonstrated and verified; reusing dredged sediment would meet multiple challenges of sustainable development in environmental, economic, social, and societal terms.
Keywords: circular economy, sediment, SEDIMATERIAUX, waterways
Procedia PDF Downloads 156
4037 Research on the Overall Protection of Historical Cities Based on the 'City Image' in Ancient Maps: Take the Ancient City of Shipu, Zhejiang, China as an Example
Authors: Xiaoya Yi, Yi He, Zhao Lu, Yang Zhang
Abstract:
In the process of rapid urbanization, many historical cities have undergone excessive demolition and construction under the protection and renewal mechanism. The original pattern of the city has been changed, the urban context has been cut off, and historical features have gradually been lost; the historical city has gradually become decentralized and fragmented. The understanding of an ancient city includes two levels. The first refers to the ancient city in physical space, defined by its historic walls. The second refers to the public perception of its image, derived from people's spatial identification of the ancient city. In ancient China, people drew maps to express their way of understanding the city. Starting from ancient maps and exploring the spatial characteristics of traditional Chinese cities from the perspective of urban imagery is a key clue to understanding the spatial characteristics of historical cities at an overall level. The spatial characteristics of the urban image presented in ancient maps are summarized into two levels by typology. The first is the spatial pattern composed of center, axis, and boundary; the second comprises the spatial elements of the city, street, and sign systems. Taking the ancient city of Shipu as a typical case, the 'city image' in the ancient map is analyzed as a prototype and projected onto the current urban space. The research found that, after a long period of evolution, the historical spatial pattern of the ancient city has changed from 'dominant' to 'recessive control', and the historical spatial elements have become decentralized and fragmented. The wall that serves as the boundary of the ancient city survives as 'fragmentary remains', the streets and lanes that serve as the axes of the ancient city survive as 'structural remains', and the symbols of the ancient city center survive as 'site remains'.
Based on this, the paper proposes methods for controlling the protection of land boundaries, protecting the streets and lanes, and selectively restoring the city wall system and the sign system through accurate assessment. In addition, this paper emphasizes the continuity of the ancient city's traditional spatial pattern and attempts to explore a holistic conservation method for the ancient city in the modern context.
Keywords: ancient city protection, ancient maps, Shipu ancient city, urban intention
Procedia PDF Downloads 128
4036 A Geographical Information System Supported Method for Determining Urban Transformation Areas in the Scope of Disaster Risks in Kocaeli
Authors: Tayfun Salihoğlu
Abstract:
Following Law No. 6306 on the Transformation of Areas under Disaster Risk, urban transformation in Turkey found its legal basis. In best practices all over the world, urban transformation has been shaped as part of comprehensive social programs through discourses of renewing the economically, socially, and physically degraded parts of the city, producing spaces resistant to earthquakes and other possible disasters, and creating a livable environment. In Turkish practice, a contradictory process is observed. This study aims to develop a method for better understanding urban space in terms of disaster risks, in order to constitute a basis for decisions in the Kocaeli Urban Transformation Master Plan being prepared by the Kocaeli Metropolitan Municipality. The spatial unit used in the study is a 50x50-meter grid. To reflect the multidimensionality of urban transformation, three basic components with spatial data in Kocaeli were identified. These components were named 'Problems in Built-up Areas', 'Disaster Risks Arising from Geological Conditions of the Ground and Problems of Buildings', and 'Inadequacy of Urban Services'. Each component was weighted and scored for each grid cell. To delimit urban transformation zones, Optimized Outlier Analysis (Local Moran's I) was conducted in ArcGIS 10.6.1 to test the type of distribution (clustered or scattered) and its significance on the grid, taking the weighted total score of each grid cell as the input feature. As a result of this analysis, it was found that the weighted total scores did not cluster significantly in all grid cells. The grid cells where the input feature clustered significantly were exported as a new database for use in further mappings.
The Total Score Map reflects the significant clusters in terms of the weighted total scores of 'Problems in Built-up Areas', 'Disaster Risks Arising from Geological Conditions of the Ground and Problems of Buildings', and 'Inadequacy of Urban Services'. The grids with the highest scores are the most likely candidates for urban transformation in this citywide study. To categorize urban space in terms of urban transformation, Grouping Analysis in ArcGIS 10.6.1 was conducted on the component scores of the significantly clustered grids. Based on pseudo statistics and box plots, the six groups with the highest F statistics were extracted. Mapping these groups shows that they can be interpreted meaningfully in relation to the urban space. The method presented in this study can be extended as more spatial data become available. By integrating other data obtained during the planning process, it can support the research and decision-making processes of urban transformation master plans on a more consistent basis.
Keywords: urban transformation, GIS, disaster risk assessment, Kocaeli
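The grid-level clustering test at the heart of the method can be sketched in a few lines. ArcGIS's Optimized Outlier Analysis is built on Anselin's Local Moran's I; the toy version below (plain Python, rook-adjacency neighbors, row-standardized weights, and no permutation-based significance test) only illustrates the statistic itself, not the ArcGIS pipeline:

```python
def local_morans_i(scores, neighbors):
    """Local Moran's I per grid cell. Positive values flag cells whose
    weighted total score clusters with similarly high (or low) neighbors."""
    n = len(scores)
    mean = sum(scores) / n
    z = [s - mean for s in scores]          # deviations from the mean
    m2 = sum(v * v for v in z) / n          # second moment (normalizer)
    result = []
    for i in range(n):
        # row-standardized spatial lag: mean deviation of the neighbors
        lag = sum(z[j] for j in neighbors[i]) / len(neighbors[i])
        result.append(z[i] * lag / m2)
    return result

# toy 3x3 grid (row-major) with a high-score cluster in the top row
scores = [5, 5, 5, 1, 1, 1, 1, 1, 1]
neighbors = [[1, 3], [0, 2, 4], [1, 5],
             [0, 4, 6], [1, 3, 5, 7], [2, 4, 8],
             [3, 7], [4, 6, 8], [5, 7]]
ii = local_morans_i(scores, neighbors)  # ii[1] is largest: middle of the cluster
```

In the real workflow, each cell's score would be the weighted total of the three components, and only cells whose statistic is significant under permutation testing would be exported for further mapping.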
Procedia PDF Downloads 120
4035 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2
Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk
Abstract:
Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grassland characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models may be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots across three sites in Germany. These sites represent a north-south gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² ≈ 0.5), predictions of biodiversity indices were poor (r² ≈ 0.14). The DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for the DNN vs. 0.85 for random forest). The DNN also achieved high performance when using Sentinel-2 surface reflectance data directly, rather than combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, which simplifies dimensionality. This study demonstrates the necessity of training biomass and biodiversity models on a broad range of environmental conditions and of ensuring spatial independence to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.
Keywords: ecosystem services, grassland management, machine learning, remote sensing
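Two quantities used above are easy to make concrete: the relative root mean squared error used to compare the DNN and random forest, and the site-wise hold-out used to test transferability. A minimal sketch follows; the normalization by the mean observed value is an assumption, as the abstract does not state which normalization was used:

```python
import math

def relative_rmse(y_true, y_pred):
    """RMSE divided by the mean observed value (one common normalization;
    dividing by the range or standard deviation is also seen in the literature)."""
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse) / (sum(y_true) / len(y_true))

def leave_one_site_out(site_labels):
    """Yield (held_out_site, train_indices, test_indices) so that models are
    validated on a site they never saw, testing spatial transferability."""
    for held_out in sorted(set(site_labels)):
        train = [i for i, s in enumerate(site_labels) if s != held_out]
        test = [i for i, s in enumerate(site_labels) if s == held_out]
        yield held_out, train, test
```

Training within one site and validating on another, as this split does, is what distinguishes a transferable model from a case-specific one.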
Procedia PDF Downloads 218
4034 Analysis of the Black Sea Gas Hydrates
Authors: Sukru Merey, Caglar Sinayuc
Abstract:
Gas hydrate deposits, which are found in deep ocean sediments and in permafrost regions, are considered a fossil fuel reserve for the future. The Black Sea is also considered rich in gas hydrates, which occur predominantly as methane (CH4, ~80 to 99.9%). In this study, using literature, seismic, and other data for the Black Sea, such as salinity, sediment porosity, common gas type, temperature distribution, and pressure gradient, depressurization was selected as the optimum gas production method for the Black Sea gas hydrates. Numerical simulations were run to analyze gas production by depressurization from gas hydrate deposited in turbidites in the Black Sea.
Keywords: CH4 hydrate, Black Sea hydrates, gas hydrate experiments, HydrateResSim
Procedia PDF Downloads 623
4033 Towards Visual Personality Questionnaires Based on Deep Learning and Social Media
Authors: Pau Rodriguez, Jordi Gonzalez, Josep M. Gonfaus, Xavier Roca
Abstract:
Image sharing on social networks has increased exponentially in recent years. Officially, there are 600 million Instagrammers uploading around 100 million photos and videos per day. Consequently, there is a need to develop new tools to understand the content expressed in shared images, which will greatly benefit social media communication and enable broad and promising applications in education, advertisement, entertainment, and also psychology. Following these trends, our work aims to take advantage of the relationship between text and personality, already demonstrated by multiple researchers, in order to show that a relationship exists between images and personality as well. To achieve this goal, we observe that images posted on social networks are typically conditioned on specific words, or hashtags; therefore, any relationship between text and personality can also be observed in those posted images. Our proposal makes use of the most recent image understanding models based on neural networks to process the vast amount of data generated by social users and determine the images most correlated with personality traits. The final aim is to train a weakly supervised image-based model for personality assessment that can be used even when textual data is not available, which is an increasing trend. The procedure is as follows: we explore images publicly shared by users, selected through the accompanying texts or hashtags most strongly related to the personality traits described by the OCEAN model. These images are used for personality prediction, since they have the potential to convey more complex ideas, concepts, and emotions. As a result, the use of images in personality questionnaires will provide a deeper understanding of respondents than words alone.
In other words, from the images posted with specific tags, we train a deep learning model based on neural networks that learns to extract a personality representation from a picture and uses it to automatically find the personality that best explains the picture. A deep neural network model is thus learned from thousands of images associated with hashtags correlated to OCEAN traits. We then analyze the network activations to identify the pictures that maximally activate the neurons: the most characteristic visual features per personality trait emerge, since the filters of the convolutional layers are learned to be optimally activated depending on each personality trait. For example, among the pictures that maximally activate the high Openness trait, we see pictures of books, the moon, and the sky. For high Conscientiousness, most of the images are photographs of food, especially healthy food. The high Extraversion output is mostly activated by pictures of crowds of people. High Agreeableness images are mostly flower pictures. Lastly, for the Neuroticism trait, the high score is maximally activated by pets such as cats or dogs. In summary, despite the huge intra-class and inter-class variability of the images associated with each OCEAN trait, we found consistencies between the visual patterns of those images whose hashtags are most correlated to each trait.
Keywords: emotions and effects of mood, social impact theory in social psychology, social influence, social structure and social networks
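The activation-analysis step, finding the images that maximally activate a trait unit, reduces to sorting images by that unit's activation. A toy sketch follows; the real model is a convolutional network, and the image ids and numbers here are illustrative only:

```python
def top_activating(image_ids, activations, unit, k=3):
    """Return the k image ids with the highest activation on one output unit.
    activations: one row per image, one column per unit (e.g. an OCEAN trait)."""
    ranked = sorted(range(len(image_ids)),
                    key=lambda i: activations[i][unit], reverse=True)
    return [image_ids[i] for i in ranked[:k]]

# illustrative activations for five images on two trait units
acts = [[0.90, 0.10],   # books
        [0.20, 0.80],   # crowd
        [0.70, 0.30],   # moon
        [0.10, 0.50],   # food
        [0.95, 0.20]]   # sky
ids = ["books", "crowd", "moon", "food", "sky"]
openness_top = top_activating(ids, acts, unit=0)
```

Inspecting the top-k images per unit is what surfaces the consistent visual patterns per trait described in the abstract.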
Procedia PDF Downloads 197
4032 Research on Urban Design Method of Ancient City Guided by Catalyst Theory
Authors: Wang Zhiwei, Wang Weiwu
Abstract:
The process of urbanization in China has entered a critical period of transformation from urban expansion and construction to fine-grained urban design, forming a new direction in the field of urban design. Catalyst theory has become a prominent guiding strategy in urban planning and design. In this paper, against the background of urban renewal, catalyst theory is taken as the guiding ideology to explore an urban design method for Shouxian County. The study first briefly introduces and analyzes catalyst theory. Field investigation found that the county town has a large number of idle spaces, such as abandoned factories and schools. In the design, these idle spaces are utilized and interlinked spatially, and functional interaction is organized based on the pattern of the county town. The results show that, on the one hand, catalyst theory can enhance the vitality of linear street space with only a small amount of individual construction. On the other hand, the city can gain cultural and economic sites without damaging the historical relics or the character of the ancient city, thereby improving citizens' quality of life. The urban micro-transformation represented by catalyst theory can help ancient cities like Shouxian activate the old city and achieve gradual development.
Keywords: catalytic theory, urban design, China's ancient city, Renaissance
Procedia PDF Downloads 124
4031 Long-Term Subcentimeter-Accuracy Landslide Monitoring Using a Cost-Effective Global Navigation Satellite System Rover Network: Case Study
Authors: Vincent Schlageter, Maroua Mestiri, Florian Denzinger, Hugo Raetzo, Michel Demierre
Abstract:
Precise landslide monitoring with differential global navigation satellite systems (GNSS) is well established, but technical or economic reasons limit its application by geotechnical companies. This study demonstrates the reliability and usefulness of Geomon (Infrasurvey Sàrl, Switzerland), a stand-alone and cost-effective rover network. The system permits deploying up to 15 rovers, plus one reference station for differential GNSS. A dedicated radio communication links all the modules to a base station, where an embedded computer automatically computes all the relative positions (L1 phase, open-source RTKLib software) and populates an Internet server. Each measurement also contains information from an internal inclinometer, the battery level, and position quality indices. Contrary to standard GNSS survey systems, which suffer from a limited number of beacons that must be placed in areas with good GSM signal, Geomon offers greater flexibility and permits a real overview of the whole landslide with good spatial resolution. Each module is powered by solar panels, ensuring autonomous long-term recording. In this study, we tested the system on several sites in the Swiss mountains, setting up to 7 rovers per site, for an 18-month-long survey. The aim was to assess the robustness and accuracy of the system in different environmental conditions. In one case, we ran forced blind tests (vertical movements of a given amplitude) and compared various session parameters (durations from 10 to 90 minutes). The other cases were surveys of real landslide sites using fixed, optimized parameters. Sub-centimetric accuracy with few outliers was obtained using the best parameters (session duration of 60 minutes, baseline of 1 km or less), with the noise level on the horizontal component half that of the vertical one. The performance (percentage of aborting solutions, outliers) degraded with sessions shorter than 30 minutes.
The environment also had a strong influence on the percentage of aborting solutions (ambiguity search problem), due to multiple reflections or satellites obstructed by trees and mountains. The length of the baseline (reference-rover distance, single-baseline processing) reduced the accuracy above 1 km but had no significant effect below this limit. In critical weather conditions, the system's robustness was limited: snow, avalanches, and frost covered some rovers, including the antenna and the vertically oriented solar panels, leading to data interruption, and strong wind damaged a reference station. The possibility of changing the session parameters remotely was very useful. In conclusion, the rover network tested provided the foreseen sub-centimetric accuracy while delivering a landslide survey with dense spatial resolution. The ease of implementation and the fully automatic long-term survey were time-saving. Performance strongly depends on surrounding conditions, but short pre-measurements should allow moving a rover to a better final placement. The system offers a promising hazard mitigation technique. Improvements could include data post-processing for alerts and automatic modification of the duration and number of sessions based on battery level and rover displacement velocity.
Keywords: GNSS, GSM, landslide, long-term, network, solar, spatial resolution, sub-centimeter
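Screening such position time series for outliers is commonly done with a robust statistic like the median absolute deviation (MAD). The sketch below is a generic filter of that kind, not the actual processing used in the Geomon system:

```python
def mad_outliers(displacements, k=3.0):
    """Flag values more than k scaled MADs from the median.
    The 1.4826 factor makes the MAD consistent with the standard
    deviation for normally distributed data."""
    def median(v):
        s, n = sorted(v), len(v)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    med = median(displacements)
    mad = median([abs(v - med) for v in displacements])
    scale = 1.4826 * mad
    return [abs(v - med) > k * scale for v in displacements]

# six session solutions, in metres; the last one is a bad ambiguity fix
flags = mad_outliers([1.00, 1.10, 0.90, 1.05, 0.95, 5.00])
```

Using the median rather than the mean keeps the threshold itself from being dragged by the very outliers being hunted.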
Procedia PDF Downloads 111
4030 Towards Developing a Rural South African Child into an Engineering Graduate with Conceptual and Critical Thinking Skills
Authors: Betty Kibirige
Abstract:
Students entering the University of Zululand (UNIZULU) Science Faculty mostly arrive with skills that allow them to prepare for exams and pass them in order to satisfy the requirements for entry into a tertiary institution. Some students hail from deep rural schools with limited facilities, while others come from well-resourced schools. Personal experience has shown that a student may spend their whole time at a tertiary institution relying on the same skills acquired in high school as a sure means of entering the next level in their development, namely a postgraduate program. While it is apparent that, at this point in human history, it is impossible to teach all the possible content in any one subject, many academics still approach teaching and learning from the traditional point of view. It therefore became necessary to explore ways of developing a graduate able to approach life with skills that allow them to navigate knowledge by applying conceptual and critical thinking. Recently, the Science Faculty at the University of Zululand introduced two engineering programs. In an endeavour to prepare the engineering graduates of this institution to tackle problem-solving amid the present-day excess of available information, it became necessary to study and review the approaches used by various academics in order to settle on a possible best approach to the challenge at hand. This paper focuses on the development of a deep rural child into a graduate with conceptual and critical thinking skills as major attributes possessed upon graduation. For this purpose, various approaches were studied. A combination of these approaches was repackaged to form an approach that may appear novel to UNIZULU and the rural child, especially in the engineering discipline.
The approach was checked by offering quiz questions to students participating in an engineering module, observing test scores in the targeted module, and making comparative studies. Test results are discussed in the article. It was concluded that students' graduate attributes can indeed be tailored, subconsciously, to include conceptual and critical thinking skills, but through more than one approach, depending mainly on the student's high school background.
Keywords: graduate attributes, conceptual skills, critical thinking skills, traditional approach
Procedia PDF Downloads 242
4029 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space
Authors: Nanjiang Chen
Abstract:
Against the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper introduces a distinct continuous visibility algorithm, a leap in measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.
Keywords: visual openness, spatial continuity, ray-tracing algorithms, urban computation
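The core idea of a continuous, sample-free visibility measure can be illustrated in 2D: instead of casting a fixed number of rays (the discretization the paper avoids), represent each occluder as an angular interval at the viewpoint and take the exact uncovered fraction of the full circle. This sketch illustrates the principle only, not the paper's 3D algorithm:

```python
import math

def visible_fraction(occluded):
    """Exact fraction of the horizon visible from a viewpoint.
    occluded: (start, end) angular intervals in radians, 0 <= start < end <= 2*pi."""
    merged = []
    for s, e in sorted(occluded):
        if merged and s <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], e)  # overlap: extend the last interval
        else:
            merged.append([s, e])
    covered = sum(e - s for s, e in merged)
    return 1.0 - covered / (2 * math.pi)
```

Because the covered set is computed by exact interval union, the result carries no sampling error, a property that a ray-count-based estimate only approaches as the number of rays grows.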
Procedia PDF Downloads 46
4028 Automated Testing to Detect Instance Data Loss in Android Applications
Authors: Anusha Konduru, Zhiyong Shan, Preethi Santhanam, Vinod Namboodiri, Rajiv Bagai
Abstract:
Mobile applications are increasing significantly in number, each addressing the requirements of many users. However, quick development and enhancement cycles result in many underlying defects. Android apps create and handle a large variety of 'instance' data that has to persist across runs, such as the current navigation route, workout results, antivirus settings, or game state. Due to the nature of Android, an app can be paused, sent into the background, or killed at any time. If the instance data is not saved and restored between runs, then in addition to data loss, partially saved or corrupted data can crash the app upon resume or restart. It is difficult for the programmer to manually test this issue for all activities. The result is data loss: information entered by the user is not saved when an interruption occurs. This can degrade the user experience, because the user needs to re-enter the information after each interruption. Automated testing to detect such data loss is therefore important. This research proposes DroidDL, a data loss detector for Android, which detects instance data loss in a given Android application. We tested 395 applications and found 12 with data loss issues. The approach proved highly accurate and reliable in finding apps with this defect and can be used by Android developers to avoid such errors.
Keywords: Android, automated testing, activity, data loss
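The core check such a detector performs can be sketched as: fill in user state, drive the app through its save hook, simulate a process kill by constructing a fresh instance, restore, and compare. The class and method names below mimic Android's lifecycle hooks but are hypothetical Python stand-ins, not the actual DroidDL implementation:

```python
class Activity:
    """Toy stand-in for an Android activity with save/restore hooks."""
    def __init__(self):
        self.state = {}

    def on_save_instance_state(self):
        # a correct activity serializes all user-entered state here
        return dict(self.state)

    def on_restore_instance_state(self, bundle):
        self.state = dict(bundle)


def detect_data_loss(activity, user_input):
    """Simulate pause -> process kill -> restart; return any lost fields."""
    activity.state.update(user_input)
    bundle = activity.on_save_instance_state()   # pause: save hook is called
    restarted = type(activity)()                 # kill: a brand-new instance appears
    restarted.on_restore_instance_state(bundle)  # restart: state returns via the bundle
    return {k: v for k, v in user_input.items() if restarted.state.get(k) != v}


class ForgetfulActivity(Activity):
    """Buggy activity that never saves its state (the defect being hunted)."""
    def on_save_instance_state(self):
        return {}
```

Running the cycle against every activity with synthesized inputs, as a tool like DroidDL would, turns this manual check into an automated sweep.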
Procedia PDF Downloads 237
4027 Spin-Flip and Magnetoelectric Coupling in Acentric and Non-Polar Pb₂MnO₄
Authors: K. D. Chandrasekhar, H. C. Wu, D. J. Hsieh, B. J. Song, J. -Y. Lin, J. L. Her, L. Z. Deng, M. Gooch, C. W. Chu, H. D. Yang
Abstract:
Stress-mediated coupling of electrical and magnetic dipoles in a single-phase multiferroic is rare. Pb₂MnO₄ belongs to the multi-piezo crystal class with the space group P⁻42₁
Keywords: multiferroic, multipiezo, Pb₂MnO₄, spin-flip
Procedia PDF Downloads 237
4026 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is nascent but rapidly growing. This is in part driven by the sustainable development goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. The conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to give an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially of the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and have thus seen limited downstream applications, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural.
The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 data, at 10 m per pixel, for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69-0.79). This superhuman performance by the model is all the more significant given that it was trained on the relatively low 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution imagery in which key markers of poverty and slums, roofing and road quality, are discernible. It is important to note, however, that the human readers did not receive any training before rating; had they done so, their performance might have improved. The stellar performance of the model also comes with an inevitable shortfall: limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain, eXplainable Artificial Intelligence, through a collaborative rather than a comparative framework.
Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
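The rank correlation coefficients compared here are typically Spearman's ρ, i.e. the Pearson correlation of the ranks. A minimal sketch follows; ties are not handled, which full implementations such as scipy.stats.spearmanr do by averaging tied ranks:

```python
def spearman_rho(x, y):
    """Spearman rank correlation of two equal-length sequences (no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    mean = (len(x) - 1) / 2
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var
```

Rank-based correlation is the natural choice in this setting because both the wealth quintiles and the readers' ratings are ordinal: only the ordering of welfare scores is meaningful, not their spacing.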
Procedia PDF Downloads 106
4025 Substation Automation, Digitization, Cyber Risk and Chain Risk Management Reliability
Authors: Serzhan Ashirov, Dana Nour, Rafat Rob, Khaled Alotaibi
Abstract:
There has been fast growth in the introduction and use of communications, information, monitoring, and sensing technologies. The new technologies are making their way into industrial control systems as embedded products, software applications, IT services, or commissioned systems that enable the integration and automation of increasingly global supply chains. As a result, the lines that separated the physical, digital, and cyber worlds have diminished due to the vast implementation of new, disruptive digital technologies. The variety and increased use of these technologies introduce many cybersecurity risks affecting the cyber-resilience of the supply chain, both in terms of the product or service delivered to a customer and the members of the supply chain operation. The US Department of Energy considers the supply chain in the Industry 4.0 (IR4) space to be the weakest link in cybersecurity. IR4 brought the digitization of field devices, followed by digitalization, and eventually moved through the digital transformation space with little care for the newly introduced cybersecurity risks. This paper examines the best methodologies for securing electrical substations from cybersecurity attacks arising from supply chain risks and from digitization efforts. SCADA systems are the most vulnerable part of the power system infrastructure, due to digitization and to weaknesses and vulnerabilities in supply chain security. The paper discusses in detail how to create a secure supply chain methodology, secure substations, and mitigate the risks due to digitization.
Keywords: cybersecurity, supply chain methodology, secure substation, digitization
Procedia PDF Downloads 64
4024 Experimental Study on Different Load Operation and Rapid Load-Change Characteristics of Pulverized Coal Combustion with Self-Preheating Technology
Authors: Hongliang Ding, Ziqu Ouyang
Abstract:
Under the basic national condition that the energy structure is dominated by coal, it is of great significance to realize deep and flexible peak shaving of boilers in pulverized coal power plants and to maximize the consumption of renewable energy in the power grid, in order to ensure China's energy security and scientifically achieve the goals of carbon peak and carbon neutrality. Using the promising self-preheating combustion technology, which has the potential for broad-load regulation and rapid response to load changes, this study investigated the different-load operation and rapid load-change characteristics of pulverized coal combustion. Four effective load-stabilization bases were proposed, according to preheating temperature, coal gas composition (calorific value), combustion temperature (spatial mean temperature and mean square temperature fluctuation coefficient), and flue gas emissions (CO and NOx concentrations), on the basis of which load-change rates were calculated to assess the load response characteristics. Owing to the improvement of the physicochemical properties of pulverized coal after preheating, stable ignition and combustion could be obtained even at a low load of 25%, with a combustion efficiency of over 97.5%; NOx emission reached its lowest at 50% load, with a concentration of 50.97 mg/Nm³ (at 6% O₂). Additionally, the load ramp-up stage displayed higher load-change rates than the load ramp-down stage, with maximum rates of 3.30 %/min and 3.01 %/min, respectively. Furthermore, the driving force formed by a high load step was conducive to an increased load-change rate. The rates based on the preheating indicator attained the highest value of 3.30 %/min, while the rates based on the combustion indicator peaked at 2.71 %/min.
In comparison, the combustion indicator accurately described the system's combustion state and load changes, whereas the preheating indicator was easier to acquire and gave a higher load-change rate; the appropriate evaluation strategy therefore depends on the actual situation. This study verified a feasible method for deep and flexible peak shaving of coal-fired power units, providing basic data and technical support for future engineering applications.
Keywords: clean coal combustion, load-change rate, peak shaving, self-preheating
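The load-change rates quoted above follow directly from a load time series. A minimal sketch of the computation (maximum rate between consecutive samples, in percent of full load per minute; the paper's exact averaging window is an assumption):

```python
def max_load_change_rate(loads_pct, times_min):
    """Maximum absolute load-change rate in % of full load per minute."""
    rates = [abs(l2 - l1) / (t2 - t1)
             for l1, l2, t1, t2 in zip(loads_pct, loads_pct[1:],
                                       times_min, times_min[1:])]
    return max(rates)

# ramp-up from 50% to 75% load over ten minutes, sampled every five
rate = max_load_change_rate([50, 60, 75], [0, 5, 10])  # steepest segment: 3 %/min
```

Evaluated against the preheating, coal gas, combustion temperature, and flue gas indicators in turn, the same computation yields the per-indicator rates reported in the abstract.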
Procedia PDF Downloads 68
4023 Predictive Analysis of the Stock Price Market Trends with Deep Learning
Authors: Suraj Mehrotra
Abstract:
The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It determines whether companies succeed or spiral downward. A thorough understanding of it is important; many companies have whole divisions dedicated to analyzing both their own stock and that of rival companies. Linking the world of finance with artificial intelligence (AI), especially for the stock market, is a relatively recent development. Predicting how stocks will perform, considering all external factors and previous data, has traditionally been a human task. With the help of AI, however, machine learning models can produce more complete predictions of financial trends. Looking at the stock market specifically, predicting the next day's open, closing, high, and low prices is very hard to do; machine learning makes this task much easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. Used effectively, this opens new doors in the business and finance world, enabling companies to make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural-network-based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Comparing the accuracy on the testing set of four different models (linear regression, neural network, decision tree, and naïve Bayes) across the stocks Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan & Chase, and Johnson & Johnson, the naïve Bayes and linear regression models worked best. On the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model.
The training set showed similar results, except that the decision tree model was perfect, with complete accuracy in its predictions. This indicates that the decision tree model likely overfitted the training set, which explains its weaker performance on the testing set.
Keywords: machine learning, testing set, artificial intelligence, stock analysis
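The overfitting symptom described above (perfect training accuracy but weaker test accuracy) can be reproduced with any model that memorizes its training data. The sketch below uses a 1-nearest-neighbour predictor on made-up data purely as an illustration; it is not the paper's decision tree or its dataset:

```python
def nn1_predict(train_x, train_y, x):
    """1-nearest-neighbour: effectively memorizes the training data."""
    best = min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    return train_y[best]

def accuracy(train_x, train_y, xs, ys):
    hits = sum(nn1_predict(train_x, train_y, x) == y for x, y in zip(xs, ys))
    return hits / len(xs)

train_x, train_y = [1.0, 2.0, 3.0, 4.0], ["up", "down", "up", "down"]
train_acc = accuracy(train_x, train_y, train_x, train_y)         # always perfect
test_acc = accuracy(train_x, train_y, [1.6, 3.4], ["up", "up"])  # lower on unseen data
```

An unpruned decision tree behaves the same way: it can carve the training set into perfectly pure leaves, so training accuracy is 1.0 while the learned boundaries fail to generalize.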
Procedia PDF Downloads 95