Search results for: metastases in lymph nodes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 754

34 Prediction of Pile-Raft Responses Induced by Adjacent Braced Excavation in Layered Soil

Authors: Linlong Mu, Maosong Huang

Abstract:

Considering excavations in urban areas, the soil deformation induced by the excavations usually causes damage to the surrounding structures. Displacement control therefore becomes a critical indicator in foundation design in order to protect the surrounding structures. Evaluating the damage potential that an excavation poses to the surrounding structures usually depends on the finite element method (FEM) because of the complexity of the excavation and the variety of the surrounding structures. Moreover, evaluating the influence of an excavation on surrounding structures is a three-dimensional problem, and it is now well recognized that the small-strain behaviour of the soil significantly influences the responses of the excavation. Three-dimensional FEM accounting for the small-strain behaviour of the soil is a very complex method that is hard for engineers to use. Thus, it is important to obtain a simplified method with which engineers can predict the influence of excavations on the surrounding structures. Based on large-scale finite element calculations with a small-strain soil model, coupled with inverse analysis, an empirical method is proposed to calculate the three-dimensional soil movement induced by a braced excavation. The empirical method captures the small-strain behaviour of the soil and is suitable for use in layered soil. The free-field soil movement is then applied to the pile to calculate the responses of the pile in both the vertical and horizontal directions. The asymmetric solutions for problems in a layered elastic half-space are employed to solve the interactions between soil points. Both vertical and horizontal pile responses are solved through the finite difference method based on elastic theory. Interactions among the nodes along a single pile, pile-pile interactions, pile-soil-pile interactions and soil-soil interactions are accounted for to improve the calculation accuracy of the method.
For passive piles, the shadow effects are also calculated in the method. Finally, the restrictions of the raft on the piles and the soil are summarized as: (1) the sum of the internal forces between the elements of the raft and the elements of the foundation, including piles and soil surface elements, is equal to zero; (2) the deformations of the pile heads and of the soil surface elements are the same as the deformations of the corresponding elements of the raft. Validations are carried out by comparing the results from the proposed method with results from model tests, FEM and the existing literature. The comparisons show that the results from the proposed method agree very well with those from the other methods. The method proposed herein is suitable for predicting the responses, in both the vertical and horizontal directions, of a pile-raft foundation induced by braced excavation in layered soil when the deformation is small. However, more data is needed to verify the method before it can be used in practice.
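
The core step of applying a free-field soil movement to a pile can be illustrated with a minimal 1D sketch: an Euler-Bernoulli beam stiffness is assembled along the pile and the soil is lumped into lateral springs at the nodes, so the free-field movement enters as an equivalent load. This is a simplified analogue of the abstract's finite-difference elastic analysis, not the authors' actual formulation; the function name, the single spring stiffness `k_soil` per node, and the parameter values are illustrative assumptions.

```python
import numpy as np

def beam_element_stiffness(EI, L):
    """4x4 Euler-Bernoulli beam element stiffness (DOFs: w1, th1, w2, th2)."""
    return EI / L**3 * np.array([
        [ 12.0,    6*L, -12.0,    6*L],
        [  6*L, 4*L**2,  -6*L, 2*L**2],
        [-12.0,   -6*L,  12.0,   -6*L],
        [  6*L, 2*L**2,  -6*L, 4*L**2],
    ])

def passive_pile_response(EI, length, k_soil, soil_movement):
    """Lateral pile deflections under an imposed free-field soil movement.

    soil_movement: lateral free-field displacement at each of the pile
    nodes (head to tip).  The soil is idealized as independent lateral
    springs of stiffness k_soil (force per unit deflection per node),
    which transmit the load F_i = k_soil * s_i to the pile.
    """
    n_nodes = len(soil_movement)
    n_el = n_nodes - 1
    L = length / n_el
    ndof = 2 * n_nodes
    K = np.zeros((ndof, ndof))
    ke = beam_element_stiffness(EI, L)
    for e in range(n_el):
        idx = slice(2 * e, 2 * e + 4)
        K[idx, idx] += ke            # assemble beam stiffness
    F = np.zeros(ndof)
    for i in range(n_nodes):
        K[2 * i, 2 * i] += k_soil    # lateral soil spring on w-DOF
        F[2 * i] += k_soil * soil_movement[i]
    u = np.linalg.solve(K, F)
    return u[0::2]                   # lateral deflection at each node
```

A quick sanity check of the idealization: a uniform soil movement produces a rigid translation of the (free-headed) pile with no bending, so the pile simply follows the soil.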

Keywords: excavation, pile-raft foundation, passive piles, deformation control, soil movement

Procedia PDF Downloads 231
33 Video Analytics on Pedagogy Using Big Data

Authors: Jamuna Loganath

Abstract:

Education is the key to the development of any individual’s personality. Today’s students will be tomorrow’s citizens of the global society. The education of the student is the edifice on which his/her future will be built. Schools should therefore provide an all-round development of students so as to foster a healthy society. The behavior and attitude of the students in school play an essential role in the success of the education process. Frequent reports of misbehaviors such as clowning, harassing classmates, and verbal insults are becoming common in schools today. If this issue is left unattended, it may foster negative attitudes and increase delinquent behavior. So, the need of the hour is to find a solution to this problem. To solve this issue, it is important to monitor the students’ behavior in school, give necessary feedback, and mentor them to develop a positive attitude and help them become successful grownups. Nevertheless, measuring students’ behavior and attitude is extremely challenging. No present technology has proven effective in this measurement process, because the actions, reactions, interactions, and responses of the students are rarely captured as data, due to their complexity. The purpose of this proposal is to recommend an effective supervising system, after carrying out a feasibility study, by measuring the behavior of the students. This can be achieved by equipping schools with CCTV cameras. These CCTV cameras, installed in various schools of the world, capture the facial expressions and interactions of the students inside and outside their classrooms. The real-time raw videos captured from the CCTV can be uploaded to the cloud over a network. The video feeds are distributed across various nodes in the same rack, or on different racks in the same cluster, in Hadoop HDFS. The video feeds are converted into small frames and analyzed using various pattern recognition algorithms and the MapReduce algorithm.
Then, the video frames are compared with a benchmarking database (good behavior). When misbehavior is detected, an alert message can be sent to the counseling department, which helps them in mentoring the students. This will help in improving the effectiveness of the education process. As video feeds come from multiple geographical areas (schools from different parts of the world), big data helps in real-time analysis, as it computationally reveals patterns, trends, and associations, especially those relating to human behavior and interactions. It also analyzes data that cannot be handled by traditional software applications such as RDBMS and OODBMS, and it has proven successful in handling human reactions with ease. Therefore, big data could certainly play a vital role in handling this issue. Thus, the effectiveness of the education process can be enhanced with the help of video analytics using the latest big data technology.
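
The map-then-reduce pattern described above can be sketched in miniature. This is only an illustrative pure-Python analogue of a Hadoop MapReduce job, with hypothetical school names and behavior labels standing in for the output of the frame-level pattern recognition stage; the alert threshold is an assumed parameter.

```python
from collections import defaultdict

# Toy records standing in for analyzed video frames: (school, behavior_label)
frames = [
    ("school_A", "normal"), ("school_A", "misbehavior"),
    ("school_B", "normal"), ("school_A", "misbehavior"),
    ("school_B", "normal"),
]

def map_phase(records):
    """Map: emit (school, 1) for every frame flagged as misbehavior."""
    for school, label in records:
        if label == "misbehavior":
            yield school, 1

def reduce_phase(pairs):
    """Reduce: sum the misbehavior flags per school."""
    counts = defaultdict(int)
    for school, n in pairs:
        counts[school] += n
    return dict(counts)

alerts = reduce_phase(map_phase(frames))
# schools whose misbehavior count crosses a threshold would trigger an
# alert to the counseling department
flagged = [s for s, n in alerts.items() if n >= 2]
```

In a real deployment, the map phase would run on the HDFS nodes holding the frame data and the reduce phase would aggregate the partial counts across the cluster.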

Keywords: big data, cloud, CCTV, education process

Procedia PDF Downloads 240
32 The Spatial Circuit of the Audiovisual Industry in Argentina: From Monopoly and Geographic Concentration to New Regionalization and Democratization Policies

Authors: André Pasti

Abstract:

Historically, the communication sector in Argentina has been characterized by intense monopolization and geographical concentration in the city of Buenos Aires. In 2000, the four major media conglomerates in operation – Clarín, Telefónica, America and Hadad – controlled 84% of the national media market. By 2009, new policies were implemented as a result of civil society organizations’ demands. Legally, a new regulatory framework was approved: law 26,522 on Audiovisual Communication Services. These policies were intended to create new conditions for the development of the audiovisual economy in the territory of Argentina. The regionalization of audiovisual production and the democratization of channels and of access to media were among the priorities. This paper analyses the main changes and continuities in the organization of the spatial circuit of the audiovisual industry in Argentina provoked by these new policies, which aim at increasing the diversity of audiovisual producers and promoting regional audiovisual industries. For this purpose, a national program for the development of audiovisual centers within the country was created. This program fostered a federalized production network, based on nine audiovisual regions and 40 nodes. Each node created the technical, financial and organizational conditions to gather different actors in audiovisual production – such as SMEs, social movements and local associations. The expansion of access to technical networks was also a concern of other policies, such as ‘Argentina Connected’, whose objective was to expand access to broadband Internet. The Open Digital Television network also received considerable investments. Furthermore, measures have been carried out to impose limits on the concentration of ownership, to eliminate oligopolies and to ensure more competition in the sector. These actions were intended to force a division of the media conglomerates into smaller groups.
Nevertheless, the corporations that compose these conglomerates have resisted strongly, making full use of their economic and judicial power. Indeed, the absence of any effective impact of such measures is attested by the fact that the audiovisual industry remains strongly concentrated in Argentina. Overall, these new policies were properly designed to decentralize audiovisual production and expand the regional diversity of the audiovisual industry. However, the effective transformation of the organization of the audiovisual circuit in the territory faced several forms of resistance. This can be explained first and foremost by the ideological and economic power of the media conglomerates. Second, there is an inherited inertia from the unequal distribution of the objects needed for audiovisual production and consumption. Lastly, the resistance also stems from financial needs and from the excessive dependence on the state for the promotion of regional audiovisual production.

Keywords: Argentina, audiovisual industry, communication policies, geographic concentration, regionalization, spatial circuit

Procedia PDF Downloads 216
31 Chemical Technology Approach for Obtaining Carbon Structures Containing Reinforced Ceramic Materials Based on Alumina

Authors: T. Kuchukhidze, N. Jalagonia, T. Archuadze, G. Bokuchava

Abstract:

The growing scientific-technological progress of modern civilization makes it urgent to produce construction materials that can work successfully under conditions of high temperature, radiation, pressure, speed, and chemically aggressive environments. Very few types of materials can withstand such extreme conditions, and among them ceramic materials are in the first place. Corundum ceramics is the most useful material for the creation of constructive nodes and products of various purposes, owing to its low cost, easily accessible raw materials and good combination of physical-chemical properties. However, ceramic composite materials have one disadvantage: they are less plastic and have lower toughness. In order to increase the plasticity, the ceramics are reinforced with various dopants that reduce the growth of cracks. It has been shown that adding even a small amount of carbon fibers and carbon nanotubes (CNT) as reinforcing material significantly improves the mechanical properties of the products, while keeping the advantages of alumina ceramics. Graphene in a composite material acts in the same way as inorganic dopants (MgO, ZrO₂, SiC and others) and performs the role of an aluminum oxide inhibitor, as it creates a shell, which makes it possible to reduce the sintering temperature; at the same time it acts as a damper, because the scattering of a shock wave takes place on the carbon structures. The application of different structural modifications of carbon (graphene, nanotubes and others) as reinforcing material makes it possible to create multi-purpose, highly requested composite materials based on alumina ceramics. The present work offers a simplified technology for obtaining aluminum oxide ceramics reinforced with carbon nanostructures, in which chemical modification with the doping carbon nanostructures is implemented during the synthesis of the final powdery composite – alumina.
In the charge, the doping carbon nanostructures are connected to the matrix substance by C-O-Al bonds, which provide their homogeneous spatial distribution. In ceramics obtained as a result of the consolidation of such powders, the carbon fragments are equally distributed throughout the entire aluminum oxide matrix, which increases bending strength and crack resistance. The proposed way of preparing the charge simplifies the technological process and decreases energy consumption and synthesis duration, and therefore requires less financial expense. In the implementation of this work, modern instrumental methods were used: electron and optical microscopy, X-ray structural and granulometric analysis, and UV, IR, and Raman spectroscopy.

Keywords: ceramic materials, α-Al₂O₃, carbon nanostructures, composites, characterization, hot-pressing

Procedia PDF Downloads 119
30 Spatial Heterogeneity of Urban Land Use in the Yangtze River Economic Belt Based on DMSP/OLS Data

Authors: Liang Zhou, Qinke Sun

Abstract:

Taking the Yangtze River Economic Belt as an example, and using long-term nighttime light data from DMSP/OLS from 1992 to 2012, support vector machine (SVM) classification was used to quantitatively extract the urban built-up areas of the economic belt, and spatial analysis tools such as the expansion intensity index and the standard deviational ellipse were introduced to discuss in detail the strength, direction, and type of expansion of the upper, middle and lower reaches of the economic belt and of its key node cities. The results show that: (1) From 1992 to 2012, the built-up areas of the major cities in the Yangtze River valley showed a rapid expansion trend. The built-up area expanded by 60,392 km², at an average annual expansion rate of 31%, growing from 9,615 km² in 1992 to 70,007 km² in 2012. The spatial gradient analysis of the watershed shows that the expansion of urban built-up areas in the basin takes Shanghai as the leading force, with the expansion pattern declining in the order upstream (36% average annual rate), downstream (35%), midstream (17%); the midstream expansion rate is thus about 50% of the upstream and downstream rates. (2) The analysis of expansion intensity shows that urban expansion intensity in the Yangtze River basin has generally shown an upward trend; the downstream region has continued to rise, while the upper and middle reaches have experienced fluctuations of different amplitudes. Analyzing the expansion strength of the key node cities further, Chengdu, Chongqing, and Wuhan in the upper and middle reaches maintain a high degree of consistency with the regional expansion intensity, while the node cities downstream, with Shanghai as the core, continue to maintain a high level of expansion.
(3) The standard deviational ellipse analysis shows that the overall center of gravity of the cities of the Yangtze River basin is located in Anqing City, Anhui Province, and that it exhibited a reciprocating movement from 1992 to 2012. The distribution range of the nighttime-light standard deviational ellipse increased from 61.96 km² to 76.52 km², and the growth of the major axis of the ellipse was significantly larger than that of the minor axis. The ellipse has an obvious east-west axiality, with the nighttime lights of the downstream area occupying the leading position in the luminosity scale of the entire urban system.
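
The standard deviational ellipse used here can be sketched as follows: the mean center, axis lengths and orientation follow from an eigen-decomposition of the coordinate covariance matrix. This is one common construction, shown for illustration only; definitions in the literature differ in scaling conventions (e.g., sample-size corrections), and the function name and input points are assumptions.

```python
import numpy as np

def standard_deviational_ellipse(x, y):
    """Mean centre, semi-axes and orientation of the SDE of a point set.

    The semi-axes are taken as the standard deviations along the two
    principal directions of the coordinate covariance matrix; theta is
    the major-axis bearing in degrees from the x-axis.
    """
    mx, my = x.mean(), y.mean()
    cov = np.cov(np.vstack([x - mx, y - my]))
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    minor, major = np.sqrt(eigvals)             # std. devs along axes
    theta = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))
    area = np.pi * major * minor
    return (mx, my), major, minor, theta, area
```

Applied to yearly city-light centroids, the growth of `major` relative to `minor` quantifies the east-west axiality reported above.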

Keywords: urban space, support vector machine, spatial characteristics, night lights, Yangtze River Economic Belt

Procedia PDF Downloads 114
29 Comparison between the Quadratic and the Cubic Linked Interpolation on the Mindlin Plate Four-Node Quadrilateral Finite Elements

Authors: Dragan Ribarić

Abstract:

We employ the so-called problem-dependent linked interpolation concept to develop two cubic 4-node quadrilateral Mindlin plate finite elements with 12 external degrees of freedom. In the problem-independent linked interpolation, the interpolation functions are independent of any material parameters of the problem and the rotation fields are not expressed in terms of the nodal displacement parameters. On the contrary, in the problem-dependent linked interpolation, the interpolation functions depend on the material parameters and the rotation fields are expressed in terms of the nodal displacement parameters. Two cubic 4-node quadrilateral plate elements are presented, named Q4-U3 and Q4-U3R5. The first is modelled with one displacement and two rotation degrees of freedom at each of the four element nodes; the second element has five additional internal degrees of freedom, which achieve polynomial completeness of the cubic form and can be statically condensed within the element. Both elements pass the constant-bending patch test exactly, as well as the non-zero constant-shear patch test on the oriented regular mesh geometry in the case of cylindrical bending. For any mesh shape, the elements have the correct rank, and only the three eigenvalues corresponding to the rigid body motions are zero. There are no additional spurious zero modes responsible for instability of the finite element models. In comparison with the problem-independent cubic linked interpolation implemented in Q9-U3, the nine-node plate element, significantly fewer degrees of freedom are employed in the model while retaining the interpolation conformity between adjacent elements. The presented elements are also compared to the existing problem-independent quadratic linked-interpolation element Q4-U2 and to the other known elements that use the quadratic or the cubic linked interpolation, by testing them on several benchmark examples.
A simple functional upgrade from the quadratic to the cubic linked interpolation, implemented in the Q4-U3 element, showed no significant improvement over the quadratic linked form of the Q4-U2 element. Only when the additional bubble terms, which complete the full cubic linked interpolation form, are incorporated in the displacement and rotation fields is a qualitative improvement achieved, in the Q4-U3R5 element. Nevertheless, the locking problem exists for both presented elements, as in all pure displacement elements applied to very thin plates modelled by coarse meshes. However, good, and even slightly better, performance can be noticed for the Q4-U3R5 element when compared with elements from the literature, provided the model meshes are moderately dense and the plate thickness is not extremely thin. In some cases, it is comparable to or even better than the Q9-U3 element, which has as many as 12 more external degrees of freedom. A significant improvement can be noticed in particular when modeling very skew plates, models with singularities in the stress fields, and circular plates with distorted meshes.
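
The rank check mentioned above (exactly as many zero eigenvalues as rigid-body modes, and no spurious zero-energy modes) can be illustrated on a simpler 1D analogue. The sketch below counts the zero eigenvalues of a 2-node Euler-Bernoulli beam element stiffness, which has exactly two rigid-body modes (translation and rotation); it is not the plate element of the paper, just the same diagnostic applied to a textbook matrix with illustrative values of EI and L.

```python
import numpy as np

# Stiffness of a 2-node Euler-Bernoulli beam element
# (DOFs: w1, th1, w2, th2) -- a 1D analogue of the plate rank check.
EI, L = 1.0, 2.0
K = EI / L**3 * np.array([
    [ 12.0,    6*L, -12.0,    6*L],
    [  6*L, 4*L**2,  -6*L, 2*L**2],
    [-12.0,   -6*L,  12.0,   -6*L],
    [  6*L, 2*L**2,  -6*L, 4*L**2],
])
eigvals = np.linalg.eigvalsh(K)
# Count eigenvalues that are numerically zero relative to the largest.
n_zero = int(np.sum(np.abs(eigvals) < 1e-9 * eigvals.max()))
# n_zero == 2: rigid translation and rigid rotation; any extra zero
# would be a spurious zero-energy (hourglass) mode.
```

For the plate elements of the paper the same count must equal three (one transverse translation and two rotations), for any mesh shape.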

Keywords: Mindlin plate theory, problem-independent linked interpolation, problem-dependent interpolation, quadrilateral displacement-based plate finite elements

Procedia PDF Downloads 312
28 Hydrological-Economic Modeling of Two Hydrographic Basins of the Coast of Peru

Authors: Julio Jesus Salazar, Manuel Andres Jesus De Lama

Abstract:

There are very few models that serve to analyze the use of water in the socio-economic process. On the supply side, the conjunctive use of groundwater has been considered in addition to the simple limits on the availability of surface water. In addition, we have worked on waterlogging and the effects on water quality (mainly salinity). In this paper, a 'complex' water economy is examined: one in which demands grow differentially not only within but also between sectors, and in which there are limited opportunities to increase consumptive use. In particular, high-value growth, i.e., the growth of the production of high-value irrigated crops within the basins of the case study, together with the rapidly growing urban areas, provides a rich context in which to examine the general problem of water management at the basin level. At the same time, long-term natural aridity has made the eco-environment of the basins located on the coast of Peru very vulnerable, and the exploitation and immediate use of water resources have further deteriorated the situation. The methodology presented is optimization with embedded simulation: the basin-wide simulation of flows, water balances and crop growth is embedded within the optimization of water allocation, reservoir operation, and irrigation scheduling. The modeling framework is developed from a network of river basins that includes multiple origin nodes (reservoirs, aquifers, water courses, etc.) and multiple demand sites along the river, including places of consumptive use for agricultural, municipal and industrial purposes, and uses of running water on the coast of Peru. The economic benefits associated with water use are evaluated for different demand management instruments, including water rights, based on the production and benefit functions of water use in the urban, agricultural and industrial sectors.
This work represents a new effort to analyze the use of water at the regional level and to evaluate the modernization of the integrated management of water resources and socio-economic territorial development in Peru. It will also allow the establishment of policies to improve the implementation of integrated water resources management and development. Input-output analysis is essential for presenting a theory of the production process, based on a particular type of production function. This work also presents the Computable General Equilibrium (CGE) version of the economic model for water resource policy analysis, which was specifically designed for analyzing large-scale water management. As the platform for the CGE simulation, GEMPACK, a flexible system for solving CGE models, is used for formulating and solving the CGE model through the percentage-change approach. GEMPACK automates the process of translating the model specification into a model solution program.
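
The allocation problem at the heart of such a framework can be sketched in its simplest form: a single shared supply, several demand sites with constant marginal benefits and demand caps. Under those assumptions the benefit-maximizing allocation is obtained greedily, filling sites in order of marginal benefit. This is only a toy stand-in for the paper's optimization-with-embedded-simulation model; the site names, benefit values and caps are invented for illustration.

```python
def allocate_water(supply, sites):
    """Greedy benefit-maximizing allocation of a single shared supply.

    sites: dict name -> (marginal_benefit, demand_cap).  With constant
    marginal benefits and one supply constraint, filling sites in
    descending order of marginal benefit is exactly optimal.
    """
    alloc = {name: 0.0 for name in sites}
    remaining = supply
    for name, (benefit, cap) in sorted(sites.items(),
                                       key=lambda kv: -kv[1][0]):
        take = min(cap, remaining)   # serve the most valuable use first
        alloc[name] = take
        remaining -= take
    return alloc

# Hypothetical demand sites: marginal benefit per hm^3 and demand cap.
result = allocate_water(130.0, {
    "urban":       (120.0, 30.0),
    "agriculture": (45.0, 100.0),
    "industry":    (80.0, 50.0),
})
```

Real basin models replace the constant benefits with nonlinear production and benefit functions and add reservoir, aquifer and water-quality constraints, which is what motivates the CGE/GEMPACK machinery described above.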

Keywords: water economy, simulation, modeling, integration

Procedia PDF Downloads 155
27 Gendered Mobility: Deep Distributions in Urban Transport Systems in Delhi

Authors: Nidhi Prabha

Abstract:

Transportation as a sector is one of the most significant infrastructural elements of the ‘urban.’ The distinctness of urban life in a city is marked by the dynamic movements that the city-space enables. It is therefore important to study the public-transport systems that enable and foster the mobility which characterizes the urban. It is also crucial to underscore the way one examines urban transport systems: either as an infrastructural unit in a strict physical-structural sense, or as a structural unit which acts as a prism refracting multiple experiences depending on the location of the ‘commuter.’ In the proposed paper, the attempt is to uncover and investigate the assumption of the neuter-commuter by looking at urban transportation in the second sense, i.e., as a structural unit which is experienced differently by different kinds of commuters, thus making transportation deeply distributed across the various social structures and locations, like class or gender, which map onto the transport systems. To this end, the public-transit systems operating in urban Delhi, i.e., the Delhi Metro and the public buses run by the Delhi Transport Corporation, are taken as case studies. The study is premised on knowledge and data gained from both primary and secondary sources. Primary sources include data and knowledge collected from fieldwork, whose methodology ranged from ‘mixed methods’ (‘qualitative-then-quantitative’) to borrowed ethnographic techniques. Apart from fieldwork, other primary sources include the Annual Reports and policy documents of the Delhi Metro Rail Corporation (DMRC) and the Delhi Transport Corporation (DTC), Union and Delhi budgets, the Economic Survey of Delhi, press releases, etc. Secondary sources include the vast array of literature available on the critical nodes that inform the research, like gender, transport geographies, urban space, etc.
The study indicates a deeply-distributed urban transport system wherein the various social-structural locations of different kinds of commuters map onto the way these commuters experience mobility or movement within the city space. Mobility, therefore, becomes gendered and has class-based ramifications, and the neuter-commuter assumption is thus challenged. Such an understanding enables us to challenge the anonymity which the ‘urban’ otherwise claims to provide over the rural. The rural is opposed to the urban in that the urban supposedly ushers in a modern way of life, breaking the ties of traditional social identities. A careful study of the transport systems through the traveling patterns and choices of commuters, however, indicates that this does not hold true, as even the same ‘public space’ of the transport systems allocates different places to different kinds of commuters. The central argument made through the research is therefore that infrastructure like urban transport systems has to be studied and examined as more than just a physical structure. The varied experiences of the daily mobility of different kinds of commuters have to be taken into account in order to design and plan more inclusive transport systems.

Keywords: gender, infrastructure, mobility, urban-transport-systems

Procedia PDF Downloads 226
26 Hyperelastic Constitutive Modelling of the Male Pelvic System to Understand the Prostate Motion, Deformation and Neoplasms Location with the Influence of MRI-TRUS Fusion Biopsy

Authors: Muhammad Qasim, Dolors Puigjaner, Josep Maria López, Joan Herrero, Carme Olivé, Gerard Fortuny

Abstract:

Computational modeling of the human pelvis using the finite element (FE) method has become extremely important for understanding the mechanics of prostate motion and deformation when a transrectal ultrasound (TRUS) guided biopsy is performed. The number of reliable and validated hyperelastic constitutive FE models of the male pelvic region is limited, and the existing models do not precisely describe the anatomical behavior of the pelvic organs, mainly of the prostate and the location of its neoplasms. The motion and deformation of the prostate during TRUS-guided biopsy make it difficult to know the location of potential lesions in advance. When using this procedure, practitioners can only provide rough estimates of the lesion locations. Consequently, multiple biopsy samples are required to target one single lesion. In this study, the whole pelvis model (comprising the rectum, bladder, pelvic muscles, prostate transitional zone (TZ), and peripheral zone (PZ)) is used for the simulations. An isotropic hyperelastic approach (the Signorini model) was used for all the soft tissues except the vesical muscles. The vesical muscles are assumed to have a linear elastic behavior due to the lack of experimental data with which to determine the constants involved in hyperelastic models. The tissue and organ geometries of the 3D meshes are taken from the existing literature. The biomechanical parameters were then obtained under the different testing techniques described in the literature. The acquired parametric values for uniaxial stress/strain data are used in the Signorini model to reproduce the anatomical behavior of the pelvis model. Five mesh nodes representing small prostate lesions are selected prior to biopsy, and each lesion’s final position is tracked when a TRUS probe force of 30 N is applied at the inside rectum wall. The open-source software Code_Aster is used for the numerical simulations. Moreover, the overall effects of pelvic organ deformation when a TRUS-guided biopsy is induced are demonstrated.
The deformation of the prostate and the displacement of the neoplasms showed that the material properties assigned to the organs parametrically alter the resulting lesion migration. The distance traveled by these lesions ranged between 3.77 and 9.42 mm. The lesion displacement and organ deformation are compared with and analyzed against our previous study, in which we used linear elastic properties for all pelvic organs. Furthermore, axial and sagittal slices taken from Magnetic Resonance Imaging (MRI) and TRUS images are also compared visually with our preliminary study.
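
For readers unfamiliar with the constitutive law, the Signorini model is commonly written (neglecting the volumetric part) as a strain-energy density W = C10(I1−3) + C01(I2−3) + C20(I1−3)², where I1 and I2 are the first two strain invariants. The sketch below evaluates W for an incompressible uniaxial stretch; the constants are purely illustrative, not the tissue parameters used in the study, and the exact form implemented in Code_Aster may include an additional volumetric term.

```python
def signorini_energy(stretch, c10, c01, c20):
    """Signorini strain-energy density W = c10*(I1-3) + c01*(I2-3)
    + c20*(I1-3)**2 for an incompressible uniaxial stretch (lambda).
    Constants c10, c01, c20 are illustrative material parameters."""
    lam = stretch
    i1 = lam**2 + 2.0 / lam        # first invariant, J = 1
    i2 = 2.0 * lam + 1.0 / lam**2  # second invariant, J = 1
    return c10 * (i1 - 3) + c01 * (i2 - 3) + c20 * (i1 - 3) ** 2
```

Fitting c10, c01 and c20 to the uniaxial stress/strain data mentioned above is what ties the model to each pelvic tissue.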

Keywords: code-aster, magnetic resonance imaging, neoplasms, transrectal ultrasound, TRUS-guided biopsy

Procedia PDF Downloads 87
25 Rotterdam in Transition: A Design Case for a Low-Carbon Transport Node in Lombardijen

Authors: Halina Veloso e Zarate, Manuela Triggianese

Abstract:

The urban challenges posed by rapid population growth, climate adaptation, and sustainable living have compelled Dutch cities to reimagine their built environment and transportation systems. As a pivotal contributor to CO₂ emissions, the transportation sector in the Netherlands demands innovative solutions for transitioning to low-carbon mobility. This study investigates the potential of transit-oriented development (TOD) as a strategy for achieving carbon reduction and sustainable urban transformation. Focusing on the Lombardijen station area in Rotterdam, which is targeted for significant densification, this paper presents a design-oriented exploration of a low-carbon transport node. By employing a research-by-design methodology, this study delves into multifaceted factors and scales, aiming to propose future scenarios for Lombardijen. Drawing from a synthesis of existing literature, applied research, and practical insights, a robust design framework emerges. To inform this framework, governmental data concerning the built environment and material embodied carbon are harnessed. However, the restricted access to crucial datasets, such as property ownership information from the cadastre and embodied carbon data from De Nationale Milieudatabase, underscores the need for improved data accessibility, especially during the concept design phase. The findings of this research contribute fundamental insights not only to the Lombardijen case but also to TOD studies across Rotterdam's 13 nodes and similar global contexts. Spatial data related to property ownership facilitated the identification of potential densification sites, underscoring its importance for informed urban design decisions. Additionally, the paper highlights the disparity between the essential role of embodied carbon data in environmental assessments for building permits and its limited accessibility due to proprietary barriers.
Although this study lays the groundwork for sustainable urbanization through TOD-based design, it acknowledges an area of future research worthy of exploration: the socio-economic dimension. Given the complex socio-economic challenges inherent in the Lombardijen area, extending beyond spatial constraints, a comprehensive approach demands integration of mobility infrastructure expansion, land-use diversification, programmatic enhancements, and climate adaptation. While the paper adopts a TOD lens, it refrains from an in-depth examination of issues concerning equity and inclusivity, opening doors for subsequent research to address these aspects crucial for holistic urban development.

Keywords: Rotterdam Zuid, transit-oriented development, carbon emissions, low-carbon design, cross-scale design, data-supported design

Procedia PDF Downloads 84
24 Citation Analysis of New Zealand Court Decisions

Authors: Tobias Milz, L. Macpherson, Varvara Vetrova

Abstract:

The law is a fundamental pillar of human societies, as it shapes, controls and governs how humans conduct business, behave and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science and AI are creating opportunities to support the practice, research and study of this pervasive domain. It is therefore not surprising that there has been an increase in investments in supporting technologies for the legal industry (also known as “legal tech” or “law tech”) over the last decade. A sub-discipline of particular appeal is concerned with assisted legal research. Supporting law researchers and practitioners in retrieving information from the vast amount of ever-growing legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the early nineteenth century is legal citation indexing. Among other use cases, citation indexes have provided an effective means to discover new precedent cases. Nowadays, computer-assisted network analysis tools allow for new and more efficient ways to reveal the “hidden” information that is conveyed through citation behavior. Unfortunately, openly available legal data is still lacking in New Zealand, and access to such networks is only commercially available via providers such as LexisNexis. Consequently, there is a need to create, analyze and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts across all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers and an OCR pipeline to collect court decisions from openly available sources such as NZLII and convert them into uniform, machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions within the decision text.
The data was then imported into a graph-based database (Neo4j) with the courts and their respective cases represented as nodes and the extracted citations as links. Furthermore, additional links between courts of connected cases were added to indicate an indirect citation between the courts. Neo4j, as a graph-based database, allows efficient querying and use of network algorithms such as PageRank to reveal the most influential/most cited courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates a possible scale-free behavior of the network. This is in line with findings of the respective citation networks of the U.S. Supreme Court, Austria and Germany. The authors of this paper provide the database as an openly available data source to support further legal research. The decision texts can be exported from the database to be used for NLP-related legal research, while the network can be used for in-depth analysis. For example, users of the database can specify the network algorithms and metrics to only include specific courts to filter the results to the area of law of interest.
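The extraction-and-ranking pipeline described above can be sketched in a few lines of Python. The citation pattern, the toy decisions, and the hand-rolled PageRank below are illustrative stand-ins, not the authors' actual code; the real system parses OCR'd judgments with regular expressions and runs PageRank inside Neo4j.

```python
import re

# Hypothetical neutral-citation pattern, e.g. "[2015] NZHC 100" (illustrative only).
CITATION_RE = re.compile(r"\[(\d{4})\]\s+(NZ[A-Z]{2,6})\s+(\d+)")

def extract_citations(text):
    """Find neutral citations to other decisions inside a decision's text."""
    return ["[%s] %s %s" % m for m in CITATION_RE.findall(text)]

# Toy corpus: decision id -> decision text (stand-ins for OCR'd judgments).
decisions = {
    "[2012] NZSC 7":   "...",
    "[2015] NZHC 100": "...the approach taken in [2012] NZSC 7 is adopted...",
    "[2016] NZCA 55":  "...following [2015] NZHC 100 and [2012] NZSC 7...",
}
# Directed graph: citing case -> set of cited cases (self-citations dropped).
graph = {case: set(extract_citations(text)) - {case}
         for case, text in decisions.items()}

def pagerank(graph, d=0.85, iters=50):
    """Plain power-iteration PageRank over a dict-of-sets digraph."""
    n = len(graph)
    pr = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        # Mass of "dangling" cases (which cite nothing) is spread evenly.
        dangling = sum(pr[u] for u in graph if not graph[u]) / n
        pr = {v: (1 - d) / n + d * (dangling +
              sum(pr[u] / len(graph[u]) for u in graph if v in graph[u]))
              for v in graph}
    return pr

ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # the twice-cited Supreme Court decision
```

In the database itself, the equivalent computation is run over the Neo4j graph (e.g., via its graph algorithm procedures) rather than in Python.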

Keywords: case citation network, citation analysis, network analysis, Neo4j

Procedia PDF Downloads 106
23 Optimization of Cobalt Oxide Conversion to Co-Based Metal-Organic Frameworks

Authors: Aleksander Ejsmont, Stefan Wuttke, Joanna Goscianska

Abstract:

Gaining control over particle shape, size and crystallinity is an ongoing challenge for many materials. Metal-organic frameworks (MOFs) in particular have recently been widely studied. Besides their remarkable porosity and interesting topologies, morphology has proven to be a significant feature: it can affect the material's further application. Thus, seeking new approaches that enable modulation of MOF morphology is important. MOFs are reticular structures whose building blocks are made up of organic linkers and metallic nodes. The most common strategy for supplying the metal source is the use of salts, which usually exhibit high solubility and hinder morphology control. However, there has been growing interest in using metal oxides as structure-directing agents towards MOFs due to their very low solubility and shape preservation. Metal oxides can be treated as a metal reservoir during MOF synthesis. Up to now, reports on obtaining MOFs from metal oxides have mostly presented the conversion of ZnO to ZIF-8. However, there are other oxides, for instance Co₃O₄, which is often overlooked due to its structural stability and insolubility in aqueous solutions. Cobalt-based materials are famed for their catalytic activity; therefore, the development of an efficient synthesis is worth attention. In the presented work, an optimized transition of Co₃O₄ to Co-MOF via a solvothermal approach was proposed. The starting point of the research was the synthesis of Co₃O₄ flower petals and needles under hydrothermal conditions using different cobalt salts (e.g., cobalt(II) chloride and cobalt(II) nitrate) in the presence of urea and hexadecyltrimethylammonium bromide (CTAB) surfactant as a capping agent. After obtaining cobalt hydroxide, the calcination process was performed at various temperatures (300–500 °C). The cobalt oxides, as a source of cobalt cations, were then subjected to reaction with trimesic acid in a solvothermal environment at 120 °C, leading to Co-MOF fabrication.
The reaction medium was a mixture of water, dimethylformamide, and ethanol, with the addition of strong acids (HF and HNO₃). To establish how solvents affect metal oxide conversion, several different solvent ratios were also applied. The materials obtained were characterized with analytical techniques, including X-ray powder diffraction, energy-dispersive spectroscopy, low-temperature nitrogen adsorption/desorption, and scanning and transmission electron microscopy. It was confirmed that the synthetic routes led to the formation of Co₃O₄ and Co-based MOFs varying in particle shape and size. The diffractograms confirmed a crystalline phase for Co₃O₄ as well as for Co-MOF. The Co₃O₄ obtained from nitrates with low-temperature calcination consisted of smaller particles. The study indicated that cobalt oxide particles of different sizes influence the efficiency of conversion and the morphology of Co-MOF. The highest conversion was achieved using metal oxides with small crystallites.

Keywords: Co-MOF, solvothermal synthesis, morphology control, core-shell

Procedia PDF Downloads 162
22 Tall Building Transit-Oriented Development (TB-TOD) and Energy Efficiency in Suburbia: Case Studies, Sydney, Toronto, and Washington D.C.

Authors: Narjes Abbasabadi

Abstract:

As the world continues to urbanize and suburbanize, with suburbanization associated with mass sprawl being the dominant form of this expansion, sustainable development challenges will become more pressing. Sprawl, characterized by low density and automobile dependency, presents significant environmental issues regarding energy consumption and CO₂ emissions. This paper examines the vertical expansion of suburbs integrated into mass transit nodes as a planning strategy for boosting density, intensifying land use, converting single-family homes to multifamily dwellings or mixed-use buildings, and developing viable alternative transportation choices. It analyzes the spatial patterns of tall building transit-oriented development (TB-TOD) in suburban regions of Sydney (Australia), Toronto (Canada), and Washington D.C. (United States). The main objective of this research is to understand how the new morphology of suburban tall buildings, i.e., the physical dimensions of individual buildings and their arrangement at a larger scale, relates to energy efficiency. This study aims to answer these questions: 1) Why and how can the potential phenomenon of vertical expansion or high-rise development be integrated into suburban settings? 2) How can this phenomenon contribute to an overall denser development of suburbs? 3) Which spatial patterns or typologies/sub-typologies of the TB-TOD model have the greatest energy efficiency? It addresses these questions by focusing on 1) heat energy demand (excluding cooling and lighting) related to design issues at two levels, the macro (urban) scale and the micro scale of individual buildings: physical dimension, height, morphology, the spatial pattern of tall buildings, and their relationship with each other and with transport infrastructure; and 2) examining the TB-TOD model to provide more evidence of how it works regarding ridership.
The findings of the research show that the TB-TOD model can be identified as the most appropriate spatial pattern for tall buildings in suburban settings. Among the TB-TOD typologies/sub-typologies, compact tall building blocks are the most energy-efficient. This model is associated with much lower energy demands in buildings at the neighborhood level as well as lower transport needs at the urban scale, while detached suburban high-rise or low-rise suburban housing has the lowest energy efficiency. The research methodology is based on a quantitative study applying the available literature and statistical data as well as mapping and visual documentation of urban regions through tools such as Google Earth, Microsoft Bing Bird's Eye View, and Street View. It examines each suburb within each city through satellite imagery and identifies the typologies/sub-typologies that are morphologically distinct. The study quantifies the heat energy efficiency of different spatial patterns through simulation via GIS software.

Keywords: energy efficiency, spatial pattern, suburb, tall building transit-oriented development (TB-TOD)

Procedia PDF Downloads 260
21 Investigating the Neural Heterogeneity of Developmental Dyscalculia

Authors: Fengjuan Wang, Azilawati Jamaludin

Abstract:

Developmental Dyscalculia (DD) is defined as a specific learning difficulty involving continuous challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder involving not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obfuscate critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6–7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD, scoring at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III) (both subtest scores were 90 or below). The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network under the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions between regions) to reveal the architectural organization of the network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the average data of the 45 TD children.
Results showed that three out of the nine DD children displayed significant deviation from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. In sum, the present study was able to distill differences in the architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is noted that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current “brain norm” established from the 45 children is tentative, the results from this study provide insights not only for future work on a “developmental brain norm” with reliable brain indicators but also for the viability of the single-case methodology, which could be used to detect differential brain indicators of DD children for early detection and intervention.
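As a concrete illustration of the nodal-efficiency metric referenced above, here is a minimal sketch on a binary (unweighted) graph. This is not the authors' analysis code, and the channel names and edges are hypothetical; real fNIRS networks are typically weighted and thresholded before such metrics are computed.

```python
from collections import deque

def shortest_path_lengths(adj, src):
    """BFS hop distances from src in an unweighted graph (adjacency dict of sets)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def nodal_efficiency(adj, node):
    """Average inverse shortest-path length from `node` to the other nodes.
    Unreachable nodes simply contribute 0 to the sum."""
    n = len(adj)
    dist = shortest_path_lengths(adj, node)
    return sum(1.0 / d for v, d in dist.items() if v != node) / (n - 1)

# Toy 4-region network: a hub ("IPS") connected to all, plus one extra edge.
adj = {
    "IPS": {"IFG", "SFG", "M1"},
    "IFG": {"IPS", "SFG"},
    "SFG": {"IPS", "IFG"},
    "M1":  {"IPS"},
}
print(round(nodal_efficiency(adj, "IPS"), 3))  # hub: every node one hop away -> 1.0
print(round(nodal_efficiency(adj, "M1"), 3))   # periphery: (1 + 1/2 + 1/2) / 3 -> 0.667
```

A "lower nodal efficiency" finding, as reported for the three DD cases, corresponds to a node (or a network average) whose value sits significantly below the TD norm.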

Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity

Procedia PDF Downloads 53
20 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR

Authors: Ionut Vintu, Stefan Laible, Ruth Schulz

Abstract:

Agricultural robotics has been developing steadily over recent years, with the goals of reducing and even eliminating pesticides used in crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots capable of driving in fields and performing crop-handling tasks is for robots to robustly detect the rows of plants. Recent work on autonomous driving between plant rows has offered large robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability when it comes to the height of plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and offers greater adaptability. The main application is in tree nurseries, where plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated poses of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and deal with exception handling, such as row gaps, which are otherwise falsely detected as row ends. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing field coverage and the number of damaged plants, the method that uses a local map around the robot proved to perform best, with 68% covered rows and 25% damaged plants. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot's position inside the greater field.
Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% damaged plants. Future work would focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges that the algorithm needs to overcome are fields where the plants are too small to be detected and fields where it is hard to distinguish between individual plants because they overlap. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU, and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
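The pose-graph idea can be illustrated with a deliberately tiny 1-D example (a sketch under invented measurements, not the paper's implementation): three robot poses linked by noisy odometry constraints, plus one observation of a landmark (a tree) at a known position along the row. Solving the resulting linear least-squares problem via the normal equations fuses odometry with the landmark measurement, exactly the kind of correction that prevents a row gap from being mistaken for a row end.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Each constraint: (coefficients over the poses [x0, x1, x2], measured value).
constraints = [
    ([1, 0, 0], 0.0),    # prior: anchor the first pose at the row start
    ([-1, 1, 0], 1.1),   # odometry: x1 - x0 measured as 1.1 m
    ([0, -1, 1], 0.9),   # odometry: x2 - x1 measured as 0.9 m
    ([0, 0, 1], 1.9),    # landmark: tree at 2.0 m seen 0.1 m ahead of x2
]
rows = [r for r, _ in constraints]
obs = [z for _, z in constraints]
# Normal equations (A^T A) x = A^T b for the least-squares pose estimate.
AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
Atb = [sum(r[i] * z for r, z in zip(rows, obs)) for i in range(3)]
x0, x1, x2 = solve(AtA, Atb)
print(round(x2, 3))  # 1.925: the landmark pulls x2 between odometry (2.0) and 1.9
```

Real pose-graph SLAM generalizes this to 2-D/3-D poses with nonlinear constraints, iterating the same linearized least-squares step.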

Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection

Procedia PDF Downloads 139
19 Resolving Urban Mobility Issues through Network Restructuring of Urban Mass Transport

Authors: Aditya Purohit, Neha Bansal

Abstract:

Unplanned urbanization and the multidirectional sprawl of cities have resulted in increased motorization and deteriorating transport conditions such as traffic congestion, longer commutes, pollution, an increased carbon footprint, and, above all, increased fatalities. In order to overcome these problems, various practices have been adopted, including promoting and implementing mass transport, traffic junction channelization, and smart transport. However, these methods are found to focus primarily on vehicular mobility rather than people's accessibility. With this research gap, this paper tries to resolve the mobility issues of Ahmedabad city in India, which, being the economic capital of Gujarat state, has a huge commuter and visitor inflow. This research aims to resolve the traffic congestion and urban mobility issues, focusing on the Gujarat State Road Transport Corporation (GSRTC) for the city of Ahmedabad, by analyzing the existing operations and network structure of GSRTC, followed by finding possibilities of integrating it with other modes of urban transport. The network restructuring (NR) methodology is used with appropriate variations, based on commuter demand and the growth pattern of the city. To do this, 'scenarios' based on priority issues (using 12 parameters) and their best possible solutions are established after a route network analysis of a 2,700-person sample across 20 traffic junctions/nodes in the city.
A sample of approximately 5% of passenger inflow at each node is considered using a stratified random sampling technique. The two scenarios are: Scenario 1, resolving mobility issues through a Special Purpose Vehicle (SPV), a joint venture between GSRTC and private operators, to establish a feeder service that provides transfers for passengers moving from the inner-city area to identified peripheral terminals; and Scenario 2, augmenting existing mass transport services such as BRTS and AMTS to serve as feeder services to the identified peripheral terminals. Each of these has been analyzed for its suitability/feasibility in network restructuring. A desire-line diagram constructed from this analysis indicated that, on average, 62% of designated GSRTC routes overlap with the mass transportation service routes of BRTS and AMTS in the city. This has resulted in a duplication of bus services, causing traffic congestion, especially at the Central Bus Station (CBS). Terminating GSRTC services on the periphery of the city is found to be the best network restructuring proposal. This limits the GSRTC buses to the city fringe area and prevents them from entering the city core. These end terminals of GSRTC are integrated with BRTS and AMTS services, which helps segregate intra-state and inter-state bus services. The research concludes that the absence of an integrated multimodal transport network has resulted in complex transport access for commuters. As further scope of research, comparing and understanding the value of access time within total travel time, its implication for the generalized cost of a trip, and how it varies city-wise may be taken up.
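The route-overlap figure above comes from comparing route alignments against the mass transit network; a toy sketch of that computation over stop sets follows. The stop names, routes, and 50% threshold are invented for illustration and are not GSRTC data.

```python
# Hypothetical stop sets; the real analysis used the full GSRTC/BRTS/AMTS route maps.
gsrtc_routes = {
    "G1": {"CBS", "StopA", "StopB", "StopC"},
    "G2": {"CBS", "StopD"},
    "G3": {"StopE", "StopF", "StopG", "StopH"},
}
mass_transit_stops = {"CBS", "StopA", "StopB", "StopD", "StopE"}  # BRTS + AMTS combined

def overlaps(route_stops, network_stops, threshold=0.5):
    """A route 'overlaps' mass transit if most of its stops are already served."""
    return len(route_stops & network_stops) / len(route_stops) > threshold

share = sum(overlaps(s, mass_transit_stops)
            for s in gsrtc_routes.values()) / len(gsrtc_routes)
print(round(share * 100))  # percentage of GSRTC routes duplicating mass transit
```

Routes flagged this way are the candidates for termination at peripheral terminals rather than continuation into the city core.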

Keywords: mass transportation, multi-modal integration, network restructuring, travel behavior, urban transport

Procedia PDF Downloads 197
18 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model

Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero

Abstract:

Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing lithium-ion battery parameters, such as the material particle size or the adjustment direction of the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs) describing Fick's law of diffusion, the MacInnes equation, and Ohm's law, among other phenomena. Thus, to efficiently use the model in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second-order in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests.
This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system identification or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the explicit Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include long-term simulations for aging tests as well as short- and mid-term battery charge/discharge cycles, typically relevant in battery applications such as grid primary frequency and inertia control and electric vehicle braking and acceleration.
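To make the comparison concrete, here is a minimal Crank-Nicolson solver for the 1-D diffusion equation u_t = D·u_xx with zero Dirichlet boundaries, a stand-in for the electrolyte-diffusion PDE discussed above (the full DFN model couples this with solid-phase diffusion, potentials, and reaction kinetics; the grid sizes below are illustrative). The tridiagonal system arising at each time step is solved in O(n) with the Thomas algorithm, and the result is checked against the exact decay of a sine mode.

```python
import math

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system in O(n) (Thomas algorithm)."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Crank-Nicolson for u_t = D u_xx on [0, 1] with u(0) = u(1) = 0.
D, nx, dt, steps = 1.0, 51, 2e-4, 500
dx = 1.0 / (nx - 1)
r = D * dt / dx**2      # CN is stable for any r; explicit Euler needs r <= 1/2
n = nx - 2              # interior unknowns
u = [math.sin(math.pi * i * dx) for i in range(1, nx - 1)]
sub, diag, sup = [-r / 2] * n, [1 + r] * n, [-r / 2] * n
for _ in range(steps):
    # Right-hand side: (I + r/2 * Laplacian) u^k, boundary values are zero.
    rhs = [u[i] + r / 2 * ((u[i - 1] if i else 0.0) - 2 * u[i]
           + (u[i + 1] if i < n - 1 else 0.0)) for i in range(n)]
    u = thomas(sub, diag, sup, rhs)

# Exact solution of the sine mode: u(x, t) = exp(-pi^2 D t) * sin(pi x)
t = steps * dt
err = max(abs(u[i] - math.exp(-math.pi**2 * D * t) * math.sin(math.pi * (i + 1) * dx))
          for i in range(n))
print(err < 1e-3)  # True
```

With r = D·Δt/Δx² = 0.5 this particular run would also be stable under explicit Euler, but Crank-Nicolson retains second-order accuracy and stays stable when Δt is enlarged, which is what makes it attractive for long-term aging simulations.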

Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods

Procedia PDF Downloads 23
17 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason behind this is insufficient resources to create and implement timing plans. In this work, we discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect 24/7/365 accurate traffic data using a vehicle detection system. We discuss recent advances in Artificial Intelligence technology, how AI is used for vehicle, pedestrian, and bike data collection and for creating timing plans, and the best workflow for doing so. Apart from that, this paper showcases how Artificial Intelligence makes signal timing affordable. We introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans, and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain. It consists of millions of densely connected processing nodes. It is a form of machine learning where the neural net learns to recognize vehicles through training, which is called Deep Learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in cases such as classifying objects into fine-grained categories, also outperform humans. Safety is of primary importance to traffic professionals, but they don't have the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data.
We discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in the research include a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of commonly used methods that require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying the result to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
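The convolution operation at the heart of such a CNN detector can be shown in miniature. This is an illustrative sketch, not the product's model: the "frame" and the hand-written edge kernel are invented, and a real detector learns thousands of such filters from labeled video frames.

```python
def conv2d(img, k):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + di][j + dj] * k[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def relu(m):
    """Nonlinearity applied after each convolution."""
    return [[max(0.0, v) for v in row] for row in m]

# Toy 5x5 "frame" with a bright vertical stripe (a stand-in for a camera image).
frame = [[0, 0, 9, 0, 0]] * 5
# Sobel-like vertical-edge kernel: a trained CNN would *learn* such filters.
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]
fmap = relu(conv2d(frame, kernel))
# Each output row is [36, 0, 0]: the stripe's left edge fires strongly, the
# right edge produces a negative response that ReLU zeroes out.
print(fmap[0])
```

Stacking many such learned filters with pooling and fully connected layers yields the classifiers that distinguish vehicles, pedestrians, and bikes in the detection system described above.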

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 169
16 Flood Early Warning and Management System

Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare

Abstract:

The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an early warning system for flood prediction and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce the economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an Early Warning System for Flood Prediction (EWS-FP) using advanced computational tools/methods, viz. High-Performance Computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, it is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are to be run. High-performance computing technology provides a good computational means to overcome this issue for the construction of national-level or basin-level flash flood warning systems with high-resolution local-level warning analysis and better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such big areas at optimum resolutions. In this study, a free and open-source, HPC-based 2-D hydrodynamic model, with the capability to simulate rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing the number of CPU nodes from 45 to 135, which shows good scalability and performance enhancement.
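The scaling figures reported above can be sanity-checked with a quick strong-scaling calculation (only the 8 h / 3 h and 45 / 135 node numbers come from the study; the efficiency metric is the standard speedup-per-added-resources ratio):

```python
t_45, t_135 = 8.0, 3.0              # wall-clock hours on 45 vs. 135 CPU nodes
speedup = t_45 / t_135              # ~2.67x faster from tripling the nodes
efficiency = speedup / (135 / 45)   # relative strong-scaling efficiency
print(round(speedup, 2), round(efficiency, 2))  # 2.67 0.89
```

A relative efficiency near 0.89 at 3x the resources is consistent with the "good scalability" claim, since communication overhead normally keeps the ratio below 1.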
The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and better lead time, suitable for flood forecasting in near-real-time. To disseminate warnings to end users, a network-enabled solution is developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. The system effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending upon the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.

Keywords: flood, modeling, HPC, FOSS

Procedia PDF Downloads 89
15 A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure

Authors: Sara Saboonian, Pierre Filion

Abstract:

The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which provides environmental health and human well-being by preserving and restoring ecosystem services. This research studies the incompatibilities between, and the possibility of amalgamating, the two alternatives in an attempt to develop a comprehensive alternative to the suburban model: one that advocates density, multi-functionality, and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centers, where intensification is the current planning practice and awareness of green infrastructure benefits is on the rise. However, these three centers are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents' preferences for alternative models, and interviewing officials who deal with local planning for the centers. Moreover, the research draws on the debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centers that accommodate green infrastructure while adhering to intensification principles.
First, the dominant status of intensification and the obstacles confronting it have monopolized planners' concerns. Second, the tension between green infrastructure and intensification explains the absence of green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted socio-economic benefits of green infrastructure reduces residents' participation. Moreover, the results from the research provide insight into the predominating urbanization theories, New Urbanism and Landscape/Ecological Urbanism. In order to understand the political, planning, and ecological dynamics of such blending, dexterous context-specific planning is required. Findings suggest the influence of the following factors on amalgamating intensification and green infrastructure. Initially, producing ecosystem-services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for implementation programs. This knowledge base should be translated effectively to engage different urban stakeholders. Moreover, due to the limited greenfields in intensified areas, the spatial distribution and development of multi-level corridors, such as pedestrian-hospitable settings and transportation networks, alongside green infrastructure measures are required. Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.

Keywords: ecosystem services, green infrastructure, intensification, planning

Procedia PDF Downloads 355
14 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination, chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directed graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach in which a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where, at each step, a set of recommendations is presented to the operator to aid in decision-making.
In principle, the system could operate autonomously, only prompting the operator for critical decisions and allowing the system to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities for further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach, motivating a machine learning approach based on reinforcement learning that would relax some of the current limiting assumptions.
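The running posterior update at the heart of the approach can be sketched in a few lines. The target names and likelihood values below are illustrative assumptions, not the authors' PeMS-based model; in their system the likelihoods would come from the camera graph and travel-time weights.

```python
def update_posterior(posterior, likelihood):
    """One Bayesian step: P(target | detection) is proportional to
    P(detection | target) * P(target), renormalized over all targets."""
    unnormalized = {t: posterior[t] * likelihood[t] for t in posterior}
    z = sum(unnormalized.values())
    return {t: p / z for t, p in unnormalized.items()}

# Three hypothetical high-value targets, uniform prior.
targets = ["stadium", "airport", "port"]
posterior = {t: 1.0 / len(targets) for t in targets}

# A detection at a camera node lying on plausible routes to the stadium and
# airport, but off any sensible route to the port (likelihoods assumed):
posterior = update_posterior(posterior, {"stadium": 0.8, "airport": 0.7, "port": 0.1})
```

Each new detection, including those induced by a soft intervention, triggers the same update with fresh likelihoods; once one target's posterior mass crosses the decision threshold, the vehicle is reported.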

Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights

Procedia PDF Downloads 115
13 Restoration of a Forest Catchment in Himachal Pradesh, India: An Institutional Analysis

Authors: Sakshi Gupta, Kavita Sardana

Abstract:

Management of a forest catchment involves diverse dimensions, multiple stakeholders, and conflicting interests, primarily due to the wide variety of valuable ecosystem services it offers. Often, the coordination among different levels of formal institutions governing the catchment, local communities, and societal norms, taboos, customs, and practices happens to be amiss, leading to conflicting policy interventions that prove detrimental to such resources. In the case of the Ala Catchment, a protected forest located 9 km north-east of the town of Dalhousie, within district Chamba of Himachal Pradesh, India, which serves as one of the primary sources of public water supply for the downstream town of Dalhousie and nearby areas, several policy measures have been adopted for the restoration of the forest catchment, as well as for the improvement of public water supply. These catchment forest restoration measures include: the installation of a fence along the perimeter of the catchment, plantation of trees in the empty patches of the forest, construction of check dams, contour trenches, and contour bunds, issuance of grazing permits, and installation of check posts to keep track of trespassers. The measures adopted to address the acute shortage of public water supply in the Dalhousie region include: building and maintenance of large-capacity water storage tanks, laying of pipelines, expanding public water distribution infrastructure to include water sources other than the Ala Catchment Forest, and introduction of five new water supply schemes for drinking water as well as irrigation. However, despite these policy measures, the degradation of the Ala Catchment and the acute shortage of water supply continue to distress the region. This study attempts to conduct an institutional analysis to assess the impact of policy measures for the restoration of the Ala Catchment in the Chamba district of Himachal Pradesh in India.
For this purpose, Ostrom’s Institutional Analysis and Development (IAD) framework was used as the theoretical framework. Snowball sampling was used to conduct private interviews and focused group discussions. A semi-structured questionnaire was administered to interview a total of 184 respondents across stakeholders from both formal and informal institutions. The central hypothesis of the study is that the interplay of formal and informal institutions facilitates the implementation of policy measures for restoring the Ala Catchment, in turn improving the livelihood of people depending on this forest catchment for direct and indirect benefits. The findings of the study suggest that leakages in the successful implementation of policy measures occur at several nodes of decision-making, which adversely impact the catchment and the ecosystem services it provides. Some of the key reasons diagnosed by the immediate analysis include: ad-hoc assignment of property rights, a rise in tourist inflow increasing the pressure on water demand, illegal trespassing by local and nomadic pastoral communities for grazing and unlawful extraction of forest products, and rent-seeking by a few influential formal institutions. Consequently, it is indicated that the interplay of formal and informal institutions may be undermining the effectiveness of the policy measures for the restoration of the catchment.

Keywords: catchment forest restoration, institutional analysis and development framework, institutional interplay, protected forest, water supply management

Procedia PDF Downloads 97
12 Development of DEMO-FNS Hybrid Facility and Its Integration in Russian Nuclear Fuel Cycle

Authors: Yury S. Shpanskiy, Boris V. Kuteev

Abstract:

Development of a fusion-fission hybrid facility based on the superconducting conventional tokamak DEMO-FNS has been underway in Russia since 2013. The main design goal is to reach the technical feasibility and outline the prospects of industrial hybrid technologies providing the production of neutrons, fuel nuclides, tritium, high-temperature heat, and electricity, as well as subcritical transmutation, in Fusion-Fission Hybrid Systems. The facility should operate in a steady-state mode at a fusion power of 40 MW and a fission power of 400 MW. Major tokamak parameters are the following: major radius R=3.2 m, minor radius a=1.0 m, elongation 2.1, triangularity 0.5. The design provides a neutron wall loading of ~0.2 MW/m² and a lifetime neutron fluence of ~2 MWa/m², with the surface area of the active cores and tritium breeding blanket ~100 m². Core plasma modelling showed that the neutron yield of ~10¹⁹ n/s is maximal if the tritium/deuterium density ratio is 1.5-2.3. The design of the electromagnetic system (EMS) defined its basic parameters, accounting for the strength and stability of the coils, and identified the most problematic nodes in the toroidal field coils and the central solenoid. The EMS generates the toroidal, poloidal, and correcting magnetic fields necessary for plasma shaping and confinement inside the vacuum vessel. The EMS consists of eighteen superconducting toroidal field coils, eight poloidal field coils, five sections of a central solenoid, correction coils, and in-vessel coils for vertical plasma control. Supporting structures, the thermal shield, and the cryostat maintain its operation. The EMS operates with a pulse duration of up to 5000 hours at a plasma current of up to 5 MA. The vacuum vessel (VV) is an all-welded two-layer toroidal shell placed inside the EMS. The free space between the vessel shells is filled with water and boron steel plates, which form the neutron protection of the EMS. The VV volume is 265 m³; its mass with manifolds is 1800 tons.
The nuclear blanket of the DEMO-FNS facility was designed to provide the functions of minor actinide (MA) transmutation, tritium production, and enrichment of spent nuclear fuel. Vertical refuelling of the subcritical active cores with MA was chosen as the most promising option. Analysis of the device neutronics and the thermal-hydraulic characteristics of the hybrid blanket has been performed for this system. A study of the role of FNS facilities in the Russian closed nuclear fuel cycle was also performed. It showed that during ~100 years of operation, three FNS facilities with a fission power of 3 GW, controlled by a fusion neutron source with a power of 40 MW, can burn 98 tons of minor actinides and produce 198 tons of Pu-239 for the startup loading of 20 fast reactors. Instead of Pu-239, up to 25 kg of tritium per year may be produced for the startup of fusion reactors by using blocks with lithium orthosilicate instead of fissile breeder blankets.
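The quoted neutronics figures are mutually consistent, as a back-of-envelope check shows. This arithmetic is ours, not the authors', and assumes monoenergetic 14.1 MeV D-T neutrons:

```python
# Sanity check of the quoted DEMO-FNS parameters (assumed D-T neutron energy).
E_DT_NEUTRON_J = 14.1e6 * 1.602e-19   # 14.1 MeV per D-T fusion neutron, in joules
neutron_yield = 1e19                  # quoted yield, neutrons per second
blanket_area_m2 = 100.0               # quoted active-core / blanket surface area

# Neutron power through the wall, and the resulting wall loading:
neutron_power_MW = neutron_yield * E_DT_NEUTRON_J / 1e6   # about 22.6 MW
wall_loading = neutron_power_MW / blanket_area_m2         # about 0.23 MW/m^2, near the quoted ~0.2

# Lifetime fluence divided by wall loading gives the implied full-power lifetime:
full_power_years = 2.0 / 0.2                              # 2 MWa/m^2 at 0.2 MW/m^2 -> 10 years
```

So the quoted fluence corresponds to roughly ten full-power years of operation at the design wall loading.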

Keywords: fusion-fission hybrid system, conventional tokamak, superconducting electromagnetic system, two-layer vacuum vessel, subcritical active cores, nuclear fuel cycle

Procedia PDF Downloads 147
11 High Cycle Fatigue Analysis of a Lower Hopper Knuckle Connection of a Large Bulk Carrier under Dynamic Loading

Authors: Vaso K. Kapnopoulou, Piero Caridis

Abstract:

The fatigue of ship structural details is of major concern in the maritime industry, as it can generate fracture issues that may compromise structural integrity. In the present study, a fatigue analysis of the lower hopper knuckle connection of a bulk carrier was conducted using the Finite Element Method by means of the ABAQUS/CAE software. The fatigue life was calculated using Miner’s Rule, and the long-term distribution of the stress range by the use of the two-parameter Weibull distribution. The cumulative damage ratio was estimated using the fatigue damage resulting from the stress range occurring at each load condition. For this purpose, a cargo hold model was first generated, which extends over the length of two holds (the mid-hold and half of each of the adjacent holds) and transversely over the full breadth of the hull girder. Following that, a submodel of the area of interest was extracted in order to calculate the hot spot stress of the connection and to estimate the fatigue life of the structural detail. Two hot spot locations were identified: one at the top layer of the inner bottom plate and one at the top layer of the hopper plate. The IACS Common Structural Rules (CSR) require that specific dynamic load cases be assessed for each loading condition. The dynamic load case that causes the highest stress range at each loading condition should then be used in the fatigue analysis for the calculation of the cumulative fatigue damage ratio. Each load case has a different effect on the ship hull response. The main concern when assessing the fatigue strength of the lower hopper knuckle connection was the determination of the maximum, i.e., critical, value of the stress range, which acts in a direction normal to the weld toe line. This acts in the transverse direction, that is, perpendicular to the ship's centerline axis.
The load cases were explored both theoretically and numerically in order to establish the one that causes the highest damage at the location examined. The most severe one was identified to be the load case induced by the beam sea condition, where the encountered wave comes from starboard. At the level of the cargo hold model, the model was assumed to be simply supported at its ends. A coarse mesh was generated in order to represent the overall stiffness of the structure. The elements employed were quadrilateral shell elements, each having four integration points. A linear elastic analysis was performed because linear elastic material behavior can be presumed, since only localized yielding is allowed by most design codes. At the submodel level, the displacements obtained from the cargo hold analysis were applied to the outer-region nodes of the submodel as boundary conditions and loading. In order to calculate the hot spot stress at the hot spot locations, a very fine mesh zone was generated and used. The fatigue life of the detail was found to be 16.4 years, which is lower than the design fatigue life of the structure (25 years), making this location vulnerable to fatigue fracture issues. Moreover, the loading conditions that induce the most damage at the location were found to be the various ballasting conditions.
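The damage summation behind this procedure can be sketched as follows. The one-slope S-N curve constants, Weibull parameters, and cycle count below are illustrative placeholders, not the CSR design values or hot-spot stresses used in the paper:

```python
import math

# Illustrative one-slope S-N curve: N(S) = K / S**m, with S in MPa.
m, K = 3.0, 1.0e13
# Two-parameter Weibull long-term stress-range distribution (shape h, scale q).
h, q = 1.0, 30.0
n_total = 1.0e8            # wave-induced cycles over the 25-year design life

def weibull_cdf(s):
    return 1.0 - math.exp(-((s / q) ** h))

# Discretize the stress-range axis into narrow blocks and apply Miner's rule:
# D = sum_i n_i / N(S_i), with n_i cycles falling in block i.
damage = 0.0
edges = [5.0 * i for i in range(101)]      # 0 .. 500 MPa in 5 MPa blocks
for lo, hi in zip(edges, edges[1:]):
    s_mid = 0.5 * (lo + hi)
    n_i = n_total * (weibull_cdf(hi) - weibull_cdf(lo))
    damage += n_i / (K / s_mid ** m)       # block damage n_i / N(S_mid)

fatigue_life_years = 25.0 / damage         # life < 25 years whenever D > 1
```

A more severe load condition enters through a larger Weibull scale q, which drives the cumulative damage D up and the predicted life down; this is why identifying the critical stress range per load case matters.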

Keywords: dynamic load cases, finite element method, high cycle fatigue, lower hopper knuckle

Procedia PDF Downloads 418
10 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes

Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi

Abstract:

The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element’s domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in the radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in the mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells that are clipped to accommodate the domain geometry must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes at comparable accuracy levels are obtained when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in the radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking an analytical limit of this element’s stress field as it approaches the crack tip delivers an expression for the singular stress field. By applying the problem-specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing.
Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even given very coarse discretizations, since they rely only on the ratio of mode one to mode two gSIFs. The absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which increases the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.
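The 16-configuration precomputation exploited above comes from the fact that, in a balanced quadtree, each of a cell's four edges either does or does not carry a mid-side (hanging) node, giving 2^4 = 16 layouts. A minimal sketch of such a lookup, with the SBFEM stiffness computation stubbed out and all names illustrative:

```python
from functools import lru_cache

def config_key(south, east, north, west):
    """Encode which edges of a balanced-quadtree cell are refined
    (carry a mid-side node) as a 4-bit integer key in 0..15."""
    return (south << 0) | (east << 1) | (north << 2) | (west << 3)

@lru_cache(maxsize=16)
def unit_cell_matrices(key):
    # Placeholder for the SBFEM solution of a unit square cell with this
    # boundary-node layout; in a real code this is computed once per
    # configuration, cached, and rescaled for every identically shaped cell.
    n_nodes = 4 + bin(key).count("1")   # 4 corner nodes + 1 per refined edge
    return {"config": key, "n_nodes": n_nodes}
```

Only cells clipped by the domain boundary fall outside these 16 cases and must be solved individually, which is what makes the cached lookup such an effective accelerator for the forward analysis.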

Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled finite element method, hybrid quadtrees

Procedia PDF Downloads 146
9 Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance

Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi

Abstract:

This paper explores a kinetic building facade designed for optimal energy capture and architectural expression. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. The growing interest in dynamic building envelopes necessitates the exploration of such facade systems. Increased energy generation and regulation of energy flow within buildings are potential benefits offered by integrating photovoltaic (PV) panels as kinetic elements. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, the design leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential of employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was used experimentally to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, three-dimensional (3D) printing, and laser cutting, were utilized to fabricate the physical components. A modular assembly approach was adopted to facilitate installation and maintenance. A case study is presented focusing on the application of this facade system to an existing library building at the Polytechnic University of Milan. The system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable net structure.
Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted to the limitations of the 3D printer. Sil 15 casting silicone rubber was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. This work demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, this approach paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability.
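Outside the Grasshopper environment, the kind of LLM-generated script mentioned above (a random surface driven by user-defined parameters) might look like the stand-alone sketch below. The function and parameter names are hypothetical, not the authors' actual script:

```python
import random

def random_surface(u_count, v_count, amplitude, seed=None):
    """Return a u_count x v_count grid of (x, y, z) control points with
    random heights in [0, amplitude); a Grasshopper surface component
    could loft such a grid into a NURBS surface."""
    rng = random.Random(seed)   # seeding makes the "random" surface reproducible
    return [[(float(u), float(v), amplitude * rng.random())
             for v in range(v_count)]
            for u in range(u_count)]

grid = random_surface(u_count=10, v_count=6, amplitude=2.5, seed=42)
```

Passing a fixed seed keeps the generated geometry reproducible between design iterations, which matters when downstream structural checks (e.g. in Karamba3D) must refer to the same surface.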

Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building

Procedia PDF Downloads 31
8 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov’s scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information that is required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format so that the pre-files are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in such a way that each MPI rank acquires its information from the file in parallel.
In the case of GPFS, on each computational node, a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is best suited for GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for the calculation of the solution in every time step. For this, the code can make use of either its own matrix math library or a BLAS implementation such as Intel MKL or ATLAS. This fact, together with the discontinuous nature of the method, makes the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
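The portable pre-file idea reduces to writing fixed-width, fixed-endianness integers, so that a rank (or a node-local reader) can seek directly to its own record on any machine or file system. The record layout below is an illustrative guess, not the actual DSEM pre-file format:

```python
import struct

# One record per element: (element id, partition, face count, face-list offset),
# stored as little-endian 64-bit integers for cross-machine portability.
RECORD = struct.Struct("<4q")

def pack_prefile(records):
    """Concatenate fixed-width records into one portable binary blob."""
    return b"".join(RECORD.pack(*rec) for rec in records)

def read_my_record(blob, index):
    # With fixed-width records, a reader computes its byte offset directly
    # (index * record size) instead of parsing the whole file sequentially.
    return RECORD.unpack_from(blob, index * RECORD.size)

blob = pack_prefile([(10, 0, 6, 0), (11, 0, 6, 6), (12, 1, 6, 12)])
```

In the actual code, this direct-offset read would be issued either collectively via parallel MPI I/O (Lustre) or by one rank per node that then forwards the data with non-blocking point-to-point messages (GPFS).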

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 133
7 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports

Authors: Carolyn Hatch, Laurent Simon

Abstract:

Urban mobility and the transportation industry are undergoing a transformation, shifting from the auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This is shaped by key forces such as climate change, which has induced a shift in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification, and mobility sharing [2]. Advanced innovation practices and platforms for the experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented are shaping this new world of mobility. Transportation hubs, such as airports, are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and are increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development, and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization, and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice.
The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation, and innovation ecosystems in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including such examples as Aéroports de Paris, Schiphol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with the Airports Council International (ACI), a major stakeholder in this research project. Preliminary results find that the most successful airports are those that have shifted to a multi-stakeholder, platform-ecosystem model of innovation. The recent entrance of new actors into airports (Google, Amazon, Accor, Vinci, Airbnb, and others) has forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation, and collective ideation processes. Leading airports in the case study have demonstrated a unique capacity to bring traditionally siloed activities to “think together”, “explore together”, and “act together”, to share data, contribute expertise, and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and steered their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental, and economic impacts. This research has implications for: 1 – innovation theory, 2 – urban and transport policy, and 3 – organizational practice, within the mobility industry and across the economy.

Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs

Procedia PDF Downloads 181
6 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. 
Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
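The first two ingredients of the methodology, a transaction graph with temporal edges and a uniform probabilistic sample of it, can be sketched in miniature. The transaction hashes below are synthetic, and a real pipeline would stream blocks from an Ethereum node rather than take a list:

```python
import random

def build_temporal_edges(txs):
    """txs: list of (tx_hash, block_number) pairs. Nodes are transactions;
    an edge links each transaction to those in the next mined block, a
    simple stand-in for the temporal relationships described above."""
    by_block = {}
    for tx_hash, block in txs:
        by_block.setdefault(block, []).append(tx_hash)
    blocks = sorted(by_block)
    return [(a, b)
            for prev, nxt in zip(blocks, blocks[1:])
            for a in by_block[prev]
            for b in by_block[nxt]]

def sample_txs(txs, fraction, seed=0):
    """Uniform probabilistic sample of transactions; a fixed seed keeps the
    subset reproducible across analysis runs."""
    k = max(1, int(len(txs) * fraction))
    return random.Random(seed).sample(txs, k)

txs = [("0xa1", 100), ("0xa2", 100), ("0xb1", 101), ("0xc1", 102)]
edges = build_temporal_edges(txs)
subset = sample_txs(txs, fraction=0.5)
```

At scale, the sampled subgraph would then feed the GCN stage, and the per-block work in `build_temporal_edges` is exactly the kind of operation the paper distributes across Spark or Hadoop workers.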

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 76
5 An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles

Authors: George Charkoftakis, Panagiotis Liosatos, Nicolas-Alexander Tatlas, Dimitrios Goustouridis, Stelios M. Potirakis

Abstract:

E-maintenance is a relatively new concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. Specifically, for a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and of the vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This becomes clear if one takes into account that a heavy-duty vehicle fleet may range from vehicles with only a limited number of analog sensors, monitored through dashboard indicator lights and gauges, to vehicles with a plethora of sensors monitored by a vehicle computer that produces digital reports. The present work proposes an adaptable Internet of Things (IoT) sensor node that is capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular approach of a single-board computer with expansion boards. In the proposed solution, the expansion boards undertake the tasks of position identification by means of a global navigation satellite system (GNSS), cellular connectivity by means of a 3G/long-term evolution (LTE) modem, connectivity to on-board diagnostics (OBD), and connectivity to analog and digital sensors by means of a novel expansion-board design. Specifically, the latter provides eight analog and three digital sensor channels, as well as an on-board temperature/relative-humidity sensor. The device offers a number of adaptability features based on appropriate zero-ohm resistor placement and appropriate value selection for a limited number of passive components.
For example, although the standard configuration provides four analog voltage channels with constant-voltage sources that power the corresponding sensors, up to two of these channels can be converted to power the connected sensors through corresponding constant-current source circuits; moreover, all parameters of the analog-sensor power-supply and matching circuits are fully configurable, offering the advantage of covering a wide variety of industrial sensors. Note that a key feature of the proposed sensor node, ensuring the reliable operation of the connected sensors, is the appropriate supply of external power to the connected sensors and their proper matching to the IoT sensor node. In standard mode, the IoT sensor node communicates with the data center through 3G/LTE, transmitting all digital/digitized sensor data, the IoT device identity, and the position. Moreover, the proposed IoT sensor node offers WiFi connectivity to mobile devices (smartphones, tablets) equipped with an appropriate application for the manual registration of vehicle- and driver-specific information; these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware, written in a high-level language (Python) on top of a modern operating system (Linux). Acknowledgment: This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH—CREATE—INNOVATE (project code: T1EDK-01359, IntelligentLogger).
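As an illustration of the standard-mode reporting described above (sensor data, device identity, and position sent to the data center), a telemetry message could be assembled as in the following Python sketch. All field names and the JSON layout are assumptions for illustration, not the actual IntelligentLogger message format.

```python
import json
import time

def build_telemetry_payload(device_id, position, sensor_readings):
    """Assemble one telemetry message carrying the digitized sensor data
    together with the IoT device identity and GNSS position, as would be
    transmitted over 3G/LTE. The schema here is purely illustrative."""
    return json.dumps({
        "device_id": device_id,
        "timestamp": int(time.time()),
        "position": {"lat": position[0], "lon": position[1]},
        "sensors": sensor_readings,  # e.g. 8 analog + 3 digital channels
    })

# Hypothetical readings from the expansion board's channels and its
# on-board temperature / relative-humidity sensor.
payload = build_telemetry_payload(
    "node-042",
    (37.9838, 23.7275),
    {"analog": [0.42] * 8, "digital": [1, 0, 1],
     "temperature_c": 24.5, "humidity_pct": 48.0},
)
```

Vehicle- and driver-specific information registered over WiFi could be merged into the same message structure before forwarding to the data center.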

Keywords: IoT sensor nodes, e-maintenance, single-board computers, sensor expansion boards, on-board diagnostics

Procedia PDF Downloads 154