Search results for: anchor nodes
31 Hyperelastic Constitutive Modelling of the Male Pelvic System to Understand the Prostate Motion, Deformation and Neoplasms Location with the Influence of MRI-TRUS Fusion Biopsy
Authors: Muhammad Qasim, Dolors Puigjaner, Josep Maria López, Joan Herrero, Carme Olivé, Gerard Fortuny
Abstract:
Computational modeling of the human pelvis using the finite element (FE) method has become extremely important for understanding the mechanics of prostate motion and deformation when a transrectal ultrasound (TRUS) guided biopsy is performed. The number of reliable and validated hyperelastic constitutive FE models of the male pelvic region is limited, and the available models do not precisely describe the anatomical behavior of the pelvic organs, mainly of the prostate and the location of its neoplasms. The motion and deformation of the prostate during TRUS-guided biopsy make it difficult to know the location of potential lesions in advance. When using this procedure, practitioners can only provide rough estimates of the lesion locations. Consequently, multiple biopsy samples are required to target a single lesion. In this study, a whole-pelvis model (comprising the rectum, bladder, pelvic muscles, prostate transitional zone (TZ), and peripheral zone (PZ)) is used for the simulations. An isotropic hyperelastic approach (Signorini model) was used for all the soft tissues except the vesical muscles. The vesical muscles are assumed to have a linear elastic behavior due to the lack of experimental data for determining the constants involved in hyperelastic models. The tissue and organ geometries of the 3D meshes were taken from the existing literature. The biomechanical parameters were then obtained from different testing techniques described in the literature. The acquired parametric values for the uniaxial stress/strain data are used in the Signorini model to reproduce the anatomical behavior of the pelvis model. Five mesh nodes representing small prostate lesions are selected prior to biopsy, and each lesion's final position is tracked when a TRUS probe force of 30 N is applied at the inside of the rectum wall. The Code_Aster open-source software is used for the numerical simulations. Moreover, the overall effects of pelvic organ deformation under TRUS-guided biopsy were demonstrated.
The deformation of the prostate and the displacement of the neoplasms showed that assigning appropriate material properties to the organs parametrically altered the resulting lesion migration. The distance traveled by these lesions ranged between 3.77 and 9.42 mm. The lesion displacement and organ deformation are compared and analyzed against our previous study, in which linear elastic properties were used for all pelvic organs. Furthermore, axial and sagittal slices taken from Magnetic Resonance Imaging (MRI) and TRUS images are visually compared with our preliminary study.
Keywords: code-aster, magnetic resonance imaging, neoplasms, transrectal ultrasound, TRUS-guided biopsy
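For reference, the Signorini hyperelastic law invoked in this abstract is commonly given as a second-order polynomial in the isochoric strain invariants. The form below is a sketch drawn from general hyperelasticity references, not from the abstract itself; the material constants C10, C01, C20 and the bulk modulus K are the parameters that would be fitted from the uniaxial stress/strain data:

```latex
% Signorini strain-energy potential (sketch; constants fitted per tissue)
W = C_{10}\left(\bar{I}_1 - 3\right)
  + C_{01}\left(\bar{I}_2 - 3\right)
  + C_{20}\left(\bar{I}_1 - 3\right)^2
  + \frac{K}{2}\left(J - 1\right)^2
```

Here $\bar{I}_1$ and $\bar{I}_2$ are the isochoric invariants of the right Cauchy-Green deformation tensor and $J = \det \mathbf{F}$ is the volume ratio; the quadratic $(\bar{I}_1 - 3)^2$ term is what distinguishes the Signorini form from the plain Mooney-Rivlin potential.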
Procedia PDF Downloads 87
30 A Case of Myelofibrosis-Related Arthropathy: A Rare and Underrecognized Entity
Authors: Geum Yeon Sim, Jasal Patel, Anand Kumthekar, Stanley Wainapel
Abstract:
A 65-year-old right-hand dominant African-American man, formerly employed as a security guard, was referred to Rehabilitation Medicine with bilateral hand stiffness and weakness. His past medical history was only significant for myelofibrosis, diagnosed 4 years earlier, for which he was receiving scheduled blood transfusions. Approximately 2 years ago, he began to notice stiffness and swelling in his non-dominant hand that progressed to pain and decreased strength, limiting his hand function. Similar but milder symptoms developed in his right hand several months later. There was no history of prior injury or exposure to cold. Physical examination showed enlargement of metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints with finger flexion contractures, Swan-neck and Boutonniere deformities, and associated joint tenderness. Changes were more prominent in the left hand. X-rays showed mild osteoarthritis of several bilateral PIP joints. Anti-nuclear antibodies, rheumatoid factor, and cyclic citrullinated peptide antibodies were negative. MRI of the hand showed no erosions or synovitis. A rheumatology consultation was obtained, and the cause of his symptoms was attributed to myelofibrosis-related arthropathy with secondary osteoarthritis. The patient was tried on diclofenac cream and received a few courses of Occupational Therapy with limited functional improvement. Primary myelofibrosis (PMF) is a rare myeloproliferative neoplasm characterized by clonal proliferation of myeloid cells with variable morphologic maturity and hematopoietic efficiency. Rheumatic manifestations of malignancies include direct invasion, paraneoplastic presentations, secondary gout, or hypertrophic osteoarthropathy. PMF causes gradual bone marrow fibrosis with extramedullary metaplastic hematopoiesis in the liver, spleen, or lymph nodes. Musculoskeletal symptoms are not common and are not well described in the literature. 
The first reported case of myelofibrosis-related arthritis was a seronegative arthritis due to synovial invasion by myeloproliferative elements. Myelofibrosis has been associated with autoimmune diseases such as systemic lupus erythematosus, progressive systemic sclerosis, and rheumatoid arthritis. Gout has been reported in patients with myelofibrosis, and the underlying mechanism is thought to be related to the high turnover of nucleic acids that is greatly augmented in this disease. X-ray findings in these patients usually include erosive arthritis with synovitis. Treatment of the underlying PMF is the treatment of choice, along with anti-inflammatory medications. Physicians should be cognizant of this rare entity in patients with PMF while maintaining clinical suspicion for more common causes of joint deformities, such as rheumatic diseases.
Keywords: myelofibrosis, arthritis, arthralgia, malignancy
Procedia PDF Downloads 101
29 Rotterdam in Transition: A Design Case for a Low-Carbon Transport Node in Lombardijen
Authors: Halina Veloso e Zarate, Manuela Triggianese
Abstract:
The urban challenges posed by rapid population growth, climate adaptation, and sustainable living have compelled Dutch cities to reimagine their built environment and transportation systems. As a pivotal contributor to CO₂ emissions, the transportation sector in the Netherlands demands innovative solutions for transitioning to low-carbon mobility. This study investigates the potential of transit-oriented development (TOD) as a strategy for achieving carbon reduction and sustainable urban transformation. Focusing on the Lombardijen station area in Rotterdam, which is targeted for significant densification, this paper presents a design-oriented exploration of a low-carbon transport node. By employing a research-by-design methodology, this study delves into multifaceted factors and scales, aiming to propose future scenarios for Lombardijen. Drawing from a synthesis of existing literature, applied research, and practical insights, a robust design framework emerges. To inform this framework, governmental data concerning the built environment and material embodied carbon are harnessed. However, the restricted access to crucial datasets, such as property ownership information from the cadastre and embodied carbon data from De Nationale Milieudatabase, underscores the need for improved data accessibility, especially during the concept design phase. The findings of this research contribute fundamental insights not only to the Lombardijen case but also to TOD studies across Rotterdam's 13 nodes and similar global contexts. Spatial data related to property ownership facilitated the identification of potential densification sites, underscoring its importance for informed urban design decisions. Additionally, the paper highlights the disparity between the essential role of embodied carbon data in environmental assessments for building permits and its limited accessibility due to proprietary barriers.
Although this study lays the groundwork for sustainable urbanization through TOD-based design, it acknowledges an area of future research worthy of exploration: the socio-economic dimension. Given the complex socio-economic challenges inherent in the Lombardijen area, extending beyond spatial constraints, a comprehensive approach demands integration of mobility infrastructure expansion, land-use diversification, programmatic enhancements, and climate adaptation. While the paper adopts a TOD lens, it refrains from an in-depth examination of issues concerning equity and inclusivity, opening doors for subsequent research to address these aspects crucial for holistic urban development.
Keywords: Rotterdam zuid, transport oriented development, carbon emissions, low-carbon design, cross-scale design, data-supported design
Procedia PDF Downloads 85
28 Citation Analysis of New Zealand Court Decisions
Authors: Tobias Milz, L. Macpherson, Varvara Vetrova
Abstract:
The law is a fundamental pillar of human societies as it shapes, controls and governs how humans conduct business, behave and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science and AI are creating opportunities to support the practice, research and study of this pervasive domain. It is therefore not surprising that there has been an increase in investments into supporting technologies for the legal industry (also known as "legal tech" or "law tech") over the last decade. A sub-discipline of particular appeal is concerned with assisted legal research. Supporting law researchers and practitioners in retrieving information from the vast amount of ever-growing legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the early nineteenth century is legal citation indexing. Among other use cases, it provided an effective means to discover new precedent cases. Nowadays, computer-assisted network analysis tools allow for new and more efficient ways to reveal the "hidden" information that is conveyed through citation behavior. Unfortunately, access to openly available legal data is still lacking in New Zealand, and access to such networks is only commercially available via providers such as LexisNexis. Consequently, there is a need to create, analyze and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts of all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers and an OCR pipeline to collect and convert court decisions from openly available sources such as NZLII into uniform and machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions from within the decision text.
The data was then imported into a graph-based database (Neo4j), with the courts and their respective cases represented as nodes and the extracted citations as links. Furthermore, additional links between courts of connected cases were added to indicate an indirect citation between the courts. Neo4j, as a graph-based database, allows efficient querying and the use of network algorithms such as PageRank to reveal the most influential/most cited courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates a possible scale-free behavior of the network. This is in line with findings for the respective citation networks of the U.S. Supreme Court, Austria, and Germany. The authors of this paper provide the database as an openly available data source to support further legal research. The decision texts can be exported from the database to be used for NLP-related legal research, while the network can be used for in-depth analysis. For example, users of the database can specify the network algorithms and metrics to include only specific courts, filtering the results to the area of law of interest.
Keywords: case citation network, citation analysis, network analysis, Neo4j
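The PageRank step described above can be sketched without the database. The following is a minimal, dependency-free power-iteration implementation run over a toy citation list; the decision names and the damping factor are illustrative assumptions, not data from the paper:

```python
def pagerank(edges, damping=0.85, iters=100):
    """Minimal PageRank via power iteration.
    `edges` is a list of (citing, cited) pairs, as produced by a
    regular-expression extraction stage over decision texts."""
    nodes = {n for e in edges for n in e}
    out = {n: [] for n in nodes}
    for src, dst in edges:
        out[src].append(dst)
    rank = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            if out[n]:
                share = damping * rank[n] / len(out[n])
                for dst in out[n]:
                    new[dst] += share
            else:  # dangling decision (cites nothing): spread rank uniformly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# Toy citation graph: decisions A-D, where D is cited by every other decision
edges = [("A", "D"), ("B", "D"), ("C", "D"), ("B", "A"), ("C", "A")]
ranks = pagerank(edges)
most_cited = max(ranks, key=ranks.get)
```

In the actual pipeline, the (citing, cited) pairs would come from the regex extraction stage, and Neo4j's graph algorithms would replace this hand-rolled iteration; the sketch only shows what the score means: `most_cited` ends up being the decision that accumulates rank from its citers.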
Procedia PDF Downloads 110
27 Optimization of Cobalt Oxide Conversion to Co-Based Metal-Organic Frameworks
Authors: Aleksander Ejsmont, Stefan Wuttke, Joanna Goscianska
Abstract:
Gaining control over particle shape, size, and crystallinity is an ongoing challenge for many materials. Metal-organic frameworks (MOFs), in particular, have recently been widely studied. Besides their remarkable porosity and interesting topologies, morphology has proven to be a significant feature, as it can affect the material's further application. Thus, seeking new approaches that enable MOF morphology modulation is important. MOFs are reticular structures whose building blocks are made up of organic linkers and metallic nodes. The most common strategy for supplying the metal source is the use of salts, which usually exhibit high solubility and hinder morphology control. However, there has been a growing interest in using metal oxides as structure-directing agents towards MOFs due to their very low solubility and shape preservation. Metal oxides can be treated as a metal reservoir during MOF synthesis. Up to now, reports on obtaining MOFs from metal oxides have mostly presented the conversion of ZnO to ZIF-8. However, there are other oxides, for instance Co₃O₄, which are often overlooked due to their structural stability and insolubility in aqueous solutions. Cobalt-based materials are famed for their catalytic activity; therefore, the development of their efficient synthesis is worth attention. In the presented work, an optimized Co₃O₄ transition to Co-MOF via a solvothermal approach was proposed. The starting point of the research was the synthesis of Co₃O₄ flower petals and needles under hydrothermal conditions using different cobalt salts (e.g., cobalt(II) chloride and cobalt(II) nitrate), in the presence of urea and hexadecyltrimethylammonium bromide (CTAB) surfactant as a capping agent. After receiving cobalt hydroxide, the calcination process was performed at various temperatures (300–500 °C). The cobalt oxides, as a source of cobalt cations, were then subjected to reaction with trimesic acid in a solvothermal environment at a temperature of 120 °C, leading to Co-MOF fabrication.
The solution maintained in the system was a mixture of water, dimethylformamide, and ethanol, with the addition of strong acids (HF and HNO₃). To establish how solvents affect metal oxide conversion, several different solvent ratios were also applied. The materials received were characterized with analytical techniques, including X-ray powder diffraction, energy dispersive spectroscopy, low-temperature nitrogen adsorption/desorption, and scanning and transmission electron microscopy. It was confirmed that the synthetic routes led to the formation of Co₃O₄ and Co-based MOF varied in particle shape and size. The diffractograms showed a crystalline phase for Co₃O₄ as well as for Co-MOF. The Co₃O₄ obtained from nitrates and with low-temperature calcination resulted in smaller particles. The study indicated that cobalt oxide particles of different sizes influence the efficiency of conversion and the morphology of Co-MOF. The highest conversion was achieved using metal oxides with small crystallites.
Keywords: Co-MOF, solvothermal synthesis, morphology control, core-shell
Procedia PDF Downloads 163
26 Tall Building Transit-Oriented Development (TB-TOD) and Energy Efficiency in Suburbia: Case Studies, Sydney, Toronto, and Washington D.C.
Authors: Narjes Abbasabadi
Abstract:
As the world continues to urbanize and suburbanize, with suburbanization associated with mass sprawl being the dominant form of this expansion, sustainable development challenges become more pressing. Sprawl, characterized by low density and automobile dependency, presents significant environmental issues regarding energy consumption and CO₂ emissions. This paper examines the vertical expansion of suburbs integrated into mass transit nodes as a planning strategy for boosting density, intensifying land use, converting single-family homes to multifamily dwellings or mixed-use buildings, and developing viable alternative transportation choices. It analyzes the spatial patterns of tall building transit-oriented development (TB-TOD) in the suburban regions of Sydney (Australia), Toronto (Canada), and Washington D.C. (United States). The main objectives of this research are to understand the relationship between the new morphology of suburban tall buildings (the physical dimensions of individual buildings and their arrangement at a larger scale) and energy efficiency. This study aims to answer these questions: 1) Why and how can the potential phenomenon of vertical expansion or high-rise development be integrated into suburban settings? 2) How can this phenomenon contribute to an overall denser development of suburbs? 3) Which spatial patterns or typologies/sub-typologies of the TB-TOD model have the greatest energy efficiency? It addresses these questions by focusing on 1) heat energy demand (excluding cooling and lighting) related to design issues at two levels: macro, the urban scale, and micro, individual buildings (physical dimension, height, morphology, the spatial pattern of tall buildings, and their relationship with each other and with transport infrastructure); and 2) examining TB-TOD to provide more evidence of how the model works regarding ridership.
The findings of the research show that the TB-TOD model can be identified as the most appropriate spatial pattern for tall buildings in suburban settings. Among the TB-TOD typologies/sub-typologies, compact tall building blocks can be the most energy efficient. This model is associated with much lower energy demands in buildings at the neighborhood level as well as lower transport needs at the urban scale, while detached suburban high-rise or low-rise suburban housing will have the lowest energy efficiency. The research methodology is based on a quantitative study applying the available literature and static data as well as mapping and visual documentation of urban regions through tools such as Google Earth, Microsoft Bing bird's-eye view, and Street View. It examines each suburb within each city through satellite imagery and explores the typologies/sub-typologies that are morphologically distinct. The study quantifies the heat energy efficiency of different spatial patterns through simulation via GIS software.
Keywords: energy efficiency, spatial pattern, suburb, tall building transit-oriented development (TB-TOD)
Procedia PDF Downloads 261
25 A Mother's Silent Adversary: A Case of a Pregnant Woman with Cervical Cancer
Authors: Paola Millare, Nelinda Catherine Pangilinan
Abstract:
Background and Aim: Cervical cancer is the most commonly diagnosed gynecological malignancy during pregnancy. Owing to the rarity of the disease and the complexity of all factors that have to be taken into consideration, standardization of treatment is very difficult. Cervical cancer is the second most common malignancy among women. The treatment of cancer during pregnancy is most challenging in the case of cervical cancer, since the pregnant uterus itself is affected. This report aims to present a case of cervical cancer in a pregnant woman, how the case was managed, and the several issues that accompanied it. Methods: This is a case of a 28-year-old, Gravida 4 Para 2 (1111), who presented with watery to mucoid, whitish, non-foul-smelling vaginal discharge increasing in amount. Internal examination revealed normal external genitalia and a parous outlet; the cervix was transformed into a fungating mass measuring 5x4 cm with left parametrial involvement; the body of the uterus was enlarged to a 24-week size, with no adnexal mass or tenderness. She had a cervical punch biopsy, which revealed well-differentiated adenocarcinoma of the cervix. Standard management for stage 2B cervical carcinoma is to start radiation or perform a radical hysterectomy. In patients diagnosed with cervical cancer while pregnant, this kind of management will result in fetal loss. The patient declined the said management and opted to delay treatment, wait for her baby to reach at least term, and proceed to cesarean section as the route of delivery. Results: The patient underwent an elective cesarean section at 37 weeks age of gestation, delivering a term, live baby boy, APGAR score 7,9, birthweight 2600 grams. One month postpartum, the patient followed up and completed radiotherapy, chemotherapy, and brachytherapy. She was advised to come back after 6 months for monitoring.
On her last check-up, an internal examination was done, which revealed normal external genitalia; the vagina admits 2 fingers with ease; and there is a palpable fungating mass at the cervix measuring 2x2 cm. A repeat gynecologic oncologic ultrasound was done, revealing an endophytic cervical mass, grade 1 color score, with 35% stromal invasion, post-radiation reactive lymph nodes, intact paracolpium, and pericervical and parametrial involvement. The patient was then advised to undergo a pelvic boost and close monitoring of the cervical mass. Conclusion: Cervical cancer in pregnancy is rare but is a dilemma for women and their physicians. Treatment should be multidisciplinary and individualized, following careful counseling. In this case, the treatment priority was clearly to prevent the progression of cervical cancer while the patient was pregnant; however, for ethical reasons, the management deferred to the right of the patient to decide for her own health and that of her unborn child. The collaborative collection of data relating to treatment and outcome is strongly encouraged.
Keywords: cancer, cervical, ethical, pregnancy
Procedia PDF Downloads 245
24 Investigating the Neural Heterogeneity of Developmental Dyscalculia
Authors: Fengjuan Wang, Azilawati Jamaludin
Abstract:
Developmental Dyscalculia (DD) is defined as a particular learning difficulty with continuous challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder with not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obfuscate critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD, scoring at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III) (both subtest scores were 90 and below). The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network under the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions across areas) to reveal the architectural organization of the nervous network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the average data of the 45 TD children.
Results showed that three out of the nine DD children displayed significant deviations from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. In summary, the present study was able to distill differences in architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and a single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is noted that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current "brain norm" established for the 45 children is tentative, the results from this study provide insights not only for future work on a "developmental brain norm" with reliable brain indicators but also for the viability of the single-case methodology, which could be used to detect differential brain indicators in DD children for early detection and intervention.
Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity
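The single-case comparison described above can be sketched with the classical Crawford-Howell t-test (the 1998 frequentist precursor of the Bayesian variant published by Crawford and colleagues in 2010), which compares one case against a small control sample while treating the sample mean and SD as estimates rather than population values. The scores below are invented for illustration, not the study's data:

```python
from statistics import mean, stdev

def crawford_howell_t(case_score, control_scores):
    """Crawford & Howell (1998) single-case t-test: compare one
    case's score against a small normative (control) sample."""
    n = len(control_scores)
    m = mean(control_scores)
    s = stdev(control_scores)  # sample SD (n - 1 denominator)
    t = (case_score - m) / (s * ((n + 1) / n) ** 0.5)
    return t, n - 1  # t statistic and its degrees of freedom

# Hypothetical example: one DD case's nodal-efficiency score
# against five TD controls (toy numbers)
t, df = crawford_howell_t(0.30, [0.52, 0.55, 0.50, 0.53, 0.54])
```

The resulting t is referred to a t distribution with n - 1 degrees of freedom; a value far into the tail flags the case as deviating significantly from the control norm, which is the logic behind calling a case's network indicator "inefficient" relative to the 45 TD children.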
Procedia PDF Downloads 53
23 Head and Neck Extranodal Rosai-Dorfman Disease: Utility of Immunohistochemistry
Authors: Beverly Wang
Abstract:
Background: Rosai-Dorfman disease (RDD), also known as sinus histiocytosis with massive lymphadenopathy, is a rare, idiopathic histiocytic proliferative disorder. Although RDD is classically seen involving the head and neck lymph nodes, rarely it can affect extranodal sites. We present 3 unique cases of RDD affecting the nasal cavity, paranasal sinuses, and ear canal. The initial clinical presentation in two cases mimicked a malignant neoplasm; the 3rd case of RDD co-existed with a cholesteatoma of the ear canal. The clinical presentation, histology, immunohistochemical stains, and radiographic findings are discussed. Design: An overview of 3 cases of RDD affecting the sinonasal cavity and ear canal from UCI Medical Center was conducted. Case 1: A 61-year-old male complaining of breathing difficulty presented with bilateral polypoid sinonasal masses and severe nasal obstruction. The masses elevated the nasal floor and involved the anterior nasal septum to the lateral wall. They were endoscopically excised. At intraoperative consultation, the frozen section was reported as a pleomorphic spindle cell neoplasm with scattered large atypical spindle cells, resembling a high-grade sarcoma. Case 2: A 46-year-old male presented with recurrent bilateral maxillary chronic sinusitis with mass formation, clinically suspicious for malignant lymphoma. An excisional tissue sample showed large irregular spindled histiocytes with abundant granular and vacuolated cytoplasm. Case 3: A 36-year-old female with a history of asthma initially presented with left-sided chronic otalgia, occasional nausea, vertigo, and fluctuating pain exacerbated by head movement and temperature changes. A CT scan revealed an external auditory canal mass extending to the middle ear, coexisting with a small cholesteatoma. Results: The morphology of all cases revealed large atypical spindled histiocytes resembling fibrohistiocytic or myofibroblastic proliferative neoplasms. Scattered emperipolesis was seen.
All 3 cases were confirmed as extranodal RDD by immunohistochemistry. The large atypical cells were positive for S100, CD68, and CD163. No evidence of malignancy was identified. Case 3 showed RDD co-existing with a cholesteatoma. Conclusion: Due to its rarity and variable clinical presentations, the diagnosis of RDD is seldom considered clinically. Morphologically, extranodal RDD can be a pitfall, mimicking spindle cell neoplasms, especially at intraoperative consultation, and can create diagnostic and therapeutic challenges. Correlation of radiological findings with histologic features will help to reach the diagnosis.
Keywords: head and neck, extranodal, rosai-dorfman disease, mimicker, immunohistochemistry
Procedia PDF Downloads 81
22 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR
Authors: Ionut Vintu, Stefan Laible, Ruth Schulz
Abstract:
Agricultural robotics has been developing steadily over recent years, with the goal of reducing and even eliminating pesticides used in crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots, capable of driving in fields and performing crop-handling tasks, is for robots to robustly detect the rows of plants. Recent work towards autonomous driving between plant rows offers big robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability when it comes to the height of the plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and is more broadly applicable. The main application is in tree nurseries. Here, plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated poses of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and deal with exception handling, like row gaps, which are falsely detected as an end of rows. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing the field coverage and the number of damaged plants, the method that uses a local map around the robot proved to perform the best, with 68% covered rows and 25% damaged plants. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot's position inside the greater field.
Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80%, with 27% damaged plants. Future work would focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges that the algorithm needs to overcome are fields where the plants are too small to be detected and fields where it is hard to distinguish between individual plants because they overlap. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU, and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection
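The abstract does not detail its four row-detection methods, but a common baseline for extracting a row line from a ground-projected LiDAR point cloud is a RANSAC line fit; the sketch below is a generic illustration under that assumption (the point coordinates, inlier tolerance, and iteration count are invented):

```python
import random

def ransac_row_line(points, n_iters=200, tol=0.15, seed=0):
    """Fit one plant-row line to 2D points (a LiDAR scan projected
    onto the ground plane) with a minimal RANSAC: repeatedly sample
    two points, form the implied line, and keep the line that
    collects the most inliers."""
    rng = random.Random(seed)
    best_line, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2  # normal of the line a*x + b*y + c = 0
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue  # degenerate pair (identical points)
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        inliers = [p for p in points if abs(a * p[0] + b * p[1] + c) < tol]
        if len(inliers) > len(best_inliers):
            best_line, best_inliers = (a, b, c), inliers
    return best_line, best_inliers

# Toy scan: one plant row along y ~ 1.0 m, plus three clutter points
row = [(0.5 * i, 1.0 + 0.02 * (i % 3)) for i in range(10)]
clutter = [(1.0, 3.0), (2.5, -1.0), (4.0, 2.2)]
line, inliers = ransac_row_line(row + clutter)
```

In a full system of the kind the abstract describes, the fitted line parameters would become landmark constraints (edges) in the pose graph, so that row detections and odometry jointly constrain the robot's estimated poses.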
Procedia PDF Downloads 140
21 Resolving Urban Mobility Issues through Network Restructuring of Urban Mass Transport
Authors: Aditya Purohit, Neha Bansal
Abstract:
Unplanned urbanization and the multidirectional sprawl of cities have resulted in increased motorization and deteriorating transport conditions: traffic congestion, longer commutes, pollution, an increased carbon footprint, and, above all, increased fatalities. To overcome these problems, various practices have been adopted, including promoting and implementing mass transport, channelizing traffic junctions, and smart transport. However, these methods are found to focus primarily on vehicular mobility rather than people's accessibility. With this research gap, this paper tries to resolve the mobility issues of Ahmedabad city in India, which, being the economic capital of Gujarat state, has a huge commuter and visitor inflow. This research aims to resolve the traffic congestion and urban mobility issues, focusing on the Gujarat State Regional Transport Corporation (GSRTC), by analyzing the existing operations and network structure of GSRTC and then finding possibilities of integrating it with other modes of urban transport. The network restructuring (NR) methodology is used with appropriate variations, based on commuter demand and the growth pattern of the city. To do this, 'scenarios' based on priority issues (using 12 parameters), along with their best possible solutions, are established after a route network analysis of a 2700-person sample across 20 traffic junctions/nodes in the city.
Approximately a 5% sample (of passenger inflow) at each node is considered using a stratified random sampling technique. The two scenarios are – Scenario 1: Resolving mobility issues by use of a Special Purpose Vehicle (SPV), a joint venture between GSRTC and private operators, for establishing a feeder service, which shall provide a transfer service for passengers moving from the inner city area to identified peripheral terminals; and Scenario 2: Augmenting existing mass transport services such as BRTS and AMTS to use them as feeder services to the identified peripheral terminals. Each of these has then been analyzed for the best suitability/feasibility in network restructuring. A desire-line diagram constructed from this analysis indicated that, on average, 62% of designated GSRTC routes overlap with the mass transportation service routes of BRTS and AMTS in the city. This has resulted in duplication of bus services, causing traffic congestion, especially at the Central Bus Station (CBS). Terminating GSRTC services on the periphery of the city is found to be the best network restructuring proposal. This limits the GSRTC buses to the city fringe area and prevents them from entering the city core areas. These end-terminals of GSRTC are integrated with BRTS and AMTS services, which helps in segregating intra-state and inter-state bus services. The research concludes that the absence of an integrated multimodal transport network has resulted in complexity of transport access for commuters. As a further scope of research, comparing and understanding the value of access time in total travel time, its implication on the generalized cost of a trip, and how it varies city-wise may be taken up.
Keywords: mass transportation, multi-modal integration, network restructuring, travel behavior, urban transport
A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model
Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero
Abstract:
Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the material particle size or the adjustment direction of the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs) describing, among other phenomena, Fick’s law of diffusion and the MacInnes and Ohm’s equations. Thus, to efficiently use the model in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational times. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second order in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, thus it is feasible for both short- and long-term tests.
This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the non-eligibility of the simple Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and the Chebyshev discretization methods will be compared in terms of accuracy and computational times under a wide range of battery operating scenarios. These include both long-term simulations for aging tests, and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electric vehicle braking and acceleration.
Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods
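The stability contrast described in this abstract can be reproduced in a minimal sketch, assuming a plain 1D Fickian diffusion slab with zero Dirichlet boundaries as a stand-in for the electrolyte diffusion PDE (grid size, diffusivity, and boundary conditions here are illustrative, not the paper's DFN setup):

```python
import numpy as np

def dirichlet_laplacian(n, dx):
    # Standard second-difference matrix on n interior nodes, c = 0 at both ends.
    L = np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return L / dx**2

def step_explicit_euler(c, D, dx, dt):
    # Forward-time update: stable only if D*dt/dx**2 <= 1/2.
    return c + dt * D * (dirichlet_laplacian(c.size, dx) @ c)

def step_crank_nicolson(c, D, dx, dt):
    # (I - dt/2 D L) c_new = (I + dt/2 D L) c_old: unconditionally stable.
    L = dirichlet_laplacian(c.size, dx)
    I = np.eye(c.size)
    return np.linalg.solve(I - 0.5 * dt * D * L, (I + 0.5 * dt * D * L) @ c)

# Deliberately violate the explicit stability bound (D*dt/dx**2 = 2 here).
dx, dt, D = 0.05, 0.005, 1.0
x = np.linspace(dx, 1.0 - dx, 19)
c0 = np.sin(np.pi * x) + 1e-3 * (-1.0) ** np.arange(19)  # seed the stiffest mode

ce = cn = c0
for _ in range(10):
    ce = step_explicit_euler(ce, D, dx, dt)
    cn = step_crank_nicolson(cn, D, dx, dt)

print(np.abs(ce).max())  # explicit Euler blows up on the high-frequency mode
print(np.abs(cn).max())  # Crank-Nicolson stays bounded with the same time step
```

With the same oversized time step, the explicit solution diverges within a few steps while the Crank-Nicolson solution decays smoothly, which is the behavior that motivates the paper's recommendation against the simple Euler method for long-term tests.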
Artificial Intelligence for Traffic Signal Control and Data Collection
Authors: Reggie Chandra
Abstract:
Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason behind that is insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect 24/7/365 accurate traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and what the best workflow for that is. Apart from that, this paper will showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain. It consists of millions of densely connected processing nodes. It is a form of machine learning where the neural net learns to recognize vehicles through training - which is called Deep Learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but also, in cases such as classifying objects into fine-grained categories, outperform humans. Safety is of primary importance to traffic professionals, but they don't have the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data.
We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional work for adaptation. The methodologies used and proposed in the research contain a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired through a variety of daily real-world road conditions, and compared with the performance of the commonly used methods, which require collecting data by counting, evaluating, and adapting it, running it through well-established algorithms, and then deploying the result to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal
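The step from detector counts to a timing plan can be illustrated with a minimal sketch. The abstract does not disclose its plan-generation method, so Webster's classical fixed-time formula is used here purely as a stand-in; the saturation flow, lost time, and traffic volumes are illustrative assumptions:

```python
# Hedged sketch: turning (e.g. CNN-counted) traffic volumes into a fixed-time
# signal plan using Webster's classical optimum-cycle formula -- a stand-in
# for the proprietary AI-driven plan generation described in the abstract.

def webster_plan(critical_flows, saturation_flow=1800.0, lost_time_per_phase=4.0):
    """critical_flows: veh/h for the critical movement of each signal phase."""
    y = [q / saturation_flow for q in critical_flows]  # flow ratios per phase
    Y = sum(y)
    if Y >= 0.95:
        raise ValueError("intersection oversaturated; fixed-time plan infeasible")
    L = lost_time_per_phase * len(critical_flows)      # total lost time, s
    cycle = (1.5 * L + 5.0) / (1.0 - Y)                # Webster optimum cycle, s
    effective_green = cycle - L
    greens = [effective_green * yi / Y for yi in y]    # split green in proportion
    return cycle, greens

# Two-phase example with hypothetical detected volumes of 600 and 400 veh/h.
cycle, greens = webster_plan([600.0, 400.0])
print(round(cycle, 1), [round(g, 1) for g in greens])
```

In a continuous-learning deployment such as the one described, the input volumes would simply be refreshed from the detection system and the plan regenerated as patterns change.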
Flood Early Warning and Management System
Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare
Abstract:
The Indian subcontinent is severely affected by floods that cause intense irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an Early Warning System for Flood Prediction and an efficient Flood Management System for the river basins of India is a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an Early Warning System for Flood Prediction (EWS-FP) using advanced computational tools/methods, viz. High-Performance Computing (HPC), Remote Sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves shallow water equations using the finite volume method. Considering the complexity of the hydrological modeling and the size of the basins in India, it is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are to be run. High-performance computing technology provides a good computational means to overcome this issue for the construction of national-level or basin-level flash flood warning systems offering high-resolution, local-level warning analysis with a better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful while running simulations over such big areas at optimum resolutions. In this study, a free and open-source, HPC-based 2-D hydrodynamic model, with the capability to simulate rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing CPU nodes from 45 to 135, which shows good scalability and performance enhancement.
The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and better lead time, suitable for flood forecasting in near-real-time. To disseminate warnings to the end user, a network-enabled solution is developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. The system effectively facilitates the management of post-disaster activities caused by floods, like displaying spatial maps of the area affected, inundated roads, etc., and maintains a steady flow of information at all levels, with different access rights depending upon the criticality of the information. It is designed to facilitate users in managing information related to flooding during critical flood seasons and analyzing the extent of the damage.
Keywords: flood, modeling, HPC, FOSS
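The scalability figures quoted above (8 hrs to 3 hrs when the node count triples from 45 to 135) can be turned into the standard strong-scaling metrics with a quick back-of-the-envelope sketch:

```python
# Strong-scaling check of the reported runtimes: speedup relative to the
# baseline run, and parallel efficiency relative to ideal linear scaling.

def scaling_metrics(t_base, t_scaled, n_base, n_scaled):
    speedup = t_base / t_scaled
    ideal = n_scaled / n_base
    efficiency = speedup / ideal  # fraction of ideal linear scaling achieved
    return speedup, efficiency

# 8 h on 45 nodes -> 3 h on 135 nodes, as reported in the abstract.
speedup, efficiency = scaling_metrics(8.0, 3.0, 45, 135)
print(f"speedup = {speedup:.2f}x, parallel efficiency = {efficiency:.0%}")
```

A speedup of about 2.67x against an ideal of 3x corresponds to roughly 89% parallel efficiency, which supports the abstract's claim of good scalability.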
A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure
Authors: Sara Saboonian, Pierre Filion
Abstract:
The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which provides environmental health and human well-being by preserving and restoring ecosystem services. This research studies incompatibilities and the possibility of amalgamating the two alternatives in an attempt to develop a comprehensive alternative to the suburban model that advocates density, multi-functionality and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centres, where intensification is the current planning practice, and the awareness of green infrastructure benefits is on the rise. However, these three centres are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents’ preferences for alternative models, and interviewing officials who deal with the local planning for the centres. Moreover, the research draws on debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centres that accommodate green infrastructure while adhering to intensification principles.
First, the dominant status of intensification and the obstacles confronting intensification have monopolized the planners’ concerns. Second, the tension between green infrastructure and intensification explains the absence of green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted social-economic benefits of green infrastructure reduces residents’ participation. Moreover, the results from the research provide insight into predominating urbanization theories, New Urbanism and Landscape/Ecological Urbanism. In order to understand the political, planning, and ecological dynamics of such blending, dexterous context-specific planning is required. Findings suggest the influence of the following factors on amalgamating intensification and green infrastructure. Initially, producing ecosystem services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for the implementation programs. This knowledge base should be translated into terms that effectively engage different urban stakeholders. Moreover, due to the limited greenfields in intensified areas, the spatial distribution and development of multi-level corridors, such as pedestrian-hospitable settings and transportation networks along green infrastructure measures, are required. Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.
Keywords: ecosystem services, green infrastructure, intensification, planning
A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network
Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman
Abstract:
We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. 
In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach based on reinforcement learning, in order to relax some of the current limiting assumptions.
Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights
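The core Bayesian update over candidate targets can be sketched in a few lines. The travel-time matrix and the Gaussian detection likelihood below are illustrative assumptions, not the authors' actual model, which fuses several data sources over the PeMS road graph:

```python
import numpy as np

# Hedged sketch: each camera detection updates a posterior over candidate
# targets. Edge weights are mean travel times (min) from camera nodes to
# targets; the likelihood favors the target whose shortest-path time best
# matches the suspect's observed trajectory. All numbers are hypothetical.

travel_time = np.array([[5.0, 20.0],    # camera A -> targets T1, T2
                        [15.0, 6.0]])   # camera B -> targets T1, T2

def update_posterior(prior, camera_idx, observed_heading_time, sigma=4.0):
    diff = travel_time[camera_idx] - observed_heading_time
    likelihood = np.exp(-0.5 * (diff / sigma) ** 2)  # Gaussian mismatch model
    post = prior * likelihood
    return post / post.sum()                         # renormalize

posterior = np.array([0.5, 0.5])                 # uniform prior over two targets
for cam, t in [(0, 6.0), (0, 5.0), (1, 14.0)]:   # successive detections
    posterior = update_posterior(posterior, cam, t)
print(posterior)  # probability mass concentrates on target T1
```

A soft intervention fits naturally into this scheme: temporarily altering an edge (e.g., closing a road) changes the travel-time matrix, so the suspect's response to the change becomes an additional, more discriminative observation.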
Restoration of a Forest Catchment in Himachal Pradesh, India: An Institutional Analysis
Authors: Sakshi Gupta, Kavita Sardana
Abstract:
Management of a forest catchment involves diverse dimensions, multiple stakeholders, and conflicting interests, primarily due to the wide variety of valuable ecosystem services offered by it. Often, the coordination among different levels of formal institutions governing the catchment, local communities, as well as societal norms, taboos, customs and practices, happens to be amiss, leading to conflicting policy interventions which prove detrimental for such resources. In the case of the Ala Catchment, a protected forest located 9 km north-east of the town of Dalhousie, within district Chamba of Himachal Pradesh, India, which serves as one of the primary sources of public water supply for the downstream town of Dalhousie and nearby areas, several policy measures have been adopted for the restoration of the forest catchment, as well as for the improvement of public water supply. These catchment forest restoration measures include: the installation of a fence along the perimeter of the catchment, plantation of trees in the empty patches of the forest, construction of check dams, contour trenches and contour bunds, issuance of grazing permits, and installation of check posts to keep track of trespassers. The measures adopted to address the acute shortage of public water supply in the Dalhousie region include: building and maintenance of large-capacity water storage tanks, laying of pipelines, expanding the public water distribution infrastructure to include water sources other than the Ala Catchment Forest, and introducing five new water supply schemes for drinking water as well as irrigation. However, despite these policy measures, the degradation of the Ala Catchment and the acute shortage of water supply continue to distress the region. This study attempts to conduct an institutional analysis to assess the impact of policy measures for the restoration of the Ala Catchment in the Chamba district of Himachal Pradesh in India.
For this purpose, the theoretical framework of Ostrom’s Institutional Analysis and Development (IAD) Framework was used. Snowball sampling was used to conduct private interviews and focused group discussions. A semi-structured questionnaire was administered to interview a total of 184 respondents across stakeholders from both formal and informal institutions. The central hypothesis of the study is that the interplay of formal and informal institutions facilitates the implementation of policy measures for ameliorating the Ala Catchment, in turn improving the livelihood of people depending on this forest catchment for direct and indirect benefits. The findings of the study suggest that leakages in the successful implementation of policy measures occur at several nodes of decision-making, which adversely impact the catchment and the ecosystem services provided by it. Some of the key reasons diagnosed by the immediate analysis include ad-hoc assignment of property rights, a rise in tourist inflow increasing the pressures on water demand, illegal trespassing by local and nomadic pastoral communities for grazing and unlawful extraction of forest products, and rent-seeking by a few influential formal institutions. Consequently, it is indicated that the interplay of formal and informal institutions may be undermining the effectiveness of the policy measures on the restoration of the catchment.
Keywords: catchment forest restoration, institutional analysis and development framework, institutional interplay, protected forest, water supply management
Development of DEMO-FNS Hybrid Facility and Its Integration in Russian Nuclear Fuel Cycle
Authors: Yury S. Shpanskiy, Boris V. Kuteev
Abstract:
Development of a fusion-fission hybrid facility based on the superconducting conventional tokamak DEMO-FNS has been underway in Russia since 2013. The main design goal is to reach the technical feasibility and outline prospects of industrial hybrid technologies providing the production of neutrons, fuel nuclides, tritium, high-temperature heat, electricity and subcritical transmutation in Fusion-Fission Hybrid Systems. The facility should operate in a steady-state mode at a fusion power of 40 MW and fission reactions of 400 MW. Major tokamak parameters are the following: major radius R=3.2 m, minor radius a=1.0 m, elongation 2.1, triangularity 0.5. The design provides a neutron wall loading of ~0.2 MW/m², a lifetime neutron fluence of ~2 MWa/m², with the surface area of the active cores and tritium breeding blanket ~100 m². Core plasma modelling showed that the neutron yield of ~10¹⁹ n/s is maximal if the tritium/deuterium density ratio is 1.5-2.3. The design of the electromagnetic system (EMS) defined its basic parameters, accounting for the coils' strength and stability, and identified the most problematic nodes in the toroidal field coils and the central solenoid. The EMS generates the toroidal, poloidal and correcting magnetic fields necessary for plasma shaping and confinement inside the vacuum vessel. The EMS consists of eighteen superconducting toroidal field coils, eight poloidal field coils, five sections of a central solenoid, correction coils, and in-vessel coils for vertical plasma control. Supporting structures, the thermal shield, and the cryostat maintain its operation. The EMS operates with a pulse duration of up to 5000 hours at a plasma current of up to 5 MA. The vacuum vessel (VV) is an all-welded two-layer toroidal shell placed inside the EMS. The free space between the vessel shells is filled with water and boron steel plates, which form the neutron protection of the EMS. The VV volume is 265 m³; its mass with manifolds is 1800 tons.
The nuclear blanket of the DEMO-FNS facility was designed to provide the functions of minor actinide transmutation, tritium production and enrichment of spent nuclear fuel. The vertical overloading of the subcritical active cores with minor actinides was chosen as prospective. Analysis of the device neutronics and the hybrid blanket thermal-hydraulic characteristics has been performed for the system with functions covering transmutation of minor actinides, production of tritium and enrichment of spent nuclear fuel. A study of the role of FNS facilities in the Russian closed nuclear fuel cycle was performed. It showed that during ~100 years of operation, three FNS facilities with a fission power of 3 GW, controlled by a fusion neutron source with a power of 40 MW, can burn 98 tons of minor actinides, and 198 tons of Pu-239 can be produced for the startup loading of 20 fast reactors. Instead of Pu-239, up to 25 kg of tritium per year may be produced for the startup of fusion reactors, using blocks with lithium orthosilicate instead of fissile breeder blankets.
Keywords: fusion-fission hybrid system, conventional tokamak, superconducting electromagnetic system, two-layer vacuum vessel, subcritical active cores, nuclear fuel cycle
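Two of the quoted first-wall figures can be tied together with a short consistency check (a back-of-the-envelope sketch, not part of the design analysis; the D-T energy split used below is standard physics, not taken from the abstract):

```python
# Hedged consistency check of the quoted first-wall figures: a neutron wall
# loading of ~0.2 MW/m^2 accumulating a lifetime fluence of ~2 MW*a/m^2
# implies roughly ten full-power years of operation.

wall_loading = 0.2        # MW/m^2
lifetime_fluence = 2.0    # MW*a/m^2
full_power_years = lifetime_fluence / wall_loading
print(full_power_years)

# Likewise, in D-T fusion the neutron carries 14.1 of the 17.6 MeV released
# per reaction (~80%), so 40 MW of fusion power yields ~32 MW of neutron
# power directed toward the blanket.
neutron_power = 40.0 * 14.1 / 17.6
print(round(neutron_power, 1))
```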
High Cycle Fatigue Analysis of a Lower Hopper Knuckle Connection of a Large Bulk Carrier under Dynamic Loading
Authors: Vaso K. Kapnopoulou, Piero Caridis
Abstract:
The fatigue of ship structural details is of major concern in the maritime industry as it can generate fracture issues that may compromise structural integrity. In the present study, a fatigue analysis of the lower hopper knuckle connection of a bulk carrier was conducted using the Finite Element Method by means of ABAQUS/CAE software. The fatigue life was calculated using Miner’s Rule and the long-term distribution of stress range by the use of the two-parameter Weibull distribution. The cumulative damage ratio was estimated using the fatigue damage resulting from the stress range occurring at each load condition. For this purpose, a cargo hold model was first generated, which extends over the length of two holds (the mid-hold and half of each of the adjacent holds) and transversely over the full breadth of the hull girder. Following that, a submodel of the area of interest was extracted in order to calculate the hot spot stress of the connection and to estimate the fatigue life of the structural detail. Two hot spot locations were identified; one at the top layer of the inner bottom plate and one at the top layer of the hopper plate. The IACS Common Structural Rules (CSR) require that specific dynamic load cases for each loading condition are assessed. Following this, the dynamic load case that causes the highest stress range at each loading condition should be used in the fatigue analysis for the calculation of the cumulative fatigue damage ratio. Each load case has a different effect on ship hull response. Of main concern, when assessing the fatigue strength of the lower hopper knuckle connection, was the determination of the maximum, i.e. the critical value of the stress range, which acts in a direction normal to the weld toe line. This acts in the transverse direction, that is, perpendicularly to the ship's centerline axis. 
The load cases were explored both theoretically and numerically in order to establish the one that causes the highest damage to the location examined. The most severe one was identified to be the load case induced by the beam sea condition, where the encountered wave comes from starboard. At the level of the cargo hold model, the model was assumed to be simply supported at its ends. A coarse mesh was generated in order to represent the overall stiffness of the structure. The elements employed were quadrilateral shell elements, each having four integration points. A linear elastic analysis was performed because linear elastic material behavior can be presumed, since only localized yielding is allowed by most design codes. At the submodel level, the displacements from the cargo hold model analysis were applied to the outer-region nodes of the submodel, acting as boundary conditions and loading for the submodel. In order to calculate the hot spot stress at the hot spot locations, a very fine mesh zone was generated and used. The fatigue life of the detail was found to be 16.4 years, which is lower than the design fatigue life of the structure (25 years), making this location vulnerable to fatigue fracture issues. Moreover, the loading conditions that induce the most damage to the location were found to be the various ballasting conditions.
Keywords: dynamic load cases, finite element method, high cycle fatigue, lower hopper knuckle
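The damage calculation described in the abstract (Miner's rule with a two-parameter Weibull long-term stress range distribution) admits a well-known closed form for a one-slope S-N curve. The sketch below uses that closed form with purely illustrative constants (Weibull scale/shape, S-N parameters, cycle count are assumptions, not the study's data); it merely shows how a Miner sum above unity shortens the predicted life below the design life:

```python
import math

def miner_damage_closed_form(n_total, q, h, K, m):
    """Closed-form Miner sum for Weibull(scale q, shape h) stress ranges and
    a one-slope S-N curve N = K * S**(-m):
        D = (n_total / K) * q**m * Gamma(1 + m/h)."""
    return n_total / K * q**m * math.gamma(1.0 + m / h)

# Illustrative values only (not the paper's data):
n_total = 3.2e7          # load cycles over the 25-year design life
q, h = 20.0, 1.0         # Weibull scale (MPa) and shape parameters
K, m = 1.0e12, 3.0       # one-slope S-N curve constants
D = miner_damage_closed_form(n_total, q, h, K, m)
fatigue_life_years = 25.0 / D   # predicted life scales inversely with D
print(round(D, 3), round(fatigue_life_years, 1))
```

With these assumed numbers the cumulative damage ratio exceeds unity, i.e., the predicted fatigue life falls short of the 25-year design life, which is the same qualitative outcome the study reports for the hopper knuckle detail.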
Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes
Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi
Abstract:
The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element’s domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells, which are clipped to accommodate the domain geometry, must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes at comparable accuracy levels are obtained when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking an analytical limit of this element’s stress field as it approaches the crack tip, delivers an expression for the singular stress field. By applying the problem specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing. 
Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even given very coarse discretizations, since they rely only on the ratio of mode one to mode two gSIFs. The absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme, which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which elevates the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees. Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled boundary finite element method, hybrid quadtrees
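The 16-configuration precomputation the abstract describes follows from the fact that, in a balanced 2D quadtree, each of a cell's four edges carries at most one hanging node. A minimal Python sketch of such a configuration cache (the expensive SBFEM element computation is replaced here by a hypothetical stand-in):

```python
from functools import lru_cache

def cell_configuration(hanging_edges):
    """Encode which of the four cell edges (N, E, S, W) carry a hanging
    node as a 4-bit key: 2**4 = 16 possible configurations."""
    key = 0
    for bit, has_node in enumerate(hanging_edges):  # order (N, E, S, W)
        if has_node:
            key |= 1 << bit
    return key

@lru_cache(maxsize=16)
def precomputed_cell(key):
    """Stand-in for the element computation: in a real solver this would
    assemble and cache the polygon element matrices for the given
    hanging-node pattern, to be scaled per cell size at assembly time."""
    n_hanging = bin(key).count("1")
    return {"key": key, "n_nodes": 4 + n_hanging}
```

Every uncut cell in the mesh then hits one of only 16 cached entries; only cells clipped by the domain geometry fall back to a conventional computation.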
11 Sensitivity and Specificity of Some Serological Tests Used for Diagnosis of Bovine Brucellosis in Egypt on Bacteriological and Molecular Basis
Authors: Hosein I. Hosein, Ragab Azzam, Ahmed M. S. Menshawy, Sherin Rouby, Khaled Hendy, Ayman Mahrous, Hany Hussien
Abstract:
Brucellosis is a highly contagious bacterial zoonotic disease of worldwide spread that goes by different names: infectious or enzootic abortion and Bang's disease in animals; and Mediterranean or Malta fever, undulant fever, and rock fever in humans. It is caused by the different species of the genus Brucella, a Gram-negative, aerobic, non-spore-forming, facultative intracellular bacterium. Brucella affects a wide range of mammals including bovines, small ruminants, pigs, equines, rodents, and marine mammals, as well as humans, resulting in serious economic losses in animal populations. In humans, Brucella causes a severe illness representing a great public health problem. The disease was reported in Egypt for the first time in 1939; since then it has remained endemic at high levels among cattle, buffalo, sheep, and goats, and still represents a public health hazard. The annual economic losses due to brucellosis were estimated at about 60 million Egyptian pounds yearly, but actual estimates are still missing despite almost 30 years of implementation of the Egyptian control programme. Despite being the gold standard, bacterial isolation has been reported to show poor sensitivity for samples with low levels of Brucella and is impractical for regular screening of large populations. Thus, serological tests still remain the cornerstone of routine diagnosis of brucellosis, especially in developing countries. In the present study, a total of 1533 cows (256 from Beni-Suef Governorate, 445 from Al-Fayoum Governorate and 832 from Damietta Governorate) were employed for estimation of the relative sensitivity, relative specificity, positive predictive value and negative predictive value of the buffered acidified plate antigen test (BPAT), rose bengal test (RBT) and complement fixation test (CFT). The overall seroprevalence of brucellosis was 19.63%. 
The relative sensitivity, relative specificity, positive predictive value and negative predictive value of BPAT, RBT and CFT were estimated as (96.27%, 96.76%, 87.65% and 99.10%), (93.42%, 96.27%, 90.16% and 98.35%) and (89.30%, 98.60%, 94.35% and 97.24%), respectively. BPAT showed the highest sensitivity among the three employed serological tests. RBT was less specific than BPAT. CFT showed the lowest sensitivity (89.30%) among the three tests but the highest specificity. Tissue specimens from 22 seropositive cows (spleen, udder, and retropharyngeal and supra-mammary lymph nodes) were subjected to bacteriological examination for isolation and identification of Brucella organisms. Brucella melitensis biovar 3 could be recovered from 12 (54.55%) cows. The remaining 10 cases (45.45%) were culture negative and could not be classified bacteriologically. Bruce-ladder PCR was carried out for molecular identification of the 12 Brucella isolates at the species level. Three fragments of 587 bp, 1071 bp and 1682 bp were amplified, indicating Brucella melitensis. The results indicate the importance of using several procedures to overcome the problem of infected animals escaping diagnosis. Bruce-ladder PCR is an important tool for diagnosis and epidemiologic studies, providing relevant information for identification of Brucella spp. Keywords: brucellosis, relative sensitivity, relative specificity, Bruce-ladder, Egypt
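The four reported metrics follow directly from a 2x2 contingency table of each serological test against the reference classification. A short Python sketch, with purely illustrative counts rather than the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Relative sensitivity, relative specificity, and predictive values
    from true/false positives and negatives against the reference test."""
    return {
        "sensitivity": tp / (tp + fn),  # positives detected among infected
        "specificity": tn / (tn + fp),  # negatives detected among free
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only (not the study's data):
metrics = diagnostic_metrics(tp=90, fp=10, fn=10, tn=890)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence in the sampled population, which is why they are reported alongside the overall seroprevalence.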
10 Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance
Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi
Abstract:
This paper explores a kinetic building facade designed for optimal energy capture and architectural expression. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. The growing interest in dynamic building envelopes necessitates the exploration of such facade systems. Integrating photovoltaic (PV) panels as kinetic elements offers the potential benefits of increased energy generation and regulation of energy flow within buildings. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, the design leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential for employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was used experimentally to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, three-dimensional (3D) printing, and laser cutting, were utilized to fabricate physical components. A modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of this facade system to an existing library building at the Polytechnic University of Milan is presented. The system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable net structure. 
Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted to the limitations of a 3D printer. Silicone Rubber Sil 15 was used for casting because of its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. This work demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, this approach paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability. Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building
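The LLM-assisted scripting step the abstract describes, generation of a random surface with user-defined parameters, can be sketched outside Grasshopper in plain Python. The sinusoidal height function and parameter names below are illustrative assumptions, not the project's actual script; in GH the returned points would feed a NURBS surface-fitting component:

```python
import math
import random

def random_surface(nu, nv, amplitude=1.0, waves=3, seed=None):
    """Generate a (nu x nv) grid of (u, v, z) points whose heights are a
    sum of randomly phased sinusoids, bounded by `amplitude`."""
    rng = random.Random(seed)
    # Random frequency and phase per wave component.
    params = [(rng.uniform(0.5, 2.0), rng.uniform(0.0, 2.0 * math.pi))
              for _ in range(waves)]
    pts = []
    for i in range(nu):
        for j in range(nv):
            u, v = i / (nu - 1), j / (nv - 1)
            z = sum(amplitude / waves * math.sin(2.0 * math.pi * f * (u + v) + p)
                    for f, p in params)
            pts.append((u, v, z))
    return pts
```

Seeding the generator makes the "random" surface reproducible across design iterations, which matters when fabrication files are regenerated.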
9 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method
Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek
Abstract:
Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov’s scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes on an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It extracts from the mesh file the minimum amount of information that the DSEM code requires to start in parallel and stores it in files (pre-files). It packs integer-type information in a stream binary format in the pre-files, which are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in a way that each MPI rank acquires its information from the file in parallel. 
In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is specifically generated for that node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is well suited for GPFS and that parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for calculation of the solution in every time step. For this, the code can make use of either its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing. Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow
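Portable integer pre-files of the kind described above can be sketched with Python's struct module. The record layout below (an array count, then length-prefixed little-endian 32-bit arrays) is an assumption for illustration, not the DSEM code's actual format; fixing the byte order explicitly is what keeps the files portable between the local machine and the cluster:

```python
import struct

def write_prefile(path, int_lists):
    """Pack pre-processing integer arrays (e.g. element info, face
    lists, partitioning) as little-endian 32-bit stream binary."""
    with open(path, "wb") as fh:
        fh.write(struct.pack("<I", len(int_lists)))        # array count
        for arr in int_lists:
            fh.write(struct.pack("<I", len(arr)))          # array length
            fh.write(struct.pack(f"<{len(arr)}i", *arr))   # payload

def read_prefile(path):
    """Read back the arrays written by write_prefile."""
    with open(path, "rb") as fh:
        (n,) = struct.unpack("<I", fh.read(4))
        out = []
        for _ in range(n):
            (m,) = struct.unpack("<I", fh.read(4))
            out.append(list(struct.unpack(f"<{m}i", fh.read(4 * m))))
    return out
```

In the parallel read, each rank (Lustre) or each node-local reader rank (GPFS) would seek to its own offset within such a file rather than reading it whole.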
8 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports
Authors: Carolyn Hatch, Laurent Simon
Abstract:
Urban mobility and the transportation industry are undergoing a transformation, shifting from an auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This is shaped by key forces such as climate change, which has induced a shift in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification and mobility sharing [2]. Advanced innovation practices and platforms for experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented are shaping this new world of mobility. Transportation hubs – such as airports - are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice. 
The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation and innovation ecosystems in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including such examples as Aéroports de Paris, Schiphol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with Airports Council International (ACI), a major stakeholder in this research project. Preliminary results find that the most successful airports are those that have shifted to a multi-stakeholder, platform ecosystem model of innovation. The recent entrance of new actors into airports (Google, Amazon, Accor, Vinci, Airbnb and others) has forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation and collective ideation processes. Leading airports in the case study have demonstrated a unique capacity to force traditionally siloed activities to “think together”, “explore together” and “act together”, to share data, contribute expertise and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and forced their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental and economic impacts. 
This research has implications for: 1 - innovation theory, 2 - urban and transport policy, and 3 - organizational practice - within the mobility industry and across the economy. Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs
7 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques
Authors: Stefan K. Behfar
Abstract:
The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. 
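As a toy illustration of the probabilistic sampling step, a Bernoulli edge sample of the transaction graph can be written in a few lines of Python. The uniform keep-probability below is a simplification of whatever stratified scheme the full methodology uses; the point is that a fixed seed makes the subsample reproducible while its degree statistics estimate those of the full network:

```python
import random

def sample_transaction_graph(edges, keep=0.1, seed=0):
    """Keep each edge of the temporal transaction graph independently
    with probability `keep`, returning the induced node set and the
    sampled edge list."""
    rng = random.Random(seed)
    sampled = [e for e in edges if rng.random() < keep]
    nodes = {n for e in sampled for n in e}
    return nodes, sampled
```

The sampled subgraph is what would then be handed to the GCN stage or sharded across Spark/Hadoop workers for parallel analysis.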
Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application. Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing
6 An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles
Authors: George Charkoftakis, Panagiotis Liosatos, Nicolas-Alexander Tatlas, Dimitrios Goustouridis, Stelios M. Potirakis
Abstract:
E-maintenance is a relatively new concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. Specifically for the case of a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This is clear if one takes into account that a heavy-vehicle fleet assortment may range from vehicles with only a limited number of analog sensors monitored by dashboard light indicators and gauges to vehicles with a plethora of sensors monitored by a vehicle computer producing digital reporting. The present work proposes an adaptable Internet of Things (IoT) sensor node that is capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular single-board computer – expansion boards approach. In the proposed solution, the expansion boards undertake the tasks of position identification by means of a global navigation satellite system (GNSS), cellular connectivity by means of a 3G/long-term evolution (LTE) modem, connectivity to on-board diagnostics (OBD), and connectivity to analog and digital sensors by means of a novel expansion board design. Specifically, the latter provides eight analog plus three digital sensor channels, as well as one on-board temperature/relative humidity sensor. The specific device offers a number of adaptability features based on appropriate zero-ohm resistor placement and appropriate value selection for a limited number of passive components. 
For example, although in the standard configuration four voltage analog channels with constant voltage sources for the power supply of the corresponding sensors are available, up to two of these voltage channels can be converted to provide power to the connected sensors by means of corresponding constant current source circuits, whereas all parameters of the analog sensor power supply and matching circuits are fully configurable, offering the advantage of covering a wide variety of industrial sensors. Note that a key feature of the proposed sensor node, ensuring the reliable operation of the connected sensors, is the appropriate supply of external power to the connected sensors and their proper matching to the IoT sensor node. In standard mode, the IoT sensor node communicates with the data center through 3G/LTE, transmitting all digital/digitized sensor data, the IoT device identity, and position. Moreover, the proposed IoT sensor node offers WiFi connectivity to mobile devices (smartphones, tablets) equipped with an appropriate application for the manual registration of vehicle- and driver-specific information, and these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware, programmed in a high-level language (Python) on top of a modern operating system (Linux). Acknowledgment: This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH—CREATE—INNOVATE (project code: T1EDK-01359, IntelligentLogger). Keywords: IoT sensor nodes, e-maintenance, single-board computers, sensor expansion boards, on-board diagnostics
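One way to picture the configurable analog channels is a small Python data model mapping raw ADC counts back to sensor values. The 12-bit resolution, 3.3 V reference, and linear gain/offset model below are illustrative assumptions, not the board's actual specification (and since the firmware is Python on Linux, a model like this would sit naturally in it):

```python
from dataclasses import dataclass

@dataclass
class AnalogChannel:
    """One of the eight analog inputs: sensor supply mode plus the
    scaling that maps a raw ADC reading back to a physical voltage."""
    supply: str          # "voltage" or "current" sensor power supply
    adc_bits: int = 12   # assumed ADC resolution
    v_ref: float = 3.3   # assumed ADC reference voltage
    gain: float = 1.0    # matching-circuit gain
    offset: float = 0.0  # matching-circuit offset

    def to_physical(self, raw):
        """Convert a raw ADC count to the sensor-side voltage."""
        volts = raw / (2 ** self.adc_bits - 1) * self.v_ref
        return volts / self.gain + self.offset
```

Per-channel instances of such a model would encode the zero-ohm-resistor and passive-component choices made when the board is adapted to a particular vehicle's sensors.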
5 Case Report of a Secretory Carcinoma of the Salivary Gland: Clinical Management Following High-Grade Transformation
Authors: Wissam Saliba, Mandy Nicholson
Abstract:
Secretory carcinoma (SC) is a rare type of salivary gland cancer. It was first recognized as a distinct type of malignancy in 2010 and was initially termed “mammary analogue secretory carcinoma” because of similarities with secretory breast cancer. The name was later changed to SC. Most SCs originate in the parotid glands, and most harbour a rare gene fusion, ETV6-NTRK3. This mutation is rare in common cancers and common in rare cancers; it is present in most secretory carcinomas. Disease outcomes for SC are usually described as favourable, as many cases of SC are low grade (LG) and cancer growth is slow. In early stages, localized therapy is usually indicated (surgery and/or radiation). Despite a favourable prognosis, a subset of cases can be much more aggressive. These cases tend to be of high grade (HG) and are associated with a poorer prognosis. Management of such cases can be challenging due to limited evidence for effective systemic therapy options. This case report describes the clinical management of a 46-year-old male patient with a unique case of SC. He was initially diagnosed with a low/intermediate grade carcinoma of the left parotid gland in 2009; he was treated with surgery and adjuvant radiation. Surgical pathology favoured primary salivary adenocarcinoma, and 2 lymph nodes were positive for malignancy. SC was not yet recognized as a distinct type of cancer at the time of diagnosis, and the pathology report reflected this gap by stating that the specimen lacked features of the defined types of salivary carcinoma. Slow-growing pulmonary nodules were identified in 2017. In 2020, approximately 11 years after the initial diagnosis, the patient presented with malignant pleural effusion. Pathology from a pleural biopsy was consistent with metastatic poorly differentiated cancer of likely parotid origin, likely mammary analogue secretory carcinoma. 
The specimen was sent for Next Generation Sequencing (NGS); ETV6-NTRK3 gene fusion was confirmed, and systemic therapy was initiated. One cycle of carboplatin/paclitaxel was given in June 2020. The patient was switched to Larotrectinib, an NTRK inhibitor (NTRKi), later that month. Larotrectinib was continued for approximately 9 months, with discontinuation in March 2021 due to disease progression. A second-generation NTRKi (Selitrectinib) was accessed and prescribed through a single-patient study. Selitrectinib was well tolerated, and the patient experienced a complete radiological response within ~4 months. Disease progression occurred once again in October 2021. Progression was slow, and Selitrectinib was continued while the medical team performed a thorough search for additional treatment options. In January 2022, a liver lesion biopsy was performed, and NGS showed an NTRK G623R solvent-front resistance mutation. Various treatment pathways were considered. The patient pursued another investigational NTRKi through a clinical trial, and Selitrectinib was discontinued in July 2022. Excellent performance status was maintained throughout the entire course of treatment. It can be concluded that NTRK inhibitors provided satisfactory treatment efficacy and tolerance for this patient with high-grade transformation and NTRK gene fusion cancer. In the future, more clinical research is needed on systemic treatment options for high-grade transformations in NTRK gene fusion SCs. Keywords: secretory carcinoma, high-grade transformations, NTRK gene fusion, NTRK inhibitor
4 Closing down the Loop Holes: How North Korea and Other Bad Actors Manipulate Global Trade in Their Favor
Authors: Leo Byrne, Neil Watts
Abstract:
In the complex and evolving landscape of global trade, maritime sanctions emerge as a critical tool wielded by the international community to curb illegal activities and alter the behavior of non-compliant states and entities. These sanctions, designed to restrict or prohibit trade by sea with sanctioned jurisdictions, entities, or individuals, face continuous challenges due to the sophisticated evasion tactics employed by countries like North Korea. As the Democratic People's Republic of Korea (DPRK) diverts significant resources to circumvent these measures, understanding the nuances of its methodologies becomes imperative for maintaining the integrity of global trade systems. The DPRK, one of the most sanctioned nations globally, has developed an intricate network to facilitate its trade in illicit goods, ensuring the flow of revenue from designated activities continues unabated. Given its geographic and economic conditions, North Korea predominantly relies on maritime routes, utilizing foreign ports to route its illicit trade. This reliance on the sea is exploited through various sophisticated methods, including the use of front companies, falsification of documentation, commingling of bulk cargos, and physical alterations to vessels. These tactics enable the DPRK to navigate through the gaps in regulatory frameworks and lax oversight, effectively undermining international sanctions regimes. Maritime sanctions carry significant implications for global trade, imposing heightened risks in the maritime domain. The deceptive practices employed not only by the DPRK but also by other high-risk jurisdictions necessitate a comprehensive understanding of UN targeted sanctions. For stakeholders in the maritime sector—including maritime authorities, vessel owners, shipping companies, flag registries, and financial institutions serving the shipping industry—awareness and compliance are paramount. 
Violations can lead to severe consequences, including reputational damage, sanctions, hefty fines, and even imprisonment. To mitigate risks associated with these deceptive practices, it is crucial for maritime sector stakeholders to employ rigorous due diligence and regulatory compliance screening measures. Effective sanctions compliance serves as a protective shield against legal, financial, and reputational risks, preventing exploitation by international bad actors. This requires not only a deep understanding of the sanctions landscape but also the capability to identify and manage risks through informed decision-making and proactive risk management practices. As the DPRK and other sanctioned entities continue to evolve their sanctions evasion tactics, the international community must enhance its collective efforts to demystify and counter these practices. By leveraging more stringent compliance measures, stakeholders can safeguard against the illicit use of the maritime domain, reinforcing the effectiveness of maritime sanctions as a tool for global security. This paper seeks to dissect North Korea's adaptive strategies in the face of maritime sanctions. By examining up-to-date, geographically, and temporally relevant case studies, it aims to shed light on the primary nodes through which Pyongyang evades sanctions and smuggles goods via third-party ports. The goal is to propose multi-level interaction strategies, ranging from governmental interventions to localized enforcement mechanisms, to counteract these evasion tactics. Keywords: maritime, maritime sanctions, international sanctions, compliance, risk
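A first-pass screening check of the kind described above can be sketched in Python: matching a vessel against a designation list both by IMO number, which survives renaming and reflagging, and by normalized name, so that cosmetic re-spellings do not defeat a match. The data layout is hypothetical; real compliance screening adds fuzzy matching, beneficial-ownership chains, and AIS behavior analysis:

```python
def normalize(name):
    """Collapse case, punctuation, and spacing so cosmetic renamings
    ("EXAMPLE-STAR" vs "Example Star") still match."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def screen_vessel(name, imo, designated):
    """Return all designation-list entries hit by this vessel, keyed on
    IMO number or normalized name. `designated` is a hypothetical list
    of dicts with "name" and "imo" fields."""
    return [d for d in designated
            if d["imo"] == imo or normalize(d["name"]) == normalize(name)]
```

An empty result clears the vessel for this (deliberately simplistic) check; any hit would trigger manual due diligence.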
3 Metal-Organic Frameworks-Based Materials for Volatile Organic Compounds Sensing Applications: Strategies to Improve Sensing Performances
Authors: Claudio Clemente, Valentina Gargiulo, Alessio Occhicone, Giovanni Piero Pepe, Giovanni Ausanio, Michela Alfè
Abstract:
Volatile organic compound (VOC) emissions represent a serious risk to human health and the integrity of ecosystems, especially at high concentrations. For this reason, it is very important to continuously monitor environmental quality and to develop fast and reliable portable sensors that allow on-site analysis. Chemiresistors have become promising candidates for VOC sensing owing to their ease of fabrication, the variety of suitable sensitive materials, and the simplicity of their sensing data. A chemoresistive gas sensor is a transducer that measures the concentration of an analyte in the gas phase, since its change in resistance is proportional to the amount of the analyte present. The selection of the sensitive material, which interacts with the target analyte, is very important for the sensor performance. The most widely used VOC detection materials are metal oxides (MOx), chosen for their rapid recovery, high sensitivity to various gas molecules, and easy fabrication. Their sensing performance can still be improved in terms of operating temperature, selectivity, and detection limit. Metal-organic frameworks (MOFs) have also attracted a lot of attention in the field of gas sensing due to their high porosity, high surface area, tunable morphologies, and structural variety. MOFs are generated by the self-assembly of multidentate organic ligands connecting with adjacent multivalent metal nodes via strong coordination interactions, producing stable and highly ordered crystalline porous materials with well-designed structures. However, most MOFs intrinsically exhibit low electrical conductivity. To improve this property, MOFs can be combined with organic and inorganic materials in a hybrid fashion to produce composite materials, or can be transformed into more stable structures. MOFs, indeed, can be employed as the precursors of metal oxides with well-designed architectures via the calcination method. 
The MOF-derived MOx partially preserve the original structure, with high surface area and intrinsic open pores that act as trapping centers for gas molecules, and show higher electrical conductivity. Core-shell heterostructures, in which the surface of a metal oxide core is completely coated by a MOF shell, forming a junction at the core-shell heterointerface, can also be synthesized. Nanocomposites in which MOF structures are intercalated with graphene-related materials can be produced as well; their conductivity increases thanks to the high electron mobility of the carbon materials. As MOF structures, zinc-based MOFs belonging to the ZIF family were selected in this work. Several Zn-based materials, based on and/or derived from MOFs, were produced, structurally characterized, and arranged in a chemoresistive architecture, also exploring the potential of different sensing-layer deposition approaches based on PLD (pulsed laser deposition) and, in the case of thermally labile materials, MAPLE (matrix-assisted pulsed laser evaporation) to enhance adhesion to the support. The sensors were tested in a controlled-humidity chamber that allowed the concentration of ethanol, a typical analyte chosen among the VOCs for a first survey, to be varied. The effect of heating the chemiresistor to improve sensing performance was also explored. Future research will focus on exploring new manufacturing processes for MOF-based gas sensors with the aim of improving sensitivity and selectivity and reducing operating temperatures.
Keywords: chemiresistors, gas sensors, graphene related materials, laser deposition, MAPLE, metal-organic frameworks, metal oxides, nanocomposites, sensing performance, transduction mechanism, volatile organic compounds
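The transduction principle described in the abstract, a resistance change proportional to the analyte amount, can be sketched in a few lines. The response definition and the linear calibration below are common conventions for chemiresistors, not details taken from this work; the sensitivity constant is a hypothetical calibration value.

```python
def sensor_response(r_air: float, r_gas: float) -> float:
    """Relative resistance change of a chemiresistor.

    r_air: baseline resistance in clean (humid) air, in ohms.
    r_gas: resistance on exposure to the analyte, in ohms.
    """
    return (r_gas - r_air) / r_air


def estimate_concentration(response: float, sensitivity: float) -> float:
    """Invert a linear calibration: response = sensitivity * concentration.

    sensitivity (response per ppm) is a hypothetical constant obtained,
    e.g., from exposures at known ethanol concentrations.
    """
    return response / sensitivity


# Example: baseline 100 kOhm, 112 kOhm under ethanol, sensitivity 0.002 / ppm.
resp = sensor_response(100e3, 112e3)
conc = estimate_concentration(resp, 0.002)
```

In practice, a calibration curve would be fitted over several known concentrations rather than assumed strictly linear, and the baseline would be re-measured to track drift.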
Procedia PDF Downloads 642
Salmon Diseases Connectivity between Fish Farm Management Areas in Chile
Authors: Pablo Reche
Abstract:
Since the 1980s, aquaculture has become the largest economic activity in southern Chile, with Salmo salar and Oncorhynchus mykiss as the main finfish species. High fish density makes both species prone to contracting diseases, which causes large losses to the industry and greatly affects the local economy. The three most concerning infective agents are the infectious salmon anemia virus (ISAv), the bacterium Piscirickettsia salmonis, and the copepod Caligus rogercresseyi. To regulate the industry, the government arranged the salmon farms into management areas named barrios, which coordinate the fallowing periods and antibiotic treatments of their salmon farms. In turn, barrios are gathered into larger management areas, named macrozonas, whose purpose is to minimize the risk of disease transmission between them and to contain outbreaks within their boundaries. However, disease outbreaks still happen, and transmission to neighboring sites enlarges the initial event. Salmon disease agents are mostly transported passively by local currents; thus, to understand how transmission occurs, the physical environment must be studied first. In Chile, salmon farming takes place in the inner seas of the southernmost regions of western Patagonia, between 41.5ºS and 55ºS. This coastal marine system is characterised by westerly winds, latitudinally modulated by the position of the South-East Pacific high-pressure centre, high precipitation rates, and freshwater inflows from the numerous glaciers (including the largest ice cap outside Antarctica and Greenland). All of these forcings meet in a complex bathymetry and coastline system - deep fjords, shallow sills, narrow straits, channels, archipelagos, inlets, and isolated inner seas - driving an estuarine circulation (fast surface outflows westwards and slow deeper inflows eastwards). 
Such a complex system is modelled with the numerical model MIKE3, upon whose 3D current fields particle-track-biological models (one for each infective agent) are run in a decoupled fashion. Each agent's biology is parameterized by functions for maturation and mortality (reproduction not included). These parameterizations depend upon environmental factors, such as temperature and salinity, so the lifespan of the virtual agents will depend upon the environmental conditions they encounter on their way while passively transported. CLIC (Connectivity-Lagrangian-IFOP-Chile) is a service platform that supports the graphical visualization of the connectivity matrices calculated from the particle trajectory files produced by the particle-track-biological models. On CLIC, users can select, from a high-resolution grid (~1 km), the areas between which connectivity will be calculated; these areas can be barrios or macrozonas. Users can also select which nodes of these areas are allowed to release and scatter particles, the depth and frequency of the initial particle release, the climatic scenario (winter/summer), and the type of particle (ISAv, Piscirickettsia salmonis, or Caligus rogercresseyi, plus an option for lifeless particles). Results include downstream probabilities (where the particles go) and upstream probabilities (where the particles come from), particle age, and vertical distribution, all of them aiming to understand how connectivity currently works, in order to eventually propose a minimum-risk zonation for aquaculture purposes. Preliminary results in the Chiloe inner sea show that the risk depends not only upon dynamic conditions but also upon the barrios' locations with respect to their neighbors.
Keywords: aquaculture zonation, Caligus rogercresseyi, Chilean Patagonia, coastal oceanography, connectivity, infectious salmon anemia virus, Piscirickettsia salmonis
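The downstream connectivity matrices described above can be sketched as a weighted count of particle endpoints. This is a minimal illustration, not CLIC's actual implementation: the area labels, the exponential survival weighting, and the mortality rate are assumptions standing in for the model's temperature- and salinity-dependent biological parameterizations.

```python
import math
from collections import defaultdict


def downstream_connectivity(trajectories, areas, mortality_rate=0.1):
    """Build a downstream connectivity matrix from particle endpoints.

    trajectories: list of (source_area, destination_area, age_in_days).
    Each particle contributes a survival weight exp(-mortality_rate * age)
    to its destination; rows are normalized by the number of particles
    released, so row sums below 1 reflect mortality losses in transit.
    Returns P[i][j]: probability that a particle released in area i
    arrives (alive) in area j.
    """
    weight = defaultdict(float)  # (i, j) -> summed survival weight
    released = defaultdict(int)  # i -> number of particles released
    for src, dst, age in trajectories:
        weight[(src, dst)] += math.exp(-mortality_rate * age)
        released[src] += 1
    return {i: {j: (weight[(i, j)] / released[i] if released[i] else 0.0)
                for j in areas}
            for i in areas}


# Toy example with two barrios: three particles released from A, two from B.
areas = ["A", "B"]
traj = [("A", "A", 1.0), ("A", "B", 2.0), ("A", "B", 2.0),
        ("B", "B", 1.0), ("B", "A", 3.0)]
down = downstream_connectivity(traj, areas)
```

The upstream view (where the particles arriving in an area come from) follows from the same endpoint counts by normalizing over sources instead of destinations.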
Procedia PDF Downloads 157