Search results for: civil structures
252 Influence of Packing Density of Layers Placed in Specific Order in Composite Nonwoven Structure for Improved Filtration Performance
Authors: Saiyed M Ishtiaque, Priyal Dixit
Abstract:
Objectives: An approach is suggested to design filter media that maximize filtration efficiency at the minimum possible pressure drop by incorporating layers of different packing densities, induced by fibres of different deniers and by punching parameters, using a sequential punching technique to place the layers in a specific order in a layered composite nonwoven structure. X-ray computed tomography is used to measure the packing density along the thickness of the layered structure, composed of layers of differently oriented fibres in various combinations. Methodology Used: Needle-punched layered structures were prepared from batts of 100 g/m² basis weight, with fibre denier, punch density and needle penetration depth as variables, to produce nonwoven composites of 300 g/m² basis weight. For the first set of experiments, batts made of fibres of different deniers, each of 100 g/m² basis weight, were placed in various combinations to develop layered nonwoven fabrics. For the second set, composite nonwoven fabrics were prepared from 3 denier circular cross-section polyester fibre of 64 mm length on a needle-punching machine using the sequential punching technique. In this technique, three semi-punched fabrics of 100 g/m² each, having either different punch densities or different needle penetration depths, were prepared in the first phase of fabric preparation.
These fabrics were later punched together to obtain an overall basis weight of 300 g/m². The total punch density of the composite nonwoven fabric was kept at 200 punches/cm² with a needle penetration depth of 10 mm. The layered structures so formed were subcategorised into two groups: homogeneous layered structures, in which all three batts comprising the nonwoven fabric were made with the same fibre denier, punch density and needle penetration depth and were placed in different positions in the respective fabric, and heterogeneous layered structures, in which the batts were made from fibres of different deniers, punch densities and needle penetration depths and were placed in different positions. Contributions: The results showed that the reduction in pressure drop is not governed by the overall packing density of the layered nonwoven fabric; rather, the sequencing of layers of specific packing density in the layered structure decides the pressure drop. Accordingly, creating an inverse gradient of packing density in the layered structure provided the maximum filtration efficiency with the least pressure drop. This study paves the way for customising composite nonwoven fabrics, by incorporating differently oriented fibres in the constituent layers induced by the considered variables, for desired filtration properties.
Keywords: filtration efficiency, layered nonwoven structure, packing density, pressure drop
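As a rough check on how layer packing density drives pressure drop, the clean-filter resistance of each layer can be estimated with Davies' empirical correlation for fibrous media and summed in series. This is a minimal sketch under stated assumptions: Davies' correlation, clean non-loaded layers, and illustrative solidities, fibre diameters and face velocity that are not given in the abstract. Note that a simple series model is order-independent, which is exactly why the order effect reported above must come from depth-loading and interface effects that this sketch ignores.

```python
import math

def davies_dp(solidity, thickness_m, fibre_d_m, velocity=0.1, mu=1.81e-5):
    """Clean-filter pressure drop (Pa) of one fibrous layer, Davies' correlation:
    dp = 64 * mu * U * t * c**1.5 * (1 + 56 * c**3) / d_f**2
    """
    c = solidity
    return 64.0 * mu * velocity * thickness_m * c**1.5 * (1 + 56 * c**3) / fibre_d_m**2

# Three hypothetical 1 mm layers whose packing densities are induced by different deniers
layers = [(0.06, 1e-3, 20e-6), (0.09, 1e-3, 15e-6), (0.12, 1e-3, 12e-6)]

dp_total = sum(davies_dp(c, t, d) for c, t, d in layers)
dp_reversed = sum(davies_dp(c, t, d) for c, t, d in reversed(layers))
print(f"series pressure drop: {dp_total:.1f} Pa "
      f"(order-independent: {math.isclose(dp_total, dp_reversed)})")
```

The order-independence printed above holds only for this idealised clean-flow model; the experimental finding that sequencing matters shows the real structure behaves as more than a series of independent resistances.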
Procedia PDF Downloads 76
251 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR
Authors: Ionut Vintu, Stefan Laible, Ruth Schulz
Abstract:
Agricultural robotics has been developing steadily over recent years, with the goal of reducing and even eliminating pesticide use in crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots capable of driving in fields and performing crop-handling tasks is for the robots to robustly detect the rows of plants. Recent work towards autonomous driving between plant rows offers large robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants, an approach that lacks flexibility and scalability with respect to plant height or row spacing. This paper instead proposes an algorithm that makes use of cheaper sensors and is more widely applicable. The main application is in tree nurseries, where plant height can range from a few centimeters to a few meters and where trees are often removed, leaving gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as used in SLAM. Nodes in the graph represent the estimated poses of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and to handle exceptions such as row gaps, which are otherwise falsely detected as the end of a row. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing field coverage and the number of damaged plants, the method that uses a local map around the robot performed best, with 68% covered rows and 25% damaged plants. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot’s position within the greater field.
Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization boosts performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% damaged plants. Future work will focus on reaching 100% covered rows and 0% damaged plants. The main challenges the algorithm needs to overcome are fields where the plants are too small to be detected and fields where it is hard to distinguish between individual plants that overlap. The method was also tested on a real robot in a small field with artificial plants, using a small robot platform equipped with wheel encoders, an IMU and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be extended to integrate data from additional sensors, with the goal of achieving even better results.
Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection
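The core idea above — poses as graph nodes, odometry and landmark observations as edge constraints — can be illustrated with a toy one-dimensional pose graph solved by linear least squares. Everything here (the measurement values, the single tree landmark, unit weighting of edges) is invented for illustration; a real implementation would use a 2D/3D SLAM back end such as g2o or GTSAM.

```python
import numpy as np

# Unknowns: poses x1, x2 along the row (x0 is fixed at the origin as a prior).
# Each graph edge contributes one residual row of J @ [x1, x2] = z:
J = np.array([
    [ 1.0, 0.0],   # odometry edge: x1 - x0 = 1.1
    [-1.0, 1.0],   # odometry edge: x2 - x1 = 0.9
    [ 0.0, 1.0],   # landmark edge: tree at 3.0 observed 1.05 ahead of x2 -> x2 = 1.95
])
z = np.array([1.1, 0.9, 1.95])

# Least-squares solution is the maximum-likelihood compromise between the edges
x1, x2 = np.linalg.lstsq(J, z, rcond=None)[0]
print(f"x1 = {x1:.4f}, x2 = {x2:.4f}")
```

The landmark edge pulls the pose estimates away from the raw odometry (which alone would give x2 = 2.0), which is how observed trees correct drift and disambiguate falsely detected row ends.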
Procedia PDF Downloads 140
250 Urban Heat Islands Analysis of Matera, Italy Based on the Change of Land Cover Using Satellite Landsat Images from 2000 to 2017
Authors: Giuseppina Anna Giorgio, Angela Lorusso, Maria Ragosta, Vito Telesca
Abstract:
Climate change is a major public health threat due to the effects of extreme weather events on human health and on quality of life in general. In this context, mean temperatures are increasing, and in particular extreme temperatures, with heat waves becoming more frequent, more intense and longer lasting. In many cities, extreme heat waves have drastically increased, giving rise to the so-called Urban Heat Island (UHI) phenomenon. In an urban centre, maximum temperatures may be up to 10 °C warmer, due to different local atmospheric conditions. UHI occurs in metropolitan areas as a function of the population size and density of a city and consists of a significant temperature difference compared to rural/suburban areas. Increasing industrialization and urbanization have intensified this phenomenon, which has recently also been detected in small cities. Weather conditions and land use are among the key parameters in the formation of UHI; in particular, the surface urban heat island is directly related to temperatures, land surface types and surface modifications. The present study concerns a UHI analysis of the city of Matera (Italy) based on the analysis of temperature and of changes in land use and land cover, using Corine Land Cover maps and satellite Landsat images. Matera, located in Southern Italy, has a typical Mediterranean climate with mild winters and hot, humid summers. Moreover, Matera has been awarded the international title of 2019 European Capital of Culture and represents a significant example of vernacular architecture: the structure of the city is articulated by a vertical succession of dug layers, sometimes excavated or partly excavated and partly built, according to the original shape and height of the calcarenitic slope. In this study, two meteorological stations were selected: MTA (MaTera Alsia, in the industrial zone) and MTCP (MaTera Civil Protection, a suburban area located in a green zone).
In order to evaluate the increase in temperatures (in terms of UHI occurrences) over time and the effect of land use on weather conditions, the climate variability of temperatures at both stations was explored. Results show that the UHI phenomenon is growing in the city of Matera, with an increase of maximum temperature values at the local scale. Subsequently, a spatial analysis was conducted using Landsat satellite images. Four summer dates were selected (27/08/2000, 27/07/2006, 11/07/2012, 02/08/2017): Landsat 7 ETM+ for 2000, 2006 and 2012, and Landsat 8 OLI/TIRS for 2017. In order to estimate the LST, the Mono Window Algorithm was applied. The increasing trend of LST values at the spatial scale was thus verified, in accordance with the results obtained at the local scale. Finally, the analysis of land use maps over the years against the LST and/or the maximum temperatures measured shows that the development of the industrialized area produces a corresponding increase in temperatures and consequently a growth in UHI.
Keywords: climate variability, land surface temperature, LANDSAT images, urban heat island
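The LST retrieval chain used above (thermal-band digital numbers → top-of-atmosphere radiance → brightness temperature → emissivity-corrected LST) can be outlined as below. The rescaling and calibration constants are the typical Landsat 8 TIRS band 10 metadata values, and the final step is a simplified single-channel emissivity correction rather than the full Mono Window Algorithm, which additionally requires atmospheric transmittance and an effective mean atmospheric temperature; the emissivity of 0.96 and the pixel DN are assumed values for illustration.

```python
import math

def toa_radiance(dn, ml=3.342e-4, al=0.1):
    """DN -> top-of-atmosphere spectral radiance, Landsat 8 band 10 rescaling factors."""
    return ml * dn + al

def brightness_temp(radiance, k1=774.8853, k2=1321.0789):
    """Radiance -> at-sensor brightness temperature (K) via the inverse Planck relation."""
    return k2 / math.log(k1 / radiance + 1.0)

def lst_single_channel(bt_k, emissivity=0.96, wavelength_um=10.895):
    """Brightness temperature -> LST (K), simplified single-channel emissivity correction."""
    rho = 1.438e-2  # h * c / k_B in m*K
    return bt_k / (1.0 + (wavelength_um * 1e-6 * bt_k / rho) * math.log(emissivity))

bt = brightness_temp(toa_radiance(30000))  # a plausible urban-pixel DN
lst = lst_single_channel(bt)
print(f"BT = {bt:.1f} K, LST = {lst:.1f} K")
```

Since emissivity is below 1, the corrected LST comes out a few kelvin above the at-sensor brightness temperature, which is the direction of correction expected over built-up surfaces.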
Procedia PDF Downloads 126
249 A Challenge to Conserve Moklen Ethnic House: Case Study in Tubpla Village, Phang Nga Province, Southern Thailand
Authors: M. Attavanich, H. Kobayashi
Abstract:
Moklen is an ethnic minority sub-group in Thailand. In the past, they were nomads of the sea: their livelihood relied on the sea, and they built temporary shelters to avoid strong wind and waves during the monsoon season. More recently, they have settled permanently on land along the coastal areas and mangrove forests of Phang Nga and Phuket Province, Southern Thailand. The Moklen people have their own housing culture: the Moklen ethnic house is built from local natural materials and shows a unique structure and design. Its wooden structure is joined by rattan ropes, and the construction process is very distinctive in its use of body-based units of measurement for design and construction. However, these unique structures face several threats. One of the most important is the tsunami hazard. The 2004 Indian Ocean Tsunami in particular caused widespread damage in Southern Thailand, with Phang Nga province the most affected area; Moklen villages located along the coast were also severely affected. In order to recover the damage in the affected villages, mostly new modern-style houses were provided by aid agencies, a process that has had a significant impact on Moklen housing culture. Not only the tsunami but also modernization has influenced the changing appearance of Moklen houses, an effect that had already begun before the tsunami. As a result, local construction knowledge is very limited nowadays, because the number of Moklen elders has been decreasing drastically. Last but not least, restrictions on construction materials, which were traditionally obtained from accessible mangroves, create limitations in building a Moklen house; in particular, after the Reserved Forest Act, wood chopping without permission became illegal. These are some of the most important reasons why Moklen ethnic houses are disappearing.
Nevertheless, according to the results of field surveys conducted in 2013 in Phang Nga province, some Moklen ethnic houses are still available in Tubpla Village, although only a few. The next survey in the same area, in 2014, showed that the number of Moklen houses in the village had started to increase significantly, which indicates a high potential to conserve Moklen houses. The project of our research team in February 2014 also contributed to the continuation of the Moklen ethnic house: with the cooperation of the village leader and our team, a Moklen house was constructed with the help of local participants. For the project, villagers shared their building knowledge and techniques, and in the end the project helped the community to understand the value of their houses. It was also a good opportunity for Moklen children to learn about their culture. In addition, NGOs have recently started to support ecotourism projects in the village, which not only helps to preserve a way of life but also contributes to preserving the indigenous knowledge and techniques of the Moklen ethnic house. Such supporting activities are important for the conservation of Moklen ethnic houses.
Keywords: conservation, construction project, Moklen Ethnic House, 2004 Indian Ocean tsunami
Procedia PDF Downloads 309
248 Investigations on the Fatigue Behavior of Welded Details with Imperfections
Authors: Helen Bartsch, Markus Feldmann
Abstract:
The dimensioning of steel structures subject to fatigue loads, such as wind turbines, bridges, masts and towers, crane runways and weirs, or components in crane construction, is often dominated by fatigue verification. The fatigue details defined by the welded connections, such as butt or cruciform joints, longitudinal welds, welded-on or welded-in stiffeners, etc., are decisive. In Europe, the verification is usually carried out according to EN 1993-1-9 on a nominal stress basis. The basis is the detail catalog, which specifies the fatigue strength of the various weld and construction details according to fatigue classes. Until now, a relation between fatigue classes and weld imperfection sizes has not been included. Quality levels for imperfections in fusion-welded joints in steel, nickel, titanium and their alloys are regulated in EN ISO 5817, which, however, does not contain direct correlations to fatigue resistances. The question arises whether some imperfections might be tolerable to a certain extent, since they may be present in the test data used for the detail classifications dating back decades. Although current standardization requires proof that limits on imperfection sizes are satisfied, it would also be possible to tolerate welds with certain irregularities if these can be reliably quantified by non-destructive testing. Fabricators would be prepared to undertake careful and sustained weld inspection in view of the significant economic consequences of unfavorable fatigue classes. This paper presents investigations on the fatigue behavior of common welded details containing imperfections. In contrast to the common nominal stress concept, local fatigue concepts were used to capture the true stress increase, i.e., local stresses at the weld toe and root. The actual shape of a weld comprising imperfections, e.g., gaps or undercuts, can thus be incorporated into the fatigue evaluation, usually on a numerical basis.
With the help of the effective notch stress concept, the fatigue resistance of detailed local weld shapes is assessed. Validated numerical models serve to investigate notch factors of fatigue details with different geometries. Using parametrized ABAQUS routines, detailed numerical studies have been performed. Depending on the shape and size of different weld irregularities, fatigue classes can be defined. Both load-carrying welded details, such as the cruciform joint, and non-load-carrying welded details, e.g., welded-on or welded-in stiffeners, are considered. The investigated imperfections include, among others, undercuts, excessive convexity, incorrect weld toe, excessive asymmetry and insufficient or excessive throat thickness. The impact of the different imperfections on the different types of fatigue details is compared, and the influence of a combination of crucial weld imperfections on the fatigue resistance is analyzed. With regard to the trend of increasing efficiency in steel construction, the overall aim of the investigations is a more economical differentiation of fatigue details with regard to tolerance sizes. In the long term, the harmonization of design standards, execution standards and regulations on weld imperfections is intended.
Keywords: effective notch stress, fatigue, fatigue design, weld imperfections
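The nominal stress verification of EN 1993-1-9 that this work starts from reduces to a short sketch: a detail category Δσ_C is the stress range a detail endures for 2 million cycles, the S-N curve above the constant-amplitude fatigue limit has slope m = 3, and variable-amplitude spectra are accumulated with Miner's rule. This is a simplified illustration (single slope, no cut-off limit, no partial safety factors), and the category values and stress spectrum below are invented examples, not the classification of any specific imperfect weld.

```python
def cycles_to_failure(stress_range_mpa, detail_category_mpa, m=3):
    """S-N life N = 2e6 * (dsigma_C / dsigma)**m (single-slope simplification)."""
    return 2e6 * (detail_category_mpa / stress_range_mpa) ** m

def miner_damage(spectrum, detail_category_mpa):
    """Palmgren-Miner damage sum over (stress range, cycle count) pairs; D >= 1 means failure."""
    return sum(n / cycles_to_failure(ds, detail_category_mpa) for ds, n in spectrum)

# A detail downgraded from category 80 to 71 (e.g., hypothetically due to an undercut)
# accumulates damage faster under the same stress spectrum (MPa, cycles):
spectrum = [(120, 2e5), (90, 1e6), (60, 5e6)]
print(miner_damage(spectrum, 80), miner_damage(spectrum, 71))
```

The cubic slope means a one-category downgrade raises the damage sum by roughly (80/71)³ ≈ 1.43, which is why mapping imperfection sizes to fatigue classes has direct economic weight.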
Procedia PDF Downloads 261
247 Sensitivity Improvement of Optical Ring Resonator for Strain Analysis with the Direction of Strain Recognition Possibility
Authors: Tayebeh Sahraeibelverdi, Ahmad Shirazi Hadi Veladi, Mazdak Radmalekshah
Abstract:
Optical sensors are attractive due to their preciseness, low power consumption, and intrinsic freedom from electromagnetic interference. Among waveguide optical sensors, cavity-based ones stand out for their high Q-factor. Micro ring resonators have been investigated as a potential platform for applications ranging from biosensors to pressure sensors, thanks to their sensitive ring structure, which responds to any small change in the refractive index. Furthermore, these micron-size structures can be arranged in arrays, bringing the opportunity to place each resonance at a specific wavelength and to address the rings in this way. Another exciting application is applying a strain to the ring, turning it into an optical strain gauge; traditional strain gauges are based on piezoelectric materials, require electrical wiring when arrayed, and are about fifty times larger. Any physical effect that changes the waveguide cross-section, the elasto-optic property of the waveguide, or the ring circumference can play a role; of these, a change in ring size has the largest effect. Here, an engineered ring structure is investigated to study the effect of strain on the resonance wavelength shift and its potential for more sensitive strain devices, which can measure strain when mounted on the surface of interest. The idea is to change the 'O'-shaped ring to a 'C'-shaped ring with a small opening, starting from 2π/360, i.e., one degree. We used the MODE solver of the Lumerical software to investigate the effect of changing the ring opening and the shift induced by applied strain. The designed ring has a three-micron radius on silicon-on-insulator and can be fabricated by standard complementary metal-oxide-semiconductor (CMOS) micromachining. The wavelength shifts from a 1-degree opening of the ring to a 6-degree opening have been investigated.
Opening the ring by 1 degree reduces the ring's quality factor from 3000 to 300, an order-of-magnitude reduction. Assuming a strain that widens the ring opening from 1 degree to 6 degrees, our simulation results show a negligible further Q-factor reduction, from 300 to 280. A ring resonator quality factor can reach up to 10⁸, so an order-of-magnitude reduction can be tolerated. The resonance wavelength showed a blue shift and was obtained as 1581, 1579, 1578 and 1575 nm for 1-, 2-, 4- and 6-degree ring openings, respectively. This design can identify the direction of the induced strain by placing the opening on different parts of the ring; moreover, by addressing the specified wavelength, the direction can be found precisely. This opens a significant opportunity to locate cracks and characterize surface mechanical properties very specifically and precisely. The idea can also be implemented in polymer ring resonators, which can come on a flexible substrate and are very sensitive to any strain that brings the two ends of the ring at the slit closer together or further apart.
Keywords: optical ring resonator, strain gauge, strain sensor, surface mechanical property analysis
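The reported blue shift follows from the resonance condition λ_m = n_eff · L / m, where L is the ring's round-trip length: widening the 'C' opening shortens the effective circumference, so each resonance moves to a shorter wavelength. The sketch below uses an assumed effective index (n_eff = 2.4 for a silicon-on-insulator waveguide) and a crude geometric model in which the opening angle simply removes a fraction of the circumference; the simulated shifts reported above are smaller, since the optical field is not fully excluded from the slit.

```python
import math

R = 3e-6        # ring radius (m), as in the paper
N_EFF = 2.4     # assumed effective index for an SOI waveguide

def resonance_wavelength(opening_deg, mode_order):
    """Resonance lambda_m = n_eff * L / m for a ring with an angular opening."""
    circumference = 2 * math.pi * R * (1 - opening_deg / 360.0)
    return N_EFF * circumference / mode_order

m = 29  # mode order that places the closed ring's resonance near 1.56 um
lams = [resonance_wavelength(deg, m) for deg in (1, 2, 4, 6)]
print([f"{lam * 1e9:.1f} nm" for lam in lams])  # monotonically blue-shifting
```

Even this crude model reproduces the qualitative trend of the data: a monotonic blue shift as the opening angle grows.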
Procedia PDF Downloads 127
246 Sugarcane Trash Biochar: Effect of the Temperature in the Porosity
Authors: Gabriela T. Nakashima, Elias R. D. Padilla, Joao L. Barros, Gabriela B. Belini, Hiroyuki Yamamoto, Fabio M. Yamaji
Abstract:
Biochar can be an alternative use for sugarcane trash. Biochar is a solid material obtained from pyrolysis, the thermal degradation of biomass at low or no O₂ concentration. Pyrolysis transforms the carbon commonly found in other organic structures into a more stable carbon that can resist microbial decomposition. Biochar has a versatility of uses, such as soil fertility, carbon sequestration, energy generation, ecological restoration and soil remediation, and a great ability to retain water and nutrients in the soil, so that it can improve the efficiency of irrigation and fertilization. The aim of this study was to characterize biochar produced from sugarcane trash at three different pyrolysis temperatures and to determine the lowest temperature giving high yield and carbon content. A physical characterization of the biochar was performed to help evaluate the best production conditions. Sugarcane (Saccharum officinarum) trash was collected at Corredeira Farm, located in Ibaté, São Paulo State, Brazil. The farm has 800 hectares of planted area with an average yield of 87 t·ha⁻¹; the sugarcane varieties planted are RB 855453, RB 867515, RB 855536, SP 803280 and SP 813250. The sugarcane trash was dried and crushed into 50 mm pieces. Crucibles with lids were used to hold the samples, and as much sugarcane trash as possible was added to each crucible to limit the O₂ present. Biochar was produced at three pyrolysis temperatures (200°C, 325°C, 450°C) with a 2 h residence time in the muffle furnace. The gravimetric yield of biochar was obtained. Proximate analysis of the biochar was done following ASTM E-872 and ABNT NBR 8112: volatile matter and ash content were calculated by direct weight loss, and fixed carbon content by difference. Porosity was measured using an automatic gas adsorption device, Autosorb-1, with CO₂, as described by Nakatani.
Approximately 0.5 g of biochar with 2 mm particle size was used for each measurement. Vacuum outgassing was performed as a pre-treatment, under different conditions for each biochar temperature. The pore size distribution of the micropores was determined using the Horváth-Kawazoe method. The biochar presented a different color for each treatment: the 200°C biochar retained a higher number of pieces of 10 mm or more and did not show the dark black color of the other treatments after the 2 h residence time in the muffle furnace. This treatment also had the highest content of volatiles and the lowest amount of fixed carbon. The porosity analysis showed that as the treatment temperature increased, the number of pores also increased; the increase in temperature thus resulted in a biochar of better quality. The pores in biochar can help with soil aeration, adsorption and water retention. Acknowledgment: This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brazil – PROAP-CAPES, PDSE and CAPES - Finance Code 001.
Keywords: proximate analysis, pyrolysis, soil amendment, sugarcane straw
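The proximate analysis quantities used above reduce to simple mass balances: gravimetric yield is char mass over feed mass, and fixed carbon is obtained by difference once volatile matter and ash are measured (per ASTM E-872 / ABNT NBR 8112). The numbers below are illustrative, not the study's measured values.

```python
def gravimetric_yield(char_mass_g, feed_mass_g):
    """Biochar yield (%) relative to the dry feedstock mass."""
    return 100.0 * char_mass_g / feed_mass_g

def fixed_carbon(volatile_pct, ash_pct):
    """Fixed carbon (%) by difference on a dry basis."""
    return 100.0 - volatile_pct - ash_pct

# Hypothetical run: 50 g of trash charred at 450 C leaves 16 g of biochar
print(f"yield: {gravimetric_yield(16, 50):.1f}%")        # 32.0%
print(f"fixed carbon: {fixed_carbon(28.5, 9.0):.1f}%")   # 62.5%
```

The trade-off reported in the abstract shows up directly in these two numbers: lower pyrolysis temperatures give a higher gravimetric yield but leave more volatiles, so the fixed carbon obtained by difference drops.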
Procedia PDF Downloads 214
245 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data
Authors: Kai Warsoenke, Maik Mackiewicz
Abstract:
To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of the metrological process monitoring, manufactured individual parts and assemblies are recorded, and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not yet used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data; they offer potential to extend tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. To this end, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. A database suitable for developing machine learning models is then created. The objective is an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated, and the best-performing models are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, some areas of the car body parts behave more sensitively than the part overall, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
Keywords: automotive production, machine learning, process optimization, smart tolerancing
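The two classical techniques named above can be contrasted in a few lines: worst-case tolerance analysis sums the contributor tolerances linearly, while the statistical model assumed by classical methods stacks them as a root sum of squares; the gap between the two is the margin that data-driven tolerancing tries to exploit using real measurement distributions instead of assumed ones. The tolerance values here are invented for illustration.

```python
import math

def worst_case_stack(tolerances):
    """Worst-case tolerance analysis: contributors add linearly."""
    return sum(tolerances)

def rss_stack(tolerances):
    """Statistical (root-sum-square) stack, assuming independent, normally
    distributed contributors centered on nominal."""
    return math.sqrt(sum(t * t for t in tolerances))

gaps = [0.2, 0.3, 0.6]  # mm, individual part tolerances in an assembly chain
print(worst_case_stack(gaps), rss_stack(gaps))  # 1.1 vs 0.7
```

When stored measurement data show that a contributor's real distribution is narrower or skewed relative to the normal assumption, the statistical stack can be tightened or relaxed accordingly, which is the opening for the machine learning models described in the abstract.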
Procedia PDF Downloads 117
244 Safety Considerations of Furanics for Sustainable Applications in Advanced Biorefineries
Authors: Anitha Muralidhara, Victor Engelen, Christophe Len, Pascal Pandard, Guy Marlair
Abstract:
The production of bio-based chemicals and materials from lignocellulosic biomass is gaining tremendous importance in advanced biorefineries, aiming at the progressive replacement of petroleum-based chemicals in transportation fuels and commodity polymers. One such attempt has resulted in the production of key furan derivatives (FD) such as furfural, HMF and MMF via the acid-catalyzed dehydration (ACD) of C6 and C5 sugars, which are further converted into key chemicals or intermediates (such as furandicarboxylic acid and furfuryl alcohol). In subsequent processes, many high-potential FD are produced that can be converted into high-added-value polymers or high-energy-density biofuels. During ACD, an unavoidable polyfuranic byproduct called humins is generated. The family of FD is very large, with varying chemical structures and diverse physicochemical properties; accordingly, the associated risk profiles may vary widely. Hazardous material (haz-mat) classification systems such as GHS (CLP in the EU) and the UN TDG Model Regulations for the transport of dangerous goods are among the preliminary requirements for all chemicals, governing their appropriate classification, labelling, packaging, safe storage and transportation. Considering the growing application routes of FD, it is notable that safety-related information in these internationally recognized haz-mat classification systems remains limited, with safety data sheets available only for well-known compounds such as HMF and furfural. Moreover, these classifications do not necessarily indicate the extent of risk involved when a chemical is used in any specific application: factors such as thermal stability, speed of combustion and chemical incompatibilities can equally influence the safety profile of a compound but are clearly outside the scope of any haz-mat classification system.
Irrespective of their bio-based origin, FD have so far received inconsistent remarks concerning their toxicity profiles. With such inconsistencies, there is a fear that the large family of FD may follow extreme judgmental scenarios like ionic liquids, with some compounds ranked as extremely thermally stable, non-flammable, etc. Unless clarified, such messages could lead to misleading judgements when ranking a chemical by its hazard rating. Safety is a key aspect of any sustainable biorefinery operation or facility, and one that is often neglected. To fill these existing data gaps and to address ambiguities and discrepancies, the current study focuses on preliminary insights into the safety assessment of FD and their potential targeted by-products. Based on the information available in the literature and the experimental results obtained, the physicochemical safety, environmental safety and (scenario-based) fire safety profiles of key FD, as well as of side streams such as humins and levulinic acid, are considered. The study thus aims to define patterns and trends that give coherent safety-related information for existing and newly synthesized FD on the market, for better functionality and sustainable applications.
Keywords: furanics, humins, safety, thermal and fire hazard, toxicity
Procedia PDF Downloads 166
243 An Integrated Approach to Cultural Heritage Management in the Indian Context
Authors: T. Lakshmi Priya
Abstract:
With the widening definition of heritage, the challenges of heritage management have become more complex. Today heritage includes not only significant monuments but also historic areas/sites, historic cities, cultural landscapes and living heritage sites. A comprehensive understanding of the values associated with these heritage resources is needed to enable their protection and management. These diverse cultural resources are managed by multiple agencies, each with its own way of operating in the heritage sites; an integrated approach to managing these cultural resources ensures their sustainability for future generations. This paper outlines the importance of an integrated approach to the management and protection of complex heritage sites in India by examining four case studies. The methodology for this study is based on secondary research and on primary surveys conducted during the preparation of the conservation management plans for the various sites; the primary surveys included basic documentation, inventorying and community surveys. The Red Fort, located in the city of Delhi, is one of the most significant forts, built in 1639 by the Mughal Emperor Shahjahan. The fort is a national icon and stands testimony to various historical events: it was on the ramparts of the Red Fort that the national flag was unfurled on 15 August 1947, when India became independent, a tradition that continues today. Managing this complex fort necessitated an integrated approach in which the needs of both official and non-official stakeholders were addressed, and the understanding of the inherent values and significance of the site was arrived at through a systematic methodology of inventorying and mapping of information. Hampi, located in southern India, is a living heritage site inscribed on the World Heritage List in 1986.
The site comprises settlements, built heritage structures, traditional water systems, forest, agricultural fields, and the remains of the metropolis of the 16th-century Vijayanagar empire. As Hampi is a living heritage site with traditional systems of management and practices, the aim has been to include these practices in the current management so that there is continuity in belief, thought, and practice. The existing national, regional, and local planning instruments have been examined, and the local concerns have been addressed. A comprehensive understanding of the site, achieved through an integrated model, is being translated into an action plan which safeguards the inherent values of the site. This paper also examines the case of the 20th-century heritage building of the National Archives of India, Delhi, and the protection of the 12th-century Tomb of Sultan Ghari located in south Delhi. A comprehensive understanding of the site led to the delineation of the Archaeological Park of Sultan Ghari in the current Master Plan for Delhi, for the protection of the tomb and the settlement around it. Through this study it is concluded that the approach of integrated conservation has enabled decision making that sustains the values of these complex heritage sites in the Indian context. Keywords: conservation, integrated, management, approach
Procedia PDF Downloads 89242 Self-Healing Coatings and Electrospun Fibers
Authors: M. Grandcolas, N. Rival, H. Bu, S. Jahren, R. Schmid, H. Johnsen
Abstract:
The concept of an autonomic self-healing material, in which the initiation of repair is integral to the material, is now being considered for engineering applications and is a hot topic in the literature. Among several concepts/techniques, two are most interesting: i) Capsules: integration of microcapsules in or at the surface of coatings or fibre-like structures has recently gained much attention. Upon damage-induced cracking, the microcapsules are broken by the propagating crack fronts, resulting in the release of an active chemical (healing agent) by capillary action, subsequently repairing the crack and preventing further crack growth. ii) Self-healing polymers: interestingly, the introduction of dynamic covalent bonds into polymer networks has also recently been used as a powerful approach towards the design of various intrinsically self-healing polymer systems. The idea behind this is to reconnect the chemical crosslinks which are broken when a material fractures, restoring the integrity of the material and thereby prolonging its lifetime. We propose here to integrate both self-healing concepts (capsules, self-healing polymers) in electrospun fibres and coatings. Different capsule preparation approaches have been investigated at SINTEF. The most advanced method to produce capsules is based on emulsification to create a water-in-oil emulsion before polymerisation. The healing agent is a polyurethane-based dispersion that was encapsulated in shell materials consisting of urea-benzaldehyde resins. Results showed the successful preparation of microcapsules and release of the agent when the capsules break. Since capsules are produced in water-in-oil systems, we mainly investigated organic solvent-based coatings, while a major challenge resides in the incorporation of capsules into water-based coatings. We also focused on developing more robust microcapsules to prevent premature rupture of the capsules.
The capsules have been characterized in terms of size, and encapsulation and release can be visualized by incorporating fluorescent dyes and examining the capsules by microscopy techniques. Alternatively, electrospinning is an innovative technique that has attracted enormous attention due to the unique properties of the produced nano-to-micro fibers, ease of fabrication and functionalization, and versatility in controlling parameters. Roll-to-roll electrospinning in particular is a unique method which has been used in industry to produce nanofibers continuously. Electrospun nanofibers can usually reach a diameter down to 100 nm, depending on the polymer used, which is of interest for the concept with self-healing polymer systems. In this work, we proved the feasibility of fabricating POSS-based (POSS: polyhedral oligomeric silsesquioxanes, tradename FunzioNano™) nanofibers via electrospinning. Two different formulations based on aqueous or organic solvents have yielded nanofibres with a diameter between 200–450 nm with few defects. The addition of FunzioNano™ in the polymer blend also showed enhanced properties in terms of wettability, promising for e.g. membrane technology. The self-healing polymer systems developed here are POSS-based materials synthesized to develop dynamic soft brushes. Keywords: capsules, coatings, electrospinning, fibers
Procedia PDF Downloads 262241 Reading and Writing Memories in Artificial and Human Reasoning
Authors: Ian O'Loughlin
Abstract:
Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to perform question-and-answer tasks by parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains; wide-context cues remain elusive in parsing words and sentences; and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion.
In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the arrays of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary. Keywords: artificial reasoning, human memory, machine learning, neural networks
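The attractor-network idea invoked here, an energy function whose only stable equilibria are the desired memory patterns, can be illustrated with a classical Hopfield network. This is a minimal, generic sketch of the concept, not an implementation of any particular model from the literature discussed above:

```python
import numpy as np

def hopfield_weights(patterns):
    """Hebbian weight matrix whose stored patterns become energy minima."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def energy(W, s):
    """Energy function; stored patterns sit at local minima."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=20):
    """Iterate the network dynamics until a fixed point (attractor) is reached."""
    s = s.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Store one +/-1 pattern and recover it from a corrupted cue: the corrupted
# state relaxes to the stored pattern because that pattern is the attractor.
rng = np.random.default_rng(0)
p = rng.choice([-1, 1], size=64)
W = hopfield_weights(p[None, :])
cue = p.copy()
cue[:8] *= -1                      # flip 8 bits to corrupt the cue
restored = recall(W, cue)
```

Here the "memory" is not a stored record but a stable equilibrium of the dynamics, which is exactly the contrast the passage draws with the array-based memory networks.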
Procedia PDF Downloads 272240 Metal-Semiconductor Transition in Ultra-Thin Titanium Oxynitride Films Deposited by ALD
Authors: Farzan Gity, Lida Ansari, Ian M. Povey, Roger E. Nagle, James C. Greer
Abstract:
Titanium nitride (TiN) films have been widely used in a variety of fields due to their unique electrical, chemical, physical, and mechanical properties, including low electrical resistivity, chemical stability, and high thermal conductivity. In microelectronic devices, thin continuous TiN films are commonly used as diffusion barriers and metal gate materials. However, as the film thickness decreases below a few nanometers, the electrical properties of the film alter considerably. In this study, the physical and electrical characteristics of 1.5 nm to 22 nm thin films deposited by Plasma-Enhanced Atomic Layer Deposition (PE-ALD), using Tetrakis(dimethylamino)titanium(IV) (TDMAT) chemistry and Ar/N2 plasma, on 80 nm SiO2 capped in-situ by 2 nm Al2O3 are investigated. The ALD technique allows uniformly thick films to be grown at the monolayer level in a highly controlled manner. The chemistry incorporates a low level of oxygen into the TiN films, forming titanium oxynitride (TiON). The thickness of the films is characterized by Transmission Electron Microscopy (TEM), which confirms the uniformity of the films. The surface morphology of the films is investigated by Atomic Force Microscopy (AFM), indicating sub-nanometer surface roughness. Hall measurements are performed to determine parameters such as carrier mobility, type, and concentration, as well as resistivity. The >5 nm-thick films exhibit metallic behavior; however, we have observed that thin film resistivity is modulated significantly by film thickness, such that the sheet resistance at room temperature increases by more than five orders of magnitude when comparing the 5 nm and 1.5 nm films.
Scattering effects at interfaces and grain boundaries could play a role in the thickness-dependent resistivity, in addition to the quantum confinement effect that could occur in ultra-thin films: based on our measurements, the carrier concentration decreases from 1.5×10²² cm⁻³ to 5.5×10¹⁷ cm⁻³, while the mobility increases from <0.1 cm²/V·s to ~4 cm²/V·s for the 5 nm and 1.5 nm films, respectively. Also, measurements at different temperatures indicate that the resistivity is relatively constant for the 5 nm film, while for the 1.5 nm film a reduction of more than 2 orders of magnitude has been observed over the range of 220 K to 400 K. The activation energy of the 2.5 nm and 1.5 nm films is 30 meV and 125 meV, respectively, indicating that the TiON ultra-thin films exhibit semiconducting behaviour; we attribute this effect to a metal-semiconductor transition. By the same token, the contact is no longer Ohmic for the thinnest film (i.e., the 1.5 nm-thick film); hence, a modified lift-off process was developed to selectively deposit thicker films, allowing us to perform electrical measurements with low contact resistance on the raised contact regions. Our atomic-scale simulations, based on molecular dynamics-generated amorphous TiON structures with low oxygen content, confirm our experimental observations, indicating highly n-type thin films. Keywords: activation energy, ALD, metal-semiconductor transition, resistivity, titanium oxynitride, ultra-thin film
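Activation energies such as the 30 meV and 125 meV values quoted above are conventionally extracted from the slope of an Arrhenius plot of conductivity versus inverse temperature, since σ = σ₀·exp(−Eₐ/kT) makes ln σ linear in 1/T. A minimal sketch with synthetic data (the values below are illustrative, not the measured film data):

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def activation_energy(T, conductivity):
    """Extract Ea from sigma = sigma0 * exp(-Ea / (kB*T)):
    ln(sigma) is linear in 1/T with slope -Ea/kB."""
    slope, _ = np.polyfit(1.0 / T, np.log(conductivity), 1)
    return -slope * K_B

# Synthetic thermally activated conductivity over 220-400 K, mimicking
# the 1.5 nm film's reported Ea of 0.125 eV (illustrative values only).
T = np.linspace(220.0, 400.0, 10)
sigma = 1e3 * np.exp(-0.125 / (K_B * T))
ea = activation_energy(T, sigma)
```

On noiseless synthetic data the fit recovers the input Eₐ exactly; with real measurements one would fit over the linear portion of the Arrhenius plot only.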
Procedia PDF Downloads 295239 Polar Bears in Antarctica: An Analysis of Treaty Barriers
Authors: Madison Hall
Abstract:
The Assisted Colonization of polar bears to Antarctica requires a careful analysis of treaties to understand existing legal barriers to Ursus maritimus transport and movement. An absence of land-based migration routes prevents polar bears from accessing southern polar regions on their own. This lack of access is compounded by current treaties, which limit human intervention and assistance to ford these physical and legal barriers. In a time of massive planetary extinctions, Assisted Colonization posits that certain endangered species may be prime candidates for relocation to hospitable environments to which they have never previously had access. By analyzing existing treaties, this paper will examine how polar bears are limited in movement by humankind’s legal barriers. International treaties may be considered codified reflections of anthropocentric values and of the best knowledge and understanding of an identified problem at a set point in time, as understood through the human lens. Even as human social values and scientific insights evolve, so too must the treaties which specify legal frameworks and structures impacting keystone species and related biomes. Due to costs and other myriad difficulties, only a very select number of species will be given this opportunity. While some species move into new regions and are then deemed invasive, Assisted Colonization considers that some assistance may be mandated due to the nature of humankind’s role in climate change. This moral question and ethical imperative, against the backdrop of escalating climate impacts, drives the question forward: what is the potential for successfully relocating a select handful of charismatic and ecologically important life forms? Is it possible to reimagine a different, but balanced, Antarctic ecosystem? Listed as a threatened species under the U.S.
Endangered Species Act, as a result of the ongoing loss of critical habitat to melting sea ice, polar bears have limited options for long-term survival in the wild. Our current regime for safeguarding animals facing extinction frequently utilizes zoos and their breeding programs to keep alive the genetic diversity of the species until some future time when reintroduction, somewhere, may be attempted. In exploring the potential for polar bears to be relocated to Antarctica, we must analyze the complex ethical, legal, political, financial, and biological realms which form the backdrop to all questions in this arena. Can we do it? Should we do it? By utilizing an environmental ethics perspective, we propose that the Ecological Commons of the Arctic and Antarctic should not be viewed solely through the lens of human resource management needs. From this perspective, polar bears do not need our permission; they need our assistance. Antarctica therefore represents a second, if imperfect, chance to buy time for polar bears in a world where polar regimes, not yet fully understood, are themselves quickly changing as a result of climate change. Keywords: polar bear, climate change, environmental ethics, Arctic, Antarctica, assisted colonization, treaty
Procedia PDF Downloads 421238 Superoleophobic Nanocellulose Aerogel Membrance as Bioinspired Cargo Carrier on Oil by Sol-Gel Method
Authors: Zulkifli, I. W. Eltara, Anawati
Abstract:
Understanding the complementary roles of surface energy and roughness on natural nonwetting surfaces has led to the development of a number of biomimetic superhydrophobic surfaces, which exhibit apparent contact angles with water greater than 150 degrees and low contact angle hysteresis. However, superoleophobic surfaces—those that display contact angles greater than 150 degrees with organic liquids having appreciably lower surface tensions than that of water—are extremely rare. In addition to chemical composition and roughened texture, a third parameter is essential to achieve superoleophobicity, namely, re-entrant surface curvature in the form of overhang structures. The overhangs can be realized as fibers. Superoleophobic surfaces are appealing, for example, for antifouling, since purely superhydrophobic surfaces are easily contaminated by oily substances in practical applications, which in turn impairs the liquid repellency. Other studies have demonstrated that such aqueous nanofibrillar gels are unexpectedly robust, allowing the formation of highly porous aerogels by direct water removal through freeze-drying; they are flexible, unlike most aerogels, which suffer from brittleness, and they provide flexible, hierarchically porous templates for functionalities, e.g. for electrical conductivity. No crosslinking, solvent exchange, or supercritical drying is required to suppress collapse during the aerogel preparation, unlike in typical aerogel preparations. The aerogel used in the current work is an ultralightweight solid material composed of native cellulose nanofibers. The native cellulose nanofibers are cleaved from the self-assembled hierarchy of macroscopic cellulose fibers. They have become highly topical, as they are proposed to show extraordinary mechanical properties due to their parallel and extensively hydrogen-bonded polysaccharide chains.
We demonstrate that, with a superoleophobic coating applied by the sol-gel method, the nanocellulose aerogel is capable of supporting a weight nearly 3 orders of magnitude larger than the weight of the aerogel itself. The load support is achieved by surface tension acting at different length scales: at the macroscopic scale along the perimeter of the carrier, and at the microscopic scale along the cellulose nanofibers, by preventing soaking of the aerogel and thus ensuring buoyancy. Superoleophobic nanocellulose aerogels have recently been achieved using unmodified cellulose nanofibers and using carboxymethylated, negatively charged cellulose nanofibers as starting materials. In this work, the aerogels made from unmodified cellulose nanofibers were subsequently treated with fluorosilanes. To complement previous work on superoleophobic aerogels, we demonstrate their application as cargo carriers on oil, gas permeability, plastrons, and drag reduction, and we show that fluorinated nanocellulose aerogels are high-adhesion superoleophobic surfaces. We foresee applications including buoyant, gas-permeable, dirt-repellent coatings for miniature sensors and other devices floating on generic liquid surfaces. Keywords: superoleophobic, nanocellulose, aerogel, sol-gel
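The claim of supporting a load nearly three orders of magnitude above the aerogel's own weight can be sanity-checked with the perimeter surface-tension force F = γL. A back-of-the-envelope sketch; every number below is an assumed illustrative value, not a measurement from this work:

```python
# Order-of-magnitude check of surface-tension load support along the carrier
# perimeter: F = gamma * L. All values are assumed, not reported data.
GAMMA_OIL = 0.03      # N/m, typical surface tension of a light oil (assumed)
G = 9.81              # m/s^2
side = 0.01           # m, edge of a square aerogel carrier (assumed)
thickness = 0.001     # m (assumed)
density = 2.0         # kg/m^3, ultra-flyweight nanocellulose aerogel (assumed)

perimeter_force = GAMMA_OIL * 4 * side        # N available from surface tension
supported_mass = perimeter_force / G          # kg the meniscus can hold
aerogel_mass = density * side**2 * thickness  # kg of the carrier itself
ratio = supported_mass / aerogel_mass         # supported load / own weight
```

With these assumed dimensions the ratio comes out in the hundreds, consistent in order of magnitude with the "nearly 3 orders of magnitude" figure stated above.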
Procedia PDF Downloads 352237 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion
Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong
Abstract:
The detection of curvilinear structures often plays an important role in the analysis of images. In particular, it is considered a crucial step in the diagnosis of chronic respiratory diseases to localize the fissures in chest CT imagery, where the lung is divided into five lobes by the fissures, which are characterized by linear features in appearance. However, the characteristic linear features of the fissures are often subtle due to the high intensity variability, pathological deformation, or image noise involved in the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. Thus, it is desirable to enhance the linear features present in chest CT images so that the distinctiveness in the delineation of the lobes is improved. We propose a recursive diffusion process that prefers coherent features based on the analysis of the structure tensor in an anisotropic manner. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is often regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur the geometrically significant structure of the features, leading to the degradation of their characteristic power in the feature space. Thus, the local structure of the features in scale and direction must be taken into consideration when computing the structure tensor. We apply an anisotropic diffusion, in consideration of the scale and direction of the features, in the computation of the structure tensor, which subsequently provides the geometrical structure of the features via its eigenanalysis, which in turn determines the shape of the anisotropic diffusion kernel.
The recursive application of the anisotropic diffusion, with a kernel whose shape is derived from the structure tensor, leads to an anisotropic scale-space in which the geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. The recursive interaction between the anisotropic diffusion based on the geometry-driven kernels and the computation of the structure tensor that determines the shape of the diffusion kernels yields a scale-space where the geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in chest CT imagery, where the fissures present curvilinear features and define the boundaries of the lobes. It is shown that our algorithm yields precise detection of the fissures while overcoming the subtlety in defining the characteristic linear features. The quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for the detection of fissures in chest CT in terms of the false positive and true positive measures. The receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion). Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor
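The structure-tensor eigenanalysis at the heart of this scheme can be sketched compactly. The snippet below computes a Gaussian-regularized structure tensor and the resulting coherence and orientation fields; note this is the generic isotropically regularized baseline that the paper improves upon, not the anisotropic recursive scheme itself, and the scales are illustrative:

```python
import numpy as np
from scipy import ndimage

def structure_tensor_analysis(image, sigma_grad=1.0, sigma_int=3.0):
    """Eigenanalysis of the structure tensor J = G_sigma * (grad I grad I^T).
    Returns per-pixel coherence (lam1-lam2)/(lam1+lam2) and dominant
    orientation; in the recursive scheme these would shape the next
    anisotropic diffusion kernel."""
    # Gaussian-derivative gradients (order=1 along the chosen axis)
    Ix = ndimage.gaussian_filter(image, sigma_grad, order=(0, 1))
    Iy = ndimage.gaussian_filter(image, sigma_grad, order=(1, 0))
    # Tensor components, regularized over an integration scale
    Jxx = ndimage.gaussian_filter(Ix * Ix, sigma_int)
    Jxy = ndimage.gaussian_filter(Ix * Iy, sigma_int)
    Jyy = ndimage.gaussian_filter(Iy * Iy, sigma_int)
    # Closed-form eigenvalues of the 2x2 symmetric tensor
    tr = Jxx + Jyy
    rad = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    lam1, lam2 = 0.5 * (tr + rad), 0.5 * (tr - rad)
    coherence = np.where(tr > 1e-12, (lam1 - lam2) / (lam1 + lam2 + 1e-12), 0.0)
    orientation = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)
    return coherence, orientation

# A synthetic horizontal line: coherence should be near 1 on the line,
# since one eigenvalue dominates there.
img = np.zeros((64, 64))
img[32, :] = 1.0
coh, ang = structure_tensor_analysis(img)
```

High coherence flags locally one-dimensional (curvilinear) structure, which is exactly the cue used to steer the diffusion toward fissure-like features.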
Procedia PDF Downloads 233236 Chemical Study and Cytotoxic Activity of Extracts from Erythroxylum Genus against HeLa Cells
Authors: Richele P. Severino, Maria M. F. Alchaar, Lorena R. F. De Sousa, Patrik S. Vital, Ana G. Silva, Rosy I. M. A. Ribeiro
Abstract:
Recognized as a global biodiversity hotspot, the Cerrado (Brazil) presents an extreme abundance of endemic species and is considered to be one of the biologically richest tropical savanna regions in the world. The Erythroxylum genus is found in the Cerrado and is chemically characterized by the presence of tropane alkaloids, among them cocaine, a natural alkaloid produced by Erythroxylum coca Lam., which was used as a local anesthetic in small surgeries. However, cocaine gained notoriety due to its psychoactive activity in the Central Nervous System (CNS), becoming one of the major public health problems today. Some species of Erythroxylum are referred to in the literature as having pharmacological potential, providing alkaloids, terpenoids, and flavonoids. E. vacciniifolium Mart., commonly known as 'catuaba', is used as a central nervous system stimulant and has aphrodisiac properties, while E. pelleterianum A. St.-Hil. is used in the treatment of stomach pains. E. myrsinites Mart. and E. suberosum A. St.-Hil., in turn, are used in the tannery industry. Species of Erythroxylum are also used in folk medicine for various diseases, for example against diabetes, and as antiviral, fungicidal, and cytotoxic agents, among others. Although the Cerrado is recognized as the savanna richest in biodiversity in the world, it remains little explored from a chemical point of view. In our ongoing study of the chemistry of the Erythroxylum genus, we have investigated four specimens collected in the central Cerrado of Brazil: E. campestre (EC), E. deciduum (ED), E. suberosum (ES), and E. tortuosum (ET). The cytotoxic activity of the extracts was evaluated against HeLa cells in in vitro assays. The chemical investigation was performed by preparing extracts with n-hexane (H), dichloromethane (D), ethyl acetate (E), and methanol (M). The cells were treated with increasing concentrations of the extracts (50, 75 and 100 μg/mL) diluted in DMSO (1%) and DMEM (0.5% FBS and 1% P/S).
The IC₅₀ values were determined spectrophotometrically at 570 nm after incubation of the HeLa cell line for 48 hours using the MTT assay (SIGMA M5655), and calculated by nonlinear regression analysis using GraphPad Prism software. All the assays were done in triplicate and repeated at least twice. The cytotoxic assays showed some promising results, with IC₅₀ values less than 100 μg/mL (ETD = 38.5 μg/mL; ETM = 92.3 μg/mL; ESM = 67.8 μg/mL; ECD = 24.0 μg/mL; ECM = 32.9 μg/mL; EDA = 44.2 μg/mL). The chemical profile of the ethyl acetate (E) and methanolic (M) extracts of E. tortuosum leaves was studied by LC-MS, and the structures of the compounds were determined by analysis of ¹H, HSQC and HMBC spectra and confirmed by comparison with literature data. The investigation led to six substances: α-amyrin, β-amyrin, campesterol, stigmastan-3,5-diene, β-sitosterol, and 7,4’-di-O-methylquercetin-3-O-β-rutinoside, with the flavonoid being the major compound of the extracts. By alkaline extraction of the methanolic extract, it was possible to identify three alkaloids: tropacocaine, cocaine, and 6-methoxy-8-methyl-8-azabicyclo[3.2.1]octan-3-ol. The results obtained are important for the chemical knowledge of the Cerrado biodiversity and contribute to the chemistry of the Erythroxylum genus. Keywords: cytotoxicity, Erythroxylum, chemical profile, secondary metabolites
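The nonlinear regression step used for the IC₅₀ values (performed here in GraphPad Prism) amounts to fitting a four-parameter logistic dose-response curve. A sketch with SciPy on synthetic viability data; the numbers are illustrative, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Four-parameter dose-response model: viability falls from `top`
    to `bottom` with half-maximal inhibition at `ic50`."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

def fit_ic50(conc, viability):
    """Nonlinear least-squares estimate of IC50 from viability data."""
    popt, _ = curve_fit(
        four_param_logistic, conc, viability,
        p0=[1.0, 100.0, 30.0, 1.0],
        bounds=([0.0, 50.0, 1.0, 0.2], [50.0, 150.0, 500.0, 5.0]),
    )
    return popt[2]

# Illustrative viability data (percent) generated from the model itself,
# with an assumed IC50 of 38.5 ug/mL mimicking the ETD extract's value.
conc = np.array([6.25, 12.5, 25.0, 50.0, 75.0, 100.0])
viability = four_param_logistic(conc, 5.0, 100.0, 38.5, 2.0)
ic50 = fit_ic50(conc, viability)
```

On real triplicate data one would fit the averaged (or pooled) viabilities and report a confidence interval on the IC₅₀ estimate.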
Procedia PDF Downloads 147235 Sustainability and Smart Cities Planning in Contrast with City Humanity. Human Scale and City Soul (Neighbourhood Scale)
Authors: Ghadir Hummeid
Abstract:
Undoubtedly, our world is directing its purposes and efforts toward achieving sustainable development in all respects. Sustainability has been regarded as a solution to many of our world's challenges today, material and immaterial. The new consequences and challenges our world faces, such as global climate change, the use of non-renewable resources, environmental pollution, declining urban health, the aging of urban areas, and sharply increasing migration into urban areas, with consequences such as high infrastructure density and social segregation, all require new forms of governance, new urban policies, and more efficient efforts and urban applications. Because cities are the core of life and one of its fundamental axes, their development can increase or decrease the quality of life of their inhabitants. Architects and planners today see themselves needing to create new approaches and new sustainable policies to develop urban areas that correspond with the physical and non-physical transformations cities are now experiencing, to enhance people's lives and provide for their needs in the present without compromising the needs and lives of future generations. The application of sustainability has become an inescapable part of the development and projection of city planning, yet its definition has remained elusive due to the plurality and difference of its applications. As conceptualizations of technology arise and come to dominate all aspects of life today, from smart citizens and smart life rhythms to smart production and smart structures to smart frameworks, they have also influenced sustainability applications in the planning and urbanization of cities. The term "smart city" emerged from this influence as one of the possible key solutions to sustainability. The term "smart city" has various perspectives of application and definition in the literature and in urban practice.
However, after observing smart city applications in current cities, this paper defines the smart city as an urban environment that is controlled by technologies yet lacks the physical, architectural representation of this smartness, as current smart applications are mostly obscured from the public, being applied on a diminutive scale and highly integrated into the built environment. Regardless of the importance of these technologies in improving the quality of people's lives and in facing cities' challenges, it is important not to neglect how their architectural and urban presentation will affect the shaping and development of city neighborhoods. By investigating the concept of smart cities and exploring its potential applications on a neighbourhood scale, this paper aims to shed light on the challenges faced by cities and to explore innovative solutions, such as smart city applications in urban mobility, and how they affect different aspects of communities. The paper aims to shape better articulations of smart neighborhoods' morphologies on the social, architectural, functional, and material levels, and thereby to understand how to create more sustainable and liveable approaches to developing future urban environments inside cities. The findings of this paper will contribute to ongoing discussions and efforts in achieving sustainable urban development. Keywords: sustainability, urban development, smart city, resilience, sense of belonging
Procedia PDF Downloads 80234 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems
Authors: Ibram Khalafalla Roshdy Shokry
Abstract:
This paper presents a carrier sense multiple access (CSMA) communication model based on an SoC design methodology. Such a model can be used to support the modelling of complex wireless communication systems; the use of such a communication model is therefore an important step in the construction of high-performance communication systems. SystemC has been selected because it offers a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication is created via the modelling of the CSMA protocol, which can be used to achieve communication among all the agents and to coordinate access to the shared medium (channel). The equipping of vehicles with wireless communication capabilities is expected to be the key to the evolution to next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for the enhancement of Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. Hence, this study focuses on the evaluation of the actual performance of vehicular communication, with special attention to the effects of the real environment and of mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission system was used to test and evaluate the effect of the transmission range in V2X communication.
The evaluation of V2I and V2V communication takes the real effects of low and high mobility on transmission into consideration. Multiagent systems have received significant attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, as it directly influences their overall performance and scalability. This scholarly work explores essential communication factors and conducts a comparative assessment of diverse protocols utilized in multiagent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of these protocols across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multiagent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multiagent systems and their communication protocols. Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA
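The CSMA channel-access coordination described above, where agents share a medium and a slot succeeds only when exactly one agent transmits, can be illustrated with a slot-based p-persistent simulation. The paper models this in SystemC; the sketch below is only a Python analogue with assumed parameters:

```python
import random

def simulate_csma(n_agents, slots, p=0.3, seed=0):
    """Slot-based p-persistent CSMA over a shared channel: each agent
    transmits in a slot with probability p. A slot is a success when
    exactly one agent transmits, a collision when more than one does,
    and idle otherwise."""
    rng = random.Random(seed)
    success = collision = idle = 0
    for _ in range(slots):
        transmitters = sum(1 for _ in range(n_agents) if rng.random() < p)
        if transmitters == 1:
            success += 1
        elif transmitters > 1:
            collision += 1
        else:
            idle += 1
    return success, collision, idle

s, c, i = simulate_csma(n_agents=5, slots=10000)
throughput = s / 10000   # fraction of slots carrying a successful frame
```

For 5 agents at p = 0.3 the expected throughput is 5·0.3·0.7⁴ ≈ 0.36, and the simulation converges to this value; sweeping p shows the usual CSMA trade-off between idle slots and collisions.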
Procedia PDF Downloads 28
233 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication
Authors: Farhan A. Alenizi
Abstract:
Digital watermarking has evolved in recent years into an important means of data authentication and ownership protection. Image and video watermarking are well established in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and video, where frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are essentially irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations that may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, which make the spectral decomposition difficult to perform, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; such methods act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models from both geometrical and topological aspects has proved useful for hiding data; doing so with minimal surface distortion to the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object.
An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh and hence to establish the bin sizes. Several optimizing approaches were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated; to validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they likewise showed robustness. 3D watermarking is still a new field, but a promising one.
Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing
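The idea of hiding data in the statistics of vertex norms can be sketched as follows. This is a simplified single-bit illustration under assumed parameters (one global bin, an assumed strength value, and a non-blind reference detector), not the authors' optimized blind algorithm:

```python
import math

def embed_bit(vertices, bit, strength=0.05):
    """Illustrative embedding: scale the spread of the vertex norms about
    their mean so their variance rises (bit=1) or falls (bit=0)."""
    n = len(vertices)
    center = tuple(sum(c) / n for c in zip(*vertices))
    norms = [math.dist(v, center) for v in vertices]
    mean = sum(norms) / n
    scale = 1 + strength if bit else 1 - strength
    marked = []
    for v, r in zip(vertices, norms):
        new_r = mean + scale * (r - mean)      # push the variance up or down
        f = new_r / r if r else 1.0            # radial displacement factor
        marked.append(tuple(center[i] + f * (v[i] - center[i]) for i in range(3)))
    return marked

def norm_variance(vertices):
    """Variance of the vertex distances from the object's center."""
    n = len(vertices)
    center = tuple(sum(c) / n for c in zip(*vertices))
    norms = [math.dist(v, center) for v in vertices]
    mean = sum(norms) / n
    return sum((r - mean) ** 2 for r in norms) / n

def detect_bit(marked, original):
    """Non-blind reference detector for this sketch only; the paper's
    detector is blind, but comparing variances shows the principle."""
    return 1 if norm_variance(marked) > norm_variance(original) else 0
```

Because the displacement is purely radial and small, the embedded mark perturbs the surface only slightly, which is the property the optimization in the abstract is concerned with.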
Procedia PDF Downloads 160
232 Exploration of Barriers and Challenges to Innovation Process for SMEs: Possibilities to Promote Cooperation Between Scientific and Business Institutions to Address it
Authors: Indre Brazauskaite, Vilte Auruskeviciene
Abstract:
The significance of the study is outlined through current strategic management challenges faced by SMEs. First, innovation is recognized as a competitive advantage in a market with ever-changing conditions; capturing and capitalizing on business opportunities, or mitigating foreseen risks, is of constant interest to both practitioners and academics. Secondly, it is recognized that an integrated system is needed for proper implementation of the innovation process, especially during the period of business incubation, which is associated with relatively high risks of new product failure. Finally, the ability to successfully commercialize innovations leads to tangible business results that allow organizations to grow further. This is particularly relevant to SMEs due to their limited structures, resources, and capabilities. Cooperation between scientific and business institutions could be a tool of mutual interest to observe, address, and further develop innovations during the incubation period, the most demanding and challenging stage of the innovation process. The material addresses the following problems: i) identifying the major barriers and challenges that SMEs face in the innovation process, and ii) outlining how these barriers and challenges could be addressed through cooperation between scientific and business institutions. The basis for this research is a stage-by-stage integrated innovation management process, which exposes existing challenges and the aid needed in operational decision-making. Exploring this stage-by-stage process highlights research opportunities of high practical relevance and is expected to reveal the possibility of business incubation programs that combine the interests of both practice and academia. Methodology: a scientific meta-analysis of the to-date literature exploring the innovation process. The research model is built on a combination of the stage-gate model and the lean six sigma approach.
It outlines the following steps: i) pre-incubation (discovery and screening), ii) incubation (scoping, planning, development, and testing), and iii) post-incubation (launch and commercialization). Empirical quantitative research is conducted to identify the barriers and challenges that keep SMEs' innovations from successful launch and commercialization, and to identify potential areas for cooperation between scientific and business institutions. The research sample, high-level decision makers representing trading SMEs, is approached with a structured survey based on the research model to investigate the challenges associated with each innovation management step. Expected findings: first, the current business challenges in the innovation process are revealed, outlining the strengths and weaknesses of innovation management practices and systems across SMEs. Secondly, the study provides material for relevant business case investigation to serve as future research directions, contributing to a better understanding of quality innovation management systems. Third, it contributes to understanding the need for business incubation systems with mutual contributions from practice and academia, which can increase the relevance and adoption of business research.
Keywords: cooperation between scientific and business institutions, innovation barriers and challenges, innovation measure, innovation process, SMEs
Procedia PDF Downloads 150
231 LES Simulation of a Thermal Plasma Jet with Modeled Anode Arc Attachment Effects
Authors: N. Agon, T. Kavka, J. Vierendeels, M. Hrabovský, G. Van Oost
Abstract:
A plasma jet model was developed with a rigorous method for calculating the thermophysical properties of the gas mixture without mixing rules. A simplified model approach to account for the anode effects was incorporated to allow validation of the simulations against experimental results. The radial heat transfer was under-predicted by the model because of the limitations of the radiation model, but the calculated evolution of centerline temperature, velocity, and gas composition downstream of the torch exit corresponded well with the measured values. CFD modeling of thermal plasmas focuses either on the development of the plasma arc or on the flow of the plasma jet outside the torch. In the former case, the Maxwell equations are coupled with the Navier-Stokes equations to account for the electromagnetic effects that control the movements of the anode arc attachment. In plasma jet simulations, however, the computational domain starts from the exit nozzle of the plasma torch, and the influence of arc attachment fluctuations on the plasma jet flow field is not included in the calculations. In that case, the thermal plasma flow is described by temperature, velocity, and concentration profiles at the torch exit nozzle, and no electromagnetic effects are taken into account. This simplified approach is widely used in the literature and is generally acceptable for plasma torches with a circular anode inside the torch chamber. The unique DC hybrid water/gas-stabilized plasma torch developed at the Institute of Plasma Physics of the Czech Academy of Sciences, on the other hand, has a rotating anode disk located outside the torch chamber. Neglecting the effects of the anode arc attachment downstream of the torch exit nozzle therefore leads to erroneous predictions of the flow field.
With the simplified approach introduced in this model, the Joule heating between the exit nozzle and the anode attachment position of the plasma arc is modeled by a volume heat source, and the jet deflection caused by the anode processes by a momentum source at the anode surface. Furthermore, radiation effects are included by the net emission coefficient (NEC) method, and diffusion is modeled with the combined diffusion coefficient method. The time-averaged simulation results are compared with numerous experimental measurements. The radial temperature profiles were obtained by spectroscopic measurements at different axial positions downstream of the exit nozzle. The velocity profiles were evaluated from the time-dependent evolution of flow structures recorded by photodiode arrays. The shape of the plasma jet was compared with charge-coupled device (CCD) camera pictures. In the cooler regions, the temperature was measured by an enthalpy probe downstream of the exit nozzle and by thermocouples in the radial direction around the torch nozzle. The model results correspond well with the experimental measurements. The decrease in centerline temperature and velocity is predicted within an acceptable range, and the shape of the jet closely resembles the jet structure in the recorded images. The temperatures at the edge of the jet are underestimated due to the absence of radial radiative heat transfer in the model.
Keywords: anode arc attachment, CFD modeling, experimental comparison, thermal plasma jet
Procedia PDF Downloads 367
230 Challenges, Responses and Governance in the Conservation of Forest and Wildlife: The Case of the Aravali Ranges, Delhi NCR
Authors: Shashi Mehta, Krishan Kumar Yadav
Abstract:
This paper presents an overview of issues pertaining to the conservation of the natural environment and the factors affecting the coexistence of forest, wildlife, and people. As forests and wildlife together create the basis for economic, cultural, and recreational spaces for overall well-being and life-support systems, the adverse impacts of increasing consumerism are only too evident. The IUCN predicts the extinction of 41% of all amphibians and 26% of mammals; the major causes behind this threatened extinction are deforestation, dysfunctional governance, climate change, pollution, and cataclysmic phenomena. Thus, the intrinsic relationship between natural resources and wildlife needs to be understood in totality, not only for the ecosystem but for humanity at large. To demonstrate this, forest areas in the Aravalis, the oldest mountain ranges of Asia, falling in the states of Haryana and Rajasthan, have been taken up for study. The Aravalis are characterized by extreme climatic conditions and dry deciduous forest cover on intermittent scattered hills. Extending across the districts of Gurgaon, Faridabad, Mewat, Mahendergarh, Rewari, and Bhiwani, these ranges, with village common land on which the entire economy of the rural settlements depends, fall in the state of Haryana; the Aravali ranges with diverse fauna and flora near Alwar town in the state of Rajasthan also form part of the NCR. Once rich in biodiversity, the Aravalis played an important role in the sustainable coexistence of forest and people. However, with the advent of industrialization and unregulated urbanization, these ranges are facing deforestation, degradation, and denudation. The causes are twofold: the need of the poor and the greed of the rich. People living in and around the Aravalis are mainly poor and eke out a living by rearing livestock. With shrinking commons, they depend entirely upon these hills for grazing, fuel, NTFP, medicinal plants, and even drinking water.
At the same time, the pressure of indiscriminate urbanization and industrialization in these hills serves the demands of the rich and powerful in collusion with government agencies, and the functionaries of the federal and state governments largely play a negative role by supporting commercial interests. Additionally, the planting of a non-indigenous species, Prosopis juliflora, across the ranges has resulted in the extinction of almost all the indigenous species. The wildlife in the area is also threatened by the lack of safe corridors and suitable habitat. In this scenario, the participatory role of different stakeholders such as NGOs, civil society, and the local community in the management of forests becomes crucial, not only for conservation but also for the economic well-being of the local people. Excluding villagers from protection and conservation efforts, whether in design, implementation, or monitoring and evaluation, could prove counterproductive. A strategy needs to be evolved wherein government agencies are made responsible by putting relevant legislation in place, while nurturing and promoting the traditional wisdom and ethics of local communities in the protection and conservation of forests and wildlife in the Aravali ranges of the states of Haryana and Rajasthan in the National Capital Region, Delhi.
Keywords: deforestation, ecosystem, governance, urbanization
Procedia PDF Downloads 326
229 A Comparative Study on the Influencing Factors of Urban Residential Land Prices Among Regions
Authors: Guo Bingkun
Abstract:
With the rapid development of China's social economy and the continuous improvement of urbanization, people's living standards have undergone tremendous changes, and more and more people are gathering in cities. The demand for urban residents' housing has been greatly released in the past decade. The demand for housing and for the construction land required for urban development has put huge pressure on urban operations, and land prices have also risen rapidly in the short term. On the other hand, comparing the eastern and western regions of China, there are also great differences in urban socioeconomics and land prices among the eastern, central, and western regions. Judging from current overall market development, after more than ten years of housing market reform, the quality of housing and the efficiency of land use in Chinese cities have been greatly improved; however, the contradiction between land demand for urban socio-economic development and land supply, especially for urban residential land, has not been effectively alleviated. Since land is closely linked to all aspects of society, changes in land prices are affected by many complex factors. Therefore, this paper studies the factors that may affect urban residential land prices, compares them among eastern, central, and western cities, and identifies the main factors that determine the level of urban residential land prices. The paper provides guidance for urban managers in formulating land policies and easing land supply and demand, offers distinct ideas for improving urban planning, and promotes the improvement of urban management. The research focuses on residential land prices. Generally, the indicators for measuring land prices mainly include benchmark land prices, land price level values, and parcel land prices.
However, considering the requirements of data continuity and representativeness, this paper uses residential land price level values to reflect the status of urban residential land prices. First, based on existing research at home and abroad and considering both land supply and demand, the paper draws on basic theoretical analysis to determine factors that may affect urban residential land prices, such as urban expansion, taxation, land reserves, population, and land benefits, and correspondingly selects representative indicators. Secondly, using conventional econometric analysis methods, we established a model of the factors affecting urban residential land prices, quantitatively analyzed the relationship between the influencing factors and residential land prices and the strength of their effects, and compared the differences and similarities in these impacts among the eastern, central, and western regions. The results show that the main factors affecting China's urban residential land prices are urban expansion, land use efficiency, taxation, population size, and residents' consumption, and that the main reasons for the differences in residential land prices among the eastern, central, and western regions are differences in urban expansion patterns, industrial structures, urban carrying capacity, and real estate development investment.
Keywords: urban housing, urban planning, housing prices, comparative study
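The kind of cross-city regression this abstract describes can be sketched with ordinary least squares. The factor set below is trimmed, and the city-level figures are invented purely for illustration; they are not the paper's dataset or its exact model:

```python
import numpy as np

# Hypothetical city observations: [urban expansion index, tax burden,
# population in millions]; y is an assumed residential land price level value.
X = np.array([[1.2, 0.8, 9.5],
              [0.9, 0.6, 4.2],
              [1.5, 1.1, 12.0],
              [0.7, 0.5, 2.8],
              [1.1, 0.9, 7.3]])
y = np.array([5200.0, 2400.0, 7900.0, 1500.0, 4100.0])

# Add an intercept column and fit by ordinary least squares
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print(coef)
```

In a study like this one, the signs and magnitudes of `coef` would be compared between regional sub-samples (eastern, central, western) to expose the differences the paper reports.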
Procedia PDF Downloads 50
228 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers
Authors: B. Neethu, Diptesh Das
Abstract:
The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures during earthquake excitation involves numerous challenges, such as proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters, and noisy measurements. These problems need to be tackled in order to design and develop controllers that will perform efficiently in such complex systems. A control algorithm that can accommodate uncertainty and imprecision better than the other algorithms mentioned so far, owing to its inherent robustness and ability to cope with parameter uncertainties, is the sliding mode algorithm. A sliding mode control algorithm is adopted in the present study due to its inherent stability and distinguished robustness to system parameter variation and external disturbances. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage required to be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force, and the voltage controller commands the damper to produce that force. The clipped-optimal algorithm is used to find the command voltage supplied to the MR damper, regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active controller that can effectively control the responses of the bridge under real earthquake ground motions.
A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of the MR dampers is studied through analytical simulations by subjecting the bridge to real earthquake records. In this regard, it may also be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, to study the robustness of the controller, its performance has been investigated for fourteen different earthquake ground motion records, chosen so that all possible characteristic variations are accommodated: seven near-field and seven far-field records, further divided by frequency content into low-frequency, medium-frequency, and high-frequency earthquakes. The responses of the controlled bridge are compared with those of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding-mode-based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing stable and robust performance for all the earthquakes.
Keywords: bridge, semi-active control, sliding mode control, MR damper
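For the voltage stage of the nested scheme described above, the clipped-optimal rule reduces to a simple switching law: apply maximum voltage when the measured damper force must grow toward the desired control force, and zero voltage otherwise. A minimal sketch (the maximum voltage value and the names are assumptions, not the authors' implementation):

```python
def clipped_optimal_voltage(f_desired, f_measured, v_max=5.0):
    """Clipped-optimal command law: when the desired force exceeds the
    measured damper force in the same direction the force is acting,
    apply full voltage so the force grows; otherwise command zero."""
    # Equivalent to v = v_max * H((f_desired - f_measured) * f_measured),
    # where H is the Heaviside step function.
    return v_max if (f_desired - f_measured) * f_measured > 0 else 0.0
```

In the study's setup the desired force `f_desired` would come from the sliding mode controller at each time step, while `f_measured` is fed back from the damper.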
Procedia PDF Downloads 125
227 Development and Evaluation of Economical Self-cleaning Cement
Authors: Anil Saini, Jatinder Kumar Ratan
Abstract:
Nowadays, the key issue for the scientific community is to devise innovative technologies for the sustainable control of urban pollution. In cities, a large surface area of masonry structures, buildings, and pavements is exposed to the open environment and may be utilized for the control of air pollution if it is built from photocatalytically active cement-based construction materials such as concretes, mortars, paints, and blocks. Photocatalytically active cement is formulated by incorporating a photocatalyst into the cement matrix; such cement is generally known as self-cleaning cement. In the literature, self-cleaning cement has been synthesized by incorporating nanosized TiO₂ (n-TiO₂) as a photocatalyst in the formulation of the cement. However, the utilization of n-TiO₂ has the drawbacks of nanotoxicity, higher cost, and agglomeration as far as commercial production and applications are concerned. The use of microsized TiO₂ (m-TiO₂) in place of n-TiO₂ for the commercial manufacture of self-cleaning cement could avoid these problems. However, m-TiO₂ is less photocatalytically active than n-TiO₂ due to its smaller surface area, higher band gap, and increased recombination rate. As such, the use of m-TiO₂ in the formulation of self-cleaning cement may reduce the photocatalytic activity and thus the self-cleaning, depolluting, and antimicrobial abilities of the resultant cement material. Improving the photoactivity of m-TiO₂-based self-cleaning cement is therefore the key issue for its practical application. The current work proposes the use of surface-fluorinated m-TiO₂ in the formulation of self-cleaning cement to enhance its photocatalytic activity.
Calcined dolomite, a construction material, has also been utilized as a co-adsorbent along with the surface-fluorinated m-TiO₂ in the formulation to enhance the photocatalytic performance. The surface-fluorinated m-TiO₂, calcined dolomite, and the formulated self-cleaning cement were characterized using diffuse reflectance spectroscopy (DRS), X-ray diffraction analysis (XRD), field emission scanning electron microscopy (FE-SEM), energy dispersive X-ray spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), BET (Brunauer-Emmett-Teller) surface area analysis, and energy dispersive X-ray fluorescence spectrometry (EDXRF). The self-cleaning property of the as-prepared cement was evaluated using the methylene blue (MB) test, its depolluting ability through a continuous NOx removal test, and its antimicrobial activity by the zone-of-inhibition method. The self-cleaning cement, obtained by uniform mixing of 87% clinker, 10% calcined dolomite, and 3% surface-fluorinated m-TiO₂, showed a remarkable self-cleaning property, providing 53.9% degradation of the coated MB dye. It also demonstrated a noteworthy depolluting ability by removing 5.5% of NOx from the air, and the inactivation of B. subtilis bacteria in the presence of light confirmed its significant antimicrobial property. These self-cleaning, depolluting, and antimicrobial results are attributed to the synergetic effect of surface-fluorinated m-TiO₂ and calcined dolomite in the cement matrix. The present study opens an idea and a route for further research on the facile and economical formulation of self-cleaning cement.
Keywords: microsized titanium dioxide (m-TiO₂), self-cleaning cement, photocatalysis, surface fluorination
Procedia PDF Downloads 171
226 Hydrocarbons and Diamondiferous Structures Formation in Different Depths of the Earth Crust
Authors: A. V. Harutyunyan
Abstract:
The results of investigating rocks at high pressures and temperatures have revealed the intervals of changes in seismic wave velocities and density, as well as some processes taking place in the rocks. In serpentinized rocks, abrupt changes in seismic wave velocities and density have been recorded as a consequence of dehydration: hydrogen-bearing components are released and combine with carbon-bearing components, and as a result hydrocarbons are formed, while the investigated samples are melted. The geofluids and hydrocarbons then migrate into the upper horizons of the Earth's crust along deep faults, where they differentiate and accumulate in the jointed rocks of the faults and in layers with collecting properties. Beneath the majority of hydrocarbon deposits, magmatic centers and deep faults are recorded at a certain depth. The investigation results for serpentinized rocks, together with numerous geological-geophysical factual data, suggest that hydrocarbons are mainly formed both in the offshore parts of the oceans and at different depths of the continental crust. Experiments have also shown that the dehydration of serpentinized rocks is accompanied by an explosion, with an instantaneous increase in pressure and temperature, melting the studied rocks. According to numerous publications, hydrocarbons and diamonds are formed in the upper part of the mantle, at depths of 200-400 km, and rise to the upper horizons of the Earth's crust through narrow channels as a consequence of geodynamic processes. However, the genesis of metamorphogenic diamonds, and of diamonds found in lava streams formed within the Earth's crust, remains unclear. Since dehydration generates super-high pressures and temperatures, it is assumed that diamond crystals are formed from carbon-containing components present in the dehydration zone. It can also be assumed that, besides the explosion at dehydration, secondary explosions of the released hydrogen take place.
The process is naturally accompanied by seismic phenomena, causing earthquakes of different magnitudes at the surface. As for the diamondiferous kimberlites, it is well known that the majority of them are located within ancient shields and platforms and are not necessarily connected with deep faults. Kimberlites are formed at shallow locations of dehydrated masses in the Earth's crust and are younger than the ancient rocks they contain: serpentinized basites and ultrabasites, relicts of the paleo-oceanic crust. Sometimes diamonds containing water and hydrocarbons are found, indicating their simultaneous genesis. So, according to the new concept put forward here, geofluids, hydrocarbons, and diamonds are formed simultaneously from serpentinized rocks as a consequence of their dehydration at different depths of the Earth's crust. Based on the proposed concept, we suggest discussing the following: (i) the genesis of gigantic hydrocarbon deposits located in the offshore areas of oceans (North American, Gulf of Mexico, Cuanza-Cameroonian, East Brazilian, etc.) as well as in the continental parts of different mainlands (Canadian-Arctic, Caspian, East Siberian, etc.); (ii) the genesis of metamorphogenic diamonds and of diamonds in lava streams (Guinea-Liberian, Kokchetav, Canadian, Kamchatka-Tolbachik, etc.).
Keywords: dehydration, diamonds, hydrocarbons, serpentinites
Procedia PDF Downloads 341
225 Case Report: Ocular Helminth – In Unusual Site (Lens)
Authors: Chandra Shekhar Majumder, Shamsul Haque, Khondaker Anower Hossain, Rafiqul Islam
Abstract:
Introduction: Ocular helminths are parasites that infect the eye or its adnexa. They can be either motile worms or sessile worms that form cysts. These parasites require two hosts for their life cycle: a definitive host (usually a human) and an intermediate host (usually an insect). While there have been reports of ocular helminths infecting various structures of the eye, including the anterior chamber and the subconjunctival space, there is no previous record of such a case involving the lens. Research Aim: The aim of this case report is to present a rare case of ocular helminth infection in the lens and to contribute to the understanding of this unusual site of infection. Methodology: This study is a case report presenting the details and findings of an 80-year-old retired policeman who presented with severe pain, redness, and vision loss in the left eye. The data were collected through clinical examination and the medical records of the patient, and the findings are presented descriptively; no statistical analysis was conducted. Case Report: An 80-year-old retired policeman attended the OPD of Faridpur Medical College Hospital with complaints of severe pain, redness, and gross dimness of vision in the left eye for 5 days. He had a history of diabetes mellitus and hypertension for 3 years. On examination of the left eye, visual acuity was PL only, with moderate ciliary congestion, KP 2+, cells 2+, and posterior synechiae from the 5 to 7 o'clock positions. The lens was opaque, and a thread-like helminth was found under the anterior part of the lens; the worm was moving and changing its position during examination. On examination of the right eye, visual acuity was 6/36 unaided, 6/18 with pinhole, with lental opacity; slit-lamp and fundus examinations were within normal limits. The patient was admitted to Faridpur Medical College Hospital, and the diabetes mellitus was controlled with insulin.
ICCE with PI was done on the day of admission under depomedrol coverage, and the helminth was recovered from the lens. It was thread-like, about 5 to 6 mm in length and 1 mm in width, and pinkish in colour. At follow-up after 7 days, VA was HM; mild ciliary congestion and a few KPs and cells were present, and the media was hazy due to vitreous opacity. The worm was sent to the Department of Parasitology, NIPSOM, Dhaka, for identification. Theoretical Importance: This case report contributes to the existing literature on ocular helminth infections by reporting a unique case involving the lens. It highlights the need for further research to understand the mechanism by which helminths enter the lens. Conclusion: To the best of our knowledge, this is the first reported case of ocular helminth infection in the lens. The presence of the helminth in the lens raises interesting questions regarding its pathogenesis and entry mechanism, and further study is needed to explore these aspects. Ophthalmologists and parasitologists should be aware of the possibility of ocular helminth infections in unusual sites such as the lens.
Keywords: helminth, lens, ocular, unusual
Procedia PDF Downloads 452
224 A Comparative Assessment of Information Value and Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surroundings, Himachal Pradesh, India
Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit
Abstract:
Landslide is a geomorphic process that plays an essential role in hill-slope and long-term landscape evolution. However, its abrupt nature and the catastrophic forces involved can have undesirable socio-economic impacts, such as substantial economic losses, fatalities, and ecosystem, geomorphologic, and infrastructure disturbances. In the Himalayan belt, the estimated fatality rate due to landslides is approximately 1 person/100 sq. km, and the average economic loss exceeds 550 crores/year. This study presents a comparative performance assessment of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The more accurate of the resulting landslide susceptibility maps (LSMs) could be used for land-use planning to prevent future losses. Dharamshala, part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM’s Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution linear imaging self-scanning (LISS IV) data. Thematic maps of the parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in a GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use/land cover, and geology. LSMs were prepared using the information value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method in which information values were calculated as the ratio of the landslide pixel density per factor class (Si/Ni) to the overall landslide pixel density per parameter (S/N).
Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a ‘mean and neighbour’ strategy for constructing the fuzzifier (input) and defuzzifier (output) membership function (MF) structure, with the frequency ratio (FR) method used for formulating the if-then rules. Two types of membership structures were utilized: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). The LSIs for BG and TT were obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were validated spatially and statistically. The validation results showed that, in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate.
Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique
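The information value computation described in the abstract can be sketched as follows. This is a minimal illustration only: the per-class pixel counts are hypothetical placeholders (the study itself reports 209 landslide locations, but not these class tallies), and the natural log of the density ratio is a common convention in information value studies rather than a detail confirmed by the abstract.

```python
import numpy as np

# Hypothetical pixel counts for one causative parameter (e.g. slope angle
# reclassified into 4 classes). Values are illustrative, not from the study.
landslide_px = np.array([10, 60, 100, 39])       # Si: landslide pixels per class
class_px     = np.array([500, 1200, 1500, 800])  # Ni: total pixels per class

S = landslide_px.sum()  # S: total landslide pixels for the parameter
N = class_px.sum()      # N: total pixels for the parameter

# Information value per class: ratio of the class landslide density (Si/Ni)
# to the overall landslide density (S/N); the natural log is often applied.
info_val = np.log((landslide_px / class_px) / (S / N))

# In GIS, each parameter raster is reclassified with its information values,
# and the reclassified rasters are summed to give the susceptibility index (LSI).
print(np.round(info_val, 3))
```

Classes denser in landslides than the map as a whole get positive values; sparser classes get negative values, so summing across parameters ranks susceptibility.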
Procedia PDF Downloads 128
223 Algorithmic Obligations: Proactive Liability for AI-Generated Content and Copyright Compliance
Authors: Aleksandra Czubek
Abstract:
As AI systems increasingly shape content creation, existing copyright frameworks face significant challenges in determining liability for AI-generated outputs. Current legal discussions largely focus on who bears responsibility for infringing works, be it developers, users, or entities benefiting from AI outputs. This paper introduces a novel concept of algorithmic obligations, proposing that AI developers be subject to proactive duties that ensure their models prevent copyright infringement before it occurs. Building on principles of obligations law traditionally applied to human actors, the paper suggests a shift from reactive enforcement to proactive legal requirements. AI developers would be legally mandated to incorporate copyright-aware mechanisms within their systems, turning optional safeguards into enforceable standards. These obligations could vary in implementation across international, EU, UK, and U.S. legal frameworks, creating a multi-jurisdictional approach to copyright compliance. This paper explores how the EU’s existing copyright framework, exemplified by the Copyright Directive (2019/790), could evolve to impose a duty of foresight on AI developers, compelling them to embed mechanisms that prevent infringing outputs. By drawing parallels to GDPR’s “data protection by design,” a similar principle could be applied to copyright law, where AI models are designed to minimize copyright risks. In the UK, post-Brexit text and data mining exemptions are seen as pro-innovation but pose risks to copyright protections. This paper proposes a balanced approach, introducing algorithmic obligations to complement these exemptions. AI systems benefiting from text and data mining provisions should integrate safeguards that flag potential copyright violations in real time, ensuring both innovation and protection. In the U.S., where copyright law focuses on human-centric works, this paper suggests an evolution toward algorithmic due diligence. 
AI developers would have a duty similar to product liability, ensuring that their systems do not produce infringing outputs, even if the outputs themselves cannot be copyrighted. This framework introduces a shift from post-infringement remedies to preventive legal structures, in which developers actively mitigate risks. The paper also breaks new ground by addressing obligations surrounding the training data of large language models (LLMs). Currently, training data is often treated under exceptions such as the EU’s text and data mining provisions or U.S. fair use. However, this paper proposes a proactive framework in which developers are obligated to verify and document the legal status of their training data, ensuring it is licensed or otherwise cleared for use. In conclusion, this paper advocates for an obligations-centered model that shifts AI-related copyright law from reactive litigation to proactive design. By holding AI developers to a heightened standard of care, this approach aims to prevent infringement at its source, addressing both the outputs of AI systems and the training processes that underlie them.
Keywords: IP, technology, copyright, data, infringement, comparative analysis
Procedia PDF Downloads 20