Search results for: dynamic algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7014

144 Comparison of Artificial Neural Networks and Statistical Classifiers in Olive Sorting Using Near-Infrared Spectroscopy

Authors: İsmail Kavdır, M. Burak Büyükcan, Ferhat Kurtulmuş

Abstract:

Table olives are a valuable product, especially in Mediterranean countries, and are usually consumed after a fermentation process. Defects that occur naturally or result from impacts while the olives are still fresh may become more distinct after processing. Defective olives are undesirable in both the table olive and olive oil industries, as they affect final product quality and reduce market prices considerably. It is therefore critical to sort table olives before or even after processing according to their quality and surface defects. However, manual sorting has many drawbacks, such as high cost, subjectivity, tediousness and inconsistency. Quality criteria for green olives were taken as color and freedom from mechanical defects, wrinkling, surface blemishes and rotting. This study aimed to classify fresh table olives using different classifiers applied to NIR spectroscopy readings and to compare those classifiers. For this purpose, green olives (Ayvalik variety) were classified according to their surface condition (defect-free, bruised, or with fly defect) using FT-NIR spectroscopy and classification algorithms such as artificial neural networks, ident and cluster. A Bruker multi-purpose analyzer (MPA) FT-NIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used for spectral measurements. The spectrometer was equipped with InGaAs detectors (TE-InGaAs internal for reflectance and RT-InGaAs external for transmittance) and a 20-watt high-intensity tungsten–halogen NIR light source. Reflectance measurements were performed with a fiber optic probe (type IN 261) covering the wavelengths between 780 and 2500 nm, while transmittance measurements were performed between 800 and 1725 nm. Thirty-two scans were acquired for each reflectance spectrum in about 15.32 s, while 128 scans were obtained for transmittance in about 62 s. Resolution was 8 cm⁻¹ for both spectral measurement modes. Instrument control was performed using OPUS software (Bruker Optik GmbH, Ettlingen, Germany). Classification was performed using three classifiers: backpropagation neural networks, and the ident and cluster classification algorithms. For these applications, the Neural Network toolbox in Matlab and the ident and cluster modules in OPUS software were used. Classifications considered different scenarios: two quality conditions at once (good vs. bruised, good vs. fly defect) and three quality conditions at once (good, bruised and fly defect). Two spectrometer readings were used in the classification applications: reflectance and transmittance. Classification success rates obtained using the artificial neural network algorithm in discriminating good olives from bruised olives, from olives with fly defect, and from the group including both bruised and fly-defected olives ranged between 97 and 99%, 61 and 94%, and 58.67 and 92%, respectively. On the other hand, classification results obtained for discriminating good olives from bruised ones and for discriminating good olives from fly-defected olives using the ident method ranged between 75-97.5% and 32.5-57.5%, respectively; results obtained for the same classification applications using the cluster method ranged between 52.5-97.5% and 22.5-57.5%.
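
As an illustration of the kind of classification workflow described above, the minimal sketch below trains a backpropagation neural network on NIR spectra using scikit-learn in Python; it is not the authors' Matlab/OPUS implementation, and the data shapes and class labels are placeholders.

```python
# Minimal sketch (not the authors' implementation): a backpropagation
# neural network separating defect-free olives from bruised ones using
# NIR spectra. Placeholder data: one row per olive, one column per band.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((120, 500))        # placeholder reflectance spectra
y = np.repeat([0, 1], 60)         # 0 = defect-free, 1 = bruised

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Standardize the spectra, then train a small backpropagation network
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print(f"classification success rate: {model.score(X_test, y_test):.1%}")
```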

Keywords: artificial neural networks, statistical classifiers, NIR spectroscopy, reflectance, transmittance

Procedia PDF Downloads 223
143 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models. Our model uses queuing-theory parameters to describe that transition. It associates MAPE-K related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time; such limits are determined by business benefits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot be finished in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes computing all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenario for reactive systems; the following scenarios test response times, resource consumption and business costs. The first scenario is a burst-load scenario. All methodologies will discard requests if the burst is rapid enough; this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load with less business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
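
The sketch below illustrates, in a strongly simplified form, the kind of reactive scaling decision discussed above: relating the incoming request rate, the per-instance capacity and a cooldown period within one MAPE-K style plan step. It is an illustrative reduction, not the paper's full model; all names and numbers are hypothetical.

```python
# Illustrative reactive-scaling step (hypothetical interface, not the paper's model).
import math
import time

class ReactiveScaler:
    def __init__(self, capacity_per_instance, cooldown_s, headroom=0.8):
        self.capacity = capacity_per_instance  # requests/s one instance can handle
        self.cooldown = cooldown_s             # minimum time between scaling actions
        self.headroom = headroom               # target utilisation to avoid saturation
        self.last_action = 0.0

    def plan(self, incoming_rate, running_instances, now=None):
        """Return the number of instances to add (+) or release (-)."""
        now = time.monotonic() if now is None else now
        if now - self.last_action < self.cooldown:
            return 0                           # still cooling down: do nothing
        needed = math.ceil(incoming_rate / (self.capacity * self.headroom))
        delta = needed - running_instances
        if delta != 0:
            self.last_action = now
        return delta

scaler = ReactiveScaler(capacity_per_instance=50.0, cooldown_s=30.0)
print(scaler.plan(incoming_rate=430.0, running_instances=6))  # -> 5 more instances
```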

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 68
142 Avoidance of Brittle Fracture in Bridge Bearings: Brittle Fracture Tests and Initial Crack Size

Authors: Natalie Hoyer

Abstract:

Bridges in both roadway and railway systems depend on bearings to ensure extended service life and functionality. These bearings enable proper load distribution from the superstructure to the substructure while permitting controlled movement of the superstructure. The design of bridge bearings, according to Eurocode DIN EN 1337 and the relevant sections of DIN EN 1993, increasingly requires the use of thick plates, especially for long-span bridges. However, these plate thicknesses exceed the limits specified in the national appendix of DIN EN 1993-2. Furthermore, compliance with DIN EN 1993-1-10 regulations regarding material toughness and through-thickness properties necessitates further modifications. Consequently, these standards cannot be directly applied to the selection of bearing materials without supplementary guidance and design rules. In this context, a recommendation was developed in 2011 to regulate the selection of appropriate steel grades for bearing components. Prior to the initiation of the research project underlying this contribution, this recommendation had only been available as a technical bulletin; since July 2023, it has been integrated into guideline 804 of the German railway. However, recent findings indicate that certain bridge-bearing components are exposed to high fatigue loads, which must be considered in structural design, material selection, and calculations. Therefore, the German Centre for Rail Traffic Research commissioned a research project with the objective of proposing an extension of the current standards so as to enable a sufficient choice of steel material for bridge bearings to avoid brittle fracture, even for thick plates and components subjected to specific fatigue loads. The results obtained from theoretical considerations, such as finite element simulations and analytical calculations, are validated through large-scale component tests. Additionally, experimental observations are used to calibrate the calculation models and modify the input parameters of the design concept. Within the large-scale component tests, a brittle failure is artificially induced in a bearing component. For this purpose, an artificially generated initial defect is introduced into the specimen at the previously defined hotspot using spark erosion. Then, a dynamic load is applied until the crack initiation process occurs, to achieve realistic conditions in the form of a sharp notch similar to a fatigue crack. This initiation process continues until the crack length reaches a predetermined size. Afterward, the actual test begins, which requires cooling the specimen with liquid nitrogen until a temperature is reached at which brittle fracture failure is expected. In the next step, the component is subjected to a quasi-static tensile test until brittle fracture occurs. The proposed paper will present the latest research findings, including the results of the conducted component tests and the derived definition of the initial crack size in bridge bearings.

Keywords: bridge bearings, brittle fracture, fatigue, initial crack size, large-scale tests

Procedia PDF Downloads 0
141 Exposing The Invisible

Authors: Kimberley Adamek

Abstract:

According to the Council on Tall Buildings, there has been a rapid increase in the construction of tall or “megatall” buildings over the past two decades. Simultaneously, the New England Journal of Medicine has reported a steady increase in climate-related natural disasters since the 1970s, the eastern expansion of the USA's infamous Tornado Alley being just one of many current issues. In the future, this could mean that tall buildings, which already guide high-speed winds down to pedestrian levels, will have to withstand stronger forces and protect pedestrians in more extreme ways. Although many projects are required to be verified within wind tunnels, and a handful of cities such as San Francisco have included wind testing within building code standards, there are still many examples where wind is only considered for basic loading. This typically results in increased structural expense and unwanted mitigation strategies that are proposed late within a project. When building cities, architects rarely consider how each building alters the invisible patterns of wind and how these alterations affect other areas in different ways later on. It is not until these forces move, overpower and even destroy cities that people take notice. For example, towers have caused winds to blow objects into people (Walkie-Talkie Tower, Leeds, England), caused building parts to vibrate and produce loud humming noises (Beetham Tower, Manchester), and created wind-tunnel effects in streets, among many other issues. Alternatively, there are towers which have used their form to naturally draw in air and ventilate entire facilities in order to eliminate the need for costly HVAC systems (The Met, Thailand), or to increase wind speeds to generate electricity (Bahrain Tower, Dubai). Wind and weather exist and affect all parts of the world through science, health, war, infrastructure, catastrophes, tourism, shopping, media and materials. In partnership with the leading wind engineering company RWDI, a series of tests, images and animations documenting discovered interactions of different building forms with wind will be collected to emphasize the possibilities of wind use to architects. A site within San Francisco (chosen for its increasing tower development, consistent wind conditions and existing strict wind comfort criteria) will host a final design. Iterations of this design will be tested within the wind tunnel and with computational fluid dynamics systems, which will expose, utilize and manipulate wind flows to create new forms, technologies and experiences. Ultimately, this thesis aims to question the extent to which the environment is allowed to permeate building enclosures, uncover new programmatic possibilities for wind in buildings, and push the boundaries of working with the wind to ensure the development and safety of future cities. This investigation will improve and expand upon the traditional understanding of wind in order to give architects, wind engineers and the general public the ability to broaden their scope and productively utilize this living phenomenon that everyone constantly feels but cannot see.

Keywords: wind engineering, climate, visualization, architectural aerodynamics

Procedia PDF Downloads 338
140 Mesenchymal Stem Cells (MSC)-Derived Exosomes Could Alleviate Neuronal Damage and Neuroinflammation in Alzheimer’s Disease (AD) as Potential Therapy-Carrier Dual Roles

Authors: Huan Peng, Chenye Zeng, Zhao Wang

Abstract:

Alzheimer’s disease (AD) is an age-related neurodegenerative disease that is a leading cause of dementia syndromes and has become a huge burden on society and families. The main pathological features of AD involve excessive deposition of β-amyloid (Aβ) and Tau proteins in the brain, resulting in loss of neurons, expansion of neuroinflammation, and cognitive dysfunction in patients. Researchers have found effective drugs to clear these erroneously accumulated proteins from the brain or to slow the loss of neurons, but their direct administration faces key bottlenecks such as single-drug limitations, rapid blood clearance, the impenetrable blood-brain barrier (BBB), and poor ability to target tissues and cells. Therefore, we are committed to seeking a suitable and efficient delivery system. Inspired by the possibility that exosomes may be involved in the secretion and transport of many signaling molecules or proteins in the brain, exosomes have attracted extensive attention as natural nanoscale drug carriers. We selected exosomes derived from bone marrow mesenchymal stem cells (MSC-EXO), which have low immunogenicity, and exosomes derived from hippocampal neurons (HT22-EXO), which may have excellent homing ability, to overcome the deficiencies of oral or injectable routes and to bypass the BBB through nasal administration, and we evaluated their delivery ability and effect on AD. First, MSCs and HT22 cells were isolated and cultured, and MSCs were identified by microimaging and flow cytometry. Then MSC-EXO and HT22-EXO were obtained by gradient centrifugation and qEV SEC separation columns, and a series of physicochemical characterizations were performed by transmission electron microscopy, western blotting, nanoparticle tracking analysis and dynamic light scattering. Next, exosomes labeled with a lipophilic fluorescent dye were administered to WT mice and APP/PS1 mice to obtain fluorescence images of various organs at different times. Finally, APP/PS1 mice were administered the two exosome preparations intranasally 20 times over 40 days, 20 μL each time. Behavioral analysis and pathological section analysis of the hippocampus were performed after the experiment. The results showed that MSC-EXO and HT22-EXO were successfully isolated and characterized, and they had good biocompatibility. MSC-EXO showed excellent brain enrichment in APP/PS1 mice after intranasal administration and could reduce neuronal damage and inflammation levels in the hippocampus of APP/PS1 mice, with an improvement effect significantly better than that of HT22-EXO. Intranasal administration of the two exosomes did not cause depressive or anxiety-like phenotypes in APP/PS1 mice; however, it neither significantly improved the short-term or spatial learning and memory ability of APP/PS1 mice nor had a significant effect on the Aβ plaque content in the hippocampus, which also means that MSC-EXO could exploit their own advantages in combination with other drugs that clear Aβ plaques. The possibility of realizing a highly effective, non-invasive synergistic treatment for AD provides new strategies and ideas for clinical research.

Keywords: Alzheimer’s disease, exosomes derived from mesenchymal stem cell, intranasal administration, therapy-carrier dual roles

Procedia PDF Downloads 23
139 Flourishing in Marriage among Arab Couples in Israel: The Impact of Capitalization Support and Accommodation on Positive and Negative Affect

Authors: Niveen Hassan-Abbas, Tammie Ronen-Rosenbaum

Abstract:

Background and purpose: 'Flourishing in marriage' is a concept that refers to married individuals’ high positivity ratio regarding their marriage, namely more reported positive than negative emotions. The study proposes a different approach to marriage, one which emphasizes the individual as largely responsible for his or her personal flourishing within the marriage. Accordingly, the individual's desire to preserve and strengthen the marriage largely determines marital behavior in a way that contributes to the success of the marriage (Actor Effect), regardless of the partner's contribution to it (Partner Effect). Another assumption was that flourishing in marriage could be achieved by two separate processes, where capitalization support increases positive evaluations of the marriage and accommodation decreases negative ones. A theoretical model was constructed whereby individuals who were committed to their marriage were hypothesized to employ self-control skills by way of two dynamic processes. First, an individual’s higher degree of 'capitalization supportive responses' - supportive responses to the partner's sharing of positive personal experiences - was hypothesized to increase one’s positive evaluations of marriage and thereby one’s positivity ratio. Second, an individual’s higher degree of 'accommodation' responses - the ability during conflict situations to control the impulse to respond destructively and instead respond constructively - was hypothesized to decrease one’s negative evaluations of marriage and thereby increase one’s positivity ratio. Methods: Participants were 156 heterosexual Arab couples from different regions of Israel. The mean duration of marriage was 10.19 years (SD=7.83); mean ages were 31.53 years for women (SD=8.12) and 36.80 years for men (SD=8.07), and mean years of education were 13.87 for women (SD=2.84) and 13.23 for men (SD=3.45). Each participant completed seven questionnaires: socio-demographic, self-control skills, commitment, capitalization support, accommodation, marital quality, and positive and negative affect. Using statistical analyses adapted to a dyadic research design, descriptive statistics were first calculated and preliminary tests were performed. Next, a dyadic model based on the Actor-Partner Interdependence Model (APIM) was tested using structural equation modeling (SEM). Results: The assumption that flourishing in marriage can be achieved by two processes was confirmed. All of the Actor Effect hypotheses were confirmed. Participants with higher self-control used more capitalization support and accommodation responses. Among husbands, unlike wives, these correlations were stronger when the individual's commitment level was higher. More capitalization supportive responses were found to increase positive evaluations of marriage, and greater spousal accommodation was found to decrease negative evaluations of marriage. High positive evaluations and low negative evaluations were found to increase the positivity ratio. Contrary to expectation, four Partner Effect paths were found to be significant. Conclusions and Implications: The present findings coincide with the positive psychology approach that emphasizes human strengths. The uniqueness of this study lies in its proposal that individuals are largely responsible for their personal flourishing in marriage. This study demonstrated that marital flourishing can be achieved by two processes, where capitalization increases the positive and accommodation decreases the negative. Practical implications include the need to construct interventions that enhance self-control skills for employing capitalization responsiveness and accommodation processes.

Keywords: accommodation, capitalization support, commitment, flourishing in marriage, positivity ratio, self-control skills

Procedia PDF Downloads 137
138 Horizontal Stress Magnitudes Using Poroelastic Model in Upper Assam Basin, India

Authors: Jenifer Alam, Rima Chatterjee

Abstract:

The Upper Assam sedimentary basin is one of the oldest commercially producing basins of India. Being in a tectonically active zone, estimation of tectonic strain and stress magnitudes has wide application in hydrocarbon exploration and exploitation. This east-northeast–west-southwest trending shelf-slope basin encompasses the Brahmaputra valley, extending from the Mikir Hills in the southwest to the Naga foothills in the northeast. The Assam Shelf, lying between the Main Boundary Thrust (MBT) and the Naga Thrust, is comparatively free from thrust tectonics and exhibits a normal faulting mechanism. The study area is bounded by the MBT and the Main Central Thrust in the northwest. The Belt of Schuppen in the southeast, bordered by the Naga and Disang thrusts, marks the lower limit of the study area. The entire Assam basin shows low-level seismicity compared to other regions of northeast India. Pore pressure (PP), vertical stress magnitude (SV) and horizontal stress magnitudes have been estimated from two wells, N1 and T1, located in Upper Assam. N1 is located in the Assam gap below the Brahmaputra river, while T1 lies in the Belt of Schuppen. N1 penetrates geological formations from the top Alluvium through Dhekiajuli, Girujan, Tipam, Barail, Kopili, Sylhet and Langpur to the granitic basement, while T1, in the thrusted zone, crosses the Girujan Suprathrust, Tipam Suprathrust and Barail Suprathrust to reach the Naga Thrust. A normal compaction trend is drawn through shale points in both wells for estimation of PP using the conventional Eaton sonic equation with an exponent of 1.0, which is validated with Modular Dynamic Tester and mud weight data. The observed pore pressure gradient ranges from 10.3 MPa/km to 11.1 MPa/km. SV has a gradient from 22.20 to 23.80 MPa/km. Minimum and maximum horizontal principal stress (Sh and SH) magnitudes under isotropic conditions are determined using a poroelastic model. This approach determines biaxial tectonic strain utilizing static Young's modulus, Poisson's ratio, SV, PP, leak-off test (LOT) data and SH derived from breakouts using prior information on unconfined compressive strength. Breakout-derived SH information is used for obtaining tectonic strain due to the lack of measured SH data from minifrac or hydrofracturing. Tectonic strain varies from 0.00055 to 0.00096 along the x direction and from -0.0010 to 0.00042 along the y direction. After obtaining tectonic strains at each well, the principal horizontal stress magnitudes are calculated from the linear poroelastic model. The Sh and SH gradients in the normal faulting region are 12.5 and 16.0 MPa/km, while in the thrust-faulted region the gradients are 17.4 and 20.2 MPa/km, respectively. The model-predicted Sh and SH match well with the LOT data and the breakout-derived SH data in both wells. It is observed from this study that the stress regime SV>SH>Sh, corresponding to normal faulting, prevails in the shelf region, while near the Naga foothills the regime changes to SH≈SV>Sh. Hence this model is a reliable tool for predicting stress magnitudes from well logs under an active tectonic regime in the Upper Assam Basin.
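
For readers unfamiliar with the poroelastic approach, the sketch below evaluates a common textbook form of the linear poroelastic horizontal-stress equations from the vertical stress, pore pressure, elastic moduli and the two biaxial tectonic strains; the authors' calibrated formulation and input values may differ, and the numbers shown are illustrative only.

```python
# A common textbook form of the linear poroelastic horizontal-stress model
# (not necessarily the authors' exact formulation). Stresses in MPa, E in MPa,
# strains dimensionless, alpha = Biot coefficient. Values are illustrative.
def poroelastic_horizontal_stress(Sv, Pp, E, nu, eps_h, eps_H, alpha=1.0):
    """Return (Sh, SH) from vertical stress, pore pressure, moduli and strains."""
    base = nu / (1.0 - nu) * (Sv - alpha * Pp) + alpha * Pp   # uniaxial-strain term
    Sh = base + E / (1.0 - nu**2) * (eps_h + nu * eps_H)      # tectonic strain terms
    SH = base + E / (1.0 - nu**2) * (eps_H + nu * eps_h)
    return Sh, SH

# Illustrative values only (roughly 3 km depth, E = 20 GPa expressed in MPa)
Sh, SH = poroelastic_horizontal_stress(
    Sv=69.0, Pp=32.0, E=2.0e4, nu=0.25, eps_h=0.0002, eps_H=0.0007)
print(f"Sh = {Sh:.1f} MPa, SH = {SH:.1f} MPa")
```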

Keywords: Eaton, strain, stress, poroelastic model

Procedia PDF Downloads 182
137 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey that was carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing the prediction of certain quantities (e.g. pressure, velocities, flow heights, runout lengths) of the avalanche flow. Because of the highly variable regimes of the flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies to the fluid dynamical laws of other materials are drawn. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations into an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, model development is compelled to introduce further simplifications and the related uncertainties. In the light of these issues, many questions arise about avalanche simulations, their assets and drawbacks, the potential for improvement, and their application in practice. To address these questions, a survey among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries was conducted. In the questionnaire, special attention was given to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty and the reliability of the results. Furthermore, the degree to which a simulation result influences decision making in hazard assessment was examined. A discrepancy was found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations and silent witnesses, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on the way models are used could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 296
136 Spatio-Temporal Dynamic of Woody Vegetation Assessment Using Oblique Landscape Photographs

Authors: V. V. Fomin, A. P. Mikhailovich, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Ground-level landscape photos can be used as a source of objective data on woody vegetation and vegetation dynamics. We propose a method for processing, analyzing, and presenting ground photographs which involves the following steps: 1) the researcher forms a holistic representation of the study area as a set of overlapping ground-level landscape photographs; 2) characteristics of the landscape, objects, and phenomena present in the photographs are defined or obtained; 3) new textual descriptions and annotations for the ground-level landscape photographs are created, or existing ones supplemented; 4) single or multiple ground-level landscape photographs are used to develop specialized geoinformation layers, schematic maps or thematic maps; 5) quantitative data describing both the images as a whole and the displayed objects and phenomena are determined using algorithms for automated image analysis. It is suggested to match each photo with a polygonal geoinformation layer, which is a sector consisting of areas corresponding to the parts of the landscape visible in the photo. Calculation of visibility areas is performed in a geoinformation system within a sector using a digital model of the study area relief and visibility analysis functions. Superposition of the visibility sectors corresponding to various camera viewpoints allows landscape photos to be matched with each other to create a complete and coherent representation of the space in question. It is suggested to mark user-defined objects or phenomena on the images and then superimpose them on the visibility sector in the form of map symbols. The spatial superposition of geoinformation layers over the visibility sector creates opportunities for image geotagging using quantitative data obtained from raster or vector layers within the sector, with the ability to generate annotations in natural language. The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal layers of visibility sectors for a large number of different photography points are topologically superimposed, a layer is formed showing which parts of the entire study area are visible in the photographs. As a result of this overlapping of sectors, areas that do not appear in any photo are assessed as gaps. From the results of this procedure, it becomes possible to determine which photos display a specific area and from which photography points it is visible. This information may be obtained either as a query on the map or as a query against the attribute table of the layer. The method was tested using repeated photos taken from forty camera viewpoints located on the Ray-Iz mountain massif (Polar Urals, Russia) from 1960 until 2023. It has been successfully used in combination with other ground-based and remote sensing methods of studying the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education project No. FEUG-2023-0002 (image representation) and Russian Science Foundation project No. 24-24-00235 (automated textual description).
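
The superposition idea can be illustrated with a small sketch that stacks per-viewpoint boolean visibility rasters and counts, for every cell, how many photographs show it; in the actual workflow these rasters come from viewshed analysis on a digital relief model in a GIS, whereas here the data are synthetic.

```python
# Sketch of superimposing visibility sectors with plain NumPy arrays
# (a stand-in for GIS viewshed layers); all data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_viewpoints, rows, cols = 40, 200, 200

# visible[k, i, j] is True where the k-th camera viewpoint can see cell (i, j)
visible = rng.random((n_viewpoints, rows, cols)) > 0.7

coverage = visible.sum(axis=0)    # number of photographs showing each cell
gaps = coverage == 0              # cells not visible in any photograph

print("cells never photographed:", int(gaps.sum()))
# Query: which viewpoints show a given cell of interest?
i, j = 120, 80
print("viewpoints covering cell:", np.flatnonzero(visible[:, i, j]))
```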

Keywords: woody vegetation, repeated photographs

Procedia PDF Downloads 26
135 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application

Authors: Ramesh P., Aby Joseph

Abstract:

Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages such as the avoidance of long DC cables, elimination of module mismatch losses, minimization of partial shading effects, and improved safety and flexibility in installations. Due to the above-stated benefits, renewable energy technology with solar PV micro inverters is becoming more widespread in village nano grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, this concept can also be extended to urban installations with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies and prototyping of a 300 Wₚ single-phase PV micro inverter. It investigates two different topologies for PV micro inverters, based on the one hand on a single-stage flyback/forward PV micro inverter configuration and on the other hand on a double-stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. This work covers power decoupling techniques to reduce the input filter capacitor size needed to buffer the double-line (100 Hz) ripple energy and to eliminate the use of electrolytic capacitors. The propagation of the double-line oscillation reflected back to the PV module will affect the Maximum Power Point Tracking (MPPT) performance, and the grid current will also be distorted. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of this double-line ripple oscillation to the PV side, improving MPPT performance, and to the grid side, improving current quality. The power hardware topology accepts wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics and film capacitors with a long lifespan. The digital controller hardware platform, with built-in external peripheral interfaces, is developed using the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in C and was developed using the Code Composer Studio Integrated Development Environment (IDE). In this work, the prototype hardware for the single-phase PV micro inverter with the double-stage configuration was developed, and a comparative analysis of the above-mentioned configurations, together with experimental results, will be presented.
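
For context, the sketch below shows a generic perturb-and-observe MPPT step; it is not the independent, ripple-rejecting MPPT algorithm developed in the paper, and the interface names and values are hypothetical.

```python
# Generic perturb-and-observe MPPT step (illustration only; the paper's
# ripple-rejecting MPPT algorithm is not reproduced here).
class PerturbAndObserve:
    def __init__(self, v_step=0.5):
        self.v_step = v_step      # perturbation size in volts
        self.prev_p = 0.0
        self.direction = +1       # +1: increase Vref, -1: decrease Vref

    def step(self, v_pv, i_pv):
        """Return the next PV voltage reference from measured voltage and current."""
        p = v_pv * i_pv
        if p < self.prev_p:       # power dropped: reverse the perturbation direction
            self.direction = -self.direction
        self.prev_p = p
        return v_pv + self.direction * self.v_step

mppt = PerturbAndObserve()
v_ref = mppt.step(v_pv=34.0, i_pv=8.2)   # e.g. after averaging out the 100 Hz ripple
print(f"next PV voltage reference: {v_ref:.1f} V")
```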

Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling

Procedia PDF Downloads 110
134 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction

Authors: M. D. Haneef, R. B. Randall, Z. Peng

Abstract:

Journal bearings used in IC engines are prone to premature failures and are likely to fail earlier than the rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in industry for condition monitoring of rotary machinery. However, both techniques involve a great deal of technical expertise, time and cost. Limited literature is available on the application of these techniques for fault detection in reciprocating machinery, due to the complex nature of the impact forces that confounds the extraction of fault signals for vibration-based analysis and wear prediction. This work is an extension of a previous study, in which an engine simulation model was developed using a MATLAB/SIMULINK program, with the engine parameters used in the simulation obtained experimentally from a Toyota 3SFE 2.0-litre petrol engine. Simulated hydrodynamic bearing forces were used to estimate vibration signals, and envelope analysis was carried out to analyze the effect of speed, load and clearance on the vibration response. Three different loads (50/80/110 N·m), three different speeds (1500/2000/3000 rpm), and three different clearances (normal, 2 times and 4 times the normal clearance) were simulated to examine the effect of wear on bearing forces. The magnitude of the squared envelope of the generated vibration signals, though not affected by load, was observed to rise significantly with increasing speed and clearance, indicating the likelihood of augmented wear. In the present study, the simulation model was extended further to investigate the bearing wear behavior resulting from different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine was established first, based on which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation. The essential outputs of interest in this study, critical to determining wear rates, are the tangential velocity and the oil film thickness between the journal and the bearing sleeve, which, if not maintained appropriately, have a detrimental effect on bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and the bearing and thus avoid accelerated wear. A limiting value of 1 µm was used as the minimum oil film thickness needed to prevent contact. The increased wear rate with growing severity of operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the referenced vibration signals. Thus, on the one hand, the developed model demonstrates its capability to explain wear behavior, and on the other hand, it helps to establish a correlation between wear-based and vibration-based analysis. The model therefore provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
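
The wear logic described above can be illustrated with a short sketch that applies Archard's law only when the predicted oil film falls below the 1 µm contact threshold; the wear coefficient, hardness and operating values are assumed for illustration and are not taken from the study.

```python
# Sketch of one Archard wear increment, gated by the oil-film-thickness
# criterion described in the abstract. All numerical values are assumptions.
K_WEAR = 1e-7          # dimensionless Archard wear coefficient (assumed)
HARDNESS = 120e6       # bearing-shell hardness, Pa (assumed)
H_MIN = 1e-6           # minimum oil film thickness before contact, m (from abstract)

def archard_wear_increment(load_N, sliding_speed, dt, film_thickness):
    """Return the wear volume increment (m^3) over one crank-angle time step."""
    if film_thickness >= H_MIN:
        return 0.0                           # full film: no metal-to-metal contact
    sliding_distance = sliding_speed * dt
    return K_WEAR * load_N * sliding_distance / HARDNESS

# Example step: 4 kN bearing force, 6 m/s tangential speed, 0.1 ms time step
dV = archard_wear_increment(4000.0, 6.0, 1e-4, film_thickness=0.6e-6)
print(f"wear volume this step: {dV:.3e} m^3")
```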

Keywords: condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction

Procedia PDF Downloads 287
133 Ways for University to Conduct Research Evaluation: Based on National Research University Higher School of Economics Example

Authors: Svetlana Petrikova, Alexander Yu Kostinskiy

Abstract:

Management of research evaluation at the Higher School of Economics (HSE) originates from the HSE Academic Fund, created in 2004 to facilitate and support academic research and present its results to the international academic community. As a means to motivate applicants, science projects went through a competitive selection process evaluated by a group of experts. The rapid development of HSE, the number of projects submitted to each Academic Fund competition and the need to coordinate the conduct of expert evaluation resulted in the founding of the Office for Research Evaluation in 2013. The Office’s primary objective is the management of research evaluation of science projects. The standards for conducting the evaluation are defined as follows: the exercise of the process approach and the unification of the functioning of the department; the uniformity of the regulatory, organizational and methodological framework; the development of a proper on-line evaluation system; the broad involvement of external Russian and international experts and the renouncement of the use of HSE's own employees; the development of an algorithm to match experts with science projects; the methodical use of open and closed international and Russian databases to extend the expert database; and the transparency of evaluation results - free access to the assessment while keeping experts' confidentiality. The management of research evaluation of projects is based on a single standard, organization and financing. The standard way of conducting research evaluation at HSE is based upon the Regulations on basic principles for research evaluation at HSE. These Regulations have been developed since the establishment of the Office for Research Evaluation and are based on conventional corporate standards for regulatory document management. The management system of research evaluation is implemented on the basis of the process approach. The process approach means deploying work as a process, which is an aggregation of interrelated and interacting activities processing inputs into outputs. Inputs are, firstly, the client asking for the assessment to be conducted and defining the conditions for organizing and carrying out the assessment and, secondly, the applicant with an application suitable for the competition; the output is the assessment delivered to the client. In exercising the process approach, the main parties or subjects of the assessment are determined and the way they interact is clarified. The parties to an expert assessment are: the Ordering Party - the department of the university taking the decision to subject a project to expert assessment; the Providing Party - the department of the university authorized by the Ordering Party to provide such assessment; and the Performing Party - the legal and natural entities that have expertise in the area of research evaluation. Experts assess projects in accordance with the criteria and forms of expert opinion approved by the Ordering Party. The objects of assessment are generally applications or HSE competition project reports. Assessments are mainly deployed for internal needs, i.e., most ordering parties are HSE branches and departments, but assessments can also be conducted for external clients. The financing of research evaluation at HSE is based on the established corporate culture and traditions of HSE.

Keywords: expert assessment, management of research evaluation, process approach, research evaluation

Procedia PDF Downloads 225
132 Study of Formation and Evolution of Disturbance Waves in Annular Flow Using Brightness-Based Laser-Induced Fluorescence (BBLIF) Technique

Authors: Andrey Cherdantsev, Mikhail Cherdantsev, Sergey Isaenkov, Dmitriy Markovich

Abstract:

In annular gas-liquid flow, liquid flows as a film along the pipe walls, sheared by a high-velocity gas stream. The film surface is covered by large-scale disturbance waves, which affect pressure drop and heat transfer in the system and are necessary for the entrainment of liquid droplets from the film surface into the core of the gas stream. Disturbance waves are highly complex, and their properties are affected by numerous parameters. One such aspect is flow development, i.e., the change of flow properties with distance from the inlet. In the present work, this question is studied using the brightness-based laser-induced fluorescence (BBLIF) technique. This method enables simultaneous measurements of local film thickness at a large number of points with high sampling frequency. In the present experiments, the first 50 cm of upward and downward annular flow in a vertical pipe of 11.7 mm i.d. is studied with a temporal resolution of 10 kHz and a spatial resolution of 0.5 mm. Thus, the spatiotemporal evolution of the film surface can be investigated, including scenarios of formation, acceleration and coalescence of disturbance waves. The behaviour of disturbance wave velocity depending on the phase flow rates and downstream distance was investigated. Besides measuring the wave properties, the goal of the work was to investigate the interrelation between disturbance wave properties and integral characteristics of the flow such as interfacial shear stress and the flow rate of the dispersed phase. In particular, it was shown that the initial acceleration of disturbance waves, defined by the value of shear stress, decays linearly with downstream distance. This lack of acceleration, which may even lead to deceleration, is related to liquid entrainment. The flow rate of the dispersed phase grows linearly with downstream distance. During entrainment events, liquid is extracted directly from the disturbance waves, reducing their mass, their area of interaction with the gas shear and, hence, their velocity. The passing frequency of disturbance waves at each downstream position was measured automatically with a new algorithm for identifying the characteristic lines of individual disturbance waves. Scenarios of coalescence of individual disturbance waves were identified. The transition from the initial high-frequency Kelvin-Helmholtz waves appearing at the inlet to highly nonlinear disturbance waves with lower frequency was studied near the inlet using a 3D realisation of the BBLIF method in the same cylindrical channel and in a rectangular duct with a cross-section of 5 mm by 50 mm. It was shown that the initial waves are generally two-dimensional but are promptly broken into localised three-dimensional wavelets. Coalescence of these wavelets leads to the formation of quasi-two-dimensional disturbance waves. Using cross-correlation analysis, the loss and restoration of two-dimensionality of the film surface with downstream distance were studied quantitatively. It was shown that all these processes occur closer to the inlet at higher gas velocities.
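
As an illustration of how a wave velocity can be extracted from such film-thickness records, the sketch below cross-correlates two synthetic signals sampled at 10 kHz and converts the lag of the correlation peak into a velocity; the spacing between measurement points and the signals themselves are assumed, and this is not the authors' characteristic-line identification algorithm.

```python
# Cross-correlation velocity estimate from two synthetic film-thickness records.
import numpy as np

fs = 10_000.0                      # sampling frequency, Hz (as in the abstract)
dx = 0.05                          # distance between measurement points, m (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)

wave = np.exp(-((t - 0.30) / 0.005) ** 2)                    # idealised wave hump
h1 = 0.2 + wave + 0.01 * rng.standard_normal(t.size)         # upstream thickness
h2 = 0.2 + np.roll(wave, 250) + 0.01 * rng.standard_normal(t.size)  # 25 ms later

xcorr = np.correlate(h2 - h2.mean(), h1 - h1.mean(), mode="full")
lag = np.argmax(xcorr) - (t.size - 1)                        # peak lag in samples
velocity = dx / (lag / fs)
print(f"estimated disturbance-wave velocity: {velocity:.2f} m/s")
```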

Keywords: annular flow, disturbance waves, entrainment, flow development

Procedia PDF Downloads 228
131 Basic Education Curriculum in South- South Nigeria: Challenges and Opportunities of Quality Contents in the Second Language Learning

Authors: Catherine Alex Agbor

Abstract:

Modern Nigerian society is dynamic and divided into zones based on economic, political and educational resources often shared across the zones. The six geopolitical zones of Nigeria form a major division of modern Nigeria, created during the regime of President Ibrahim Badamasi Babangida: North Central, North East, North West, South East, South South and South West. The zone used in this study comprises the former South-Eastern State (now Akwa Ibom State and Cross River State), the former Rivers State (now Bayelsa State and Rivers State) and the former Mid-Western Region of Nigeria (now Delta State and Edo State). Many reforms have taken place over time, particularly in the education sector. Education constantly presents new ideas and innovative approaches which facilitate the rapid exchange of knowledge and provide quality basic education for learners. The Federal Government of Nigeria, in accordance with its National Council on Education, directed the Nigerian Educational Research and Development Council to restructure its basic education curriculum with the hope of enabling the nation to meet national and global developmental goals. One of the goals of the 9-year Basic Education Programme is developing in the entire citizenry a strong consciousness for education and a strong commitment to its vigorous promotion. Another is ensuring the acquisition of appropriate levels of literacy, numeracy, manipulative, communicative and life skills, as well as the ethical, moral and civic values needed for laying a solid foundation for lifelong learning. Therefore, this article, at the introductory stage, describes some key issues in Nigeria’s experience with the basic education curriculum. Particular attention is paid to the very recent educational policy of the Nigerian government known as Universal Basic Education, its challenges and what can be done to make the policy achieve its desired objectives. The article then analyzes modern requirements for second language teaching and presents the challenges of second language teaching in Nigeria. Finally, it reports a study which investigated efforts towards the appropriate achievement of quality education in the language classroom in the south-south zone of Nigeria. One fundamental research question was posed on what educational practices can contribute to the current understanding of the structure of the language curriculum. More explicitly, the study was designed to analyze the extent to which quality content contributes to the current understanding of the structure of the school curriculum in the zone. Otherwise stated, it investigated how student-centred educational practices impact students' learning of the French language. One hundred and eighty (180) participants (teachers) were purposively sampled for the study. A qualitative technique was used to elicit information from participants: Focus Group Discussion (FGD). Participants were divided into six groups of 30 teachers each. Group discussions were based mainly on curriculum contents and practices. Information from participants revealed that the curriculum content, among other things, is inadequate and should be re-examined. Recommendations were proffered to support the concrete implementation of basic education in Nigeria.

Keywords: basic education, quality contents, second language, south-south states

Procedia PDF Downloads 204
130 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar

Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo

Abstract:

The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data are developed by physical and statistical models, while ground data are collected by solar radiation measurement stations. Ground data are of high quality; however, they are limited to distributed point locations, with high costs of installation and maintenance for the ground stations. On the other hand, satellite solar radiation data are continuous and available across geographical locations, but they are relatively less accurate than ground data. To exploit the advantages of both, a product has been developed here which provides spatial continuity and higher accuracy than either data source alone. The popular satellite database, the National Solar Radiation Database, NSRDB (PSM V3 model, spatial resolution: 4 km), is chosen here for merging with ground-measured solar radiation measurements in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for the error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% for October, November, and December 2019, respectively, were observed when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS (ESRI). The workflow of the algorithm is based on a combination of regression and kriging methods. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points. A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriging residuals obtained after fitting the semi-variogram model were added to the NSRDB values predicted by the regression model to obtain the final predicted values. The NRMSE values obtained after merging are 1.84%, 1.28%, and 1.81% for October, November, and December 2019, respectively. One more explanatory variable, ground elevation, has been incorporated in the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has proven to be highly accurate. An additional method is also proposed here to generate calibrated maps using the regression and kriging models, and further to use the calibrated model to generate solar radiation maps from the explanatory variable alone when not enough historical ground data are available for long-term analysis. The NRMSE values obtained after comparing the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively.
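
The regression-plus-residual-interpolation idea behind the merge can be sketched as follows; a radial basis function interpolator stands in for the Empirical Bayesian Kriging Regression step of ArcGIS, and the station coordinates and GHI values are synthetic.

```python
# Sketch of the merge: OLS regression of ground GHI on satellite GHI, then
# interpolation of the station residuals added back to the regression surface.
# An RBF interpolator is used as a simple stand-in for kriging; data synthetic.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(13, 2))            # 13 ground stations, km
ghi_satellite = 6.0 + 0.02 * xy[:, 0] + 0.1 * rng.standard_normal(13)
ghi_ground = 1.05 * ghi_satellite - 0.2 + 0.05 * rng.standard_normal(13)

# 1) OLS fit: ground = a * satellite + b
a, b = np.polyfit(ghi_satellite, ghi_ground, deg=1)
residuals = ghi_ground - (a * ghi_satellite + b)

# 2) interpolate the residuals spatially and add them to the regression surface
interp = RBFInterpolator(xy, residuals, kernel="thin_plate_spline")
grid_xy = np.array([[50.0, 50.0], [20.0, 80.0]])   # target locations
ghi_sat_grid = np.array([7.0, 6.4])                # satellite GHI at those points
ghi_merged = a * ghi_sat_grid + b + interp(grid_xy)
print("merged GHI at target points:", np.round(ghi_merged, 2))

# 3) NRMSE of the regression alone at the stations (leave-one-out in practice)
nrmse = np.sqrt(np.mean(residuals**2)) / ghi_ground.mean() * 100
print(f"NRMSE of regression alone: {nrmse:.2f} %")
```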

Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB

Procedia PDF Downloads 63
129 Force Sensor for Robotic Graspers in Minimally Invasive Surgery

Authors: Naghmeh M. Bandari, Javad Dargahi, Muthukumaran Packirisamy

Abstract:

Robot-assisted minimally invasive surgery (RMIS) has been widely performed around the world during the last two decades. RMIS demonstrates significant advantages over conventional surgery, e.g., improving the accuracy and dexterity of the surgeon, providing 3D vision, motion scaling and hand-eye coordination, decreasing tremor, and reducing x-ray exposure for surgeons. Despite these benefits, surgeons cannot touch the surgical site or perceive tactile information, because the robots are remotely controlled. The literature survey identified the lack of force feedback as the riskiest limitation of the existing technology. Without the perception of the tool-tissue contact force, the surgeon might apply an excessive force causing tissue laceration or an insufficient force causing tissue slippage. The primary use of force sensors has been to measure the tool-tissue interaction force in real time in situ. The design of a tactile sensor is subject to a set of design requirements, e.g., biocompatibility, electrical passivity, MRI compatibility, miniaturization, and the ability to measure static and dynamic forces. In this study, a planar optical fiber-based sensor is proposed to be mounted on the surgical grasper. It was developed based on the light intensity modulation principle. The deflectable part of the sensor was a beam modeled as a cantilever Euler-Bernoulli beam on rigid substrates. A semi-cylindrical indenter was attached to the bottom surface of the beam at mid-span. An optical fiber was secured at both ends on the same rigid substrates, with the indenter in contact with the fiber. An external force on the sensor caused deflection of the beam and the optical fiber simultaneously. The micro-bending of the optical fiber would consequently result in light power loss. The sensor was simulated and studied using finite element methods. A laser light beam with 800 nm wavelength and 5 mW power was used as the input to the optical fiber, and the output power was measured using a photodetector. The voltage from the photodetector was calibrated against the external force for a chirp input (0.1-5 Hz). The range, resolution, and hysteresis of the sensor were studied under monotonic and harmonic external forces of 0-2.0 N at 0 and 5 Hz, respectively. The results confirmed the validity of the proposed sensing principle, and the sensor demonstrated acceptable linearity (R² > 0.9). A minimum external force was observed below which no power loss was detectable. It is postulated that this phenomenon is attributed to the critical angle of the optical fiber for total internal reflection. The experimental results showed negligible hysteresis (R² > 0.9) and were in fair agreement with the simulations. In conclusion, the suggested planar sensor is assessed to be a cost-effective, feasible and easy-to-use solution that can be miniaturized and integrated at the tip of robotic graspers. Geometrical and optical factors affecting the minimum sensible force and the working range of the sensor should be studied and optimized. This design is intrinsically scalable and meets all the design requirements; therefore, it has significant potential for industrialization and mass production.
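
A rough sense of the mechanics can be given by the sketch below, which treats the deflectable part as a fixed-fixed Euler-Bernoulli beam with a central point load (deflection = F L^3 / 192 E I); the boundary conditions, beam dimensions and modulus are assumptions for illustration only and are not taken from the study.

```python
# Mid-span deflection of a fixed-fixed Euler-Bernoulli beam under a central
# point load, as a rough stand-in for the sensor's deflectable beam.
# All dimensions and material properties below are assumed.
def midspan_deflection(force_N, length_m, E_Pa, width_m, thickness_m):
    I = width_m * thickness_m**3 / 12.0          # second moment of area
    return force_N * length_m**3 / (192.0 * E_Pa * I)

# Assumed polymer beam: 10 mm span, 3 mm wide, 0.5 mm thick, E = 3 GPa
for F in (0.5, 1.0, 2.0):                        # forces within the 0-2 N range tested
    d = midspan_deflection(F, 10e-3, 3e9, 3e-3, 0.5e-3)
    print(f"F = {F:.1f} N -> deflection = {d * 1e6:.1f} um")
```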

Keywords: force sensor, minimally invasive surgery, optical sensor, robotic surgery, tactile sensor

Procedia PDF Downloads 189
128 Application of 3D Apparel CAD for Costume Reproduction

Authors: Zi Y. Kang, Tracy D. Cassidy, Tom Cassidy

Abstract:

3D apparel CAD is one of the remarkable products of advanced technology, enabling intuitive design, visualisation and evaluation of garments through stereoscopic drape simulation. Progressive improvements in 3D apparel CAD have led to more realistic clothing simulation, which is used not only in design development but also in presentation, promotion and communication for fashion as well as other industries such as film, games and social network services. As a result, 3D clothing technology is becoming more ubiquitous in human culture and daily life. This study considers that such a phenomenon implies the technology has reached maturity and that it is time to inspect the status of the current technology and to explore its potential to create cultural value moving forward. For this reason, this study aims to generate virtual costumes as culturally significant objects using 3D apparel CAD and to assess its capability and applicability, as well as the attitudes of audiences towards clothing simulation, through comparison with physical counterparts. Since access to costume collections is often limited due to conservation issues, the technology may make a valuable contribution by democratizing culture and knowledge for museums and their audiences. This study is expected to provide foundational knowledge for the development of clothing technology and for expanding the boundaries of its practical uses. To prevent any potential damage, two replicas of costumes from the 1860s and 1920s at the Museum of London were chosen as samples. Their structural, visual and physical characteristics were measured and collected using patterns, scanned images of fabrics and objective fabric measurements with a scale, KES-F (Kawabata Evaluation System of Fabrics) and Titan. The commercial software DC Suite 5.0 was utilised to create virtual costumes from the collected data, and the following outcomes were produced for the evaluation: images of the virtual costumes and video clips showing static and dynamic simulation. Focus groups were arranged with fashion design students and the public for evaluation, in which the outcomes were presented together with the physical samples, fabric swatches and photographs. The similarity, application and acceptance of the virtual costumes were estimated through discussion and a questionnaire. The findings show that the technology has the capability to produce realistic or plausible simulations, but the expression of some factors, such as fine details and lightweight materials, requires improvement. While the virtual costumes were viewed by the public group as interesting and futuristic replacements for the physical objects, the fashion student group noted more differences in detail and preferred the physical garments, highlighting the absence of tangibility. However, the advantages and potential of virtual costumes as effective and useful visual references for educational and exhibition purposes were underlined by both groups. Although 3D apparel CAD has sufficient capacity to assist the garment design process, it has limits in identical replication, and more study on the accurate reproduction of details and drape is needed for its technical improvement. Nevertheless, the virtual costumes in this study demonstrated the potential of the technology to contribute to cultural and knowledge value creation through its applicability and as an interesting way to offer 3D visual information.

Keywords: digital clothing technology, garment simulation, 3D Apparel CAD, virtual costume

Procedia PDF Downloads 186
127 Invasion of Scaevola sericea (Goodeniaceae) in Cuba: Invasive Dynamic and Density-Dependent Relationship with the Native Species Tournefortia gnaphalodes (Boraginaceae)

Authors: Jorge Ferro-Diaz, Lazaro Marquez-Llauger, Jose Alberto Camejo-Lamas, Lazaro Marquez-Govea

Abstract:

The invasion of Scaevola sericea Vahl (Goodeniaceae) in Cuba is a recent process; this exotic invasive species was first reported in the national territory in 2008. S. sericea is native to the coasts around the Indian Ocean and the western Pacific and is common on sandy beaches; it has expanded rapidly around the planet by both natural and anthropogenic means, mainly through its use in hotel gardening. Cuba is highly vulnerable to colonization by this species, largely because of tropical hurricanes, which have increased in recent decades; the invasion also affects native species such as Tournefortia gnaphalodes (L.) R. Br. (Boraginaceae), which itself shows invasive behaviour owing to the unbalanced demographic processes of littoral vegetation, studied by the authors over the last 10 years. The rapid development of Cuban tourism has encouraged the use of exotic species in gardening, which then invade large sectors of sandy coast. Given the importance of assessing the dimensions of these impacts and adopting effective control measures, a monitoring program for the invasion of S. sericea in Cuba was undertaken. The program has been implemented since 2013, and its main objective was to identify invasive patterns and interactions with other native species of coastal vegetation. This experience also aimed to validate the design and propose a standardized monitoring protocol to be applied throughout the country. Twelve sites were chosen across the Cuban territory, where 24 permanent plots of 100 m² were established; measurements were taken twice a year for variables such as abundance, plant height, soil cover, flora and companion vegetation, density and frequency, and other physical variables of the beaches were also recorded. The same variables were measured for associated individuals of T. gnaphalodes. The results of these first four years allowed us to document patterns of S. sericea invasion, highlighting the use of adventitious roots to enhance its colonization, and to characterize demographic indicators, ecosystem impacts, and interactions with native plants. A density-dependent relationship with T. gnaphalodes was documented, with a controlling effect on S. sericea, and a manipulation experiment was therefore applied to evaluate possible management actions to be incorporated into the plans of the protected areas involved. From these results it was concluded, for the evaluated sites, that S. sericea has an invasion dynamic governed by coastal processes, more intense on beaches where the native vegetation is disturbed and more constrained on beaches with better-preserved vegetation. It was found that once S. sericea is established, the mechanism that most reinforces its invasion is the use of adventitious roots, which expand the patches and colonize new beach sectors. It was also found that as the density of T. gnaphalodes increases, it halts the expansion of S. sericea and reduces its possibilities of colonization, behaving as a natural controller of this biological invasion. The results include a proposal for a new monitoring protocol for Scaevola sericea in Cuba, with the possibility of extending its implementation to other countries in the region.
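
A minimal illustrative sketch (not part of the original study) of the kind of density-dependence test described above: regressing the annual change in S. sericea cover on the density of T. gnaphalodes across monitoring plots. All plot values, variable names, and units below are hypothetical placeholders.

```python
# Illustrative sketch only: testing for a density-dependent relationship between
# Tournefortia gnaphalodes density and the annual change in Scaevola sericea
# cover across monitoring plots. The values below are hypothetical placeholders.
import numpy as np
from scipy import stats

# Hypothetical per-plot data: native T. gnaphalodes density (individuals/100 m^2)
# and observed annual change in S. sericea cover (percentage points).
tournefortia_density = np.array([2, 5, 8, 12, 15, 20, 24, 30])
scaevola_cover_change = np.array([14.0, 11.5, 9.0, 6.5, 4.0, 1.5, -0.5, -2.0])

# Ordinary least-squares regression: a significantly negative slope would be
# consistent with the reported controlling effect of the native species.
result = stats.linregress(tournefortia_density, scaevola_cover_change)
print(f"slope = {result.slope:.3f} pp per individual, "
      f"r^2 = {result.rvalue**2:.2f}, p = {result.pvalue:.4f}")
```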

Keywords: biological invasion, exotic invasive species, plant interactions, Scaevola sericea

Procedia PDF Downloads 195
126 Influence of Ride Control Systems on the Motions Response and Passenger Comfort of High-Speed Catamarans in Irregular Waves

Authors: Ehsan Javanmardemamgheisi, Javad Mehr, Jason Ali-Lavroff, Damien Holloway, Michael Davis

Abstract:

During the last decades, growing interest in faster and more efficient waterborne transportation has led to the development of high-speed vessels for both commercial and military applications. To satisfy this global demand, a wide variety of high-speed craft arrangements have been proposed by designers. Among them, high-speed catamarans have proven to be a suitable Roll-on/Roll-off configuration for carrying passengers and cargo due to their widely spaced demihulls, wide deck area, and high ratio of deadweight to displacement. To improve passenger comfort and crew workability and to enhance the operability and performance of high-speed catamarans, mitigating the severity of motions and structural loads using Ride Control Systems (RCS) is essential. In this paper, a set of towing tank tests was conducted on a 2.5 m scaled model of a 112 m Incat Tasmania high-speed catamaran in irregular head seas to investigate the effect of different ride control algorithms, including linear and nonlinear versions of heave control, pitch control, and local control, on the motion responses and passenger comfort of the full-scale ship. The RCS comprised a centre-bow-fitted T-foil and two transom-mounted stern tabs. All experiments were conducted at the Australian Maritime College (AMC) towing tank at a model speed of 2.89 m/s (37 knots full scale), a modal period of 1.5 s (10 s full scale) and two significant wave heights of 60 mm and 90 mm, representing full-scale wave heights of 2.7 m and 4 m, respectively. Spectral analyses were performed using Welch's power spectral density method on the vertical motion time records of the catamaran model to calculate heave and pitch Response Amplitude Operators (RAOs). Then, noting that passenger discomfort arises from vertical accelerations, and that the vertical accelerations vary at different longitudinal locations within the passenger cabin due to variations in the amplitude and relative phase of the pitch and heave motions, the vertical accelerations were calculated at three longitudinal locations (LCG, T-foil, and stern tabs). Finally, frequency-weighted root mean square (RMS) vertical accelerations were calculated to estimate the Motion Sickness Dose Value (MSDV) of the ship based on ISO 2631 recommendations. It was demonstrated that in small seas, implementing a nonlinear pitch control algorithm reduces the peak pitch motions by 41%, the vertical accelerations at the forward location by 46%, and motion sickness at the forward position by around 20%, which provides great potential for further improvement in passenger comfort, crew workability, and operability of high-speed catamarans.
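
The spectral post-processing chain described above (Welch power spectral densities, RAO estimation, frequency-weighted RMS acceleration) can be sketched as follows. This is an illustrative example only, not the authors' code: the signals, sampling rate, and encounter frequency are synthetic placeholders, and the ISO 2631-1 frequency weighting is reduced to a simple band mask rather than the full weighting filter.

```python
# Minimal sketch of heave RAO and weighted RMS acceleration from time records.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs, f_e = 200.0, 0.67                        # sample rate (Hz), encounter frequency (Hz), assumed
t = np.arange(0, 120, 1 / fs)
wave = 0.03 * np.sin(2 * np.pi * f_e * t) + 0.005 * np.random.randn(t.size)
heave = 0.02 * np.sin(2 * np.pi * f_e * t - 0.4) + 0.004 * np.random.randn(t.size)
accel = -(2 * np.pi * f_e) ** 2 * 0.02 * np.sin(2 * np.pi * f_e * t - 0.4) \
        + 0.05 * np.random.randn(t.size)     # measured vertical acceleration, m/s^2

f, s_wave = welch(wave, fs=fs, nperseg=4096)
_, s_heave = welch(heave, fs=fs, nperseg=4096)
_, s_acc = welch(accel, fs=fs, nperseg=4096)

# Heave RAO at the peak of the wave spectrum: response amplitude / wave amplitude.
i_peak = np.argmax(s_wave)
rao_heave = np.sqrt(s_heave[i_peak] / s_wave[i_peak])

# Crude stand-in for the ISO 2631-1 weighting: unit weight between 0.5 and 10 Hz.
w = ((f >= 0.5) & (f <= 10.0)).astype(float)
rms_weighted = np.sqrt(trapezoid(w ** 2 * s_acc, f))
print(f"heave RAO at peak ~ {rao_heave:.2f}, weighted RMS accel ~ {rms_weighted:.3f} m/s^2")
```

In the actual analysis, the measured wave, heave, pitch, and acceleration records at each longitudinal location would replace the synthetic signals, and the proper ISO 2631-1 weighting would be applied before accumulating the MSDV over exposure time.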

Keywords: high-speed catamarans, ride control system, response amplitude operators, vertical accelerations, motion sickness, irregular waves, towing tank tests

Procedia PDF Downloads 48
125 The Preliminary Exposition of Soil Biological Activity, Microbial Diversity and Morpho-Physiological Indexes of Cucumber under Interactive Effect of Allelopathic Garlic Stalk: A Short-Term Dynamic Response in Replanted Alkaline Soil

Authors: Ahmad Ali, Muhammad Imran Ghani, Haiyan Ding, Zhihui Cheng, Muhammad Iqbal

Abstract:

Background and Aims: In recent years, protected cultivation has spread rapidly, especially in northern China, where the production area, infrastructure, and crop diversity of the plastic greenhouse vegetable cropping (PGVC) system have expanded steadily. Under this growing system, continuous monoculture with excessive inputs of synthetic fertilizers is a common practice among commercial producers. Repeated year after year, these practices promote continuous-cropping obstacles in PGVC soils, which greatly threaten regional soil sustainability, erode soil ecological diversity, and ultimately exhaust agricultural productivity. The aim of this study was to develop new allelopathic insights by exploiting available biological resources in support of sustainable PGVC and to address the factors underlying continuous-cropping obstacles in plastic greenhouses. Method: A greenhouse study was executed under a plastic tunnel at the Horticulture Experimental Station of the College of Horticulture, Northwest A&F University, Yangling, Shaanxi Province, one of the prominent regions for intensive commercial PGVC in China. Post-harvest garlic residues (stalks, leaves) were mechanically crushed, homogenized into powder, and incorporated as a soil amendment at ratios of 1:100, 3:100, and 5:100 into replanted soil that had been used for continuous cucumber monoculture for 7 years (an annual double-cropping system in a greenhouse). Results: The incorporated carbon-rich garlic stalk significantly influenced soil condition in several ways: it promoted organic matter decomposition and mineralization, moderately adjusted the soil pH, enhanced soil nutrient availability, increased enzymatic activities, and produced 20% more cucumber yield in a short time. Using Illumina MiSeq sequencing of bacterial 16S rRNA and fungal 18S rDNA genes, the study revealed that the addition of garlic stalk/residue also improved microbial abundance and community composition in the intensively exploited soil, contributed to soil functionality, produced favourable changes in soil characteristics, and supported good crop yield. Conclusion: Our study provides evidence that the addition of garlic stalk as a soil fertility amendment is a feasible, cost-effective and efficient way to use this resource to restore degraded soil health, improve soil quality components, and ameliorate the ecological environment over a short duration. The study may provide a better scientific understanding of efficient crop residue management, particularly from an allelopathic source.
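
As an illustration of the kind of community-level summary that follows 16S rRNA amplicon sequencing, the sketch below computes relative abundances and the Shannon diversity index from an OTU/ASV count table. It is not the study's bioinformatics pipeline; the counts and sample labels are hypothetical placeholders.

```python
# Illustrative sketch only: Shannon diversity and relative abundances from an
# OTU/ASV count table of the kind produced by amplicon sequencing.
import numpy as np

# Rows = samples (e.g., control vs. garlic-stalk-amended soil), columns = taxa.
otu_counts = np.array([
    [120, 80, 40, 10,  5],   # hypothetical control soil
    [ 90, 85, 60, 35, 30],   # hypothetical amended soil
], dtype=float)

rel_abundance = otu_counts / otu_counts.sum(axis=1, keepdims=True)

def shannon(p):
    """Shannon diversity index H' = -sum(p_i * ln p_i), ignoring absent taxa."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for name, row in zip(["control", "amended"], rel_abundance):
    print(f"{name}: H' = {shannon(row):.3f}")
```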

Keywords: garlic stalk, microbial community dynamics, plant growth, soil amendment, soil-plant system

Procedia PDF Downloads 101
124 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to higher overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies and cloud services overcomes these issues and helps organizations improve efficiency and deliver faster without having to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while repetitive work and manual effort are reduced. Scalable CI/CD for development can be implemented using cloud services such as ECS (Elastic Container Service), AWS Fargate, ECR (Elastic Container Registry, to store Docker images with all dependencies), serverless computing, cloud logging (for monitoring errors and logs), security groups (for controlling inbound/outbound access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit); such a pipeline can efficiently handle the demands of diverse development environments, accommodate dynamic workloads, and increase efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application from the relevant branches, testing it with a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as the pipeline scales based on need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions and performance issues and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands and allowing teams to scale resources up or down as needed. It optimizes costs by paying only for the resources that are used and increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
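
The parallel test fan-out mentioned above can be illustrated with a minimal sketch. This is not the authors' pipeline: in the setup described, each worker would be a containerized task launched on ECS Fargate from the CI job, whereas here a local process pool stands in for that, and the suite names and runner command are hypothetical.

```python
# Minimal sketch: fanning many test suites out in parallel and aggregating results.
import subprocess
from concurrent.futures import ProcessPoolExecutor, as_completed

TEST_SUITES = [f"tests/suite_{i:03d}" for i in range(500)]   # placeholder suite paths

def run_suite(suite: str) -> tuple[str, int]:
    """Run one suite in its own process and return (suite, exit code)."""
    proc = subprocess.run(["pytest", suite, "-q"], capture_output=True)
    return suite, proc.returncode

def main() -> int:
    failures = []
    with ProcessPoolExecutor(max_workers=32) as pool:
        futures = [pool.submit(run_suite, s) for s in TEST_SUITES]
        for fut in as_completed(futures):
            suite, code = fut.result()
            if code != 0:
                failures.append(suite)
    print(f"{len(TEST_SUITES) - len(failures)} suites passed, {len(failures)} failed")
    return 1 if failures else 0

if __name__ == "__main__":
    raise SystemExit(main())
```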

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 13
123 Assessment and Characterization of Dual-Hardening Adhesion Promoter for Self-Healing Mechanisms in Metal-Plastic Hybrid System

Authors: Anas Hallak, Latifa Seblini, Juergen Wilde

Abstract:

In mechatronics and sensor technology, plastic housings are used to protect sensitive components from harmful environmental influences such as moisture, media, or reactive substances. Connections through the housing wall, preferably in the form of metallic lead-frame structures, are required for their electrical supply or control. In such systems, adhesion between the plastic component (e.g., polyamide 66) and the metal surface (e.g., copper) is often insufficient because of material incompatibility; as a result, leakage paths can occur along the plastic-metal interface. Since adhesive bonding has become one of the most important joining processes, and its use has expanded significantly, driven by the development of improved high-performance adhesives and bonding techniques, this technology has been applied to metal-plastic hybrid structures. In this study, an epoxy bonding agent from DELO (DUALBOND LT2266) has been used to improve the mechanical and chemical bonding between the metal and the polymer. It is an adhesion promoter with two reaction stages: the first stage provides fixation to the lead frame directly after the coating step, achieved by UV exposure for a few seconds, and in the second stage the material is thermally cured during injection molding. To analyze the two reaction stages of the primer, dynamic DSC experiments were carried out and correlated with Fourier-transform infrared spectroscopy measurements. The number of crosslinking bonds formed in the system in each reaction stage was also estimated by rheological characterization. These investigations were performed with different UV exposure times (12 and 96 s) and over an industrially relevant temperature range from -20 to 175 °C. The shear viscosity of the primer was measured as a function of temperature and exposure time; for further interpretation, storage modulus values were calculated and the so-called Booij–Palmen plot was sketched. The next focus of this study is the self-healing mechanism in the hybrid system, in which the primer should flow into micro-damage such as interfacial gaps and cracks, inhibit their growth, and close them. The ability of the primer to flow into and penetrate defined capillaries made in Ultramid was investigated: holes with a diameter of 0.3 mm were produced in injection-molded A3EG7 plates with a thickness of 4 mm, and a copper substrate coated with DUALBOND was placed on the A3EG7 plate and pressed with a defined force. Metallographic analyses were carried out to verify the degree of filling, which showed an almost 95% filling ratio of the capillaries. Finally, to assess the self-healing mechanism in metal-plastic hybrid systems, characterizations were performed on a simple geometry with a metal inlay developed by the Institute of Polymer Technology at Friedrich-Alexander-Universität. The specimens were modified with a tungsten wire that was pulled out after injection molding to create a micro-hole in the specimen at the interface between the primer and the polymer. The capability of the primer to heal these micro-cracks upon heating, pressing, and thermal aging was characterized through metallographic analyses.
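
For readers unfamiliar with the rheological quantities mentioned, the sketch below shows how storage and loss moduli follow from oscillatory-shear amplitudes and phase lags, and how the coordinates of a phase-angle-versus-complex-modulus plot (the representation we take the Booij-Palmen plot to be) are assembled. The frequency-sweep values are hypothetical placeholders, not measurements from the study.

```python
# Illustrative sketch only: moduli from oscillatory-shear data (hypothetical values).
import numpy as np

omega = np.logspace(-1, 2, 7)                               # angular frequency, rad/s
stress_amp = np.array([12, 30, 70, 160, 340, 700, 1300.0])  # stress amplitude, Pa
strain_amp = 0.01                                           # applied strain amplitude (1%)
delta_deg = np.array([65, 58, 49, 40, 31, 24, 18.0])        # phase lag, degrees

delta = np.radians(delta_deg)
g_complex = stress_amp / strain_amp        # |G*| = sigma_0 / gamma_0
g_storage = g_complex * np.cos(delta)      # G'  (elastic contribution)
g_loss = g_complex * np.sin(delta)         # G'' (viscous contribution)
eta_complex = g_complex / omega            # complex viscosity magnitude

# Coordinates of the phase-angle-vs-|G*| plot used to compare crosslinking states.
for gc, d in zip(g_complex, delta_deg):
    print(f"|G*| = {gc:9.1f} Pa   delta = {d:4.1f} deg")
```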

Keywords: hybrid structures, self-healing, thermoplastic housing, adhesive

Procedia PDF Downloads 161
122 Finite Element Modelling and Optimization of Post-Machining Distortion for Large Aerospace Monolithic Components

Authors: Bin Shi, Mouhab Meshreki, Grégoire Bazin, Helmi Attia

Abstract:

Large monolithic components are widely used in the aerospace industry in order to reduce airplane weight. Milling is an important operation in the manufacturing of monolithic parts; more than 90% of the material may be removed in milling to obtain the final shape, which results in low rigidity and post-machining distortion. Post-machining distortion is the deviation of the final shape from the original design after the clamps are released. It is a major challenge in the machining of monolithic parts, causing billions in economic losses every year. Three sources contribute directly to part distortion: initial residual stresses (RS) generated by previous manufacturing processes, machining-induced RS, and the thermal load generated during machining. A finite element model was developed to simulate a milling process and predict the post-machining distortion. In this study, a rolled aluminum plate (AA7175) with a thickness of 60 mm was used as the raw block. The initial residual stress distribution in the block was measured using a layer-removal method, and a stress-mapping technique was developed to implement the initial stress distribution in the part; it is demonstrated that this technique significantly accelerates the simulation. Machining-induced residual stresses on the machined surface were measured using an MTS3000 hole-drilling strain-gauge system. The measured RS was applied to the machined surface of a plate to predict the distortion, and the prediction was compared with experimental results. It was found that the effect of machining-induced residual stress on the distortion of a thick plate is very limited; the distortion can be ignored if the wall thickness is larger than a certain value. The RS generated by the thermal load during machining is another important factor causing part distortion, yet very few studies on this topic have been reported in the literature. A coupled thermo-mechanical FE model was therefore developed to evaluate the thermal effect on the plastic deformation of a plate. A moving heat source travelling at the feed rate was used to simulate the dynamic cutting heat in a milling process; when the heat source passed over the part surface, a thin layer was removed to simulate the cutting operation. The results show that, for different feed rates and plate thicknesses, plastic deformation/distortion occurs only if the temperature exceeds a critical level. Overall, the initial residual stress makes the major contribution to part distortion; the machining-induced stress has limited influence on distortion when the wall thickness is larger than a certain value; and the thermal load can also generate part distortion when the cutting temperature is above a critical level. The developed numerical model was employed to predict the distortion of a frame part with complex structures, and the predictions were in good agreement with the experimental measurements. By optimizing the position of the part inside the raw plate using the developed numerical models, the part distortion can be reduced by 50%.
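
A simplified, one-dimensional illustration of why an unbalanced initial residual stress profile distorts the part after machining is sketched below. This is not the authors' finite element model: it treats the plate as a unit-width strip in bending, uses an assumed cosine residual stress profile and assumed elastic constants, and estimates the bow from the unbalanced moment of the remaining section.

```python
# Simplified, illustrative estimate of distortion after removing material from
# a plate carrying an initially self-equilibrated residual stress profile.
import numpy as np
from scipy.integrate import trapezoid

E, nu = 71e9, 0.33                 # AA7xxx-type elastic constants (assumed)
E_eff = E / (1 - nu**2)            # plate bending modulus
t0 = 0.060                         # initial thickness, m
z = np.linspace(-t0 / 2, t0 / 2, 601)
sigma0 = 40e6 * np.cos(2 * np.pi * z / t0)   # hypothetical self-equilibrated RS, Pa

t_machined = 0.020                 # material removed from the top, m
keep = z <= (t0 / 2 - t_machined)  # remaining section
z_rem, s_rem = z[keep], sigma0[keep]
z_mid = 0.5 * (z_rem[0] + z_rem[-1])
t_rem = z_rem[-1] - z_rem[0]

# Unbalanced bending moment per unit width about the new mid-plane drives bending.
M = trapezoid(s_rem * (z_rem - z_mid), z_rem)
kappa = M / (E_eff * t_rem**3 / 12)          # resulting curvature, 1/m

L = 1.0                                       # part length, m (assumed)
deflection = kappa * L**2 / 8                 # mid-span bow of a uniformly curved strip
print(f"curvature = {kappa:.3e} 1/m, estimated bow over {L} m = {deflection*1e3:.2f} mm")
```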

Keywords: modelling, monolithic parts, optimization, post-machining distortion, residual stresses

Procedia PDF Downloads 25
121 Blended Learning Instructional Approach to Teach Pharmaceutical Calculations

Authors: Sini George

Abstract:

Active learning pedagogies are valued for their success in increasing 21st-century learners' engagement, developing transferable skills such as critical thinking and quantitative reasoning, and creating deeper and more lasting educational gains. Blended learning is an active learning pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively with the subject matter. This project aimed to develop a blended learning instructional approach to teaching concepts around pharmaceutical calculations to year 1 pharmacy students. The wrong dose, strength or frequency of a medication accounts for almost a third of medication errors in the NHS; therefore, progression to year 2 requires a 70% pass in this calculation test in addition to the standard progression requirements. Many students struggled to achieve this requirement in the past, and it was also challenging to teach these concepts to a large class (>130 students) with mixed mathematical abilities, especially within a traditional didactic lecture format. Therefore, short screencasts with a voice-over by the lecturer were provided in advance of four teaching sessions (two hours per session), incorporating the core content of each session and talking through how the lecturer approached the calculations in order to model metacognition. Links to the screencasts were posted on the learning management system, and viewership counts were used to confirm that students were indeed accessing and watching the screencasts on schedule. In the classroom, students had to apply the knowledge learned beforehand to a series of increasingly difficult questions. Students were then asked to create a question in pairs and to discuss the questions created by their peers within their groups to promote deep conceptual learning. Students were also given a question-and-answer period to seek clarification of the concepts covered. Student responses to this instructional approach and their test grades were collected. After collecting and organizing the data, statistical analysis was carried out to calculate binomial statistics for the two data sets (the test grades of students who received blended learning instruction and the test grades of students who received instruction in a standard lecture format) in order to compare the effectiveness of each type of instruction. Student responses and performance data on the assessment indicate that learning content through the blended learning instructional approach led to higher levels of student engagement and satisfaction and more substantial learning gains. The blended learning approach enabled each student to learn how to do the calculations at their own pace, freeing class time for interactive application of this knowledge. Although time-consuming for an instructor to implement, the findings of this research demonstrate that the blended learning instructional approach improves student academic outcomes and represents a valuable way to incorporate active learning methodologies while still maintaining broad content coverage. Satisfaction with this approach was high, and we are currently developing more pharmacy content for delivery in this format.
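
The comparison of binomial outcomes between the two cohorts can be illustrated with a two-proportion z-test; this is a sketch only, the counts below are hypothetical placeholders rather than the study's data, and the statsmodels function used here is an example tool rather than necessarily the authors' choice.

```python
# Illustrative sketch only: comparing pass rates on the calculations test for a
# blended-learning cohort versus a lecture-format cohort.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

passed = np.array([112, 95])    # hypothetical students scoring >= 70%: [blended, lecture]
cohort = np.array([130, 130])   # hypothetical cohort sizes

z_stat, p_value = proportions_ztest(count=passed, nobs=cohort)
print(f"pass rates: {passed / cohort}, z = {z_stat:.2f}, p = {p_value:.4f}")
```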

Keywords: active learning, blended learning, deep conceptual learning, instructional approach, metacognition, pharmaceutical calculations

Procedia PDF Downloads 143
120 Challenges and Recommendations for Medical Device Tracking and Traceability in Singapore: A Focus on Nursing Practices

Authors: Zhuang Yiwen

Abstract:

The paper examines the challenges facing the Singapore healthcare system in relation to the tracking and traceability of medical devices. One of the major challenges identified is the lack of a standard coding system for medical devices, which makes it difficult to track them effectively. The paper suggests the use of the Unique Device Identifier (UDI) as a single standard for medical devices to improve tracking and reduce errors. It also explores the use of barcoding and image recognition to identify and document medical devices in nursing practice. In nursing practice, the use of barcodes to identify medical devices is common; however, the information contained in these barcodes is often inconsistent, making it challenging to identify which segment contains the model identifier. The use of barcodes may be improved with UDI, but many subsidized accessories may still lack barcodes. The paper suggests that readiness for UDI and barcode standardization requires standardized information, fields, and logic in electronic medical record (EMR), operating theatre (OT), and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments; nursing workflow and data flow also need to be taken into account. The paper also explores the use of image recognition, specifically the Tesseract OCR engine, to identify and document implants in public hospitals, given the limitations of barcode scanning. The study found that this solution requires an implant information database and checking of the recognized output against that database; it also requires customization of the algorithm, cropping out objects that affect text recognition, and applying image adjustments. The solution requires additional resources and costs for a mobile/hardware device, which may pose space constraints and require the maintenance of sterility requirements; integration with the EMR is also necessary, and the solution requires changes to the user's workflow. The paper further suggests the long-term use of Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) as a supporting terminology to improve clinical documentation and data exchange in healthcare. SNOMED CT provides a standardized way of documenting and sharing clinical information with respect to procedure, patient and device documentation, which can facilitate interoperability and data exchange. In conclusion, the paper highlights the challenges facing the Singapore healthcare system in tracking and tracing medical devices. It suggests the use of UDI and barcode standardization to improve tracking and reduce errors, and it explores the use of image recognition to identify and document medical devices in nursing practice. The paper emphasizes the importance of standardized information, fields, and logic in EMR, OT, and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. These recommendations could help the Singapore healthcare system improve the tracking and traceability of medical devices and ultimately enhance patient safety.
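
To make the barcode-segment problem concrete, the sketch below parses a GS1-style UDI element string into labelled fields (GTIN, expiry, lot, serial). It handles only a few common Application Identifiers, assumes the scanner emits the ASCII 29 group separator in place of FNC1, and uses a made-up example value; it is illustrative, not the hospital system's implementation.

```python
# Minimal sketch of GS1 UDI element-string parsing for a handful of common AIs.
GS = "\x1d"                                  # ASCII group separator (FNC1 stand-in)
FIXED_LENGTH_AIS = {"01": 14, "17": 6}       # GTIN (14 digits), expiry date (YYMMDD)
VARIABLE_AIS = {"10", "21"}                  # lot and serial, variable length
AI_NAMES = {"01": "gtin", "17": "expiry", "10": "lot", "21": "serial"}

def parse_gs1_udi(data: str) -> dict:
    """Split a scanned GS1 element string into labelled fields."""
    fields, i = {}, 0
    while i < len(data):
        ai = data[i:i + 2]
        i += 2
        if ai in FIXED_LENGTH_AIS:
            n = FIXED_LENGTH_AIS[ai]
            fields[AI_NAMES[ai]] = data[i:i + n]
            i += n
        elif ai in VARIABLE_AIS:
            end = data.find(GS, i)
            end = len(data) if end == -1 else end
            fields[AI_NAMES[ai]] = data[i:end]
            i = end + 1
        else:
            raise ValueError(f"unsupported or misaligned AI: {ai!r}")
    return fields

# Hypothetical scan: GTIN + expiry + lot (terminated by <GS>) + serial.
print(parse_gs1_udi("010881234567890517271231" + "10LOT42A" + GS + "21SER0099"))
```

A Tesseract-based route would instead run OCR (e.g., via pytesseract) on a cropped image of the implant label and validate the recognized text against the implant database, as the paper describes.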

Keywords: medical device tracking, unique device identifier, barcoding and image recognition, systematized nomenclature of medicine clinical terms

Procedia PDF Downloads 47
119 Urban Sprawl: A Case Study of Suryapet Town in Nalgonda District of Telangana State, a Geoinformatic Approach

Authors: Ashok Kumar Lonavath, V. Sathish Kumar

Abstract:

Urban sprawl is the uncontrolled and uncoordinated outgrowth of towns and cities. The process of urban sprawl can be described by a change in pattern over time, such as a proportional increase in built-up surface relative to population, leading to rapid urban spatial expansion. Significant economic and livelihood opportunities in urban areas result in a lack of basic amenities because of this unplanned growth. The patterns, processes, dynamic causes and consequences of sprawl can be explored and addressed with the help of a spatial planning support system. In the Indian context, an urban area is defined as a settlement with a population of more than 5,000, a density of more than 400 persons per sq. km, and 75% of the population engaged in non-agricultural occupations. India's urban population is increasing at a rate of 2.35% per annum. According to the 2011 census, Class I towns account for 18.8% of India's population, which amounts to 60.4% of the total urban population; similarly, in erstwhile Andhra Pradesh the figure is 22.9%, accounting for 68.8% of the total urban population. Suryapet town is historically recognized as the 'Gateway of Telangana' and was formerly part of the undivided Indian state of Andhra Pradesh. The municipality was constituted in 1952 as Grade III, upgraded to Grade II in 1984 and to Grade I in 1998, and its area is 35 sq. km. Three major tanks are located in three different directions, the Musi River flows at a distance of 8 km, and the average groundwater table is about 50 m below ground. It is a fast-growing town with a population of 1,06,805 and 25,448 households, a density of 3,051 persons per sq. km, and it is a Class I city as per the population census. Suryapet secured ISO 14001-2004 certification for establishing and maintaining an environment-friendly system for solid waste disposal, the first municipality in the country to receive such a certificate. It has won a HUDCO award for environment management, an award of appreciation with a cash prize from the Ministry of Housing and Poverty Alleviation of the Government of India and undivided Andhra Pradesh under the UN Human Settlement Programme, a Greentech Excellence award, and the Supreme Court's appreciation for solid waste management. Foreign delegates from different countries, as well as representatives from other Indian states, have visited Suryapet municipality for study tours and training programmes as part of their official visits. Suryapet is located at 17°5' N latitude and 79°37' E longitude; the average elevation is 266 m, the annual mean temperature is 36 °C, and the average rainfall is 821.0 mm. The people of the town are engaged in commercial and agricultural activities; hence the town has become a centre for marketing and stocking agricultural produce, and it is also an educational centre for the region. The present paper on urban sprawl provides a theoretical framework to analyse the influence of planning and governance on the extent of outgrowth and the level of services. GIS techniques, SOI toposheets, satellite imagery and image analysis techniques are used extensively to map the sprawl and measure urban land use. The paper concludes by outlining the challenges in addressing urban sprawl while ensuring the adequate level of services that planning and governance must provide to achieve sustainable urbanization.
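
As a small illustration of the kind of land-use change measurement described, the sketch below computes built-up area and growth rate from two classified rasters. The arrays, class codes, pixel size, and time interval are hypothetical placeholders; real work would read classified satellite imagery with a raster I/O library rather than generate random arrays.

```python
# Illustrative sketch only: built-up expansion between two classified land-use rasters.
import numpy as np

PIXEL_AREA_HA = 0.09          # 30 m x 30 m pixel = 0.09 ha (assumed)
BUILT_UP = 1                  # assumed class code for built-up land

rng = np.random.default_rng(0)
lulc_2001 = rng.choice([1, 2, 3], size=(500, 500), p=[0.12, 0.55, 0.33])  # placeholder raster
lulc_2021 = rng.choice([1, 2, 3], size=(500, 500), p=[0.24, 0.48, 0.28])  # placeholder raster

area_2001 = np.count_nonzero(lulc_2001 == BUILT_UP) * PIXEL_AREA_HA
area_2021 = np.count_nonzero(lulc_2021 == BUILT_UP) * PIXEL_AREA_HA

growth = (area_2021 - area_2001) / area_2001 * 100
annual_rate = ((area_2021 / area_2001) ** (1 / 20) - 1) * 100   # assumed 20-year interval
print(f"built-up: {area_2001:.0f} ha -> {area_2021:.0f} ha "
      f"({growth:.1f}% total, {annual_rate:.2f}%/yr)")
```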

Keywords: remote sensing, GIS, urban sprawl, urbanization

Procedia PDF Downloads 198
118 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques

Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu

Abstract:

Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. This study introduces an innovative methodology that leverages cloud-based platforms such as AWS Live Streaming together with Artificial Intelligence (AI) to detect early CHD symptoms and support prevention through web applications. By employing novel simulation processes and AI algorithms, the research aims to significantly mitigate the health and societal impacts of CHD. Methodology: The study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing the AI algorithms to be refined and trained under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which were then processed and analysed using advanced AI techniques. The novel aspect of the methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for the AI models, enhancing their predictive accuracy and robustness. Model development: A machine learning model was trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and findings: The deployment of the model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models; the precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.
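
A minimal sketch of the modelling step, under stated assumptions: a synthetic dataset stands in for the simulated early-CHD feature vectors, a single ensemble classifier stands in for the authors' combination of neural networks and ensemble models, and accuracy, precision, and recall are computed on a held-out split. It is illustrative only; none of the data or numbers correspond to the study.

```python
# Illustrative sketch only: train an ensemble classifier on synthetic data and
# report the validation metrics named in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for simulated health features (HRV, blood pressure, lipids, ECG-derived).
X, y = make_classification(n_samples=5000, n_features=12, n_informative=8,
                           weights=[0.8, 0.2], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=42)

model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

print(f"accuracy  = {accuracy_score(y_te, pred):.3f}")
print(f"precision = {precision_score(y_te, pred):.3f}")
print(f"recall    = {recall_score(y_te, pred):.3f}")
```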

Keywords: coronary heart disease, cloud-based ai, machine learning, novel simulation techniques, early detection, preventive healthcare

Procedia PDF Downloads 28
117 Modern Detection and Description Methods for Natural Plants Recognition

Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert

Abstract:

'Green planet' is one of Earth's informal names; scientifically, Earth is a terrestrial planet and the fifth largest planet of the solar system. Plants do not have a constant and even distribution around the world, and the variation of plant species differs from one region to another. The presence of plants is not limited to a single field such as botany; they also appear in fields such as literature and mythology and carry useful and invaluable historical records. It is impossible to imagine the world without oxygen, which is produced mostly by plants, and their influence is all the more evident because no other living species could exist on Earth without plants, which also form the basic food staples. Regulation of the water cycle and oxygen production are further roles of plants, and these roles affect the environment and climate. Plants are also the main components of agricultural activities, from which many countries benefit; plants therefore have an impact on the political and economic situation and future of countries. Because of the importance of plants and their roles, the study of plants is essential in various fields, and consideration of their different applications leads to a focus on their details. Automatic recognition of plants is a novel field that can contribute to other research areas and future studies. Moreover, plants survive in different places and regions by means of adaptations, which are special factors that help them through difficult conditions. Weather conditions are one of the parameters that affect plant life and presence in an area, and recognition of plants under different weather conditions opens a new window of research in the field; only natural images make it possible to consider weather conditions as additional factors, so the resulting system is generalized and useful. In order to obtain a general system, the distance from the camera to the plants is considered as another factor, as is the change of light intensity in the environment over the course of the day. Adding these factors makes it considerably more challenging to develop an accurate and robust system, but the development of an efficient plant recognition system is essential. One important component of a plant is the leaf, which can be used to implement automatic plant recognition systems without any human interaction. Given the nature of the images used, an investigation of plant characteristics was carried out, and leaves were selected as the most reliable parts. Four different plant species were specified with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and the implemented system, the image dataset, and the results; the procedure of the algorithm and the classification is explained in detail. The first steps, feature detection and description of the visual information, are performed using the Scale Invariant Feature Transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed, and, in addition to this comparison, the robustness and efficiency of the results under different conditions are investigated and explained.
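
The detector/descriptor combinations named above can be sketched with OpenCV as follows: plain SIFT detection and description versus FAST keypoints described with SIFT (FAST-SIFT). The image path and detector parameters are placeholders, and this is not the authors' implementation.

```python
# Illustrative sketch: SIFT vs. FAST-SIFT feature extraction on a leaf image.
# Requires opencv-python (cv2) >= 4.4.
import cv2

img = cv2.imread("leaf_sample.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical image path
if img is None:
    raise FileNotFoundError("replace 'leaf_sample.jpg' with a real leaf image")

sift = cv2.SIFT_create()
fast = cv2.FastFeatureDetector_create(threshold=25)

# 1) SIFT detection + SIFT description.
kp_sift, desc_sift = sift.detectAndCompute(img, None)

# 2) FAST-SIFT: detect corners with FAST, describe them with SIFT.
kp_fast = fast.detect(img, None)
kp_fast, desc_fast = sift.compute(img, kp_fast)

print(f"SIFT keypoints: {len(kp_sift)}, FAST-SIFT keypoints: {len(kp_fast)}")
```

HARRIS-SIFT follows the same pattern, with a Harris-based corner detector supplying the keypoints that SIFT then describes; the resulting descriptors would feed the matching or classification stage over the four species.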

Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT

Procedia PDF Downloads 246
116 Reviving the Past, Enhancing the Future: Preservation of Urban Heritage Connectivity as a Tool for Developing Liveability in Historical Cities in Jordan, Using as Salt City as a Case Study

Authors: Sahar Yousef, Chantelle Niblock, Gul Kacmaz

Abstract:

Salt City, in the context of Jordan's heritage landscape, is a significant case for exploring the interaction between the tangible and intangible qualities of liveable cities. Most Jordanian city centres, including Jerash, Salt, Irbid, and Amman, are historical locations, and six of the country's extraordinary sites have been designated UNESCO World Heritage Sites. Jordan is widely acknowledged as a developing country characterized by swift urbanization and unrestrained expansion, which exacerbate the challenges associated with the preservation of historic urban areas. The aim of this study is to examine and analyse the existing condition of heritage connectivity within heritage city centres, including outdoor staircases, pedestrian pathways, footpaths, and other public spaces. A case-study analysis of the urban core of As-Salt is the focus of this investigation. Salt City is widely acknowledged for its substantial tangible and intangible cultural heritage and has been designated 'The Place of Tolerance and Urban Hospitality' by UNESCO since 2021. Liveability in urban heritage, particularly in historic city centres, incorporates several factors that affect well-being, and its enhancement is a critical issue in contemporary society. The dynamic interaction between people and historical materials, which serves as a vehicle for the expression of their identity and historical narrative, constitutes a form of preservation that transcends simple conservation. This form of engagement enables people to appreciate the diversity of their heritage, recognizing both their past and their planned futures. Heritage preservation is inextricably linked to a larger physical and emotional context; it is therefore difficult to examine in isolation. Urban environments, including roads, structures, and other infrastructure, are facing unprecedented physical design and construction demands. Concurrently, heritage reinforces a sense of affiliation with a particular location or space and connects individuals with their ancestry, thereby defining their identity. However, a considerable body of research has focused on the conservation of heritage buildings in a fragmented manner, without considering their integration within a holistic urban context; insufficient attention has been given to the significant physical and social roles played by the heritage staircases and paths that serve as connectors between these valued historical buildings. The research therefore uses a consensus-based methodology, given that liveability is a complex matter with several dimensions. The discussion starts with initial observations on the physical context and societal norms within the urban centre, while establishing definitions of liveability and connectivity and examining the key criteria associated with these concepts; it then identifies the key elements that contribute to liveable connectivity within the framework of urban heritage in Jordanian city centres. Some of the outcomes that will be discussed in the presentation are: (1) there is not enough connectivity between heritage buildings, as can be seen, for example, between buildings in Jada and Qala'; (2) most of the outdoor spaces suffer from physical issues that hinder their use by the public, as in Salalem; and (3) existing activities in the city centre are not well attended because of a lack of communication between the organisers and citizens.

Keywords: connectivity, Jordan, liveability, salt city, tangible and intangible heritage, urban heritage

Procedia PDF Downloads 37
115 Rheological and Microstructural Characterization of Concentrated Emulsions Prepared by Fish Gelatin

Authors: Helen S. Joyner (Melito), Mohammad Anvari

Abstract:

Concentrated emulsions stabilized by proteins are systems of great importance in food, pharmaceutical and cosmetic products. Controlling emulsion rheology is critical for ensuring the desired properties during the formation, storage, and consumption of emulsion-based products. Studies on concentrated emulsions have focused on the rheology of monodisperse systems; however, emulsions used for industrial applications are polydisperse in nature, and this polydispersity is an important parameter that also governs the rheology of concentrated emulsions. Therefore, the objective of this study was to characterize the rheological (small- and large-deformation) behavior and microstructural properties of concentrated emulsions that were not truly monodisperse, as usually encountered in food products such as margarine, mayonnaise, creams, and spreads. The concentrated emulsions were prepared at different concentrations of fish gelatin (0.2, 0.4, and 0.8% w/v in the whole emulsion system), an oil-water ratio of 80:20 (w/w), a homogenization speed of 10,000 rpm, and 25 °C. Confocal laser scanning microscopy (CLSM) was used to determine the microstructure of the emulsions; to prepare samples for CLSM analysis, fish gelatin solutions were stained with fluorescein isothiocyanate. Emulsion viscosity profiles were determined using shear rate sweeps (0.01 to 100 s⁻¹). The linear viscoelastic region (LVR) of each emulsion was determined using strain sweeps (0.01 to 100% strain). Frequency sweeps were performed within the LVR (0.1% strain) from 0.6 to 100 rad/s. Large amplitude oscillatory shear (LAOS) testing was conducted by collecting raw waveform data at 0.05, 1, 10, and 100% strain at four frequencies (0.5, 1, 10, and 100 rad/s). All measurements were performed in triplicate at 25 °C. The CLSM results revealed that increased fish gelatin concentration resulted in more stable oil-in-water emulsions with homogeneous, finely dispersed oil droplets. Furthermore, the protein concentration had a significant effect on the emulsion rheological properties: apparent viscosity and dynamic moduli at small deformations increased with increasing fish gelatin concentration. These results were attributed to increased inter-droplet network connections caused by increased fish gelatin adsorption at the surface of the oil droplets. Nevertheless, all samples showed shear-thinning and weak-gel behavior over the shear rate and frequency sweeps, respectively. Lissajous plots (plots of stress versus strain) and phase lag values were used to determine the nonlinear behavior of the emulsions in LAOS testing. Greater distortion of the elliptical shape of the plots, accompanied by higher phase lag values, was observed at large strains and frequencies in all samples, indicating increased nonlinear behavior, and shifts from elastic-dominated to viscous-dominated behavior were also observed. These shifts were attributed to damage to the sample microstructure (e.g., gel network disruption), which would lead to viscous-type behaviors such as permanent deformation and flow. Unlike the small-deformation results, the LAOS behavior of the concentrated emulsions was not dependent on fish gelatin concentration; systems with different microstructures showed similar nonlinear viscoelastic behaviors. The results of this study provide valuable information that can be used to incorporate concentrated emulsions into emulsion-based food formulations.
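
To illustrate how the phase lag discussed above is extracted from LAOS waveforms, the sketch below recovers the fundamental-harmonic phase difference between synthetic strain and stress signals; the same sampled (strain, stress) pairs are the coordinates of the Lissajous loop. The waveform parameters are hypothetical, the stress is deliberately given a weak third harmonic to mimic mildly nonlinear behaviour, and this is not the authors' analysis code.

```python
# Illustrative sketch only: fundamental-harmonic phase angle from LAOS waveforms.
import numpy as np

f0, fs = 0.5, 200.0                     # excitation frequency (Hz) and sample rate (assumed)
omega = 2 * np.pi * f0
t = np.arange(0, 20 / f0, 1 / fs)       # exactly 20 cycles
gamma0 = 1.0                            # 100% strain amplitude
strain = gamma0 * np.sin(omega * t)
# Stress fundamental leading strain by 35 degrees, plus a weak third harmonic
# that distorts the Lissajous ellipse as in mildly nonlinear response.
stress = 50.0 * np.sin(omega * t + np.radians(35)) + 5.0 * np.sin(3 * omega * t)

def fundamental_phase(x, t, omega):
    """Phase of the signal component at the excitation frequency."""
    return np.angle(np.sum(x * np.exp(-1j * omega * t)))

delta = fundamental_phase(stress, t, omega) - fundamental_phase(strain, t, omega)
delta_deg = np.degrees((delta + np.pi) % (2 * np.pi) - np.pi)
regime = "elastic" if abs(delta_deg) < 45 else "viscous"
print(f"phase angle of the fundamental ~ {delta_deg:.1f} deg ({regime}-dominated)")
# The sampled (strain, stress) pairs trace the Lissajous loop itself.
```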

Keywords: concentrated emulsion, fish gelatin, microstructure, rheology

Procedia PDF Downloads 252