Search results for: particle size distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10601

1811 Automation of Savitsky's Method for Power Calculation of High Speed Vessel and Generating Empirical Formula

Authors: M. Towhidur Rahman, Nasim Zaman Piyas, M. Sadiqul Baree, Shahnewaz Ahmed

Abstract:

The design of high-speed craft has recently become one of the most active areas of naval architecture. Increased speed makes these vehicles more efficient and useful for military, economic or leisure purposes. The planing hull is designed specifically to achieve relatively high speed on the surface of the water, and speed on the water surface is closely related to the size of the vessel and the installed power. The Savitsky method, first presented in 1964, has since been applied to non-monohedric hulls and to stepped hulls. This method is well known as a reliable alternative to CFD analysis of hull resistance. A computer program based on Savitsky's method has been developed using MATLAB, and the power of high-speed vessels has been computed in this research. At first, the program reads some principal parameters such as displacement, LCG, speed, deadrise angle, and inclination of the thrust line with respect to the keel line, and calculates the resistance of the hull using the empirical planing equations of Savitsky. However, some functions used in the empirical equations are available only in graphical form, which is not suitable for automatic computation. A digital plotting system is used to extract data from the nomograms. As a result, the values of the wetted length-beam ratio and the trim angle can be determined directly from the input of the initial variables, which makes the power calculation automated without manual plotting of secondary variables such as p/b and other coefficients; the regression equations for those functions are derived using data from the different charts. Finally, the trim angle, mean wetted length-beam ratio, frictional coefficient, resistance, and power are computed and compared with Savitsky's results, and good agreement is observed.
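
For readers unfamiliar with the calculation, the following minimal Python sketch illustrates the final, non-iterative step of a Savitsky-type resistance and power estimate once the trim angle and mean wetted length-beam ratio are already known (the full method iterates to find them). It is not the authors' MATLAB program; all variable names, default constants and the example craft are illustrative assumptions, and the ITTC-57 friction line is used in place of Savitsky's original Schoenherr line.

```python
import math

RHO_SW = 1025.0   # sea water density, kg/m^3 (assumed)
NU_SW = 1.19e-6   # kinematic viscosity, m^2/s (assumed)

def planing_power(displacement_n, speed, beam, deadrise_deg, trim_deg, lambda_w):
    """Simplified Savitsky-style resistance [N] and effective power [W].

    displacement_n : vessel weight [N]
    speed          : forward speed [m/s] (mean bottom velocity approximated by speed)
    beam           : mean chine beam [m]
    deadrise_deg   : deadrise angle [deg]
    trim_deg       : running trim angle [deg]
    lambda_w       : mean wetted length-beam ratio [-]
    """
    tau = math.radians(trim_deg)
    beta = math.radians(deadrise_deg)
    lm = lambda_w * beam                            # mean wetted length [m]
    re = speed * lm / NU_SW                         # Reynolds number
    cf = 0.075 / (math.log10(re) - 2.0) ** 2        # ITTC-57 friction coefficient
    df = 0.5 * RHO_SW * speed**2 * lambda_w * beam**2 * cf / math.cos(beta)
    drag = displacement_n * math.tan(tau) + df / math.cos(tau)
    return drag, drag * speed

# hypothetical 10-tonne craft at 20 m/s
r, p = planing_power(10e3 * 9.81, 20.0, 2.5, 15.0, 4.0, 3.0)
print(f"Resistance ~{r/1e3:.1f} kN, effective power ~{p/1e3:.0f} kW")
```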

Keywords: nomogram, planing hull, principal parameters, regression

Procedia PDF Downloads 391
1810 The Practice of Low Flow Anesthesia to Reduce Carbon Footprint: A Sustainability Project

Authors: Ahmed Eid, Amita Gupta

Abstract:

Background: Medical gases are estimated to contribute 5% of the carbon footprint produced by hospitals; desflurane has the largest impact, but all agents increase significantly when used with an N2O admixture. Under the Climate Change Act 2008, carbon emissions must be reduced by 80% of the 1990 baseline by 2050, and NHS carbon emissions fell by 18.5% between 2007 and 2017. The NHS Long Term Plan has outlined measures to achieve this objective, including a 2% reduction by transforming anaesthetic practice. Fresh gas flow (FGF) is an important variable that determines the utilization of inhalational agents and can be tightly controlled by the anaesthetist. Aims and objectives: environmental safety; identification of areas of high N2O and volatile agent use across the St Helier operating theatres; and improvement of current practice. Methods: Data were collected from St Helier operating theatres and retrieved daily from Care Station 650 anaesthetic machines. Sixty cases were included in the sample. Collected data included the average flow rate, the amount and type of agent used, the type and duration of surgery, and the total amounts of air, O2 and N2O used. The AAGBI impact anaesthesia calculator was used to identify the amount of CO2 produced and the cost per hour for every patient. Reminder emails to staff emphasized the significance of low-flow anaesthesia, departmental meeting presentations aimed at heightening awareness of LFA, and distribution of AAGBI calculator QR codes in all theatres enabled the calculation of volatile anaesthetic consumption and CO2e after each case, facilitating informed environmental impact assessment. Results: A significant reduction in flow rate was observed in the second sample; flow rates between 0 and 1 L/min were used in 60% of cases, which represents a large reduction in the consumption of volatile anaesthetics and in CO2e. By using LFA we can save money and, most importantly, make our practice much greener and help protect the planet.
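
For context, volatile agent consumption rises roughly linearly with fresh gas flow, which is the quantity the calculator mentioned above converts into CO2e. The sketch below illustrates that kind of calculation under stated assumptions; the conversion formula, the sevoflurane properties and the GWP value are approximations taken from the general literature, not figures from this study or from the AAGBI calculator itself.

```python
def liquid_agent_ml(conc_pct, fgf_l_min, minutes, mol_wt, density_g_ml):
    """Approximate liquid volatile agent consumed (mL).

    Uses the common approximation that 1 mol of vapour occupies ~24.1 L at room
    temperature, i.e. liquid mL ~ conc% * FGF * t * MW / (2412 * density).
    All constants here are illustrative assumptions, not values from the study.
    """
    return conc_pct * fgf_l_min * minutes * mol_wt / (2412.0 * density_g_ml)

# Example: sevoflurane at 2 % for 60 min (assumed MW ~200 g/mol, density ~1.52 g/mL,
# 100-year GWP ~130 -- approximate literature values, quoted only for illustration)
for fgf in (0.5, 1.0, 2.0, 4.0):            # fresh gas flow in L/min
    ml = liquid_agent_ml(2.0, fgf, 60, 200.0, 1.52)
    co2e_kg = ml * 1.52 / 1000.0 * 130.0    # mass of agent (kg) x GWP100
    print(f"FGF {fgf:>3} L/min -> {ml:5.1f} mL liquid, ~{co2e_kg:4.1f} kg CO2e")
```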

Keywords: low flow anesthesia, sustainability project, N₂O, CO₂e

Procedia PDF Downloads 56
1809 A Correlational Study between Sexual Awareness, Behaviour and Sources of Sexual Knowledge among Youth in Context of Bihar

Authors: Kanika Naresh Singh, Uday Shankar

Abstract:

Background: Human behaviours are influenced by drives, and the sexual drive is one of them. Education regarding sexual behaviour plays a major role in shaping one's attitude towards it. After attaining puberty, adolescents are often confused and feel shy to talk about it. In order to get information, they refer to various sources, and these sources play a major role in spreading awareness in the adolescent population; sometimes they also lead to myths and misconceptions. Due to the increasing incidence of HIV/AIDS, RTIs/STIs and teenage pregnancies, there is a rising need to impart sex education. Aim: The aim of this research was to study the level of sexual awareness among the youth of Bihar, their sexual behaviour and sources of influence, and the correlation between sexual awareness, behaviour and sources of sexual knowledge among youth in Bihar. Methods: The sample consisted of 50 youths, both boys and girls, between 18 and 23 years of age from urban and semi-urban areas. The purposive sampling method was used. The tools used were the Sexual Awareness Questionnaire and the Sexual Behaviour and Sources of Influence (SBSI) scale. The Sexual Awareness Questionnaire, developed by Snell, has 35 items. A socio-demographic data sheet was also used. Results: The youth had poor sexual awareness. The internet and friends were found to be the major sources for gathering information, and the youth of Bihar were less inclined to resolve their doubts with their parents. There was a positive correlation between sexual awareness, behaviour and sources of knowledge. Conclusion: The youth of Bihar have poor sexual knowledge, and the internet and friends are their major sources of information. Sex education should be promoted, as suggested by institutions such as the World Health Organization and the United Nations. Psychiatrists and psychologists have a key leadership role in introducing these potentially emotionally challenging issues to the youth with consideration of psychosocial and cultural factors.

Keywords: sexual awareness, sexual behavior, sources of influence, youths, Bihar, India

Procedia PDF Downloads 130
1808 Identification of microRNAs in Early and Late Onset of Parkinson’s Disease Patient

Authors: Ahmad Rasyadan Arshad, A. Rahman A. Jamal, N. Mohamed Ibrahim, Nor Azian Abdul Murad

Abstract:

Introduction: Parkinson’s disease (PD) is a complex and initially asymptomatic disease in which patients are usually diagnosed at a late stage, when about 70% of the dopaminergic neurons have been lost. Therefore, identification of molecular biomarkers is crucial for early diagnosis of PD. MicroRNAs (miRNAs) are short non-coding small RNAs which regulate gene expression post-transcriptionally. The involvement of these miRNAs in neurodegenerative diseases includes maintenance of neuronal development, necrosis, mitochondrial dysfunction and oxidative stress. Thus, miRNAs could be potential biomarkers for the diagnosis of PD. Objective: This study aims to identify the miRNAs involved in Late Onset PD (LOPD) and Early Onset PD (EOPD) compared to controls. Methods: This is a case-control study involving PD patients in the Chancellor Tunku Muhriz Hospital at the UKM Medical Centre. miRNA samples were extracted using the miRNeasy serum/plasma kit from Qiagen. The quality of the extracted miRNA was determined using the Agilent RNA 6000 Nano kit on the Bioanalyzer. miRNA expression profiling was performed using the GeneChip miRNA 4.0 array from Affymetrix. Microarray analysis was performed in EOPD (n = 7), LOPD (n = 9) and healthy controls (n = 11). Expression Console and Transcriptome Analysis Console were used to analyze the microarray data. Results: miR-129-5p was significantly downregulated in EOPD compared to LOPD with a -4.2 fold change (p < 0.05). miR-301a-3p was upregulated in EOPD compared to healthy controls (fold = 10.3, p < 0.05). In LOPD versus healthy controls, miR-486-3p (fold = 15.28, p < 0.05), miR-29c-3p (fold = 12.21, p < 0.05) and miR-301a-3p (fold = 10.01, p < 0.05) were upregulated. Conclusion: Several miRNAs were identified as differentially expressed in EOPD compared to LOPD and in PD versus controls. These miRNAs could serve as potential biomarkers for early diagnosis of PD. However, they need to be validated in a larger sample size.
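
The comparisons reported above reduce to fold changes and significance tests between expression groups. The snippet below sketches how such values are commonly derived from normalized microarray intensities; the numbers are hypothetical illustrations, not the study's GeneChip output, and a simple Welch t-test stands in for the analysis console's statistics.

```python
import numpy as np
from scipy import stats

# hypothetical log2-normalized expression values for one miRNA (e.g., miR-129-5p)
eopd = np.array([5.1, 4.8, 5.3, 4.9, 5.0, 5.2, 4.7])                 # n = 7 EOPD
lopd = np.array([7.2, 7.0, 6.9, 7.4, 7.1, 7.3, 7.0, 6.8, 7.2])       # n = 9 LOPD

log2_fc = eopd.mean() - lopd.mean()          # log2 fold change, EOPD vs LOPD
fold = 2.0 ** abs(log2_fc)                   # magnitude of linear fold change
t_stat, p_value = stats.ttest_ind(eopd, lopd, equal_var=False)       # Welch's t-test

direction = "down" if log2_fc < 0 else "up"
print(f"fold change = {fold:.1f} ({direction}-regulated in EOPD), p = {p_value:.3g}")
```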

Keywords: early onset PD, late onset PD, microRNA (miRNA), microarray

Procedia PDF Downloads 244
1807 A Perspective on Teaching Mathematical Concepts to Freshman Economics Students Using 3D-Visualisations

Authors: Muhammad Saqib Manzoor, Camille Dickson-Deane, Prashan Karunaratne

Abstract:

The Cobb-Douglas production (utility) function is a fundamental function widely used in economics teaching and research. The key reason is the function's ability to describe actual production using inputs such as labour and capital. Characteristics of the function such as returns to scale and marginal and diminishing marginal productivities are covered in the introductory units of both microeconomics and macroeconomics with a 2-dimensional static visualisation of the function. However, less insight is provided regarding the three-dimensional surface, the changes in curvature properties due to returns to scale, the linkage of the short-run production function with its long-run counterpart and marginal productivities, the level curves, and constrained optimisation. Since (freshman) learners have diverse prior knowledge and cognitive skills, the existing “one size fits all” approach is not very helpful. The aim of this study is to bridge this gap by introducing a technological intervention with interactive animations of the three-dimensional surface and sequential unveiling of the characteristics mentioned above using Python software. A small classroom intervention has helped students enhance their analytical and visualisation skills towards active and authentic learning of this topic. However, to authenticate the strength of our approach, a quasi-Delphi study will be conducted to ask domain-specific experts, “What value to the learning process in economics is there in using a 2-dimensional static visualisation compared to using a 3-dimensional dynamic visualisation?” Three perspectives of the intervention were reviewed by a panel comprising novice students, experienced students, novice instructors, and experienced instructors in an effort to determine the learnings from each type of visualisation within a specific domain of knowledge. The value of this approach is key to suggesting different pedagogical methods which can enhance learning outcomes.
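
As an illustration of the kind of 3D view described above, the following Python/matplotlib sketch plots a Cobb-Douglas surface Q = A·L^α·K^β together with its level curves (isoquants). The parameter values are assumptions chosen for demonstration and are not the authors' teaching materials.

```python
import numpy as np
import matplotlib.pyplot as plt

A, alpha, beta = 1.0, 0.3, 0.7           # assumed technology and output elasticities
L, K = np.meshgrid(np.linspace(0.1, 10, 100), np.linspace(0.1, 10, 100))
Q = A * L**alpha * K**beta               # Cobb-Douglas production surface

fig = plt.figure(figsize=(10, 4))
ax3d = fig.add_subplot(1, 2, 1, projection="3d")
ax3d.plot_surface(L, K, Q, cmap="viridis", alpha=0.9)
ax3d.set_xlabel("Labour L"); ax3d.set_ylabel("Capital K"); ax3d.set_zlabel("Output Q")
ax3d.set_title(f"Q = L^{alpha} K^{beta} (constant returns: a+b = {alpha + beta})")

ax2d = fig.add_subplot(1, 2, 2)          # isoquants: the level curves of the surface
cs = ax2d.contour(L, K, Q, levels=10, cmap="viridis")
ax2d.clabel(cs, inline=True, fontsize=7)
ax2d.set_xlabel("Labour L"); ax2d.set_ylabel("Capital K"); ax2d.set_title("Isoquants")
plt.tight_layout()
plt.show()
```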

Keywords: cobb-douglas production function, quasi-Delphi method, effective teaching and learning, 3D-visualisations

Procedia PDF Downloads 129
1806 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri

Authors: Shishay Kidanu, Abdullah Alhaj

Abstract:

Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The Frequency Ratio method establishes relationships between attribute classes of these factors and sinkhole events, deriving class weights to indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing Jenk's natural break classifier method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating a 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions for the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
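
For readers unfamiliar with the weighting steps described above, the sketch below shows, on hypothetical raster data, how a frequency ratio is computed for the classes of one conditioning factor and how already-scored factor layers are then combined by weighted linear combination into a susceptibility index. The class rasters and the AHP-derived weights are placeholders, not those of the study.

```python
import numpy as np

def frequency_ratio(factor_classes, sinkhole_mask):
    """FR per class = (% of sinkhole cells in class) / (% of study-area cells in class)."""
    fr = {}
    total_cells = factor_classes.size
    total_sinks = sinkhole_mask.sum()
    for cls in np.unique(factor_classes):
        in_cls = factor_classes == cls
        pct_area = in_cls.sum() / total_cells
        pct_sink = (sinkhole_mask & in_cls).sum() / total_sinks
        fr[cls] = pct_sink / pct_area if pct_area > 0 else 0.0
    return fr

# hypothetical 100x100 rasters: slope classes 0-3 and a sinkhole inventory mask
rng = np.random.default_rng(0)
slope_cls = rng.integers(0, 4, size=(100, 100))
sinkholes = rng.random((100, 100)) < 0.02
fr_slope = frequency_ratio(slope_cls, sinkholes)

# weighted linear combination of FR-scored factor layers with (placeholder) AHP weights
slope_score = np.vectorize(fr_slope.get)(slope_cls)
geology_score = rng.random((100, 100))              # stand-in for a second factor layer
ahp_weights = {"slope": 0.6, "geology": 0.4}        # placeholder AHP weights
ssi = ahp_weights["slope"] * slope_score + ahp_weights["geology"] * geology_score
print("Sinkhole susceptibility index range:", ssi.min().round(2), "-", ssi.max().round(2))
```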

Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri

Procedia PDF Downloads 63
1805 Friction Stir Processing of the AA7075-T7352 Aluminum Alloy: Microstructures, Mechanical Properties and Texture Characteristics

Authors: Roopchand Tandon, Zaheer Khan Yusufzai, R. Manna, R. K. Mandal

Abstract:

The present work describes the microstructures, mechanical properties, and texture characteristics of friction stir processed AA7075-T7352 aluminum alloy. Phases were analyzed with the help of an X-ray diffractometer (XRD) and a transmission electron microscope (TEM), along with a differential scanning calorimeter (DSC). Depth-wise microstructures and dislocation characteristics from the nugget zone of the friction stir processed specimens were studied using bright-field (BF) and weak-beam dark-field (WBDF) TEM micrographs, and variations in the microstructures as well as the dislocation characteristics were the noteworthy features found. XRD analysis displays changes in the chemistry as well as the size of the phases in the nugget and heat-affected zones, whereas the base metal (BM) microstructures remain unaffected. High-density dislocations were noticed in the nugget region of the processed specimen, along with the formation of dislocation contours and tangles. The η' and η phases, along with the GP zones, were completely dissolved and trapped by the dislocations. Such observations are corroborated by the improved mechanical as well as stress corrosion cracking (SCC) performance. Bulk texture and residual stress measurements were made with a Panalytical Empyrean MRD system with Co-Kα radiation. The nugget zone (NZ) displays compressive residual stress compared to the thermo-mechanically affected and heat-affected zones (HAZ). Typical f.c.c. deformation texture components (e.g., Copper, Brass, and Goss) were seen. Such a phenomenon is attributed to the enhanced hardening as well as other mechanical performance of the alloy. Mechanical characterization was done using tensile tests and an Anton Paar instrumented microhardness tester. An enhancement in the yield strength from 89 MPa to 170 MPa is reported; on the other hand, the highest hardness value was reported in the nugget zone of the processed specimens.

Keywords: aluminum alloy, mechanical characterization, texture characteristics, friction stir processing

Procedia PDF Downloads 88
1804 Shape Management Method of Large Structure Based on Octree Space Partitioning

Authors: Gichun Cha, Changgil Lee, Seunghee Park

Abstract:

The objective of this study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is lacking because the technology is newly attempted. Terrestrial Laser Scanning (TLS) is used for measurements of large structures. TLS provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation, architecture, construction or maintenance of civil infrastructure. TLS produces a huge amount of point cloud data, and registration, extraction and visualization of the data require the processing of a massive amount of scan data. The octree can be applied to the shape management of large structures because the scan data is reduced in size while the data attributes are maintained. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University, and the scanned structure is a steel-frame bridge. The TLS used is a Leica ScanStation C10/C5. The scan data was condensed by 92%, and the octree model was constructed at a resolution of 2 millimeters. This study presents octree space partitioning for handling point clouds, providing a basis for shape management of large structures such as double-deck tunnels, buildings and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. This work is financially supported by the 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291).
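
To make the data-reduction step concrete, the following minimal Python sketch recursively partitions a point cloud into octree voxels down to a target leaf size and keeps one representative point per occupied leaf. It is an illustrative re-implementation on synthetic data, not the software or scan data used in the study, and the leaf size is deliberately coarse for speed (the study used 2 mm on real TLS data).

```python
import numpy as np

def octree_downsample(points, min_size, bounds=None):
    """Recursively subdivide the bounding cube into 8 children and return one
    representative point (the centroid) per occupied leaf voxel."""
    if len(points) == 0:
        return []
    if bounds is None:                                   # root: compute bounding cube
        lo = points.min(axis=0)
        size = float((points.max(axis=0) - lo).max()) * 1.0001 + 1e-12
        bounds = (lo, size)
    lo, size = bounds
    if size <= min_size or len(points) == 1:             # leaf voxel reached
        return [points.mean(axis=0)]
    half = size / 2.0
    out = []
    for ix in (0, 1):
        for iy in (0, 1):
            for iz in (0, 1):                            # the 8 sub-voxels
                child_lo = lo + np.array([ix, iy, iz]) * half
                mask = np.all((points >= child_lo) & (points < child_lo + half), axis=1)
                out += octree_downsample(points[mask], min_size, (child_lo, half))
    return out

# hypothetical scan: 100,000 points in a ~1 m cube, reduced to ~6 cm leaf voxels
pts = np.random.default_rng(1).random((100_000, 3))
sampled = np.asarray(octree_downsample(pts, min_size=0.1))
print(f"{len(pts)} points -> {len(sampled)} voxels "
      f"({100 * (1 - len(sampled) / len(pts)):.0f}% reduction)")
```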

Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning

Procedia PDF Downloads 287
1803 Impact of Drainage Defect on the Railway Track Surface Deflections: A Numerical Investigation

Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman

Abstract:

The railway transportation network in the UK is over 100 years old and is known as one of the oldest mass transit systems in the world. This aged track network requires frequent closure for maintenance. One of the main reasons for closure is inadequate drainage due to leakage in the buried drainage pipes. The leaking water can cause localised subgrade weakness, which subsequently can lead to major ground/substructure failure. Different condition assessment methods are available to assess the railway substructure. However, the existing condition assessment methods are not able to detect any local ground weakness/damage or provide details of the damage (e.g., size and location). To tackle this issue, a hybrid back-analysis technique based on an artificial neural network (ANN) and a genetic algorithm (GA) has been developed to predict the substructure layers' moduli and identify any soil weaknesses. At first, a finite element (FE) model of a railway track section under Falling Weight Deflectometer (FWD) testing was developed and validated against a field trial. Then a drainage pipe and various scenarios of local defect/soil weakness around the buried pipe, with various geometries and physical properties, were modelled. The impact of the local soil weakness on the track surface deflection was also studied. The FE simulation results were used to generate a database for ANN training, and then a GA was employed as an optimisation tool to back-calculate the layers' moduli and the soil weakness moduli (the ANN's inputs). The hybrid ANN-GA back-analysis technique is a computationally efficient method with no dependency on seed modulus values. The model can estimate the substructure layer moduli and the presence of any localised foundation weakness.
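
The hybrid back-analysis described above pairs an ANN surrogate of the FE model with a GA search. The sketch below illustrates the idea on a synthetic stand-in forward model (replacing the FE/FWD simulation), using scikit-learn's MLPRegressor and a very small selection-and-mutation GA. The forward model, moduli ranges and geometry factors are illustrative assumptions, not the study's FE model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def forward_model(moduli):
    """Stand-in for the FE model: maps 3 layer moduli [MPa] to 4 surface deflections [mm]."""
    geometry = np.array([[6.0, 4.0, 3.0],     # assumed influence of each layer on each
                         [3.0, 3.0, 2.5],     # geophone position (purely illustrative)
                         [1.5, 2.0, 2.0],
                         [0.5, 1.0, 1.5]])
    return geometry @ (1.0 / np.asarray(moduli)) * 10.0

# 1) build a training database, as the FE simulations do
X = rng.uniform([20, 10, 5], [200, 100, 50], size=(2000, 3))      # moduli ranges [MPa]
Y = np.array([forward_model(x) for x in X])

# 2) train the ANN surrogate: deflection bowl -> layer moduli (inverse mapping)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ann.fit(Y, X)

# 3) GA refines a candidate so the forward model reproduces the "measured" bowl
measured = forward_model(np.array([120.0, 60.0, 20.0]))           # synthetic FWD test
def fitness(pop):
    return np.array([np.sum((forward_model(p) - measured) ** 2) for p in pop])

pop = ann.predict(measured.reshape(1, -1)) + rng.normal(0, 10, size=(60, 3))
for _ in range(40):                                                # generations
    order = np.argsort(fitness(pop))
    parents = pop[order[:20]]                                      # selection
    pop = parents[rng.integers(0, 20, 60)] + rng.normal(0, 2, (60, 3))  # mutation
    pop = np.clip(pop, [20, 10, 5], [200, 100, 50])
print("Back-calculated moduli [MPa]:", np.round(pop[np.argmin(fitness(pop))], 1))
```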

Keywords: finite element (FE) model, drainage defect, falling weight deflectometer (FWD), hybrid ANN-GA

Procedia PDF Downloads 141
1802 Production and Application of Organic Waste Compost for Urban Agriculture in Emerging Cities

Authors: Alemayehu Agizew Woldeamanuel, Mekonnen Maschal Tarekegn, Raj Mohan Balakrishina

Abstract:

Composting is one of the conventional techniques adopted for organic waste management, but the practice is very limited in emerging cities even though most of the waste generated is organic. This paper aims to examine the viability of composting for organic waste management in the emerging city of Addis Ababa, Ethiopia, by addressing the composting practice, the quality of compost, and the application of compost in urban agriculture. The study collects data using compost laboratory testing and a survey of urban farm households, and uses descriptive analysis of the state of compost production and application, physicochemical analysis of the compost samples, and regression analysis of the urban farmers' willingness to pay for compost. The findings of the study indicate that there is composting practice at a small scale, most producers use unsorted feedstock materials, aerobic composting is dominantly used, and the maturation period ranges from four to ten weeks. The carbon content of the compost ranges from 30.8 to 277.1 depending on the type of feedstock applied, and this surpasses the ideal proportions for the C:N ratio. The total nitrogen, pH, organic matter, and moisture content are relatively optimal. The levels of heavy metals measured for Mn, Cu, Pb, Cd and Cr⁶⁺ in the compost samples are also insignificant. In the urban agriculture sector, chemical fertilizer is the dominant type of soil input in crop production, but vegetable producers use a combination of fertilizer and other organic inputs, including compost. The willingness to pay for compost depends on income, household size, gender, type of soil inputs, monitoring of soil fertility, the main product of the farm, farming method and farm ownership. Finally, this study recommends collaboration among stakeholders along the value chain of waste, awareness creation on the benefits of composting, and addressing the challenges faced by both compost producers and users.

Keywords: composting, emerging city, organic waste management, urban agriculture

Procedia PDF Downloads 295
1801 Consistent Testing for an Implication of Supermodular Dominance with an Application to Verifying the Effect of Geographic Knowledge Spillover

Authors: Chung Danbi, Linton Oliver, Whang Yoon-Jae

Abstract:

Supermodularity, or complementarity, is a popular concept in economics which can characterize many objective functions such as utility, social welfare, and production functions. Further, supermodular dominance captures a preference for greater interdependence among inputs of those functions, and it can be applied to examine which input set would produce higher expected utility, social welfare, or production. Therefore, we propose and justify a consistent testing for a useful implication of supermodular dominance. We also conduct Monte Carlo simulations to explore the finite sample performance of our test, with critical values obtained from the recentered bootstrap method, with and without the selective recentering, and the subsampling method. Under various parameter settings, we confirmed that our test has reasonably good size and power performance. Finally, we apply our test to compare the geographic and distant knowledge spillover in terms of their effects on social welfare using the National Bureau of Economic Research (NBER) patent data. We expect localized citing to supermodularly dominate distant citing if the geographic knowledge spillover engenders greater social welfare than distant knowledge spillover. Taking subgroups based on firm and patent characteristics, we found that there is industry-wise and patent subclass-wise difference in the pattern of supermodular dominance between localized and distant citing. We also compare the results from analyzing different time periods to see if the development of Internet and communication technology has changed the pattern of the dominance. In addition, to appropriately deal with the sparse nature of the data, we apply high-dimensional methods to efficiently select relevant data.
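
To illustrate the resampling machinery referred to above, the sketch below computes bootstrap critical values for a simple one-sided test statistic with and without recentering. It is a generic illustration of the recentered bootstrap on a hypothetical sample, not the supermodular dominance test itself, whose statistic is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.05, scale=1.0, size=200)      # hypothetical sample
n = len(x)

t_obs = np.sqrt(n) * max(x.mean(), 0.0)            # one-sided statistic for H0: mu <= 0

B = 2000
boot_plain, boot_recentered = np.empty(B), np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    boot_plain[b] = np.sqrt(n) * max(xb.mean(), 0.0)
    # recentering: impose the null by subtracting the sample mean from the resampled mean
    boot_recentered[b] = np.sqrt(n) * max(xb.mean() - x.mean(), 0.0)

for name, dist in [("plain", boot_plain), ("recentered", boot_recentered)]:
    crit = np.quantile(dist, 0.95)
    print(f"{name:>10}: 95% critical value = {crit:.3f}, reject H0: {t_obs > crit}")
```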

Keywords: supermodularity, supermodular dominance, stochastic dominance, Monte Carlo simulation, bootstrap, subsampling

Procedia PDF Downloads 121
1800 Designing a Crowbar for Women: An Ergonomic Approach

Authors: Prakash Chandra Dhara, Rupa Maity, Mousumi Chatterjee

Abstract:

Crowbars are used for gardening, and the same tools are used by both male and female gardeners, although the existing crowbars are not well suited to female gardeners. The present study aimed to design, from an ergonomic viewpoint, a crowbar intended for use by women for gardening. The study was carried out on 50 women in different villages of Howrah district in West Bengal state. Different models of existing crowbars commonly used by the women were collected and evaluated by examining their shape and size. The problems of using the existing crowbars were assessed by direct observation during operation. Musculoskeletal disorders of the subjects associated with crowbar use were evaluated by the modified Nordic questionnaire method. The anthropometric dimensions of the subjects, especially hand dimensions, were taken in standardized static conditions. Considering the problems of using the existing crowbars, some design concepts were developed, and accordingly three prototype models (P1, P2, P3) of the crowbar were prepared for the design of a modified crowbar for women. Psychophysical analysis of those prototypes was made by paired comparison tests, in which subjective preferences for different characteristics of the crowbar, e.g., length, weight, length and breadth of the blade, handle diameter, and position of the handle, were determined. From the results of the paired comparison test and the percentile values of the hand dimensions, a modified design of the crowbar was suggested. The prototype model P1 possessed more of the preferred characteristics of the tool than the other prototype models. In the final design, the weight of the tool and the length of the blade were reduced from those of the existing crowbar, and other dimensions were also changed. Two handles were suggested in the redesigned tool for better gripping and operation. The modified crowbar was evaluated by studying the body joint angles, viz., wrist, shoulder and elbow, to assess the suitability of the design. It was concluded that the redesigned crowbar was suitable for women's use.

Keywords: body dimension, crowbar, ergo-design, women, hand anthropometry

Procedia PDF Downloads 242
1799 Assessment of Environmental Quality of an Urban Setting

Authors: Namrata Khatri

Abstract:

The rapid growth of cities is transforming the urban environment and posing significant challenges for environmental quality. This study examines the urban environment of Belagavi in Karnataka, India, using geostatistical methods to assess the spatial pattern and land use distribution of the city and to evaluate the quality of the urban environment. The study is driven by the necessity to assess the environmental impact of urbanisation. Satellite data was utilised to derive information on land use and land cover. The investigation revealed that land use had changed significantly over time, with a drop in plant cover and an increase in built-up areas. High-resolution satellite data was also utilised to map the city's open areas and gardens. GIS-based research was used to assess public green space accessibility and to identify regions with inadequate waste management practises. The findings revealed that garbage collection and disposal techniques in specific areas of the city needed to be improved. Moreover, the study evaluated the city's thermal environment using Landsat 8 land surface temperature (LST) data. The investigation found that built-up regions had higher LST values than green areas, pointing to the city's urban heat island (UHI) impact. The study's conclusions have far-reaching ramifications for urban planners and politicians in Belgaum and other similar cities. The findings may be utilised to create sustainable urban planning strategies that address the environmental effect of urbanisation while also improving the quality of life for city dwellers. Satellite data and high-resolution satellite pictures were gathered for the study, and remote sensing and GIS tools were utilised to process and analyse the data. Ground truthing surveys were also carried out to confirm the accuracy of the remote sensing and GIS-based data. Overall, this study provides a complete assessment of Belgaum's environmental quality and emphasizes the potential of remote sensing and geographic information systems (GIS) approaches in environmental assessment and management.
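
For readers interested in the LST step mentioned above, the sketch below follows the commonly used single-channel workflow for Landsat 8 band 10 (radiance, brightness temperature, NDVI-based emissivity, LST). The calibration constants shown are the usual published values, but in practice they and the rescaling factors should be read from each scene's MTL metadata file; the pixel values here are hypothetical.

```python
import numpy as np

# Typical Landsat 8 band-10 constants; in practice read them from the scene's MTL file.
ML, AL = 3.342e-4, 0.1            # radiance rescaling gain and offset (assumed)
K1, K2 = 774.8853, 1321.0789      # band-10 thermal conversion constants

def land_surface_temperature(band10_dn, ndvi):
    """Single-channel LST estimate [deg C] from band-10 digital numbers and NDVI."""
    radiance = ML * band10_dn + AL                              # TOA spectral radiance
    bt_k = K2 / np.log(K1 / radiance + 1.0)                     # brightness temperature [K]
    pv = np.clip((ndvi - 0.2) / (0.5 - 0.2), 0.0, 1.0) ** 2     # proportion of vegetation
    emissivity = 0.004 * pv + 0.986                             # NDVI-threshold emissivity
    lam, rho = 10.895e-6, 1.438e-2                              # wavelength [m], h*c/k [m K]
    lst_k = bt_k / (1.0 + (lam * bt_k / rho) * np.log(emissivity))
    return lst_k - 273.15

# hypothetical pixels: a built-up pixel (low NDVI) and a vegetated pixel (high NDVI)
dn = np.array([32000.0, 29500.0])
ndvi = np.array([0.10, 0.65])
print("LST [deg C]:", np.round(land_surface_temperature(dn, ndvi), 1))
```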

Keywords: environmental quality, UEQ, remote sensing, GIS

Procedia PDF Downloads 66
1798 Polymer Mixing in the Cavity Transfer Mixer

Authors: Giovanna Grosso, Martien A. Hulsen, Arash Sarhangi Fard, Andrew Overend, Patrick. D. Anderson

Abstract:

In many industrial applications and, in particular in polymer industry, the quality of mixing between different materials is fundamental to guarantee the desired properties of finished products. However, properly modelling and understanding polymer mixing often presents noticeable difficulties, because of the variety and complexity of the physical phenomena involved. This is the case of the Cavity Transfer Mixer (CTM), for which a clear understanding of mixing mechanisms is still missing, as well as clear guidelines for the system optimization. This device, invented and patented by Gale at Rapra Technology Limited, is an add-on to be mounted downstream of existing extruders, in order to improve distributive mixing. It consists of two concentric cylinders, the rotor and stator, both provided with staggered rows of hemispherical cavities. The inner cylinder (rotor) rotates, while the outer (stator) remains still. At the same time, the pressure load imposed upstream, pushes the fluid through the CTM. Mixing processes are driven by the flow field generated by the complex interaction between the moving geometry, the imposed pressure load and the rheology of the fluid. In such a context, the present work proposes a complete and accurate three dimensional modelling of the CTM and results of a broad range of simulations assessing the impact on mixing of several geometrical and functioning parameters. Among them, we find: the number of cavities per row, the number of rows, the size of the mixer, the rheology of the fluid and the ratio between the rotation speed and the fluid throughput. The model is composed of a flow part and a mixing part: a finite element solver computes the transient velocity field, which is used in the mapping method implementation in order to simulate the concentration field evolution. Results of simulations are summarized in guidelines for the device optimization.

Keywords: mixing, non-Newtonian fluids, polymers, rheology

Procedia PDF Downloads 367
1797 The Urgency of Berth Deepening at the Port of Durban

Authors: Rowen Naicker, Dhiren Allopi

Abstract:

One of the major problems the Port of Durban is experiencing is addressing shallow spots aggravated by the megaships that berth there. In recent years, the vessels that call at the Port have increased in size, which calls for much deeper draughts. For this reason, these larger vessels can only berth at high tide to avoid the risk of running aground. In addition, the ships cannot sail in fully laden, which is not feasible for ship owners. Further, during berthing, materials are displaced from the seabed, which results in shallow spots developing. The permitted draught (under-keel allowance) for the Durban Container Terminal (DCT) is currently 12.2 m. Transnet National Ports Authority (TNPA) is currently investing in a dredging fleet worth almost two billion rand; one of the highlights of this investment is the building of a grab hopper dredger dedicated to the Port by 2017. TNPA is trying various techniques to address the reduction in draught by implementing dredging maintenance projects, but is this sufficient? The ideal resolution would be the deepening and widening of the berths. Plans for this project are in place, but the implementation process is a matter of urgency. The intention of this project is to accommodate three large vessels rather than two, which in turn will improve the turnaround time in the port, and berthing will then no longer depend on high tide to avoid ships running aground. The aim of this paper is to show that deepening and widening the berths at the Port of Durban is a matter of urgency. If the plan to deepen and widen the berths at DCT is delayed, it will mean a loss of business for the South African economy: if larger vessels cannot be accommodated in the Port of Durban, they will bypass the busiest container handling facility in the Southern hemisphere. Shipping companies are compelled to use larger ships as opposed to smaller vessels to lower port and fuel costs, and a delay in the expansion of DCT could also result in an escalation of costs.

Keywords: DCT, deepening, berth, port

Procedia PDF Downloads 387
1796 Influence of Degassing on the Curing Behaviour and Void Occurrence Properties of Epoxy / Anhydride Resin System

Authors: Latha Krishnan, Andrew Cobley

Abstract:

Epoxy resin is most widely used as a matrix for composites in aerospace, automotive and electronic applications due to its outstanding mechanical properties. These properties are chiefly predetermined by the chemical structure of the prepolymer and the type of hardener, but they can also be varied by processing conditions such as prepolymer and hardener mixing, degassing and curing conditions. In this research, the effect of degassing on the curing behaviour and the void occurrence is experimentally evaluated for an epoxy/anhydride resin system. The epoxy prepolymer was mixed with an anhydride hardener and accelerator in appropriate quantities. In order to investigate the effect of degassing on the curing behaviour and void content of the resin, uncured resin samples were prepared using three different methods: 1) no degassing, 2) degassing of the prepolymer, and 3) degassing of the mixed solution of prepolymer and hardener with accelerator. The uncured resins were tested in a differential scanning calorimeter (DSC) to observe the changes in curing behaviour of the above three resin samples by analysing factors such as gel temperature, peak cure temperature and heat of reaction/heat flow during curing. Additionally, the completely cured samples were tested by DSC to identify changes in the glass transition temperature (Tg) between the three samples. In order to evaluate the effect of degassing on the void content and morphology changes in the cured epoxy resin, the fractured surfaces of the cured epoxy resin were examined under the scanning electron microscope (SEM). In addition, the amount, geometry and fraction of voids were investigated using an optical microscope and ImageJ (image analysis) software. It was found that degassing at different stages of resin mixing had significant effects on properties such as the glass transition temperature, the void content and the void size of the epoxy/anhydride resin system. For example, degassing of the mixed resin (vacuum applied to the mixed solution) gave a higher glass transition temperature (Tg) and a lower void content.
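
As a concrete illustration of the optical void analysis step, the snippet below estimates the void fraction and equivalent void diameters from a micrograph by Otsu thresholding and connected-component labelling with scikit-image. It mirrors the kind of measurement typically made in ImageJ on a synthetic test image; it is not the authors' exact workflow, and the assumption that voids appear darker than the matrix is stated in the code.

```python
import numpy as np
from skimage import filters, measure, morphology

def void_statistics(gray_image, pixel_size_um=1.0, min_void_px=10):
    """Return void (area) fraction and equivalent void diameters [um] from a grayscale
    micrograph in which voids are assumed to appear darker than the resin matrix."""
    thresh = filters.threshold_otsu(gray_image)
    voids = gray_image < thresh                          # dark regions taken as voids
    voids = morphology.remove_small_objects(voids, min_size=min_void_px)
    labels = measure.label(voids)
    props = measure.regionprops(labels)
    void_fraction = voids.sum() / voids.size
    diameters = [2.0 * np.sqrt(p.area / np.pi) * pixel_size_um for p in props]
    return void_fraction, diameters

# hypothetical synthetic micrograph: bright matrix with a few dark circular voids
rng = np.random.default_rng(2)
img = rng.normal(200, 10, (512, 512))
yy, xx = np.mgrid[:512, :512]
for cx, cy, r in [(100, 120, 15), (300, 380, 25), (420, 90, 10)]:
    img[(xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2] = 60
vf, dia = void_statistics(img, pixel_size_um=2.0)
print(f"void fraction = {vf:.3%}, mean void diameter = {np.mean(dia):.1f} um")
```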

Keywords: anhydride epoxy, curing behaviour, degassing, void occurrence

Procedia PDF Downloads 204
1795 Effectiveness of Using Multiple Non-pharmacological Interventions to Prevent Delirium in the Hospitalized Elderly

Authors: Yi Shan Cheng, Ya Hui Yeh, Hsiao Wen Hsu

Abstract:

Delirium is an acute state of confusion, which is mainly the result of the interaction of many factors, including age > 65 years, comorbidity, impairment of cognitive function and of vision/hearing, dehydration, pain, sleep disorder, indwelling tubes and lines, general anesthesia and major surgery, etc. Research shows that the prevalence of delirium in hospitalized elderly patients is over 50%. If not improved in time, it may cause cognitive decline or impairment, not only prolonging the length of hospital stay but also increasing mortality. Some studies have shown that multiple non-pharmacological interventions are the most effective and common strategies: reorientation, early mobility, promoting sleep and nutritional support (including water intake) can improve or prevent delirium in the hospitalized elderly. In Taiwan, only one study has compared the delirium incidence of older patients who received orthopedic surgery between multiple non-pharmacological interventions and general routine care. Therefore, the purpose of this study is to address the prevention or improvement of delirium incidence density in medically hospitalized elderly patients, to provide clinical nurses with a reference for clinical implementation, and to support follow-up research. This study is a quasi-experimental design using purposive sampling. Samples are from two wards, the geriatric ward and the general medicine ward, at a medical center in central Taiwan. The sample size is estimated at a minimum of 100, and the data will be collected through a self-administered structured questionnaire, including demographic and professional evaluation items. Case recruitment began on 5/13/2023. The research results will be analyzed with SPSS for Windows 22.0 software, including descriptive statistics and inferential statistics: logistic regression, the Generalized Estimating Equation (GEE), and multivariate analysis of variance (MANOVA).

Keywords: multiple nonpharmacological interventions, hospitalized elderly, delirium incidence, delirium

Procedia PDF Downloads 69
1794 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Frequency ratio (FR) and analytical hierarchy process (AHP) methods are developed based on past landslide failure points to map landslide susceptibility, because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict the landslide susceptibility at a 12.5 m spatial scale. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region was made up of places with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The areas with the highest landslide risk include the western part of Amhara Saint Town, the northern part, and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the top leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly between the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. It also suggests that various places should take different safeguards to reduce or prevent serious damage from landslide events.
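
As a schematic of the modelling workflow, the snippet below trains one of the five classifiers (random forest) on hypothetical conditioning-factor data and reports the F1-score and AUC used for comparison in the study. The factor names, coefficients and synthetic inventory are illustrative, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# hypothetical landslide conditioning factors (slope, rainfall, distance to road)
X = np.column_stack([
    rng.uniform(0, 45, n),        # slope [deg]
    rng.uniform(800, 1600, n),    # annual rainfall [mm]
    rng.uniform(0, 2000, n),      # distance to road [m]
])
# synthetic inventory: steeper, wetter, road-adjacent cells are more landslide-prone
logit = 0.08 * X[:, 0] + 0.004 * (X[:, 1] - 1200) - 0.002 * X[:, 2] - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

prob = rf.predict_proba(X_te)[:, 1]          # landslide susceptibility in [0, 1]
print("F1 :", round(f1_score(y_te, rf.predict(X_te)), 2))
print("AUC:", round(roc_auc_score(y_te, prob), 2))
print("high / very-high share (>0.5):", round((prob > 0.5).mean(), 2))
```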

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 59
1793 A Study on Thermal and Flow Characteristics by Solar Radiation for Single-Span Greenhouse by Computational Fluid Dynamics Simulation

Authors: Jonghyuk Yoon, Hyoungwoon Song

Abstract:

Recently, there has been increasing interest in smart farming, which represents the application of modern Information and Communication Technologies (ICT) to agriculture, since it provides a methodology to optimize production efficiency by managing the growing conditions of crops automatically. In order to obtain high performance and stability for a smart greenhouse, it is important to identify the effect of various working parameters such as ventilation fan capacity and vent opening area. In the present study, a 3-dimensional CFD (Computational Fluid Dynamics) simulation of a single-span greenhouse was conducted using the commercial program Ansys CFX 18.0. The numerical simulation of the single-span greenhouse was implemented to determine the internal thermal and flow characteristics. In order to numerically model solar radiation, which spreads over a wide range of wavelengths, a multiband model that discretizes the spectrum into finite wavelength bands based on Wien's law is applied in the simulation. In addition, the absorption coefficient of the vinyl covering, which varies with the wavelength band, is applied based on the Beer-Lambert law. To validate the numerical method applied herein, the numerical results for the temperature at specific monitoring points were compared with experimental data. The average error rates between them were 12.2~14.2%, and the numerical results for the temperature distribution are in good agreement with the experimental data. The results of the present study can provide useful information for the design of various greenhouses. This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture, Forestry and Fisheries (IPET) through the Advanced Production Technology Development Program, funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA) (315093-03).
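
To make the radiation treatment concrete, the sketch below applies the Beer-Lambert law band by band to estimate how much incident solar radiation an absorbing vinyl cover transmits in a few wavelength bands. The band limits, solar fractions, absorption coefficients and film thickness are illustrative assumptions, not the values used in the CFD model.

```python
import numpy as np

# illustrative wavelength bands with assumed fractions of solar irradiance
# and assumed vinyl absorption coefficients [1/m] for each band
bands = {
    "UV (0.3-0.4 um)":      {"solar_fraction": 0.05, "kappa": 8000.0},
    "visible (0.4-0.7 um)": {"solar_fraction": 0.45, "kappa": 500.0},
    "near-IR (0.7-2.5 um)": {"solar_fraction": 0.50, "kappa": 1500.0},
}
film_thickness = 0.15e-3          # vinyl thickness [m] (assumed)
incident = 800.0                  # total incident solar irradiance [W/m^2] (assumed)

transmitted_total = 0.0
for name, b in bands.items():
    i0 = incident * b["solar_fraction"]
    transmitted = i0 * np.exp(-b["kappa"] * film_thickness)   # Beer-Lambert law
    transmitted_total += transmitted
    print(f"{name:22s}: {i0:6.1f} W/m^2 in, {transmitted:6.1f} W/m^2 transmitted")
print(f"overall transmittance = {transmitted_total / incident:.3f}")
```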

Keywords: single-span greenhouse, CFD (computational fluid dynamics), solar radiation, multiband model, absorption coefficient

Procedia PDF Downloads 122
1792 Preliminary Evaluation of Maximum Intensity Projection SPECT Imaging for Whole Body Tc-99m Hydroxymethylene Diphosphonate Bone Scanning

Authors: Yasuyuki Takahashi, Hirotaka Shimada, Kyoko Saito

Abstract:

Bone scintigraphy is widely used as a screening tool for bone metastases. However, the 180 to 240 minutes (min) waiting time after the intravenous (i.v.) injection of the tracer is both long and tiresome. To solve this shortcoming, a bone scan with a shorter waiting time is needed. In this study, we applied Maximum Intensity Projection (MIP) and triple energy window (TEW) scatter correction to a whole-body bone SPECT (merged SPECT) and investigated shortening the waiting time. Methods: In a preliminary phantom study, hot gels of 99mTc-HMDP were inserted into sets of rods with diameters ranging from 4 to 19 mm. Each rod set covered a sector of a cylindrical phantom. The activity concentration of all rods was 2.5 times that of the background in the cylindrical body of the phantom. In the human study, SPECT images were obtained from chest to abdomen at 30 to 180 min after 99mTc-hydroxymethylene diphosphonate (HMDP) injection in healthy volunteers. For both studies, MIP images were reconstructed. Planar whole-body images of the patients were also obtained; these were acquired at 200 min. The image quality of the SPECT and the planar images was compared. Additionally, 36 patients with breast cancer were scanned in the same way, and the detectability of uptake regions (metastases) was compared visually. Results: In the phantom study, a 4 mm hot gel was difficult to depict on the conventional SPECT, but MIP images could recognize it clearly. For both the healthy volunteers and the clinical patients, the accumulation of 99mTc-HMDP in the SPECT was good as early as 90 min. All findings of both image sets were in agreement. Conclusion: In phantoms, images from MIP with TEW scatter correction could detect all rods down to those with a diameter of 4 mm. In patients, MIP reconstruction with TEW scatter correction could improve the detectability of hot lesions. In addition, the time between injection and imaging could be shortened from that conventionally used for whole-body scans.
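
For readers unfamiliar with the two processing steps, the snippet below shows the standard triple-energy-window scatter estimate and a maximum intensity projection applied to a toy volume. The window widths, scatter fractions and data are illustrative placeholders, not the acquisition settings or reconstructions of the study.

```python
import numpy as np

def tew_scatter_correction(main, lower, upper, w_main=20.0, w_sub=2.0):
    """Triple-energy-window correction: subtract a scatter estimate built from the
    two narrow sub-windows flanking the photopeak window (trapezoidal approximation)."""
    scatter = (lower / w_sub + upper / w_sub) * w_main / 2.0
    return np.clip(main - scatter, 0.0, None)

def maximum_intensity_projection(volume, axis=1):
    """MIP: keep the highest voxel value along the chosen ray direction."""
    return volume.max(axis=axis)

# toy 64^3 reconstructed volume with a small hot "lesion" in a warm background
rng = np.random.default_rng(3)
vol = rng.poisson(5.0, (64, 64, 64)).astype(float)
vol[30:34, 40:44, 20:24] += 60.0                       # 4-voxel-wide hot focus

# hypothetical photopeak and sub-window counts for one projection view
main_view = vol.sum(axis=1)
lower_view, upper_view = 0.15 * main_view, 0.05 * main_view   # assumed scatter windows
corrected = tew_scatter_correction(main_view, lower_view, upper_view)

mip = maximum_intensity_projection(vol)
print("hot focus visible in MIP:", mip[30:34, 20:24].mean() > 3 * mip.mean())
```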

Keywords: merged SPECT, MIP, TEW scatter correction, 99mTc-HMDP

Procedia PDF Downloads 402
1791 Multi-Walled Carbon Nanotubes as Nucleating Agents

Authors: Rabindranath Jana, Plabani Basu, Keka Rana

Abstract:

Nucleating agents are widely used to modify the properties of various polymers. The rate of crystallization and the size of the crystals have a strong impact on the mechanical and optical properties of a polymer. The addition of nucleating agents to semi-crystalline polymers provides a surface on which crystal growth can start easily. As a consequence, fast crystal formation results in many small crystal domains, so that the cycle times for injection molding may be reduced. Moreover, the mechanical properties, e.g., modulus, tensile strength, heat distortion temperature and hardness, may increase. In the present work, multi-walled carbon nanotubes (MWNTs) were used as nucleating agents for the crystallization of poly(e-caprolactone)diol (PCL); nanocomposites of PCL filled with MWNTs were prepared by solution blending. Differential scanning calorimetry (DSC) tests were carried out to study the effect of the CNTs on the non-isothermal crystallization of PCL. Polarizing optical microscopy (POM) and wide-angle X-ray diffraction (WAXD) were used to study the morphology and crystal structure of PCL and its nanocomposites. It is found that the MWNTs act as effective nucleating agents that significantly shorten the induction period of crystallization yet decrease the crystallization rate of PCL, exhibiting a remarkable decrease in the Avrami exponent n, the surface folding energy σe and the crystallization activation energy ΔE. The carbon-based fillers act as templates for the hard block chains of PCL to form an ordered structure on the surface of the nanoparticles during the induction period, bringing about some increase in the equilibrium temperature. The melting behaviour of PCL and its nanocomposites is also studied; the nanocomposites exhibit two melting peaks at higher crystallization temperatures, which mainly refer to the melting of crystals with different sizes, whereas PCL shows only one melting temperature.
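
The Avrami exponent mentioned above is typically obtained from a linear fit of the double-logarithmic form of the Avrami equation. The sketch below fits ln(-ln(1-X)) versus ln t on a hypothetical relative-crystallinity curve (not the DSC data of the study), recovering the assumed exponent and rate constant.

```python
import numpy as np

def avrami_fit(time_min, rel_crystallinity):
    """Fit X(t) = 1 - exp(-k t^n) via ln(-ln(1 - X)) = ln k + n ln t.
    Returns the Avrami exponent n and rate constant k."""
    mask = (rel_crystallinity > 0.02) & (rel_crystallinity < 0.98)   # avoid the tails
    y = np.log(-np.log(1.0 - rel_crystallinity[mask]))
    x = np.log(time_min[mask])
    n, ln_k = np.polyfit(x, y, 1)
    return n, np.exp(ln_k)

# hypothetical crystallization curve generated with n = 2.5, k = 0.02 min^-n
t = np.linspace(0.1, 20.0, 80)
x_rel = 1.0 - np.exp(-0.02 * t ** 2.5)
x_rel += np.random.default_rng(4).normal(0, 0.003, t.size)           # small noise
x_rel = np.clip(x_rel, 1e-4, 1.0 - 1e-4)

n, k = avrami_fit(t, x_rel)
print(f"Avrami exponent n = {n:.2f}, rate constant k = {k:.3f}")
```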

Keywords: poly(e-caprolactone)diol, multiwalled carbon nanotubes, composite materials, nonisothermal crystallization, crystal structure, nucleation

Procedia PDF Downloads 483
1790 Simulation Study on Effects of Surfactant Properties on Surfactant Enhanced Oil Recovery from Fractured Reservoirs

Authors: Xiaoqian Cheng, Jon Kleppe, Ole Torsaeter

Abstract:

One objective of this work is to analyze the effects of surfactant properties (viscosity, concentration, and adsorption) on surfactant enhanced oil recovery at the laboratory scale. The other objective is to obtain the functional relationships between surfactant properties and the ultimate oil recovery and oil recovery rate. A core is cut into two parts from the middle to imitate a matrix with a horizontal fracture, with an injector and a producer at the left and right sides of the fracture, respectively. The middle slice of the core is used as the model in this paper; its size is 4 cm x 0.1 cm x 4.1 cm, and the aperture of the fracture in the middle is 0.1 cm. The original properties of the matrix, brine and oil in the base case are from the Ekofisk Field, and the properties of the surfactant are from the literature. Eclipse is used as the simulator. The results are as follows: 1) The viscosity of the surfactant solution has a positive linear relationship with the surfactant oil recovery time, and the relationship between viscosity and oil production rate is an inverse function. The viscosity of the surfactant solution has no obvious effect on the ultimate oil recovery. Since most surfactants have no large effect on the viscosity of brine, the viscosity of the surfactant solution is not a key parameter in surfactant screening for surfactant flooding in fractured reservoirs. 2) An increase in surfactant concentration results in a decrease in the oil recovery rate and an increase in the ultimate oil recovery. However, no simple functions could describe these relationships. An economic study should be conducted because of the prices of surfactant and oil. 3) In the study of surfactant adsorption, it is assumed that the matrix wettability changes to water-wet when the surfactant adsorption is at its maximum in all cases, and the ratio of surfactant adsorption to surfactant concentration (Cads/Csurf) is used to estimate the functional relationships. The results show that the relationship between ultimate oil recovery and Cads/Csurf is a logarithmic function, and the oil production rate has a positive linear relationship with exp(Cads/Csurf). The work here could be used as a reference for surfactant screening for surfactant enhanced oil recovery from fractured reservoirs, and the functional relationships between surfactant properties and the oil recovery rate and ultimate oil recovery help to improve upscaling methods.
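
To show how the functional forms quoted above can be extracted from simulation output, the sketch below fits a logarithmic model for ultimate recovery versus Cads/Csurf and a linear model for production rate versus exp(Cads/Csurf). The data points are hypothetical placeholders, not the Eclipse results, and only the shape of each relationship follows the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical simulation output: adsorption/concentration ratio vs responses
ratio = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.70, 0.90])
ultimate_recovery = np.array([0.33, 0.345, 0.36, 0.375, 0.385, 0.395, 0.405])  # fraction
production_rate = np.array([1.9, 2.0, 2.2, 2.6, 3.0, 3.6, 4.3])                # Sm3/day

def log_model(x, a, b):          # ultimate recovery = a + b * ln(x)
    return a + b * np.log(x)

(a, b), _ = curve_fit(log_model, ratio, ultimate_recovery)
slope, intercept = np.polyfit(np.exp(ratio), production_rate, 1)   # rate vs exp(ratio)

print(f"ultimate recovery ~ {a:.3f} + {b:.3f} * ln(Cads/Csurf)")
print(f"production rate   ~ {intercept:.2f} + {slope:.2f} * exp(Cads/Csurf)")
```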

Keywords: fractured reservoirs, surfactant adsorption, surfactant concentration, surfactant EOR, surfactant viscosity

Procedia PDF Downloads 161
1789 Investigating the Atmospheric Phase Distribution of Inorganic Reactive Nitrogen Species along the Urban Transect of Indo Gangetic Plains

Authors: Reema Tiwari, U. C. Kulshrestha

Abstract:

As a key regulator of atmospheric oxidative capacity and secondary aerosol formation, the signatures of reactive nitrogen (Nr) emissions are becoming increasingly evident in the cascade of air pollution, acidification, and eutrophication of ecosystems. However, their accurate estimation in the N budget remains limited by the photochemical conversion processes, where the differing atmospheric residence times of gaseous (NOₓ, HNO₃, NH₃) and particulate (NO₃⁻, NH₄⁺) Nr species become imperative to their spatio-temporal evolution on a synoptic scale. The present study attempts to quantify such interactions under tropical conditions, when low anticyclonic winds become favorable to advection from the west during winter. For this purpose, diurnal sampling was conducted using a low-volume sampler assembly, in which the ambient concentrations of the Nr trace gases, along with their ionic fractions in the aerosol samples, were determined with a UV spectrophotometer and ion chromatography, respectively. The results showed a spatial gradient of the gaseous precursors with a much more pronounced inter-site variability (p < 0.05) than their particulate fractions. Such observations were confirmed by their limited photochemical conversion, where day-to-night (D/N) ratios of less than 1 for the different Nr fractions suggested an influence of boundary layer dynamics at the background site. These phase conversion processes were further corroborated by the molar ratios NOₓ/NOᵧ and NH₃/NHₓ, where incomplete titration of the NOₓ and NH₃ emissions was observed irrespective of their diurnal phases along the sampling transect. Their calculation with equilibrium-based approaches for an NH₃-HNO₃-NH₄NO₃ system, on the other hand, was characterized by delays in equilibrium attainment, where plots of their below-deliquescence Kₘ and Kₚ values against 1000/T confirmed the role of lower temperature ranges in NH₄NO₃ aerosol formation. These results would help in resolving not only the changing atmospheric inputs of reduced (NH₃, NH₄⁺) and oxidized (NOₓ, HNO₃, NO₃⁻) Nr estimates but also the dependence of the Nr mixing ratios on local meteorological conditions.

Keywords: diurnal ratios, gas-aerosol interactions, spatial gradient, thermodynamic equilibrium

Procedia PDF Downloads 119
1788 Knowledge of Risk Factors and Health Implications of Fast Food Consumption among Undergraduate in Nigerian Polytechnic

Authors: Adebusoye Michael, Anthony Gloria, Fasan Temitope, Jacob Anayo

Abstract:

Background: The culture of fast food consumption has gradually become a common lifestyle in Nigeria, especially among young people in urban areas, in spite of the associated adverse health consequences. The adolescent pattern of fast food consumption, and their perception of this practice as a risk factor for Non-Communicable Diseases (NCDs), have not been fully explored. This study was designed to assess the fast food consumption pattern and the perception of it as a risk factor for NCDs among undergraduates of Federal Polytechnic, Bauchi. Methodology: The study was descriptive and cross-sectional in design. One hundred and eighty-five students were recruited using a systematic random sampling method from the two halls of residence. A structured questionnaire was used to assess the consumption pattern of fast foods. Data collected from the questionnaires were analysed using the Statistical Package for the Social Sciences (SPSS) version 16. Simple descriptive statistics, such as frequency counts and percentages, were used to interpret the data. Results: The age range of respondents was 18-34 years; 58.4% were male, 93.5% were single, and 51.4% of their parents were employed. The majority (100%) were aware of fast foods, and 75% agreed that their consumption has implications for NCDs. The fast food consumption distribution included meat pie (4.9%), beef roll/sausage (2.7%), egg roll (13.5%), doughnut (16.2%), noodles (18%) and carbonated drinks (3.8%). 30.3% consumed fast food three times a week, and 71% attributed high consumption of fast food to workload. Conclusion: It was revealed that social pressure from peers, time constraints, class pressure and the school programme had a strong influence on the high percentage of higher-institution students consuming fast foods; therefore, nutrition education campaigns for campus food outlets or vendors and behavioural change communication on healthy nutrition and lifestyles among young people are advocated.

Keywords: fast food consumption, Nigerian polytechnic, risk factors, undergraduate

Procedia PDF Downloads 458
1787 Characterization of A390 Aluminum Alloy Produced at Different Slow Shot Speeds Using Assisted Vacuum High-Pressure Die Casting

Authors: Wenbo Yu, Zihao Yuan, Zhipeng Guo, Shoumei Xiong

Abstract:

Plate-shaped specimens of hypereutectic A390 aluminum alloy were produced under different slow shot speeds in the vacuum-assisted high-pressure die casting (VHPDC) process. According to the results, the vacuum pressure inside the die cavity increased linearly with increasing slow shot speed at the beginning of mold filling. Meanwhile, it was found that the tensile properties of the vacuum die castings were deteriorated by the porosity content. In addition, the average primary Si size varies between 14 µm and 23 µm, which has a binary functional relationship with the slow shot speed. Owing to the vacuum effect, the castings could be treated by T6 heat treatment. After heat treatment, the microstructural morphologies revealed that needle-shaped and thin-flaked eutectic Si particles became rounded, while Al2Cu dissolved into the α-Al matrix. In the in-situ tensile test of the as-received sample, microcracks first initiated at the primary Si particles and propagated along the Al matrix with a transgranular fracture mode. In contrast, for the treated sample, the cracks initiated at the Al2Cu particles and propagated along the Al grain boundaries with an intergranular fracture mode. In the in-situ three-point bending test, microcracks first formed in the primary Si particles for both samples. Subsequently, the cracks between the primary Si particles linked along the Al grain boundaries in the as-received sample; in contrast, in the treated sample the cracks in the primary Si linked through the solid lines in the Al matrix. Furthermore, the fractography revealed that the fracture mechanism evolved from brittle transgranular fracture to a fracture mode with many dimples after heat treatment.

Keywords: A390 aluminum, vacuum assisted high pressure die casting, heat treatment, mechanical properties

Procedia PDF Downloads 236
1786 The Use of Correlation Difference for the Prediction of Leakage in Pipeline Networks

Authors: Mabel Usunobun Olanipekun, Henry Ogbemudia Omoregbee

Abstract:

Anomalies such as leakages and bursts in water, hydraulic or petrochemical pipeline networks have significant implications for the economy and the environment. In order to ensure that pipeline systems are reliable, they must be efficiently controlled. Wireless Sensor Networks (WSNs) have become a powerful technology for critical infrastructure monitoring of water, oil and gas pipelines. The loss of water, oil and gas is inevitable and is strongly linked to financial costs and environmental problems, and its avoidance often leads to savings of economic resources. Substantial repair costs and the loss of precious natural resources are part of the financial impact of leaking pipes. Pipeline systems experts have implemented various methodologies in recent decades to identify and locate leakages in water, oil and gas supply networks. These methodologies include, among others, acoustic sensing, direct measurements, and statistical analysis of abrupt changes. The problem of leak quantification is to estimate, given some observations about the network, the size and location of one or more leaks in a water pipeline network. For detecting background leakage, however, these methodologies carry greater uncertainty, since their output is less reliable. In this work, we present a scalable concept and simulation in which a pressure-driven model (PDM) was used to determine water pipeline leakage in a network. Pressure data were collected using acoustic sensors located at node points spaced a predetermined distance apart. Using the correlation difference, we were able to determine the leakage point, which was locally introduced at a predetermined location between two consecutive nodes and caused a substantial pressure difference in the pipeline network. After de-noising the signals from the sensors at the nodes, we successfully obtained the exact point where the local leakage had been introduced, using the correlation difference model we developed.
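The abstract does not give the details of the correlation difference model, so the Python sketch below shows only the standard cross-correlation approach to localizing a leak between two nodes from the time delay between their de-noised signals. The function locate_leak, the sampling rate, the wave propagation speed and the synthetic leak transient are illustrative assumptions introduced here for the example, not values from the paper.

import numpy as np

def locate_leak(sig_a, sig_b, fs, wave_speed, sensor_spacing):
    """Estimate the leak position (distance from sensor A, in metres) from the
    arrival-time delay between the leak-induced signals at two nodes."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    corr = np.correlate(a, b, mode="full")      # cross-correlation of the two signals
    lag = np.argmax(corr) - (len(b) - 1)        # lag (in samples) of A relative to B
    delay = lag / fs                            # positive delay: signal reaches A later
    return 0.5 * (sensor_spacing + wave_speed * delay)

# Synthetic example: a leak 120 m from sensor A on a 300 m pipe segment.
fs, c, d = 2000.0, 1200.0, 300.0                # sampling rate (Hz), wave speed (m/s), spacing (m)
leak_pos = 120.0
t = np.arange(0.0, 1.0, 1.0 / fs)
burst = np.exp(-((t - 0.5) ** 2) / 1e-4)        # leak transient centred at t = 0.5 s
rng = np.random.default_rng(0)
sig_a = np.interp(t - leak_pos / c, t, burst) + 0.01 * rng.standard_normal(t.size)
sig_b = np.interp(t - (d - leak_pos) / c, t, burst) + 0.01 * rng.standard_normal(t.size)
print("estimated leak position: %.1f m from sensor A" % locate_leak(sig_a, sig_b, fs, c, d))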

Keywords: leakage detection, acoustic signals, pipeline network, correlation, wireless sensor networks (WSNs)

Procedia PDF Downloads 84
1785 Quantification and Evaluation of Tumors Heterogeneity Utilizing Multimodality Imaging

Authors: Ramin Ghasemi Shayan, Morteza Janebifam

Abstract:

Tumors are frequently inhomogeneous: regional variations in cell death, metabolic activity, proliferation and anatomy are observed. There is increasing evidence that solid tumors may contain subpopulations of cells with different genotypes and phenotypes. These distinct populations of cancer cells can interact in complex ways and may differ in their sensitivity to drugs. Most tumors show biological heterogeneity, including heterogeneity in genomic subtypes, variations in the expression of growth factors and of pro- and anti-angiogenic factors, and variations within the tumoral microenvironment. These can present as differences between tumors in different individuals. For instance, O6-methylguanine-DNA methyltransferase, a DNA repair enzyme, is silenced by methylation of the gene promoter in half of glioblastomas (GBM), contributing to chemosensitivity and improved survival. From the outset, there has been particular interest in the use of diffusion-weighted imaging (DWI) and dynamic contrast-enhanced MRI (DCE-MRI). DWI sensitizes MRI to water diffusion within the extravascular extracellular space (EES) and is therefore governed by the size and configuration of the cell population. DCE-MRI, in turn, uses dynamic acquisition of images during and after the injection of an intravenous contrast agent; the signal changes are converted to absolute contrast-agent concentrations, permitting analysis with pharmacokinetic models. PET provides unique biological specificity, permitting dynamic or static imaging of biological molecules labelled with positron-emitting isotopes (for example, 15O, 18F, 11C). The technique, however, involves a considerable radiation dose, which limits repeated measurements, particularly when used together with computed tomography (CT). Finally, it is of great interest to quantify regional hemoglobin status, which could be combined with DCE-CT measurements of vascular physiology to generate important insights into tumor hypoxia.
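As an illustration of the pharmacokinetic analysis mentioned above, the Python sketch below evaluates the standard Tofts model, one commonly used pharmacokinetic model for DCE-MRI, in which the tissue contrast concentration is obtained by convolving the plasma concentration with an exponential kernel governed by Ktrans and ve. The arterial input function and the parameter values are purely illustrative assumptions; the abstract does not specify which model or parameters are used.

import numpy as np

def tofts_model(t, cp, ktrans, ve):
    """Standard Tofts model:
    Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-(Ktrans/ve) * (t - tau)) dtau,
    approximated here by a discrete convolution."""
    kep = ktrans / ve
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

# Hypothetical example: biexponential arterial input function and
# illustrative parameter values (Ktrans in 1/min, ve dimensionless).
t = np.arange(0.0, 5.0, 1.0 / 60.0)               # time in minutes, 1 s sampling
cp = 3.0 * (np.exp(-0.5 * t) - np.exp(-4.0 * t))  # plasma concentration, mM (assumed)
ct = tofts_model(t, cp, ktrans=0.25, ve=0.35)
print("peak tissue concentration: %.3f mM at t = %.1f min" % (ct.max(), t[ct.argmax()]))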

Keywords: heterogeneity, computerized tomography scan, magnetic resonance imaging, PET

Procedia PDF Downloads 137
1784 Management Challenges and Product Quality of Fish Farms in Greece

Authors: S. Anastasiou, C. Nathanailides, S. Logothetis, G. Kanlis

Abstract:

The Greek aquaculture industry is the second most important economic sector for the growth of the Greek economy. The purpose of the present work is to present some data on the management challenges that the aquaculture industry in Greece is currently facing. Currently, the Greek aquaculture industry is going through a series of mergers and restructuring. The financial status of the different aquaculture companies, the working conditions and the management practices may vary according to lending exposure, market mix, company size, and the technological parameters of the different fish farm units and rearing systems. Frequently, aquaculture personnel are exposed to harsh environmental conditions and to occupational risk. Furthermore, there is pressure on the personnel of fish farms to constantly improve their production efficiency and to adapt their work skills to the new methods and practices adopted by the aquaculture industry. There are some data to suggest the existence of gender inequality in the workforce of Greek fish farms. Women are paid less, are frequently absent from higher managerial positions, and most of their male workmates consider the job too harsh for women. Nevertheless, a high level of job satisfaction was observed in both men and women. This high level of job satisfaction among aquaculture personnel can be attributed, at least partially, to the nature of the work: the working environment is very distinct, most of the staff have very positive experiences interacting with their workmates, and there is satisfaction in being part of a business that consistently exceeds its production targets. Indeed, there is some evidence that the Greek aquaculture industry consistently exceeds its production targets while rapidly adopting and improving new technology and constantly improving its human resources management practices, which include continuous staff training, very good communication channels between management and personnel, and reduction of occupational risks to aquaculture personnel. All these management parameters may play a determining role in the volume and quality of production and in the future of this sector in Greece.

Keywords: aquaculture, fish quality, management, production targets

Procedia PDF Downloads 431
1783 Zinc Nanoparticles Modified Electrode as an Insulin Sensor

Authors: Radka Gorejova, Ivana Sisolakova, Jana Shepa, Frederika Chovancova, Renata Orinakova

Abstract:

Diabetes mellitus (DM) is a serious metabolic disease characterized by chronic hyperglycemia. Often, the symptoms are not sufficiently observable at early stages, so hyperglycemia causes pathological and functional changes before DM is diagnosed. Therefore, the development of an electrochemical sensor that is fast, accurate, and instrumentally undemanding is currently needed. Screen-printed carbon electrodes (SPCEs) can be considered the most suitable matrix material for insulin sensors because of the small size of the working electrode, which reduces the analyte volume required to only 50 µl per measurement. The surface of the bare SPCE was modified with a combination of chitosan, multi-walled carbon nanotubes (MWCNTs), and zinc nanoparticles (ZnNPs) to obtain better electrocatalytic activity towards insulin oxidation. ZnNPs were electrochemically deposited on the chitosan-MWCNTs/SPCE surface using the pulse deposition method. Thereafter, insulin was determined on the prepared electrode using chronoamperometry and electrochemical impedance spectroscopy (EIS). The chronoamperometric measurement was performed by adding constant aliquots (2 μl) of 2 μM insulin in 0.1 M NaOH and PBS, and the current response of the system was monitored as the concentration was gradually increased. Subsequently, the limit of detection (LOD) of the prepared electrode was determined via the Randles-Ševčík equation; the LOD was 0.47 µM. The prepared electrodes were also studied as impedimetric sensors for insulin determination; therefore, various insulin concentrations were determined via EIS. Based on the performed measurements, the ZnNPs/chitosan-MWCNTs/SPCE can be considered a potential candidate for a novel electrochemical sensor for insulin determination. Acknowledgments: This work has been supported by the projects Visegradfund project number 22020140, VEGA 1/0095/21 of the Slovak Scientific Grant Agency, and APVV-PP-COVID-20-0036 of the Slovak Research and Development Agency.
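For reference, the Randles-Ševčík equation invoked in the abstract relates the voltammetric peak current to analyte concentration; its standard form at 25 °C is reproduced below. The abstract does not report the electrode area, diffusion coefficient or scan rate used by the authors.

i_p \;=\; 0.4463\, n F A C \sqrt{\frac{n F v D}{R T}} \;\approx\; 2.69 \times 10^{5}\, n^{3/2} A D^{1/2} v^{1/2} C \qquad (T = 298\ \mathrm{K})

where i_p is the peak current (A), n the number of electrons transferred, F the Faraday constant, A the electrode area (cm²), C the bulk concentration (mol cm⁻³), D the diffusion coefficient (cm² s⁻¹), v the scan rate (V s⁻¹), R the gas constant and T the absolute temperature.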

Keywords: zinc nanoparticles, insulin, chronoamperometry, electrochemical impedance spectroscopy

Procedia PDF Downloads 113
1782 Brain Atrophy in Alzheimer's Patients

Authors: Tansa Nisan Gunerhan

Abstract:

Dementia comes in different forms, including Alzheimer's disease, which is the most common dementia diagnosis among elderly individuals. On average, life expectancy for patients with Alzheimer's is around 4-8 years after diagnosis; however, it can reach twenty years or more, depending on the shrinkage of the brain. Normally, the brain shrinks to some extent with aging but does not lose a vast number of neurons. In Alzheimer's patients, however, neurons are destroyed rapidly; hence problems with loss of memory, communication, and other metabolic activities begin. The toxic changes in the brain affect the stability of the neurons. Beta-amyloid and tau are two proteins believed to play a role in the development of Alzheimer's disease through these toxic changes. Beta-amyloid is a protein produced in the brain that is normally broken down and removed from the body. In people with Alzheimer's disease, however, the production of beta-amyloid increases and it begins to accumulate in the brain, forming amyloid plaques: deposits of amyloid-beta that build up between nerve cells. These plaques are thought to disrupt communication between nerve cells and may contribute to the death of brain cells. Tau is a protein that helps to stabilize microtubules, which are essential for the transportation of nutrients and other substances within brain cells. In people with Alzheimer's disease, tau becomes abnormal and begins to accumulate inside brain cells, forming neurofibrillary tangles. These tangles disrupt the normal functioning of brain cells and may contribute to their death. The accumulation of amyloid plaques and neurofibrillary tangles in the brain is thought to contribute to the shrinkage of brain tissue, leading to a reduction in brain volume. Brain atrophy in Alzheimer's disease is often accompanied by changes in the structure and function of brain cells and the connections between them, leading to a decline in brain function. These accumulating toxic changes can cause symptoms such as memory loss, difficulty with thinking and problem-solving, and changes in behavior and personality.

Keywords: Alzheimer, amyloid-beta, brain atrophy, neuron, shrinkage

Procedia PDF Downloads 80