Search results for: time step size
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24426

24216 Household Size and Poverty Rate: Evidence from Nepal

Authors: Basan Shrestha

Abstract:

The relationship between household size and poverty is not well understood. Followers of Malthus argue that a growing population adds pressure to a dwindling resource base through increasing demand, which would lead to poverty. Others claim that bigger households are richer because household labour is available for income-generating activities. Data from Nepal were analyzed to examine the relationship between household size and the poverty rate. The analysis of data from 3,968 Village Development Committees (VDCs)/municipalities (MPs) located in 75 districts across all five development regions revealed that the average household size had a moderate positive correlation with the poverty rate (Karl Pearson's correlation coefficient = 0.44). In a regression analysis, household size explained 20% of the variation in the poverty rate. A higher positive correlation was observed in eastern Nepal (Karl Pearson's correlation coefficient = 0.66), where the regression analysis showed that household size explained 43% of the variation in the poverty rate. The relationship was weak in the far-west, possibly because the incidence of poverty there was high irrespective of household size. Overall, the data revealed that bigger households were relatively poorer. With the increasing level of awareness and interventions for family planning, it is anticipated that household size will decrease, leading to a decreased poverty rate. In addition, the government needs to devise a mechanism to create employment opportunities for the household labour force to reduce poverty.
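
The correlation and regression figures quoted above can be reproduced with standard statistical tools. Below is a minimal sketch using hypothetical stand-in data for the per-VDC household size and poverty rate (the actual Nepalese dataset is not included here, so all values are illustrative):

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for the 3,968 VDC/municipality records:
# average household size (x) and poverty rate in percent (y).
rng = np.random.default_rng(0)
household_size = rng.normal(5.0, 1.0, 3968)
poverty_rate = 10 + 4 * household_size + rng.normal(0, 9, 3968)

# Karl Pearson's correlation coefficient (r) between the two variables.
r, p_value = stats.pearsonr(household_size, poverty_rate)

# Simple linear regression: the share of variation in the poverty rate
# explained by household size is the coefficient of determination r².
slope, intercept, r_lin, p_reg, stderr = stats.linregress(household_size, poverty_rate)
print(f"r = {r:.2f}, r^2 = {r_lin**2:.2f} (share of variation explained)")
```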

Keywords: household size, poverty rate, Nepal, regional development

Procedia PDF Downloads 361
24215 Lipid-Chitosan Hybrid Nanoparticles for Controlled Delivery of Cisplatin

Authors: Muhammad Muzamil Khan, Asadullah Madni, Nina Filipczek, Jiayi Pan, Nayab Tahir, Hassan Shah, Vladimir Torchilin

Abstract:

Lipid-polymer hybrid nanoparticles (LPHNPs) are delivery systems for controlled drug delivery at tumor sites. This system combines the superior biocompatibility of lipids with the structural advantages of polymers for controlled drug delivery. In the present study, cisplatin-loaded lipid-chitosan hybrid nanoparticles were formulated by a single-step ionic gelation method based on the ionic interaction of positively charged chitosan and negatively charged lipid. Formulations with various chitosan-to-lipid ratios were investigated to obtain the optimal particle size, encapsulation efficiency, and controlled release pattern. Transmission electron microscopy and dynamic light scattering analyses demonstrated a size range of 181-245 nm and a zeta potential range of 20-30 mV. Compatibility among the components and the stability of the formulation were demonstrated with FTIR analysis and thermal studies, respectively. The therapeutic efficacy and cellular interaction of cisplatin-loaded LPHNPs were investigated using in vitro cell-based assays in the A2780/ADR ovarian carcinoma cell line. Additionally, the cisplatin-loaded LPHNPs exhibited a low toxicity profile in rats. The in vivo pharmacokinetics study also demonstrated controlled delivery of cisplatin with enhanced mean residence time and half-life. Our studies suggest that cisplatin-loaded LPHNPs are a promising platform for the controlled delivery of cisplatin in cancer therapy.

Keywords: cisplatin, lipid-polymer hybrid nanoparticle, chitosan, in vitro cell line study

Procedia PDF Downloads 130
24214 Influence of the Flow Rate Ratio in a Jet Pump on the Size of Air Bubbles

Authors: L. Grinis, N. Lubashevsky, Y. Ostrovski

Abstract:

In waste water treatment processes, aeration introduces air into a liquid. In these systems, air is introduced by different devices submerged in the waste water. Smaller bubbles result in more bubble surface area per unit volume and higher oxygen transfer efficiency. Jet pumps are devices that use air bubbles and are widely used in waste water treatment processes. The principle of jet pumps is their ability to transfer the energy of one fluid, called the primary or motive fluid, to a secondary fluid or gas. These pumps have no moving parts and are able to work in remote areas under extreme conditions. The objective of this work is to study experimentally the characteristics of the jet pump and the size of air bubbles in a laboratory water tank. The effect of the flow rate ratio on pump performance is investigated in order to better understand pump behavior under various conditions and to determine how efficiently air bubbles of different sizes can be obtained. The experiments show that the flow rate ratio must be increased with care when the aim is to decrease bubble size in the outlet flow. This study will help improve and extend the use of the jet pump in many practical applications.

Keywords: jet pump, air bubbles size, retention time, waste water

Procedia PDF Downloads 307
24213 Hydrodynamics of Wound Ballistics

Authors: Harpreet Kaur, Er. Arjun, Kirandeep Kaur, P. K. Mittal

Abstract:

Simulation of the human body by a 20% gelatin and 80% water mixture is examined from a wound ballistics point of view. Parameters such as incapacitation energy and temporary-to-permanent cavity size, together with the tools of hydrodynamics, have been employed to arrive at a model of the human body similar to the one adopted by NATO. Calculations using the equations of motion yield a value of 339 µs for the time in which the temporary cavity at its maximum size settles down to a permanent cavity; this value corresponds to 10 mm bullets, and the settling into a permanent cavity was examined for four different bullet sizes, i.e., 5.45, 5.56, 7.62, and 10 mm. The obtained results are in excellent agreement when the body is modelled as a right circular cylinder of 15 cm height and 10 cm diameter. An effort is made in this work to provide a sound theoretical basis for parameters commonly used in wound ballistics from the field experience discussed by Col. Coats and Major Beyer.

Keywords: gelatine, gunshot, hydrodynamic model, oscillation time, temporary and permanent cavity, wound ballistics

Procedia PDF Downloads 75
24212 Effect of Austenitizing Temperature, Soaking Time and Grain Size on Charpy Impact Toughness of Quenched and Tempered Steel

Authors: S. Gupta, R. Sarkar, S. Pathak, D. H. Kela, A. Pramanick, P. Talukdar

Abstract:

Low-alloy quenched and tempered steels are typically used in cast railway components such as knuckles, yokes, and couplers. Since these components experience extensive impact loading during their service life, adequate impact toughness of these grades needs to be ensured to avoid catastrophic failure of parts in service. Because of the general availability of Charpy V-notch test equipment, the Charpy test is the most common and economical means of evaluating the impact toughness of materials and is generally used in quality control applications. With this backdrop, an experiment was designed to evaluate the effect of austenitizing temperature, soaking time and the resultant grain size on the Charpy impact toughness and the related fracture mechanisms in a quenched and tempered low-alloy steel, with the aim of optimizing the heat treatment parameters (i.e., austenitizing temperature and soaking time) with respect to impact toughness. In the first phase, samples were austenitized at different temperatures, viz. 760, 800, 840, 880, 920 and 960°C, followed by quenching and tempering at 600°C for 4 hours. In the next phase, samples were subjected to different soaking times (0, 2, 4 and 6 hours) at a fixed austenitizing temperature (980°C), followed by quenching and tempering at 600°C for 4 hours. The samples corresponding to the different test conditions were then subjected to instrumented Charpy tests at -40°C, and the absorbed energy was recorded. Subsequently, the microstructure and fracture surface of samples corresponding to the different test conditions were observed under a scanning electron microscope, and the corresponding grain sizes were measured. In the final stage, austenitizing temperature, soaking time and measured grain sizes were correlated with impact toughness and with the fracture morphology and mechanism.

Keywords: heat treatment, grain size, microstructure, retained austenite and impact toughness

Procedia PDF Downloads 338
24211 Evaluation of Gene Expression after in Vitro Differentiation of Human Bone Marrow-Derived Stem Cells to Insulin-Producing Cells

Authors: Mahmoud M. Zakaria, Omnia F. Elmoursi, Mahmoud M. Gabr, Camelia A. AbdelMalak, Mohamed A. Ghoneim

Abstract:

Many protocols have been published for the differentiation of human mesenchymal stem cells (MSCs) into insulin-producing cells (IPCs), with the aim of secreting insulin for the treatment of diabetes. Our aim is to evaluate the relative gene expression for each independent protocol. Human bone marrow cells were derived from three volunteers suffering from diabetes. After expansion of the mesenchymal stem cells, these cells were differentiated by three different protocols (the one-step protocol used conophylline protein, the two-step protocol depended on trichostatin-A, and the three-step protocol started with beta-mercaptoethanol). Evaluation of gene expression was carried out by real-time PCR for pancreatic endocrine genes, transcription factors, glucose transporter, precursor markers, pancreatic enzymes, proteolytic cleavage, extracellular matrix and cell surface protein. Quantitation of insulin secretion was performed by an immunofluorescence technique in 24-well plates. Most of the genes studied were up-regulated in the in vitro differentiated cells, and insulin production was observed in all three independent protocols. The two-step protocol showed slight increases in endocrine mRNA expression and in insulin production, and was therefore more efficient in expressing pancreatic endocrine genes and producing insulin than the other two protocols.

Keywords: mesenchymal stem cells, insulin producing cells, conophylline protein, trichostatin-A, beta-mercaptoethanol, gene expression, immunofluorescence technique

Procedia PDF Downloads 215
24210 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

Significant wave height prediction is an issue of great interest in the field of coastal activities because of the non-linear behavior of the wave height and the complexity of its prediction. This study presents a machine learning model to forecast the significant wave height recorded by the oceanographic wave-measuring buoys anchored at Mooloolaba, using Queensland Government data. Modeling was performed by a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), with ReLU as the activation function of the MLPNN. The GA is in charge of optimizing the MLPNN hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of wrapper feature selection for the window width size. Results are assessed using Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations for the prediction optimization of 5 steps forward, obtaining a performance of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE with a correlation factor of 0.99940. The GA-MLP algorithm was compared with the ARIMA forecasting model and performed better in all performance criteria, validating the potential of this algorithm.
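
As an illustration of the GA-MLP idea, the sketch below runs a small genetic search over two MLP hyperparameters (learning rate and hidden-layer width) on synthetic data. The buoy data, the wrapper feature selection, and the exact encoding used in the study are not reproduced, so all names, ranges, and settings here are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

# Synthetic stand-in for lagged wave-height features and one-step-ahead targets.
X = rng.normal(size=(500, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)
X_train, X_val, y_train, y_val = X[:400], X[400:], y[:400], y[400:]

def fitness(genes):
    """Validation MSE of an MLP built from a (learning_rate, n_neurons) gene pair."""
    lr, n_neurons = genes
    model = MLPRegressor(hidden_layer_sizes=(int(n_neurons),), activation="relu",
                         learning_rate_init=float(lr), max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    return mean_squared_error(y_val, model.predict(X_val))

def mutate(genes):
    lr, n = genes
    return (float(np.clip(lr * rng.uniform(0.5, 2.0), 1e-4, 1e-1)),
            int(np.clip(n + rng.integers(-8, 9), 4, 64)))

# GA loop: 10 individuals evolved for 5 generations (the paper reports
# 30 individuals and 8 generations, so this is a scaled-down illustration).
population = [(10 ** rng.uniform(-4, -1), int(rng.integers(4, 64))) for _ in range(10)]
for generation in range(5):
    scored = sorted(population, key=fitness)
    parents = scored[:5]                                  # selection: keep the best half
    population = parents + [mutate(p) for p in parents]   # mutation fills the rest
best = min(population, key=fitness)
print("best (learning rate, neurons):", best, "val MSE:", fitness(best))
```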

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 107
24209 Efficient Frequent Itemset Mining Methods over Real-Time Spatial Big Data

Authors: Hamdi Sana, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, there has been a huge increase in the use of spatio-temporal applications where data and queries are continuously moving. As a result, the need to process real-time spatio-temporal data is clear, and real-time stream data management has become a hot topic. The sliding window model and frequent itemset mining over dynamic data are among the most important problems in the context of data mining. The sliding window model for frequent itemset mining is widely used for data stream mining because of its emphasis on recent data and its bounded memory requirement. Existing methods use the traditional transaction-based sliding window model, in which the window size is based on a fixed number of transactions. This model assumes that all transactions arrive at a constant rate, which is not suited to real-time applications, and its use in such applications degrades their performance. Based on these observations, this paper relaxes the notion of window size and proposes the use of a timestamp-based sliding window model. In our proposed frequent itemset mining algorithm, support conditions are used to differentiate frequent from infrequent patterns. Thereafter, a tree is developed to incrementally maintain the essential information. We evaluate our contribution, and the preliminary results are quite promising.
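
A minimal sketch of the timestamp-based sliding window idea follows: the window is defined by a time span rather than a fixed transaction count, and itemset supports are recounted over whatever transactions currently fall inside it. The tree-based incremental maintenance described in the paper is not reproduced; all names and thresholds are illustrative:

```python
from collections import Counter
from itertools import combinations

WINDOW_SPAN = 10      # seconds covered by the sliding window (illustrative)
MIN_SUPPORT = 2       # support threshold separating frequent from infrequent patterns

# (timestamp, transaction) stream; the arrival rate is *not* assumed constant.
stream = [
    (1,  {"a", "b", "c"}),
    (2,  {"a", "c"}),
    (5,  {"b", "c", "d"}),
    (14, {"a", "b"}),
    (15, {"a", "b", "c"}),
]

def frequent_itemsets(window, min_support):
    """Count all 1- and 2-itemsets in the current window and keep the frequent ones."""
    counts = Counter()
    for _, items in window:
        for k in (1, 2):
            counts.update(frozenset(c) for c in combinations(sorted(items), k))
    return {s: n for s, n in counts.items() if n >= min_support}

window = []
for ts, transaction in stream:
    window.append((ts, transaction))
    # Expire transactions whose timestamp falls outside the time span,
    # regardless of how many transactions that leaves in the window.
    window = [(t, tr) for t, tr in window if t > ts - WINDOW_SPAN]
    print(ts, frequent_itemsets(window, MIN_SUPPORT))
```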

Keywords: real-time spatial big data, frequent itemset, transaction-based sliding window model, timestamp-based sliding window model, weighted frequent patterns, tree, stream query

Procedia PDF Downloads 162
24208 Numerical Investigation into Capture Efficiency of Fibrous Filters

Authors: Jayotpaul Chaudhuri, Lutz Goedeke, Torsten Hallenga, Peter Ehrhard

Abstract:

Purification of gases from aerosols or airborne particles via filters is widely applied in industry and in our daily lives. This separation, especially in the micron and submicron size range, is a necessary step to protect the environment and human health. Fibrous filters are often employed due to their low cost and high efficiency. For designing any filter, the two most important performance parameters are capture efficiency and pressure drop. Since the capture efficiency is directly proportional to the pressure drop, which leads to higher operating costs, a detailed investigation of the separation mechanism is required to optimize the filter design, i.e., to achieve a high capture efficiency with a lower pressure drop. Therefore, a two-dimensional flow simulation around a single fiber using Ansys CFX and Matlab is used to gain insight into the separation process. Instead of simulating a solid fiber, the present Ansys CFX model uses a fictitious domain approach for the fiber by implementing a momentum loss model. This approach has been chosen to avoid creating a new mesh for different fiber sizes, thereby saving time and effort for re-meshing. In a first step, only the flow of the continuous fluid around the fiber is simulated in Ansys CFX; the flow field data are then extracted, imported into Matlab, and the particle trajectory is calculated in a Matlab routine. This calculation is a Lagrangian, one-way coupled approach for particles with all relevant forces acting on them. The key parameters for the simulation in both Ansys CFX and Matlab are the porosity ε, the particle-to-fiber diameter ratio D, the fluid Reynolds number Re, the particle Reynolds number Rep, the Stokes number St, the Froude number Fr and the fluid-to-particle density ratio ρf/ρp. The simulation results were then compared to the single fiber theory from the literature.
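
The Matlab trajectory routine itself is not given in the abstract; the sketch below illustrates the same one-way-coupled Lagrangian idea in Python, with only Stokes drag and gravity acting on the particle and a prescribed uniform carrier flow standing in for the CFX flow field. All property values are placeholders, not the study's parameters:

```python
import numpy as np

# Placeholder fluid and particle properties (not the study's values).
rho_f, rho_p = 1.2, 2000.0         # fluid and particle density, kg/m^3
mu = 1.8e-5                        # dynamic viscosity of air, Pa*s
d_p = 2.0e-6                       # particle diameter, m
g = np.array([0.0, -9.81])         # gravity, m/s^2
tau_p = rho_p * d_p**2 / (18 * mu) # particle relaxation time (Stokes drag)

def fluid_velocity(x):
    """Prescribed carrier-flow field; in the study this comes from Ansys CFX."""
    return np.array([0.1, 0.0])    # uniform 0.1 m/s flow toward the fiber

# Explicit Euler integration of the one-way-coupled equation of motion:
# dv/dt = (u_f - v)/tau_p + (1 - rho_f/rho_p) * g
x = np.array([0.0, 1e-4])          # initial position, m
v = fluid_velocity(x).copy()       # start at the local fluid velocity
dt, n_steps = 1e-6, 2000
for _ in range(n_steps):
    u = fluid_velocity(x)
    a = (u - v) / tau_p + (1.0 - rho_f / rho_p) * g
    v = v + dt * a
    x = x + dt * v
print("final particle position:", x)
```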

Keywords: BBO-equation, capture efficiency, CFX, Matlab, fibrous filter, particle trajectory

Procedia PDF Downloads 207
24207 Macroeconomic Determinants of Cyclical Variations in Value, Size, and Momentum Premium in the UK

Authors: G. Sarwar, C. Mateus, N. Todorovic

Abstract:

The paper examines the asymmetries in the size, value and momentum premium over the economic cycles in the UK and their macroeconomic determinants. Using a Markov switching approach, we find clear evidence of cyclical variations in the three premiums, most noticeably in the size premium. We associate Markov switching regime 1 with economic upturn and regime 2 with economic downturn, as per the OECD's Composite Leading Indicator. The macroeconomic indicators prompting such cyclicality the most are interest rates, the term structure and the credit spread. The role of GDP growth, money supply and inflation is less pronounced in our sample.
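
A minimal sketch of a two-regime Markov switching estimation with statsmodels is shown below on a synthetic premium series; the actual UK premium data, the exogenous macroeconomic indicators and the exact model specification used in the paper are not reproduced, so this is only an illustration of the technique:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic monthly "size premium" series with two regimes:
# upturn (higher mean) and downturn (lower mean, higher variance).
rng = np.random.default_rng(42)
upturn = rng.normal(0.8, 1.0, 120)
downturn = rng.normal(-0.5, 2.0, 60)
premium = np.concatenate([upturn, downturn, upturn])

# Two-regime Markov switching model with switching mean and variance.
model = sm.tsa.MarkovRegression(premium, k_regimes=2, switching_variance=True)
result = model.fit()
print(result.summary())

# Regime-membership probabilities over time, which can be compared against
# an economic-cycle indicator such as the OECD Composite Leading Indicator.
probs = result.smoothed_marginal_probabilities
print(probs.shape)
```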

Keywords: macroeconomic determinants, Markov switching, size, value

Procedia PDF Downloads 486
24206 Class Size Effects on Reading Achievement in Europe: Evidence from Progress in International Reading Literacy Study

Authors: Ting Shen, Spyros Konstantopoulos

Abstract:

During the past three decades, class size effects have been a focal point of debate in education. The idea of having smaller classes is enormously popular among parents, teachers and policy makers. The rationale for its popularity is that smaller classrooms could provide a better learning environment with more teacher-pupil interaction and more individualized instruction, and these early-stage benefits would also have a long-term positive effect. It is a common belief that reducing class size may result in increases in student achievement. However, the empirical evidence about class-size effects from experimental or quasi-experimental studies has been mixed overall. This study sheds more light on whether class size reduction impacts reading achievement in eight European countries: Bulgaria, Germany, Hungary, Italy, Lithuania, Romania, Slovakia, and Slovenia. We examine class size effects on reading achievement using national probability samples of fourth graders. All eight European countries participated in the Progress in International Reading Literacy Study (PIRLS) in 2001, 2006 and 2011. Methodologically, the quasi-experimental method of instrumental variables (IV) is utilized to facilitate causal inference of class size effects. Overall, the results indicate that class size effects on reading achievement are not significant across countries and years. However, class size effects are evident in Romania, where reducing class size increases reading achievement. In contrast, in Germany, increasing class size seems to increase reading achievement. In future work, it would be valuable to evaluate differential class size effects for minority or economically disadvantaged student groups or for low- and high-achievers. Replication studies with different samples and in various settings would also be informative. Future research should continue examining class size effects in different age groups and countries using rich international databases.
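
To make the instrumental-variables step concrete, the following sketch implements a plain two-stage least squares estimate on synthetic data. It is not the PIRLS analysis itself; the instrument in such designs is typically a rule-based expected class size, which is only assumed here for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Synthetic data: an instrument z (e.g., rule-based expected class size),
# an endogenous regressor x (observed class size) and reading achievement y.
z = rng.normal(size=n)
u = rng.normal(size=n)                         # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)
y = -0.3 * x + 0.7 * u + rng.normal(size=n)    # true causal effect of x is -0.3

def ols(X, y):
    """Ordinary least squares coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Stage 1: regress the endogenous class size on the instrument.
x_hat = np.column_stack([ones, z]) @ ols(np.column_stack([ones, z]), x)
# Stage 2: regress achievement on the fitted (exogenous) part of class size.
beta_iv = ols(np.column_stack([ones, x_hat]), y)
beta_ols = ols(np.column_stack([ones, x]), y)
print("naive OLS slope:", round(beta_ols[1], 3), "| 2SLS slope:", round(beta_iv[1], 3))
```

On this synthetic example the naive OLS slope is biased by the confounder, while the 2SLS estimate recovers a value close to the true effect of -0.3, which is the point of the IV design.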

Keywords: class size, reading achievement, instrumental variables, PIRLS

Procedia PDF Downloads 291
24205 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join

Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel

Abstract:

MapReduce is a programming model used to handle and process massive data sets. The rapid increase in data size and the rise of big data make the analysis of such data one of the most important issues today. MapReduce is used to analyze data and extract more helpful information using two simple functions, map and reduce, which are the only parts written by the programmer, and it provides load balancing, fault tolerance and high scalability. The most important operation in data analysis is the join, but MapReduce does not directly support joins. This paper explains two two-way MapReduce join algorithms, semi-join and per-split semi-join, and proposes a new algorithm, hash semi-join, that uses a hash table to increase performance by eliminating unused records as early as possible and by applying the join using a hash table, rather than using the map function to match the join key with the other data table in the second phase. Using hash tables does not affect memory size, because only the matched records from the second table are stored. Our experimental results show that the hash semi-join algorithm has higher performance than the other two algorithms as the data size is increased from 10 million to 500 million records, and the running time increases according to the number of joined records between the two tables.
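
A minimal single-machine sketch of the hash semi-join idea is shown below: the join keys of the smaller table are hashed first, non-matching records of the larger table are discarded as early as possible, and only matching records are kept in memory. The Hadoop job structure and the per-split variant from the paper are not reproduced; the relations and phase names are illustrative:

```python
# Toy relations: R(key, value) is small, S(key, value) is large.
R = [(1, "r1"), (2, "r2"), (4, "r4")]
S = [(1, "s1"), (2, "s2"), (2, "s2b"), (3, "s3"), (5, "s5")]

# Phase 1 (semi-join side): build a hash table over the join keys of R.
r_keys = {key for key, _ in R}

# Phase 2 (map side): stream over S and drop non-matching records immediately,
# so only records that can participate in the join are retained in memory.
s_matched = {}
for key, value in S:
    if key in r_keys:                      # hash lookup instead of map-side key matching
        s_matched.setdefault(key, []).append(value)

# Phase 3 (reduce side): produce the joined output from the retained records only.
joined = [(key, r_val, s_val)
          for key, r_val in R
          for s_val in s_matched.get(key, [])]
print(joined)   # [(1, 'r1', 's1'), (2, 'r2', 's2'), (2, 'r2', 's2b')]
```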

Keywords: MapReduce, Hadoop, semi-join, two-way join

Procedia PDF Downloads 513
24204 Generation of Automated Alarms for Plantwide Process Monitoring

Authors: Hyun-Woo Cho

Abstract:

Early detection of incipient abnormal operations is quite necessary for plant-wide process management in order to improve product quality and process safety. Generating warning signals or alarms for operating personnel plays an important role in process automation and intelligent plant health monitoring. Various methodologies have been developed and utilized in this area, such as expert systems, mathematical model-based approaches, multivariate statistical approaches, and so on. This work presents a nonlinear empirical monitoring methodology based on the real-time analysis of massive process data. Unfortunately, such big data include measurement noise and unwanted variations unrelated to true process behavior. Thus, the elimination of such unnecessary patterns from the data is executed in a data processing step to enhance detection speed and accuracy. The performance of the methodology was demonstrated using simulated process data. The case study showed that the detection speed and performance were improved significantly irrespective of the size and the location of abnormal events.
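
The specific nonlinear monitoring statistic used in the work is not given in the abstract. As a generic illustration of data-driven alarm generation, the sketch below fits a PCA model to "normal" operating data and raises an alarm when Hotelling's T² for a new sample exceeds an empirical control limit; all data and thresholds are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# "Normal operation" training data: 500 samples of 10 correlated process variables.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 10))
X_normal = latent @ mixing + 0.1 * rng.normal(size=(500, 10))

# Fit a PCA monitoring model on normal data only.
pca = PCA(n_components=3).fit(X_normal)
scores = pca.transform(X_normal)
t2_train = np.sum(scores**2 / scores.var(axis=0), axis=1)     # Hotelling's T^2
limit = np.quantile(t2_train, 0.99)                           # empirical 99% control limit

def alarm(x_new):
    """Return True if the new sample's T^2 statistic exceeds the control limit."""
    s = pca.transform(x_new.reshape(1, -1))
    t2 = float(np.sum(s**2 / scores.var(axis=0)))
    return t2 > limit

print(alarm(X_normal[0]))          # expected: False (normal sample)
print(alarm(X_normal[0] + 5.0))    # expected: True (shifted, abnormal sample)
```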

Keywords: detection, monitoring, process data, noise

Procedia PDF Downloads 252
24203 Application of Sustainable Agriculture Based on LEISA in Landscape Design of Integrated Farming

Authors: Eduwin Eko Franjaya, Andi Gunawan, Wahju Qamara Mugnisjah

Abstract:

Sustainable agriculture in the form of integrated farming, with its LEISA (Low External Input Sustainable Agriculture) concept, has had a positive impact on agricultural development and environmental improvement. However, most small farmers in Indonesia do not know how to put the concept into practice or how to combine agricultural commodities on a site effectively and efficiently. This research aims to promote integrated farming (agro-fisheries, etc.) to farmers by designing the agricultural landscape as an integrated farming landscape that serves as a medium of education for farmers. The method used in this research follows the design process of landscape architecture. The first step is an inventory of the existing conditions on the research site. The second step is analysis. The third step is concept-making, which consists of the base concept, the design concept, and the developing concept. The base concept used in this research is sustainable agriculture with LEISA. The design concept is related to the activities based on the site. The developing concept consists of the space concept, circulation, vegetation and commodities, the production system, etc. The fourth and final step is planning and design. This step produces a site plan of integrated farming based on LEISA. The result of this research is a site plan of integrated farming with its explanation, including the energy flow of the integrated farming system on the site and the production calendar of the integrated farming commodities, offering opportunities for education and agri-tourism. This research thus provides a practical way to promote integrated farming and a medium for farmers to learn about and develop it.

Keywords: integrated farming, LEISA, planning and design, site plan

Procedia PDF Downloads 512
24202 Small-Sided Games in Football: Effect of Field Sizes on Technical Parameters

Authors: Faruk Guven, Nurtekin Erkmen, Samet Aktas, Cengiz Taskin

Abstract:

The aim of this study was to determine the effects of field size on technical parameters of small-sided games in football players. Eight amateur football players (27.23±3.08 years, height: 171.01±5.36 cm, body weight: 66.86±4.54 kg, sports experience: 12.88±3.28 years) performed 4-a-side small-sided games (SSGs) with different field sizes. In the SSGs, the field sizes were 30 x 40 m and 26 x 34 m. Each SSG was conducted as a series of 3 bouts of 6 min with 5 min recovery periods. All SSGs were video recorded using two digital camcorders positioned on tripods. Shots on target, passes, successful passes, unsuccessful passes, dribbling, tackles and possession in the SSGs were counted by the Mathball Match Analysis System. The effects of bouts on the technical scores were examined separately using Friedman's test. The Mann-Whitney U test was applied to analyse differences between field sizes. There were no significant differences in shots on target, total passes, successful passes, tackles, interceptions or possession between bouts for the 30x40 m field size (p>0.05). Unsuccessful passes in bout 3 for the 30x40 m field size were lower than in bouts 1 and 2 (p<0.05), and dribbling in bout 3 was lower than in bout 2 (p<0.05). There was no significant difference in technical actions between bouts for the 26x34 m field size (p>0.05). Shots on target in the SSG with the 26x34 m field size were higher than in the SSG with the 30x40 m field size (p<0.05). Unsuccessful passes for the 26x34 m field size in bout 3 were higher than for the 30x40 m field size (p<0.05). There was no significant difference in the other technical actions between field sizes (p>0.05). In conclusion, this study demonstrates that technical actions in 4-a-side SSGs are not influenced by the different field sizes (30x40 m and 26x34 m). This holds for both the total SSG time and each bout. Dribbling and unsuccessful passes decreased in bout 3 during the SSG on the 30x40 m field.
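
The statistical comparisons described above can be reproduced with SciPy. The sketch below uses made-up pass and shot counts for the three bouts and the two field sizes, only to show how Friedman's test and the Mann-Whitney U test are applied to such data:

```python
from scipy.stats import friedmanchisquare, mannwhitneyu

# Hypothetical unsuccessful-pass counts for 8 players across the 3 bouts (30x40 m field).
bout1 = [6, 7, 5, 8, 6, 7, 9, 6]
bout2 = [7, 6, 6, 7, 8, 6, 8, 7]
bout3 = [4, 5, 3, 5, 4, 5, 6, 4]

# Friedman's test: effect of bout (repeated measures) on the technical score.
stat, p = friedmanchisquare(bout1, bout2, bout3)
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.3f}")

# Mann-Whitney U test: difference between the two field sizes for one variable.
shots_30x40 = [2, 1, 2, 3, 1, 2, 2, 1]
shots_26x34 = [3, 4, 3, 2, 4, 3, 5, 3]
u, p_u = mannwhitneyu(shots_26x34, shots_30x40, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p_u:.3f}")
```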

Keywords: small-sided games, football, technical actions, sport science

Procedia PDF Downloads 552
24201 Bioremediation of Disposed X-Ray Film for Nanoparticles Production

Authors: Essam A. Makky, Siti H. Mohd Rasdi, J. B. Al-Dabbagh, G. F. Najmuldeen

Abstract:

The synthesis of silver nanoparticles (SNPs) has been extensively studied using chemical and physical methods. Here, biological methods were used, which benefit research through very low cost (from waste to wealth) and time savings. The study aims to isolate and exploit microbial power in the production of industrially important nano-sized by-products with high economic value, to extract highly valuable materials from hazardous waste, to quantify the nanoparticle size, and to characterize the SNPs by X-ray diffraction (XRD) analysis. Disposed X-ray films were used as the substrate because such film consumes about 1000 tons of the total silver chemically produced worldwide annually, and this silver is wasted when the films are used and discarded. Different bacterial isolates were obtained from various sources. Silver was extracted as nanoparticles by microbial degradation of disposed X-ray film, used as the sole carbon source, over a ten-day incubation period in darkness. The protein content was determined, and all the samples were analyzed using XRD to characterize the silver (Ag) nanoparticle size in the form of silver nitrite. Bacterial isolate CL4C showed an average SNP size of about 19.53 nm, GL7 showed an average size of about 52.35 nm, and JF Outer 2A (PDA) showed 13.52 nm. All bacterial isolates were partially identified using Gram's reaction, and the results indicated that they belong to Bacillus sp.
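
The abstract does not state how the crystallite sizes were derived from the XRD patterns; the Scherrer relation commonly used for such estimates is sketched below for reference, with a hypothetical peak as input (the wavelength default assumes Cu K-alpha radiation):

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)), with beta in radians.

    wavelength_nm defaults to Cu K-alpha radiation; K ~ 0.9 is the shape factor.
    """
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return k * wavelength_nm / (beta * np.cos(theta))

# Hypothetical Ag reflection at 2-theta = 38.1 degrees with a 0.45 degree FWHM.
print(f"estimated crystallite size: {scherrer_size(38.1, 0.45):.1f} nm")
```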

Keywords: nanotechnology, bioremediation, disposal X-ray film, nanoparticle, waste, XRD

Procedia PDF Downloads 483
24200 An Impairment of Spatiotemporal Gait Adaptation in Huntington's Disease when Navigating around Obstacles

Authors: Naznine Anwar, Kim Cornish, Izelle Labuschagne, Nellie Georgiou-Karistianis

Abstract:

Falls and subsequent injuries are common features in symptomatic Huntington's disease (symp-HD) individuals. As part of daily walking, navigating around obstacles may incur a greater risk of falls in symp-HD. We designed an obstacle-crossing experiment to examine adaptive gait dynamics and to identify underlying spatiotemporal gait characteristics that could increase the risk of falling in symp-HD. This experiment involved navigating around one or two ground-based obstacles under two conditions (walking while navigating around one obstacle, and walking while navigating around two obstacles). A total of 32 participants were included: 16 symp-HD individuals and 16 age- and sex-matched healthy controls. We used a GAITRite electronic walkway to examine the spatiotemporal gait characteristics and inter-trial gait variability while participants walked at their preferred speed. A minimum of six trials was completed for the baseline free walk and for each condition of navigating around the obstacles. For analysis, we separated all walking steps into three phases: approach steps, navigating steps and recovery steps. The mean and inter-trial variability (within-participant standard deviation) for each step gait variable were calculated across the six trials. We found that symp-HD individuals significantly decreased their gait velocity and step length and increased their step duration variability during the navigating and recovery steps compared with the approach steps. In contrast, healthy control individuals showed less difference from baseline in gait velocity, step time and step length variability in both conditions as well as across all three phases. These findings indicate that increased spatiotemporal gait variability may be a compensatory strategy adopted by symp-HD individuals to effectively navigate obstacles during walking. Such findings may benefit clinicians in developing strategies for HD individuals to improve functional outcomes in home- and hospital-based rehabilitation programs.

Keywords: Huntington’s disease, gait variables, navigating around obstacle, basal ganglia dysfunction

Procedia PDF Downloads 443
24199 A Mathematical Model to Select Shipbrokers

Authors: Y. Smirlis, G. Koronakos, S. Plitsos

Abstract:

Shipbrokers assist shipping companies in chartering or in selling and buying vessels, acting as intermediaries between them and the market. They facilitate deals, providing their expertise, negotiating skills, and knowledge of ship market bargains. Their role is very important, as it affects the profitability and market position of a shipping company. Due to their significant contribution, shipping companies have to employ systematic procedures to evaluate the shipbrokers' services in order to select the best and, consequently, to achieve the best deals. Towards this, in this paper, we consider shipbrokers as financial service providers, and we formulate the problem of evaluating and selecting shipbrokers' services as a multi-criteria decision making (MCDM) procedure. The proposed methodology comprises a first normalization step, to adjust the different scales and orientations of the criteria, and a second step that includes the mathematical model to evaluate the performance of the shipbrokers' services involved in the assessment. The criteria along which the shipbrokers are assessed may refer to their size and reputation, the potential efficiency of the services, the terms and conditions imposed, the expenses (e.g., commission - brokerage), the expected time to accomplish a chartering or selling/buying task, etc., and according to our modelling approach these criteria may be assigned different importance. The mathematical programming model performs a comparative assessment and estimates, for the shipbrokers involved in the evaluation, a relative score that ranks them in terms of their potential performance. To illustrate the proposed methodology, we present a case study in which a shipping company evaluates and selects the most suitable among a number of sale and purchase (S&P) brokers. Acknowledgment: This study is supported by the OptiShip project, implemented within the framework of the National Recovery Plan and Resilience "Greece 2.0" and funded by the European Union - NextGenerationEU programme.
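
As a simplified illustration of the two-step procedure (normalization followed by scoring), the sketch below applies min-max normalization with criterion orientations and an assumed weighted-sum aggregation; the paper's actual mathematical programming model and its comparative assessment are not reproduced, and the brokers, criteria and weights are hypothetical:

```python
import numpy as np

# Hypothetical decision matrix: rows = S&P shipbrokers, columns = criteria
# (reputation score, commission %, expected days to close a deal).
brokers = ["Broker A", "Broker B", "Broker C"]
X = np.array([[8.0, 1.00, 40.0],
              [6.5, 0.75, 55.0],
              [9.0, 1.25, 35.0]])
benefit = np.array([True, False, False])   # higher reputation is better; lower cost/time is better
weights = np.array([0.5, 0.3, 0.2])        # assumed relative importance of the criteria

# Step 1: min-max normalization, flipping orientation for cost-type criteria.
lo, hi = X.min(axis=0), X.max(axis=0)
norm = (X - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

# Step 2: weighted aggregation into a relative performance score and ranking.
scores = norm @ weights
for name, s in sorted(zip(brokers, scores), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```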

Keywords: shipbrokers, multi-criteria decision making, mathematical programming, service-provider selection

Procedia PDF Downloads 89
24198 Quintic Spline Method for Variable Coefficient Fourth-Order Parabolic Partial Differential Equations

Authors: Reza Mohammadi, Mahdieh Sahebi

Abstract:

We develop a method based on polynomial quintic splines for the numerical solution of a fourth-order non-homogeneous parabolic partial differential equation with a variable coefficient. By using polynomial quintic splines at off-step points in space and finite differences in the time direction, we obtain two three-level implicit methods. Stability analysis of the presented method has been carried out. We solve four test problems numerically to validate the proposed method. Numerical comparison with other existing methods shows the superiority of our presented scheme.
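
For orientation, a typical variable-coefficient fourth-order parabolic model problem of the kind treated by such spline schemes is written below. The exact equation, boundary and initial conditions studied in the paper are not given in the abstract, so this form is an assumption:

```latex
\frac{\partial^{2} u}{\partial t^{2}} + \mu(x)\,\frac{\partial^{4} u}{\partial x^{4}} = f(x,t),
\qquad a \le x \le b,\; t > 0,\quad \mu(x) > 0,
```

where f(x,t) is the non-homogeneous source term, with prescribed initial displacement and velocity and boundary conditions on u and its second spatial derivative at x = a, b; the quintic spline approximates the spatial derivatives at off-step points, while finite differences advance the solution in time.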

Keywords: fourth-order parabolic equation, variable coefficient, polynomial quintic spline, off-step points, stability analysis

Procedia PDF Downloads 366
24197 Prediction of Endotracheal Tube Size in Children by Predicting Subglottic Diameter Using Ultrasonographic Measurement versus Traditional Formulas

Authors: Parul Jindal, Shubhi Singh, Priya Ramakrishnan, Shailender Raghuvanshi

Abstract:

Background: Knowledge of the influence of the age of the child on laryngeal dimensions is essential for all practitioners dealing with the paediatric airway. Choosing the correct endotracheal tube (ETT) size is a crucial step in pediatric patients because a large-sized tube may cause complications like post-extubation stridor and subglottic stenosis. On the other hand, with a smaller tube there will be increased gas flow resistance, aspiration risk, poor ventilation and inaccurate monitoring of end-tidal gases, and reintubation with a different size of tracheal tube may also be required. Recent advancements in ultrasonography (USG) techniques should now allow accurate and descriptive evaluation of the pediatric airway. Aims and objectives: This study was planned to determine the accuracy of ultrasonography (USG) in assessing the appropriate ETT size and to compare it with formulae based on physical indices. Methods: After obtaining approval from the Institute's Ethics and Research Committee and written informed parental consent, the study was conducted on 100 subjects of either sex, between 12-60 months of age, undergoing various elective surgeries under general anesthesia requiring endotracheal intubation. The same experienced radiologist performed all ultrasonography. The transverse diameter was measured at the level of the cricoid cartilage by USG. After USG, general anesthesia was administered using the standard techniques followed by the institute. An experienced anesthesiologist, who was unaware of the ultrasonography findings, performed the endotracheal intubations with uncuffed endotracheal tubes (Portex Tracheal Tube, Smiths Medical India Pvt. Ltd.) with a Murphy's eye. The tracheal tube was considered the best fit if the air leak was satisfactory at 15-20 cm H₂O of airway pressure. The obtained values were compared with the endotracheal tube sizes calculated by ultrasonography, by various age-, height- and weight-based formulas, and by the diameter of the right and left little fingers. The correlation of the endotracheal tube size obtained by the different modalities was assessed, and Pearson's correlation coefficient was obtained. The comparison of the mean size of the endotracheal tube by ultrasonography and by the traditional formulas was done by Friedman's test and the Wilcoxon signed-rank test. Results: The predicted tube size was equal to the best fit and best determined by ultrasonography (100%), followed by comparison to the left little finger (98%), the right little finger (97%) and the age-based formula (95%), followed by the multivariate formula (83%) and the body length formula (81%). According to Pearson's correlation, there was a moderate correlation of the best fit endotracheal tube with the endotracheal tube size by the age-based formula (r=0.743), the body length based formula (r=0.683), the right little finger based formula (r=0.587), the left little finger based formula (r=0.587) and the multivariate formula (r=0.741). There was a strong correlation with ultrasonography (r=0.943). Ultrasonography was the most sensitive (100%) method of prediction, followed by comparison to the left (98%) and right (97%) little finger and the age-based formula (95%); the multivariate formula had an even lower sensitivity (83%), whereas the body length based formula was the least sensitive, with a sensitivity of 78%. Conclusion: USG is a reliable method for the estimation of the subglottic diameter and for the prediction of ETT size in children.
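
The abstract does not specify which age-based formula was used; the widely used Cole formula for uncuffed tubes is shown below purely as an illustration of how the physical-index estimates are computed, and may differ from the study's actual formula:

```python
def uncuffed_ett_size_cole(age_years: float) -> float:
    """Cole formula: internal diameter (mm) of an uncuffed ETT = age/4 + 4."""
    return age_years / 4.0 + 4.0

# Example: a 4-year-old child.
print(f"predicted uncuffed ETT internal diameter: {uncuffed_ett_size_cole(4):.1f} mm")  # 5.0 mm
```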

Keywords: endotracheal intubation, pediatric airway, subglottic diameter, traditional formulas, ultrasonography

Procedia PDF Downloads 240
24196 Using RASCAL Code to Analyze the Postulated UF6 Fire Accident

Authors: J. R. Wang, Y. Chiang, W. S. Hsu, S. H. Chen, J. H. Yang, S. W. Chen, C. Shih, Y. F. Chang, Y. H. Huang, B. R. Shen

Abstract:

In this research, the RASCAL code was used to simulate and analyze the postulated UF6 fire accident which may occur at the Institute of Nuclear Energy Research (INER). There are four main steps in this research. In the first step, the UF6 data of INER were collected. In the second step, the RASCAL analysis methodology and model were established using these data. Third, this RASCAL model was used to perform the simulation and analysis of the postulated UF6 fire accident; three cases were simulated and analyzed in this step. Finally, the analysis results of RASCAL were compared with the hazardous levels of the chemicals. According to the comparison of the three cases, Case 3 poses the greatest danger to human health.

Keywords: RASCAL, UF₆, safety, hydrogen fluoride

Procedia PDF Downloads 222
24195 Modeling of Particle Reduction and Volatile Compounds Profile during Chocolate Conching by Electronic Nose and Genetic Programming (GP) Based System

Authors: Juzhong Tan, William Kerr

Abstract:

Conching is a critical procedure in chocolate processing, in which special flavors are developed and the smooth mouthfeel of the chocolate is achieved through particle size reduction of the cocoa mass and other additives. Therefore, determination of the particle size and the volatile compound profile of the cocoa bean is important for chocolate manufacturers to ensure the quality of chocolate products. Currently, precise particle size measurement is usually done by laser scattering, which is expensive and inaccessible to small and medium-size chocolate manufacturers. Other alternatives, such as micrometers and microscopy, cannot provide good measurements and give little information. Volatile compound analysis of cocoa during conching has similar problems due to its high cost and limited accessibility. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was inserted into a conching machine and used to monitor the volatile compound profile of chocolate during conching. A model correlating the volatile compound profiles, along with factors including the cocoa and sugar content and the temperature during conching, to the particle size of the chocolate was established by genetic programming. The model was used to predict the particle size reduction of chocolates with different cocoa mass to sugar ratios (1:2, 1:1, 1.5:1, 2:1) at 8 conching times (15 min, 30 min, 1 h, 1.5 h, 2 h, 4 h, 8 h, and 24 h), and the predictions were compared to laser scattering measurements of the same chocolate samples. 91.3% of the predictions were within ±5% of the laser scattering measurement, and 99.3% were within ±10%.

Keywords: cocoa bean, conching, electronic nose, genetic programming

Procedia PDF Downloads 255
24194 Study on an Integrated Real-Time Sensor in Droplet-Based Microfluidics

Authors: Tien-Li Chang, Huang-Chi Huang, Zhao-Chi Chen, Wun-Yi Chen

Abstract:

Droplet-based microfluidic devices are used as micro-reactors for chemical and biological assays. Hence, the precise addition of reagents into the droplets is essential for this function in the scope of lab-on-a-chip applications. To obtain the characteristics of droplets (size, velocity, pressure, and frequency of production), this study describes an integrated on-chip method of real-time signal detection. By controlling and manipulating the fluids, the flow behavior can be obtained in the droplet-based microfluidics. The detection method uses a type of infrared sensor. Through the variation of droplets in the microfluidic devices, the real-time values of velocity and pressure are obtained from the sensors. Here the microfluidic devices are fabricated from polydimethylsiloxane (PDMS). To measure the droplets, signal acquisition from the sensor and LabVIEW program control must be established in the microchannel devices. The devices can generate droplets of different sizes, with the flow rate of the oil phase fixed at 30 μl/hr and the flow rate of the water phase ranging from 20 μl/hr to 80 μl/hr. The experimental results demonstrate that the sensors are able to measure the time difference of droplets at different velocities at voltages from 0 V to 2 V. Consequently, droplets were measured at speeds of up to 1.6 mm/s, together with the related flow behaviors, which can be helpful for developing and integrating practical microfluidic applications.

Keywords: microfluidic, droplets, sensors, single detection

Procedia PDF Downloads 493
24193 A Parallel Algorithm for Solving the PFSP on the Grid

Authors: Samia Kouki

Abstract:

Solving NP-hard combinatorial optimization problems by exact search methods, such as Branch-and-Bound, may degenerate into complete enumeration. For that reason, exact approaches limit us to solving only small or moderate-size problem instances, due to the exponential increase in CPU time as problem size increases. One of the most promising ways to significantly reduce the computational burden of sequential versions of Branch-and-Bound is to design parallel versions of these algorithms that employ several processors. This paper describes a parallel Branch-and-Bound algorithm called GALB for solving the classical permutation flowshop scheduling problem, as well as its implementation on a Grid computing infrastructure. The experimental study of our distributed parallel algorithm gives promising results and clearly shows the benefit of the parallel paradigm for solving large-scale instances in moderate CPU time.
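
For readers unfamiliar with the underlying search, a tiny sequential Branch-and-Bound for the permutation flowshop problem is sketched below with a simple machine-based lower bound; it is not the GALB algorithm and contains none of the paper's grid-level parallelism or load balancing, and the instance data are made up:

```python
def completion_times(schedule, p):
    """Completion time on each machine for a (partial) permutation; p[j][k] = time of job j on machine k."""
    m = len(p[0])
    c = [0] * m
    for j in schedule:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c

def lower_bound(schedule, remaining, p):
    """Each machine must still process every remaining job after its current completion time."""
    c = completion_times(schedule, p)
    return max(c[k] + sum(p[j][k] for j in remaining) for k in range(len(p[0])))

def branch_and_bound(p):
    n = len(p)
    best = {"perm": None, "cmax": float("inf")}
    def recurse(schedule, remaining):
        if not remaining:
            cmax = completion_times(schedule, p)[-1]
            if cmax < best["cmax"]:
                best["cmax"], best["perm"] = cmax, list(schedule)
            return
        for j in list(remaining):
            # Prune branches whose lower bound cannot improve the incumbent.
            if lower_bound(schedule + [j], remaining - {j}, p) < best["cmax"]:
                recurse(schedule + [j], remaining - {j})
    recurse([], set(range(n)))
    return best["perm"], best["cmax"]

# Small example: 4 jobs, 3 machines.
p = [[5, 4, 4], [2, 4, 5], [4, 2, 3], [3, 5, 1]]
print(branch_and_bound(p))
```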

Keywords: grid computing, permutation flow shop problem, branch and bound, load balancing

Procedia PDF Downloads 283
24192 Effects of Particle Size Distribution on Mechanical Strength and Physical Properties in Engineered Quartz Stone

Authors: Esra Arici, Duygu Olmez, Murat Ozkan, Nurcan Topcu, Furkan Capraz, Gokhan Deniz, Arman Altinyay

Abstract:

Engineered quartz stone is a composite material comprising approximately 90 wt.% fine quartz aggregate with a variety of particle size ranges and approximately 10 wt.% unsaturated polyester resin (UPR). In this study, the objective is to investigate the influence of the particle size distribution on the mechanical strength and physical properties of the engineered stone slabs. For this purpose, granular quartz with two particle size ranges, 63-200 µm and 100-300 µm, was used individually and mixed in different ratios. The void volume of each granular packing was measured in order to define the amount of filler (quartz powder with a size of less than 38 µm) and UPR required to fill the inter-particle spaces. Test slabs were prepared using vibration-compression under vacuum. The study reports that both the impact strength and the flexural strength of the samples increased as the mix ratio of the 63-200 µm particle size range increased. On the other hand, the water absorption rate, apparent density and abrasion resistance were not affected by the particle size distribution, owing to vacuum compaction. It was found that increasing the mix ratio of the 63-200 µm particle size range caused higher porosity, which led to an increase in the amount of binder paste needed. It was also observed that homogeneity in the slabs was improved with the 63-200 µm particle size range.

Keywords: engineered quartz stone, fine quartz aggregate, granular packing, mechanical strength, particle size distribution, physical properties

Procedia PDF Downloads 147
24191 MEIOSIS: Museum Specimens Shed Light in Biodiversity Shrinkage

Authors: Zografou Konstantina, Anagnostellis Konstantinos, Brokaki Marina, Kaltsouni Eleftheria, Dimaki Maria, Kati Vassiliki

Abstract:

Body size is crucial to ecology, influencing everything from individual reproductive success to the dynamics of communities and ecosystems. Understanding how temperature affects variations in body size is vital for both theoretical and practical purposes, as changes in size can modify trophic interactions by altering predator-prey size ratios and changing the distribution and transfer of biomass, which ultimately impacts food web stability and ecosystem functioning. Notably, a decrease in body size is frequently mentioned as the third "universal" response to climate warming, alongside shifts in distribution and changes in phenology. This trend is backed by ecological theories like the temperature-size rule (TSR) and Bergmann's rule, which have been observed in numerous species, indicating that many species are likely to shrink in size as temperatures rise. However, the thermal responses related to body size are still contradictory, and further exploration is needed. To tackle this challenge, we developed the MEIOSIS project, aimed at providing valuable insights into the relationship between the body size of species, species' traits, environmental factors, and their response to climate change. We combined a digitized collection of butterflies from the Swiss Federal Institute of Technology in Zürich with our newly digitized butterfly collection from the Goulandris Natural History Museum in Greece to analyse trends over time. For a total of 23,868 images, the length of the right forewing was measured using ImageJ software. Each forewing was measured from the point at which the wing meets the thorax to the apex of the wing. The forewing length of museum specimens has been shown to correlate strongly with wing surface area and has been used in prior studies as a proxy for overall body size. Temperature data corresponding to the years of collection were also incorporated into the datasets. A second dataset was generated when a custom computer vision tool was implemented for the automated morphological measurement of samples from the digitized collection in Zürich. Using this second dataset, we corrected the manual ImageJ measurements, and a final dataset containing 31,922 samples was used for analysis. Setting time as a smoother variable, species identity as a random factor, and the length of the right forewing (a proxy for body size) as the response variable, we ran a global model for a maximum period of 110 years (1900-2010). We then investigated functional variability between different terrestrial biomes in a second model. Both models confirmed our initial hypothesis and revealed a decreasing trend in body size over the years. We expect that this first output can serve as baseline data for the next challenge, i.e., to identify the ecological traits that influence species' temperature-size responses, enabling us to predict the direction and intensity of a species' reaction to rising temperatures more accurately.

Keywords: butterflies, shrinking body size, museum specimens, climate change

Procedia PDF Downloads 10
24190 Particle Size Effect on Shear Strength of Granular Materials in Direct Shear Test

Authors: R. Alias, A. Kasa, M. R. Taha

Abstract:

The effect of particle size on the shear strength of granular materials is investigated using direct shear tests. Small direct shear tests (60 mm by 60 mm by 24 mm deep) were conducted on particles passing sieves with an opening size of 2.36 mm. Meanwhile, particles passing the standard 20 mm sieve were tested using a large direct shear test (300 mm by 300 mm by 200 mm deep). The large and small direct shear tests were carried out using the same shearing rate of 0.09 mm/min and similar normal stresses of 100, 200, and 300 kPa. The results show that the peak and residual shear strengths decrease as particle size increases.

Keywords: particle size, shear strength, granular material, direct shear test

Procedia PDF Downloads 489
24189 Synthesis of Flexible Mn1-x-y(CexLay)O2-δ Ultrathin-Film Device for Highly-Stable Pseudocapacitance from End-of-Life Ni-MH Batteries

Authors: Samane Maroufi, Rasoul Khayyam Nekouei, Sajjad Sefimofarah, Veena Sahajwalla

Abstract:

The present work details a three-stage strategy based on the selective purification of rare earth oxides (REOs) isolated from end-of-life nickel-metal hydride (Ni-MH) batteries, leading to the high-yield fabrication of a defect-rich Mn1-x-y(CexLay)O2-δ film. In step one, major impurities (Fe and Al) were removed from a REE-rich solution. In step two, the resulting solution with a trace content of Mn was further purified through electrodeposition, which resulted in the synthesis of a non-stoichiometric Mn1-x-y(CexLay)O2-δ ultra-thin film with controllable thickness (5-650 nm) and transmittance (~29-100%), in which Ce4+/3+ and La3+ ions were dissolved in the MnO2-x lattice. Owing to percolation effects on the optoelectronic properties of ultrathin films, a representative Mn1-x-y(CexLay)O2-δ film with 86% transmittance exhibited an outstanding areal capacitance of 3.4 mF·cm-2, mainly attributed to the intercalation/de-intercalation of anionic O2- charge carriers through the atomic tunnels of the stratified Mn1-x-y(CexLay)O2-δ crystallites. Furthermore, the Mn1-x-y(CexLay)O2-δ film exhibited excellent capacitance retention of ~90% after 16,000 cycles. Such stability was shown to be associated with intervalence charge transfers occurring among interstitial Ce/La cations and Mn oxidation states within the Mn1-x-y(CexLay)O2-δ structure. The energy and power densities of the transparent, flexible Mn1-x-y(CexLay)O2-δ full-cell pseudocapacitor device with a solid-state electrolyte were measured to be 0.088 µWh·cm-2 and 843 µW·cm-2, respectively. These values showed insignificant changes under vigorous twisting and bending to 45-180˚, confirming that these materials are intriguing alternatives for size-sensitive energy storage devices. In step three, the remaining solution was purified further, which led to the formation of REO (La, Ce, and Nd) nanospheres with ~40-50 nm diameter.

Keywords: spent Ni-MH batteries, green energy, flexible pseudocapacitor, rare earth elements

Procedia PDF Downloads 134
24188 MapReduce Algorithm for Geometric and Topological Information Extraction from 3D CAD Models

Authors: Ahmed Fradi

Abstract:

In a digital world in perpetual evolution and acceleration, where data are increasingly voluminous, rich and varied, the new software solutions that emerged with the Big Data phenomenon offer companies new opportunities not only to optimize their business and evolve their production model, but also to reorganize themselves to increase competitiveness and to identify new strategic axes. Industrial design and manufacturing companies, like others, face these challenges; data represent a major asset, provided they know how to capture, refine, combine and analyze them. The objective of our paper is to propose a solution allowing geometric and topological information extraction from databases of 3D CAD models (specifically, STEP files), with a specific algorithm based on the MapReduce programming paradigm. Our proposal is the first step of our future approach to 3D CAD object retrieval.
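
As a toy illustration of the MapReduce style of extraction, the sketch below maps over the lines of a STEP file's DATA section to emit (entity type, 1) pairs and reduces them into counts; the paper's actual geometric and topological extraction algorithm is more involved and is not reproduced, and the sample STEP lines are invented:

```python
import re
from collections import Counter
from functools import reduce

# Minimal stand-in for the DATA section of a STEP (ISO 10303-21) file.
step_lines = [
    "#10=CARTESIAN_POINT('',(0.,0.,0.));",
    "#11=CARTESIAN_POINT('',(1.,0.,0.));",
    "#12=LINE('',#10,#13);",
    "#14=ADVANCED_FACE('',(#20),#30,.T.);",
]

def mapper(line):
    """Emit (entity_type, 1) for every STEP entity instance found on a line."""
    match = re.match(r"#\d+\s*=\s*([A-Z_0-9]+)\s*\(", line)
    return [(match.group(1), 1)] if match else []

def reducer(acc, pairs):
    """Sum the counts for each entity type."""
    for entity, count in pairs:
        acc[entity] += count
    return acc

counts = reduce(reducer, map(mapper, step_lines), Counter())
print(dict(counts))   # {'CARTESIAN_POINT': 2, 'LINE': 1, 'ADVANCED_FACE': 1}
```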

Keywords: Big Data, MapReduce, 3D object retrieval, CAD, STEP format

Procedia PDF Downloads 541
24187 Health Impacts of Size Segregated Particulate Matter and Black Carbon in Industrial Area of Firozabad

Authors: Kalpana Rajouriya, Ajay Taneja

Abstract:

Particulates are ubiquitous in the air environment and pose serious threats to human beings, such as lung cancer, chronic obstructive pulmonary disease (COPD), and asthma. Particulates mainly arise from industrial effluent, vehicular emission, and other anthropogenic activities. In the glass industrial city of Firozabad, real-time monitoring (mass as well as number) of size-segregated particulate matter (PM) and black carbon was done by an Aerosol Black Carbon Detector (ABCD) and a GRIMM portable aerosol spectrometer at two different sites, one urban and one rural. The average mass concentrations of size-segregated PM during the study period (March and April 2022) were recorded as PM₁₀ (223.73 µg/m³), PM₅.₀ (44.955 µg/m³), PM₂.₅ (59.275 µg/m³), PM₁.₀ (33.02 µg/m³), PM₀.₅ (2.05 µg/m³), and PM₀.₂₅ (2.99 µg/m³). In number mode, the PM concentrations were found to be PM₁₀ (27.46), PM₅.₀ (233.48), PM₂.₅ (646.61), PM₁.₀ (1134.94), PM₀.₅ (14056.04), and PM₀.₂₅ (182906.4) particles per unit volume. The highest concentration of BC was found at the urban site due to emissions from diesel engines and wood burning, while NO2 was highest at the rural site. The concentrations of PM₁₀ and PM₂.₅ exceeded the NAAQS and WHO guidelines. The sensitive, exposed population may be at risk of developing health-related problems from exposure to size-segregated PM and BC.

Keywords: particulate matter, black carbon, NO2, health risk

Procedia PDF Downloads 39