Search results for: cost efficient
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9957

1707 Trip Reduction in Turbo Machinery

Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto

Abstract:

Industrial plant uptime is of topmost importance for reliable, profitable and sustainable operation. Trips and failed starts have a major impact on plant reliability, and all plant operators focus their efforts on minimising them. The performance of these CTQs is measured with two metrics: MTBT (mean time between trips) and SR (starting reliability). These metrics help to identify the top failure modes and the units that need more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips through: 1. Real-time machine operational parameters available remotely, capturing the signature of malfunctions including the related boundary conditions. 2. A real-time alerting system based on analytics, available remotely. 3. Remote access to trip logs and alarms from the control system to identify the cause of events. 4. Continuous support to field engineers by remotely connecting them with subject matter experts. 5. Live tracking of key CTQs. 6. Benchmarking against the fleet. 7. Breaking down the cause of failure to component level. 8. Investigating top contributors and identifying design and operational root causes. 9. Implementing corrective and preventive actions. 10. Assessing the effectiveness of implemented solutions using reliability growth models. 11. Developing analytics for predictive maintenance. With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for monitored units, with substantial cost savings for plant operators. This presentation explains the approach and provides successful case studies, in particular where 12 LNG and pipeline operators with about 140 gas compression line-ups have adopted these techniques, significantly reducing the number of trips and improving MTBT.
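As a minimal illustration of the two CTQ metrics named above, the sketch below computes MTBT and starting reliability from a hypothetical fleet event log (unit names, hours and counts are invented, not Baker Hughes data):

```python
# Illustrative sketch (not Baker Hughes' internal tooling): computing the two
# CTQ metrics from a hypothetical log of operating hours, trips and starts.
from dataclasses import dataclass

@dataclass
class UnitLog:
    unit: str
    operating_hours: float   # operating hours in the reporting period
    trips: int               # unplanned shutdowns (trips)
    start_attempts: int
    failed_starts: int

def mtbt(log: UnitLog) -> float:
    """Mean time between trips, in operating hours per trip."""
    return log.operating_hours / log.trips if log.trips else float("inf")

def starting_reliability(log: UnitLog) -> float:
    """Fraction of start attempts that succeeded."""
    if log.start_attempts == 0:
        return float("nan")
    return 1.0 - log.failed_starts / log.start_attempts

fleet = [
    UnitLog("LNG-A", operating_hours=7800, trips=3, start_attempts=12, failed_starts=1),
    UnitLog("PIPE-B", operating_hours=8300, trips=1, start_attempts=9, failed_starts=0),
]
for u in fleet:
    print(f"{u.unit}: MTBT = {mtbt(u):.0f} h/trip, SR = {starting_reliability(u):.1%}")
```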

Keywords: reliability, availability, sustainability, digital infrastructure, Weibull, effectiveness, automation, trips, failed start

Procedia PDF Downloads 60
1706 Cupric Oxide Thin Films for Optoelectronic Application

Authors: Sanjay Kumar, Dinesh Pathak, Sudhir Saralch

Abstract:

Copper oxide is a semiconductor that has been studied for several reasons, such as the natural abundance of the starting material copper (Cu), the ease of production by Cu oxidation, its non-toxic nature and its reasonably good electrical and optical properties. Copper oxide is well known as cuprite oxide. Cuprite is a p-type semiconductor with a band gap energy of 1.21 to 1.51 eV. As a p-type semiconductor, conduction arises from the presence of holes in the valence band (VB) due to doping/annealing. CuO is attractive as a selective solar absorber since it has high solar absorbency and low thermal emittance. CuO is a very promising candidate for solar cell applications, as it is a suitable material for photovoltaic energy conversion. It has been demonstrated that the dip technique can be used to deposit CuO films in a simple manner using a metallic chloride (CuCl₂.2H₂O) as the starting material. Copper oxide films were prepared using a methanolic solution of cupric chloride (CuCl₂.2H₂O) at three baking temperatures. Three samples were made, which turned black after heating. XRD data confirm that the films are of the CuO phase at a particular temperature. The optical band gap of the CuO films calculated from optical absorption measurements is 1.90 eV, which is quite comparable to the reported value. The dip technique is a very simple and low-cost method that requires no sophisticated specialized setup. Coating of substrates with a large surface area can be obtained easily by this technique compared with physical evaporation techniques and spray pyrolysis. Another advantage of the dip technique is that it is very easy to coat both sides of the substrate instead of only one and to deposit on otherwise inaccessible surfaces. This method is well suited for applying coatings on the inner and outer surfaces of tubes of various diameters and shapes. The main advantage of the dip coating method lies in the fact that it is possible to deposit a variety of layers with good homogeneity and mechanical and chemical stability with a very simple setup. In this paper, the preparation of CuO thin films by the dip coating method and their characterization will be presented.
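The abstract reports a band gap of 1.90 eV from optical absorption measurements; a common way to extract it is a Tauc plot, sketched below on synthetic data (the Tauc method and the direct-transition exponent are assumptions, not stated in the abstract):

```python
# Hedged, illustrative Tauc-plot band-gap extraction on synthetic absorbance data.
import numpy as np

h_eV, c = 4.135667e-15, 2.998e8        # Planck constant (eV*s), speed of light (m/s)
wavelength_nm = np.linspace(450, 750, 120)
E = h_eV * c / (wavelength_nm * 1e-9)  # photon energy, eV

Eg_true, A = 1.90, 4e12
alpha = np.where(E > Eg_true, np.sqrt(A * (E - Eg_true)) / E, 1e3)  # synthetic absorption coeff, 1/m

tauc = (alpha * E) ** 2                               # direct allowed transition, exponent n = 2
mask = (E > Eg_true + 0.05) & (E < Eg_true + 0.45)    # linear region above the absorption edge
slope, intercept = np.polyfit(E[mask], tauc[mask], 1)
print(f"Estimated optical band gap: {-intercept / slope:.2f} eV")  # extrapolate (alpha*h*nu)^2 -> 0
```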

Keywords: absorber material, cupric oxide, dip coating, thin film

Procedia PDF Downloads 298
1705 Determinants of Hospital Obstetric Unit Closures in the United States 2002-2013: Loss of Hospital Obstetric Care 2002-2013

Authors: Peiyin Hung, Katy Kozhimannil, Michelle Casey, Ira Moscovice

Abstract:

Background/Objective: The loss of obstetric services has been a pressing concern in urban and rural areas nationwide. This study aims to determine factors that contribute to the loss of obstetric care through closures of a hospital or obstetric unit. Methods: Data from 2002-2013 American Hospital Association annual surveys were used to identify hospitals providing obstetric services. We linked these data to Medicare Healthcare Cost Report Information for hospital financial indicators, the US Census Bureau’s American Community Survey for zip-code-level characteristics, and Area Health Resource files for county-level clinician supply measures. A discrete-time multinomial logit model was used to determine factors contributing to obstetric unit or hospital closures. Results: Of 3,551 hospitals providing obstetric services during 2002-2013, 82% kept units open, 12% stopped providing obstetric services, and 6% closed down completely. State-level variations existed. Factors that significantly increased hospitals’ probability of obstetric unit closure included an annual birth volume lower than 250 (adjusted marginal effects [95% confidence interval] = 34.1% [28%, 40%]), closer proximity to another hospital with obstetric services (per 10 miles: -1.5% [-2.4%, -0.5%]), being in a county with lower family physician supply (-7.8% [-15.0%, -0.6%]), being in a zip code with a higher percentage of non-white females (per 10%: 10.2% [2.1%, 18.3%]), and with lower income (per $1,000 income: -0.14% [-0.28%, -0.01%]). Conclusions: Over the past 12 years, loss of obstetric services has disproportionately affected areas served by low-volume urban and rural hospitals, non-white and low-income communities, and counties with fewer family physicians, signaling a need to address maternity care access in these communities.
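A hedged sketch of the modelling step, using statsmodels on synthetic hospital-year data (the variable names and coefficients are invented, not the AHA/Medicare dataset); the average marginal effects are the quantity reported above:

```python
# Illustrative discrete-time multinomial logit of hospital-year outcomes
# (0 = obstetric unit open, 1 = unit closed, 2 = hospital closed) on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "low_volume": rng.integers(0, 2, n),       # <250 annual births (0/1)
    "dist_10mi": rng.uniform(0, 5, n),         # distance to nearest OB hospital / 10 miles
    "fp_supply": rng.uniform(0, 2, n),         # family physicians per 1,000 residents
    "pct_nonwhite_10": rng.uniform(0, 8, n),   # % non-white females / 10
})
logits = np.column_stack([
    np.zeros(n),
    -3 + 1.2 * df.low_volume - 0.3 * df.dist_10mi - 0.5 * df.fp_supply + 0.2 * df.pct_nonwhite_10,
    -4 + 0.8 * df.low_volume,
])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
outcome = np.array([rng.choice(3, p=row) for row in p])

model = sm.MNLogit(outcome, sm.add_constant(df))
res = model.fit(disp=False)
print(res.get_margeff().summary())   # average marginal effects per outcome
```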

Keywords: access to care, obstetric care, service line discontinuation, hospital, obstetric unit closures

Procedia PDF Downloads 208
1704 Development of Ferric Citrate Complex Draw Solute and Its Application for Liquid Product Enrichment through Forward Osmosis

Authors: H. Li, L. Ji, J. Su

Abstract:

Forward osmosis is an emerging technology for separation and has great potential in the concentration of liquid products such as proteins, pharmaceuticals, and natural products. In the pharmaceutical industry, one of the very tough tasks is to concentrate the product in a gentle way, since some of the key components may lose bioactivity when exposed to heating or pressurization. Therefore, forward osmosis (FO), which uses the inherently existing osmotic pressure instead of externally applied hydraulic pressure, is attractive for pharmaceutical enrichment in a more efficient and energy-saving way. Recently, coordination complexes have been explored as a new class of draw solutes in FO processes due to their bulky configuration and excellent performance in terms of high water flux and low reverse solute flux. Among these coordination complexes, the ferric citrate complex, whose many hydrophilic groups and ionic species give it good solubility and high osmotic pressure in aqueous solution, together with its low toxicity, has received much attention. However, the chemistry of ferric complexation by citrate is complicated, and disagreement prevails in the literature, especially regarding the structure of ferric citrate. In this study, we investigated the chemical reaction at various molar ratios of iron and citrate. It was observed that a ferric citrate complex with a 1:1 molar ratio of iron to citrate formed at the beginning of the reaction; this then converted to the ferric citrate complex with a 1:2 molar ratio (Fe-CA2) given a proper excess of citrate in the base solution. The structures of the synthesized ferric citrate complexes were systematically characterized by X-ray diffraction (XRD), UV-vis spectroscopy, X-ray photoelectron spectroscopy (XPS), Fourier transform infrared spectroscopy (FT-IR) and thermogravimetric analysis (TGA). Fe-CA2 solutions exhibit osmotic pressures more than twice those of NaCl solutions at the same concentrations. A higher osmotic pressure means a higher driving force, which is preferable for the FO process. Fe-CA2 and NaCl draw solutions were prepared with the same osmotic pressure and used in an FO process for BSA protein concentration. Within 180 min, the BSA concentration was enriched from 0.20 to 0.27 g/L using the Fe-CA2 draw solution, whereas it increased only from 0.20 to 0.22 g/L using the NaCl draw solution. A reverse flux of 11 g/m²h was observed for the NaCl draw solute, while it was only 0.1 g/m²h for the Fe-CA2 draw solute. It is safe to conclude that Fe-CA2 is a much better draw solute than NaCl and is suitable for the enrichment of liquid products.
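For orientation only, the van 't Hoff relation pi = i M R T gives a rough sense of the osmotic driving force; the "more than twice NaCl" factor below is taken from the abstract, not computed from Fe-CA2 speciation:

```python
# Minimal van 't Hoff estimate of osmotic pressure for an ideal dilute solution.
R = 0.08314   # L*bar/(mol*K)
T = 298.15    # K

def vant_hoff_pressure(molarity: float, vant_hoff_factor: float) -> float:
    """Osmotic pressure in bar, pi = i * M * R * T."""
    return vant_hoff_factor * molarity * R * T

pi_nacl = vant_hoff_pressure(0.5, 2.0)   # 0.5 M NaCl, fully dissociated
pi_feca2 = 2.0 * pi_nacl                 # reported: > 2x NaCl at the same concentration
print(f"NaCl 0.5 M: ~{pi_nacl:.0f} bar;  Fe-CA2 0.5 M: ~{pi_feca2:.0f} bar driving force")
```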

Keywords: draw solutes, ferric citrate complex, forward osmosis, protein enrichment

Procedia PDF Downloads 141
1703 Environmental Performance Measurement for Network-Level Pavement Management

Authors: Jessica Achebe, Susan Tighe

Abstract:

The recent Canadian infrastructure report card reveals the unhealthy state of municipal infrastructure, which has intensified the challenges faced by municipalities in maintaining adequate infrastructure performance thresholds and meeting users’ required service levels. For a road agency, the huge funding gap is inflated by growing concerns over the environmental repercussions of road construction, operation and maintenance activities. Reducing material consumption and greenhouse gas emissions when maintaining and rehabilitating road networks can achieve added benefits, including improved life-cycle performance of pavements, reduced climate change impacts and human health effects due to less air pollution, improved productivity due to optimal allocation of resources, and reduced road user costs. Incorporating environmental sustainability measures into pavement management is a widely cited and studied solution. However, measuring the environmental performance of a road network is still far from established practice in road network management, and an explicit agency-wide environmental sustainability or sustainable maintenance specification is missing. To address this challenge, the present research focuses on the environmental sustainability performance of network-level pavement management. The ultimate goal is to develop a framework to incorporate environmental sustainability into pavement management systems for network-level maintenance programming. To achieve this goal, this study reviewed previous studies that employed environmental performance measures, as well as the suitability of environmental performance indicators for evaluating the sustainability of network-level pavement maintenance strategies. Through an industry practice survey, this paper provides a brief overview of pavement managers’ motivations and barriers to making more sustainable decisions, and of the data needed to support network-level environmental sustainability. Trends in network-level sustainable pavement management are also presented, existing gaps are highlighted, and ideas are proposed for sustainable network-level pavement management.

Keywords: pavement management, sustainability, network-level evaluation, environment measures

Procedia PDF Downloads 197
1702 A Lightweight Blockchain: Enhancing Internet of Things Driven Smart Buildings Scalability and Access Control Using Intelligent Direct Acyclic Graph Architecture and Smart Contracts

Authors: Syed Irfan Raza Naqvi, Zheng Jiangbin, Ahmad Moshin, Pervez Akhter

Abstract:

Currently, IoT systems depend on a centralized client-server architecture that causes various scalability and privacy vulnerabilities. Distributed ledger technology (DLT) introduces a set of opportunities for the IoT, which leads to practical ideas for existing components at all levels of existing architectures. Blockchain technology (BCT), exemplified by Bitcoin (BTC) and Ethereum, appears to be one approach to solving several IoT problems and offers multiple possibilities. However, IoT devices are resource-constrained, with insufficient capacity and computational headroom to process blockchain consensus mechanisms; the existing challenges of traditional BCT for the IoT are poor scalability, energy efficiency, and transaction fees. IOTA is a distributed ledger based on a Directed Acyclic Graph (DAG) that ensures M2M micro-transactions are free of charge. IOTA has the potential to address existing IoT-related difficulties such as infrastructure scalability, privacy and access control mechanisms. We propose an architecture, SLDBI: A Scalable, Lightweight DAG-based Blockchain Design for Intelligent IoT Systems, which adapts the DAG-based Tangle and implements a lightweight message data model to address the IoT limitations. It enables the smooth integration of new IoT devices into a variety of apps. SLDBI enables comprehensive access control, energy efficiency, and scalability in IoT ecosystems by utilizing the Masked Authentication Message (MAM) protocol and the IOTA Smart Contract Protocol (ISCP). Furthermore, we suggest performing proof-of-work (PoW) computation on the full node in an energy-efficient way. Experiments have been carried out to show the capability of the Tangle to achieve better scalability while maintaining energy efficiency. The findings show user access control management at granular levels and ensure scale-up to massive networks with thousands of IoT nodes, such as Smart Connected Buildings (SCBDs).
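A toy sketch of the DAG-ledger idea the architecture builds on, in which each new message approves two earlier ones and carries a small proof-of-work nonce; this is illustrative only, not the IOTA reference implementation, MAM or ISCP:

```python
# Illustrative toy of a DAG-based ledger in the spirit of the Tangle.
import hashlib
import random
import time

class Tangle:
    def __init__(self):
        genesis = {"id": "genesis", "parents": [], "payload": "genesis", "nonce": 0}
        self.msgs = {"genesis": genesis}

    def _pow(self, data: str, difficulty: int = 2) -> int:
        """Lightweight proof of work feasible on a constrained IoT node."""
        nonce = 0
        while not hashlib.sha256(f"{data}{nonce}".encode()).hexdigest().startswith("0" * difficulty):
            nonce += 1
        return nonce

    def attach(self, payload: str) -> str:
        # Tip selection is uniform here; real implementations use weighted walks.
        parents = random.sample(list(self.msgs), k=min(2, len(self.msgs)))
        nonce = self._pow(payload)
        msg_id = hashlib.sha256(f"{payload}{parents}{nonce}{time.time()}".encode()).hexdigest()[:12]
        self.msgs[msg_id] = {"id": msg_id, "parents": parents, "payload": payload, "nonce": nonce}
        return msg_id

tangle = Tangle()
for reading in ["temp=21.5", "co2=410", "door=open"]:
    print(reading, "->", tangle.attach(reading))
```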

Keywords: blockchain, IoT, directed acyclic graph, scalability, access control, architecture, smart contract, smart connected buildings

Procedia PDF Downloads 106
1701 Mapping the Core Processes and Identifying Actors along with Their Roles, Functions and Linkages in Trout Value Chain in Kashmir, India

Authors: Stanzin Gawa, Nalini Ranjan Kumar, Gohar Bilal Wani, Vinay Maruti Hatte, A. Vinay

Abstract:

Rainbow trout (Oncorhynchus mykiss) and brown trout (Salmo trutta fario), the two trout species once introduced by the British into the waters of Kashmir, have adapted well to the favourable climatic conditions. Cold-water fisheries are one of the emerging sectors in the Kashmir valley, and trout holds an important place in Jammu and Kashmir fisheries. Realizing the immense potential of trout culture in the Kashmir region, the state fisheries department started privatizing trout culture in 2009-10 under the centrally funded RKVY scheme, which provides an 80 percent subsidy for raceway construction and the supply of feed and seed for the first year; at present there are 362 private trout farms. To cater to the growing demand for trout in the valley, it is important to understand the bottlenecks faced in the propagation of trout culture. Value chain analysis provides a generic framework to understand the various activities and processes, and mapping and studying linkages is the first step in any value chain analysis. In Kashmir, it was found that trout hatcheries play a crucial role in ensuring a continuous supply of trout seed in the valley. Feed is the most limiting factor in trout culture, and farmers incur high costs in paying for feed and transporting it from the feed mill to the farm. The lack of aqua clinics in the Kashmir valley needs to be addressed. Brood stock maintenance, breeding and seed production, technical assistance to private farmers, and extension services have to be strengthened, and there is a need to develop a healthier environment for new entrepreneurs. It was found that trout farmers do not avail themselves of credit facilities, as there is no well-defined credit scheme for fisheries in the state. The study showed weak institutional linkages. Research and development should focus more on applied science rather than basic science.

Keywords: trout, Kashmir, value chain, linkages, culture

Procedia PDF Downloads 391
1700 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy

Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu

Abstract:

Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy, and improved predictions are achievable by using spectra collected from flour samples compared with whole grains. However, the feasibility of determining the critical biochemicals related to classification for food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components, in order to improve grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 sorghum hybrids were selected from two locations in China. Using the NIR spectra and the wet-chemically measured biochemical data, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with NIR data of whole grains. Nevertheless, using the spectra of whole grains still enabled comparable predictions, which is recommended when non-destructive and rapid analysis is required. Compared with hulled grain flours, hull-less grain flours allowed improved predictions for tannin, cellulose, and hemicellulose using NIR data. Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.
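A hedged sketch of the PLSR calibration workflow on synthetic spectra using scikit-learn (wavelength grid, component count and the response variable are placeholders, not the paper's data):

```python
# Illustrative PLSR calibration with cross-validated prediction on synthetic spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 80, 200
X = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)   # smooth, spectra-like curves
true_coef = np.zeros(n_wavelengths); true_coef[50:60] = 0.8
y = X @ true_coef + rng.normal(scale=0.5, size=n_samples)         # e.g. tannin content (placeholder)

pls = PLSRegression(n_components=8)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
print(f"cross-validated R2 = {r2_score(y, y_cv):.2f}")
```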

Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR

Procedia PDF Downloads 49
1699 Utilizing Dowel-Laminated Mass Timber Components in Residential Multifamily Structures: A Case Study

Authors: Theodore Panton

Abstract:

As cities in the United States experience critical housing shortages, mass timber presents an opportunity to address this crisis in housing supply while taking advantage of the carbon-positive benefits of sustainably forested wood fiber. Mass timber, however, currently has a low level of adoption in residential multifamily structures due to the risk-averse nature of the construction financing and architecture/engineering/contracting (AEC) communities, as well as various agency approval challenges. This study demonstrates how mass timber can be used within the cost and feasibility parameters of a typical multistory residential structure and ultimately address the need for dense urban housing. The study uses The Garden District, a mixed-use market-rate housing project in Woodinville, Washington, as a case study to illuminate the potential of mass timber in this application. The Garden District is currently in the final stages of permit approval and will commence construction in 2023. It will be the tallest dowel-laminated timber (DLT) residential structure in the United States when completed. The case study includes economic, technical, and design reference points to demonstrate the relevance of this system and its ability to deliver “triple bottom line” results. In terms of results, the study establishes scalable and repeatable approaches to project design and delivery of mass timber in multifamily residential uses, and it includes economic data, technical solutions, and a summary of end-user advantages. The study discusses third-party-tested systems for satisfying acoustical requirements within dwelling units, a key to resolving the use of mass timber in multistory residential applications. Lastly, the study compares the mass timber solution with a comparable cold-formed steel (CFS) system with a similar program, which indicates a net carbon savings of over three million tons over the life cycle of the building.

Keywords: DLT, dowel-laminated timber, mass timber, market-rate multifamily

Procedia PDF Downloads 104
1698 A Case Study: Social Network Analysis of Construction Design Teams

Authors: Elif D. Oguz Erkal, David Krackhardt, Erica Cochran-Hameen

Abstract:

Even though social network analysis (SNA) is an abundantly studied concept in many organizations and industries, a clear SNA approach to project teams has not yet been adopted by the construction industry. The main challenges for performing SNA in construction, and the apparent reasons for this gap, are the unique and complex structure of each construction project, the comparatively high turnover of project team members and contributing parties, and the variety of problems particular to each project. Additionally, stakeholders from a variety of professional backgrounds collaborate in a high-stress environment fueled by time and cost constraints. Within this case study on Project RE, a design-build project performed at the Urban Design Build Studio of Carnegie Mellon University, a social network analysis of the project design team is performed with the main goal of applying social network theory to construction project environments. The research objective is to determine the correlation between the network of how individuals relate to each other, based on perceptions of their own professional strengths and weaknesses, the communication patterns within the team, and the group dynamics. Data are collected through a survey performed over four monthly rounds, detailed follow-up interviews, and continuous observation to assess the natural change in the network over time. The collected data are processed by means of network analytics and interpreted in light of the qualitative data gathered through observations and individual interviews. This paper presents a full ethnography of this construction design team of fourteen architecture students based on an elaborate social network data analysis over time. The study is expected to serve as an initial step toward a refined, targeted, and large-scale social network data collection in construction projects, in order to deduce the impacts of social networks on project performance and suggest better collaboration structures for construction project teams henceforth.
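A small illustrative sketch of the kind of network analytics applied to the monthly survey rounds, using networkx with hypothetical team members:

```python
# Build a directed "communicates with / seeks advice from" graph per survey round
# and track simple network metrics over time. Names and edges are hypothetical.
import networkx as nx

rounds = {
    "month_1": [("Ana", "Ben"), ("Ben", "Cruz"), ("Cruz", "Ana"), ("Dee", "Ana")],
    "month_2": [("Ana", "Ben"), ("Ben", "Dee"), ("Dee", "Cruz"), ("Cruz", "Ben")],
}
for label, edges in rounds.items():
    g = nx.DiGraph(edges)
    centrality = nx.in_degree_centrality(g)        # who is most sought out
    density = nx.density(g)
    print(label, {k: round(v, 2) for k, v in centrality.items()}, f"density={density:.2f}")
```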

Keywords: construction design teams, construction project management, social network analysis, team collaboration, network analytics

Procedia PDF Downloads 187
1697 Utilizing Spatial Uncertainty of On-The-Go Measurements to Design Adaptive Sampling of Soil Electrical Conductivity in a Rice Field

Authors: Ismaila Olabisi Ogundiji, Hakeem Mayowa Olujide, Qasim Usamot

Abstract:

The main reasons for site-specific management of agricultural inputs are to increase the profitability of crop production, to protect the environment and to improve product quality. Information about the variability of different soil attributes within a field is essential for the decision-making process. The lack of fast and accurate acquisition of soil characteristics remains one of the biggest limitations of precision agriculture, because conventional sampling is expensive and time-consuming. Adaptive sampling has been proven to be an accurate and affordable technique for planning within-field sampling for site-specific management of agricultural inputs. This study employed the spatial uncertainty of soil apparent electrical conductivity (ECa) estimates to identify adaptive re-survey areas in the field. The original dataset was split into validation and calibration groups, and the calibration group was further sub-grouped into three sets with different measurement pass intervals. A conditional simulation was performed on the field ECa to evaluate the ECa spatial uncertainty estimates using a geostatistical technique. High-uncertainty areas for each set were grouped using image segmentation in MATLAB, and areas of high and low values were then separated. Finally, an adaptive re-survey was carried out in the areas of high uncertainty. Adding adaptive re-surveying significantly reduced the time required to resample the whole field and resulted in ECa maps with minimal error. For the most widely spaced transects, the root mean square error (RMSE) obtained from the initial crude sampling survey was reduced after the adaptive re-survey to a value close to that of the ECa obtained with an all-field re-survey. The estimated sampling time for the adaptive re-survey was found to be 45% less than that of an all-field re-survey. The results indicate that designing adaptive sampling through spatial uncertainty models significantly mitigates sampling cost while maintaining the accuracy of the observations.
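A hedged sketch of the core idea: treat the spread across conditional-simulation realizations as a per-cell uncertainty map, flag the most uncertain cells for re-survey, and track the RMSE gain (the fields below are random placeholders, not the rice-field ECa data):

```python
# Toy uncertainty-guided adaptive re-survey on a synthetic ECa grid.
import numpy as np

rng = np.random.default_rng(42)
grid = (20, 30)
truth = rng.normal(35.0, 5.0, size=grid)                          # "true" ECa field, mS/m (synthetic)
noise = rng.uniform(1.0, 6.0, size=grid)                          # spatially varying simulation spread
realizations = truth + rng.normal(0.0, 1.0, size=(100, *grid)) * noise

prediction = realizations.mean(axis=0)                            # estimate from the simulations
uncertainty = realizations.std(axis=0)                            # per-cell spread

threshold = np.quantile(uncertainty, 0.85)                        # top 15% most uncertain cells
resurvey = uncertainty >= threshold
rmse_before = np.sqrt(np.mean((prediction - truth) ** 2))

# pretend the adaptive re-survey measures the flagged cells directly
prediction[resurvey] = truth[resurvey]
rmse_after = np.sqrt(np.mean((prediction - truth) ** 2))
print(f"{resurvey.sum()} cells re-surveyed, RMSE {rmse_before:.2f} -> {rmse_after:.2f} mS/m")
```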

Keywords: soil electrical conductivity, adaptive sampling, conditional simulation, spatial uncertainty, site-specific management

Procedia PDF Downloads 117
1696 Effects of Artificial Nectar Feeders on Bird Distribution and Erica Visitation Rate in the Cape Fynbos

Authors: Monique Du Plessis, Anina Coetzee, Colleen L. Seymour, Claire N. Spottiswoode

Abstract:

Artificial nectar feeders are used to attract nectarivorous birds to gardens and are increasing in popularity. The costs and benefits of these feeders remain controversial, however. Nectar feeders may have positive effects by attracting nectarivorous birds towards suburbia, facilitating their urban adaptation, and supplementing bird diets when floral resources are scarce. However, this may come at the cost of luring them away from the plants they pollinate in neighboring indigenous vegetation. This study investigated the effect of nectar feeders on an African pollinator-plant mutualism. Given that birds are important pollinators to many fynbos plant species, this study was conducted in gardens and natural vegetation along the urban edge of the Cape Peninsula. Feeding experiments were carried out to compare relative bird abundance and local distribution patterns for nectarivorous birds (i.e., sunbirds and sugarbirds) between feeder and control treatments. Resultant changes in their visitation rates to Erica flowers in the natural vegetation were tested by inspection of their anther ring status. Nectar feeders attracted higher densities of nectarivores to gardens relative to natural vegetation and decreased their densities in the neighboring fynbos, even when floral abundance in the neighboring vegetation was high. The consequent changes to their distribution patterns and foraging behavior decreased their visitation to at least Erica plukenetii flowers (but not to Erica abietina). This study provides evidence that nectar feeders may have positive effects for birds themselves by reducing their urban sensitivity but also highlights the unintended negative effects feeders may have on the surrounding fynbos ecosystem. Given that nectar feeders appear to compete with the flowers of Erica plukenetii, and perhaps those of other Erica species, artificial feeding may inadvertently threaten bird-plant pollination networks.

Keywords: avian nectarivores, bird feeders, bird pollination, indirect effects in human-wildlife interactions, sugar water feeders, supplementary feeding

Procedia PDF Downloads 136
1695 The Integration of Geographical Information Systems and Capacitated Vehicle Routing Problem with Simulated Demand for Humanitarian Logistics in Tsunami-Prone Area: A Case Study of Phuket, Thailand

Authors: Kiatkulchai Jitt-Aer, Graham Wall, Dylan Jones

Abstract:

As a result of the Indian Ocean tsunami in 2004, logistics applied to disaster relief operations has received great attention in the humanitarian sector. As learned from that disaster, preparing for and responding to the delivery of essential items from distribution centres to affected locations is of great importance for relief operations, as the nature of disasters is uncertain, especially the casualty figures, to which the quantity of supplies is normally proportional. Thus, this study proposes a spatial decision support system (SDSS) for humanitarian logistics by integrating Geographical Information Systems (GIS) and the capacitated vehicle routing problem (CVRP). The GIS is utilised for acquiring demands simulated from the tsunami flooding model of the affected area in the first stage, and for visualising the simulation solutions in the last stage. The CVRP in this study encompasses designing the relief routes of a set of homogeneous vehicles from a relief centre to a set of geographically distributed evacuation points whose demands are estimated using both simulation and randomisation techniques. The CVRP is modeled as a multi-objective optimization problem in which both the total travelling distance and the total transport resources used are minimized, while the demand-cost efficiency of each route is maximized in order to determine route priority. As the model is an NP-hard combinatorial optimization problem, the Clarke and Wright savings heuristic is proposed to solve it for near-optimal solutions. Real-case instances in the coastal area of Phuket, Thailand are studied to demonstrate the SDSS, which allows a decision maker to visually analyse the simulation scenarios through different decision factors.
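A minimal sketch of the Clarke and Wright savings heuristic for a single-depot CVRP with homogeneous capacity, using toy coordinates and demands in place of the simulated evacuation-point data (the endpoint-merge check is simplified):

```python
# Toy Clarke-Wright savings heuristic: merge routes in order of decreasing savings
# s(i, j) = d(0, i) + d(0, j) - d(i, j), subject to vehicle capacity.
import math

depot = (0.0, 0.0)
points = {1: (2, 6), 2: (5, 5), 3: (6, 1), 4: (-3, 4), 5: (-4, -2)}
demand = {1: 4, 2: 3, 3: 5, 4: 2, 5: 6}
capacity = 10

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

d0 = {i: dist(depot, p) for i, p in points.items()}
savings = sorted(
    ((d0[i] + d0[j] - dist(points[i], points[j]), i, j)
     for i in points for j in points if i < j),
    reverse=True,
)

routes = {i: [i] for i in points}            # start with one out-and-back route per point
load = dict(demand)                          # route load keyed by the route's first node
for s, i, j in savings:
    ri, rj = routes[i], routes[j]
    if ri is rj:
        continue
    # simplified check: merge only if i ends one route, j starts the other, capacity allows
    if ri[-1] == i and rj[0] == j and load[ri[0]] + load[rj[0]] <= capacity:
        merged = ri + rj
        for node in merged:
            routes[node] = merged
        load[merged[0]] = load[ri[0]] + load[rj[0]]

for r in {id(r): r for r in routes.values()}.values():
    print("depot ->", " -> ".join(map(str, r)), "-> depot, load =", sum(demand[n] for n in r))
```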

Keywords: demand simulation, humanitarian logistics, geographical information systems, relief operations, capacitated vehicle routing problem

Procedia PDF Downloads 235
1694 Micro-Droplet Formation in a Microchannel under the Effect of an Electric Field: Experiment

Authors: Sercan Altundemir, Pinar Eribol, A. Kerem Uguz

Abstract:

Microfluidic systems allow many large-scale laboratory applications to be miniaturized on a single device in order to reduce cost and improve fluid control. Moreover, such systems enable the generation and control of droplets, which play a significant role in improved analysis for many chemical and biological applications; for example, they can be employed as models for cells in microfluidic systems. In this work, the interfacial instability of two immiscible Newtonian liquids flowing in a microchannel is investigated. When two immiscible liquids are in the laminar regime, a flat interface forms between them. If a direct-current electric field is applied, the interface may deform, i.e., it may become unstable, rupture, and form micro-droplets. First, the effects of the thickness ratio, total flow rate, and viscosity ratio of the silicone oil and ethylene glycol liquid couple on the critical voltage at which the interface starts to destabilize are investigated. The droplet sizes are then measured under the effect of these parameters at various voltages. Moreover, the effect of the total flow rate on the time elapsed for the interface to rupture into droplets by hitting the wall of the channel is analyzed. It is observed that an increase in the viscosity or the thickness ratio of the silicone oil to the ethylene glycol has a stabilizing effect, i.e., a higher voltage is needed, while the total flow rate has no effect on it. However, an increase in the total flow rate shortens the time elapsed for the interface to hit the wall. Moreover, the droplet size decreases down to 0.1 μL with an increase in the applied voltage, the viscosity ratio or the total flow rate, or with a decrease in the thickness ratio. In addition to these observations, two empirical models are established for determining the critical electric number (i.e., the dimensionless voltage) and the droplet size, together with a combined model for determining the droplet size at the critical voltage.

Keywords: droplet formation, electrohydrodynamics, microfluidics, two-phase flow

Procedia PDF Downloads 168
1693 Modeling of Void Formation in 3D Woven Fabric During Resin Transfer Moulding

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Jan Kočí, Andy Long

Abstract:

Resin transfer molding (RTM) is increasingly used for manufacturing high-quality composite structures due to its advantages over prepregs, such as low-cost out-of-autoclave processing. However, to retain these advantages, it is critical to reduce the void content during injection. Reinforcements commonly used in RTM, such as woven fabrics, have dual-scale porosity, with meso-scale pores between the yarns and micro-scale pores within the yarns. Due to the fabric geometry and the nature of the dual-scale flow, the flow front during injection develops a complicated fingering pattern that leads to void formation. Analytical modeling of void formation for woven fabrics has been widely studied elsewhere. However, there is scope for improving the reduction of void formation in 3D fabrics, in which the in-plane yarn layers are confined by additional through-thickness binder yarns. In the present study, the structural morphology of the tortuous pore spaces in the 3D fabric has been studied and implemented using the open-source software TexGen. An analytical model for void and fingering formation has been implemented based on an idealized unit-cell model of the 3D fabric. Since the pore spaces between the yarns are free domains, this region is treated as flow through connected channels, whereas intra-yarn flow is modeled using Darcy’s law with an additional term to account for capillary pressure. The void fraction is then characterised using a void formation criterion that compares the fill times for inter- and intra-yarn flow. Moreover, the dual-scale two-phase flow of resin and air has been simulated in the CFD solvers OpenFOAM/ANSYS to predict the probable locations of voids and validate the analytical model. The use of an idealised unit-cell model provides insight for optimising the meso-scale geometry of the reinforcement and the injection parameters to minimise the void content during the LCM process.
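A hedged sketch of the dual-scale fill-time comparison behind the void criterion: channel (inter-yarn) flow versus Darcy (intra-yarn) flow with a capillary term; the geometry, permeability and pressure values are illustrative, not the TexGen unit-cell values:

```python
# Compare fill times for inter-yarn channel flow and intra-yarn Darcy flow over
# one unit-cell length; a large ratio suggests micro-void formation inside the yarns.
mu = 0.1            # resin viscosity, Pa*s
dp = 1.0e5          # applied injection pressure drop over the unit cell, Pa
p_cap = 5.0e3       # capillary pressure aiding intra-yarn (wicking) flow, Pa
L = 5.0e-3          # flow length across the unit cell, m
h = 0.2e-3          # inter-yarn channel half-gap, m
K_yarn = 1.0e-12    # intra-yarn (tow) permeability, m^2

# Pressure-driven flow between parallel plates: u = h^2 * dp / (3 * mu * L)  ->  t = L / u
t_channel = 3 * mu * L**2 / (h**2 * dp)
# Darcy flow through the yarn: u = K * (dp + p_cap) / (mu * L)              ->  t = L / u
t_yarn = mu * L**2 / (K_yarn * (dp + p_cap))

ratio = t_yarn / t_channel
print(f"channel fill time ~{t_channel*1e3:.2f} ms, yarn fill time ~{t_yarn:.1f} s, ratio {ratio:.0f}")
print("ratio >> 1 -> flow races ahead in the channels and intra-yarn (micro) voids are likely")
```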

Keywords: 3D fiber, void formation, RTM, process modelling

Procedia PDF Downloads 82
1692 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control

Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak

Abstract:

Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suitable for the application of sophisticated power management approaches that can greatly increase energy efficiency and even make buildings active energy market participants. A centralized control system for building heating and cooling managed by economically optimal model predictive control shows promising results, with an estimated 30% increase in energy efficiency. The research is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with wireless sensor data acquisition, remote heating/cooling units and a central climate controller. Building walls are mathematically modeled with the corresponding material types, surface shapes and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. External influences such as environmental conditions and the weather forecast, occupant behavior and comfort demands are all taken into account in deriving the price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control: the former is designed as the higher hierarchical level with pre-calculated price-optimal power-flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management. Such an approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
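A minimal sketch of price-optimal (economic) MPC for a single thermal zone with a first-order model, using cvxpy; the model coefficients, tariff and comfort band are illustrative, not the identified faculty-building model:

```python
# Economic MPC toy: minimize energy cost over a 24 h horizon subject to a simple
# discrete-time thermal model and comfort constraints.
import numpy as np
import cvxpy as cp

N = 24                      # horizon, hours
a, b = 0.9, 0.3             # T[k+1] = a*T[k] + b*u[k] + (1-a)*T_out[k]
T_out = 5 + 5 * np.sin(np.linspace(0, 2 * np.pi, N))      # outdoor temperature, degC
price = 0.10 + 0.08 * (np.arange(N) % 24 >= 7)            # day/night tariff, EUR/kWh

u = cp.Variable(N, nonneg=True)       # heating power, kW
T = cp.Variable(N + 1)                # zone temperature, degC

constraints = [T[0] == 20]
for k in range(N):
    constraints += [
        T[k + 1] == a * T[k] + b * u[k] + (1 - a) * T_out[k],
        T[k + 1] >= 20, T[k + 1] <= 24,      # comfort band
        u[k] <= 10,                          # heater capacity
    ]
problem = cp.Problem(cp.Minimize(price @ u), constraints)
problem.solve()
print(f"energy cost over horizon: {problem.value:.2f} EUR")
```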

Keywords: price-optimal building climate control, Microgrid power flow optimisation, hierarchical model predictive control, energy efficient buildings, energy market participation

Procedia PDF Downloads 451
1691 Sizing of Drying Processes to Optimize Conservation of the Nuclear Power Plants on Stationary

Authors: Assabo Mohamed, Bile Mohamed, Ali Farah, Isman Souleiman, Olga Alos Ramos, Marie Cadet

Abstract:

The life of a nuclear power plant is regularly punctuated by short or long outages to carry out maintenance operations and/or nuclear fuel reloading. During these outage periods, it is essential to conserve all the secondary-circuit equipment to avoid the initiation of corrosion. This circuit is one of the main components of a nuclear reactor; indeed, the conservation of materials during the shutdown of a nuclear unit improves circuit performance and considerably reduces maintenance costs. This study is part of the optimization of the dry preservation of equipment in the water station of the nuclear reactor. The main objective is to provide tools to guide the Electricity Production Nuclear Centre (EPNC) in meeting the criteria required by the chemical specifications for the conservation of materials. A theoretical model of the drying of the water-station exchangers is developed in the Engineering Equation Solver (EES) software. It is used to size the air flow and air quality requirements needed for dry conservation of the equipment. This model is based on the heat and mass transfer governing the drying operation. A parametric study is conducted to determine the influence of the aerothermal factors involved in the drying operation. The results show that the success of dry conservation of the secondary-circuit equipment of a nuclear reactor depends strongly on the draining, the quality of the drying air and the flow of air injected into the secondary circuit. Finally, a theoretical case study performed in EES highlights the importance of mastering the entire system in order to balance the air network and provide each exchanger with the optimum flow according to its characteristics. From these results, recommendations can be formulated for nuclear power plants to optimize drying practices and achieve good performance in the dry conservation of the water-station equipment during shutdown.
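A hedged back-of-the-envelope sketch of the air-side mass balance behind such sizing: moisture removal rate equals the dry-air mass flow times the rise in humidity ratio across the equipment (all numbers are assumptions, not the EES model inputs):

```python
# Toy drying mass balance: estimate how long a given dry-air flow needs to remove
# a given amount of residual water from the circuit.
def humidity_ratio(p_vapour_pa: float, p_total_pa: float = 101325.0) -> float:
    """kg of water vapour per kg of dry air, w = 0.622 * p_v / (P - p_v)."""
    return 0.622 * p_vapour_pa / (p_total_pa - p_vapour_pa)

w_in = humidity_ratio(700.0)      # very dry injected air (~0.7 kPa vapour pressure, assumed)
w_out = humidity_ratio(2300.0)    # air leaving near saturation at ~20 degC (assumed)
m_dot_air = 0.5                   # kg dry air / s injected into the circuit (assumed)
water_to_remove_kg = 120.0        # residual water left after draining (assumed)

removal_rate = m_dot_air * (w_out - w_in)            # kg water / s
print(f"removal rate ~{removal_rate*3600:.1f} kg/h -> "
      f"~{water_to_remove_kg/removal_rate/3600:.0f} h to dry the circuit")
```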

Keywords: dry conservation, optimization, sizing, water station

Procedia PDF Downloads 253
1690 Development of a Fire Analysis Drone for Smoke Toxicity Measurement for Fire Prediction and Management

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

This research presents the design and creation of a drone gas analyser, aimed at addressing the need for independent data collection and analysis of gas emissions during large-scale fires, particularly wasteland fires. The analyser drone, comprising a lightweight gas analysis system attached to a remote-controlled drone, enables the real-time assessment of smoke toxicity and the monitoring of gases released into the atmosphere during such incidents. The key components of the analyser unit included two gas line inlets connected to glass wool filters, a pump with regulated flow controlled by a mass flow controller, and electrochemical cells for detecting nitrogen oxides, hydrogen cyanide, and oxygen levels. Additionally, a non-dispersive infrared (NDIR) analyser is employed to monitor carbon monoxide (CO), carbon dioxide (CO₂), and hydrocarbon concentrations. Thermocouples can be attached to the analyser to monitor temperature, as well as McCaffrey probes combined with pressure transducers to monitor air velocity and wind direction. These additions allow for monitoring of the large fire and can be used for predictions of fire spread. The innovative system not only provides crucial data for assessing smoke toxicity but also contributes to fire prediction and management. The remote-controlled drone's mobility allows for safe and efficient data collection in proximity to the fire source, reducing the need for human exposure to hazardous conditions. The data obtained from the gas analyser unit facilitates informed decision-making by emergency responders, aiding in the protection of both human health and the environment. This abstract highlights the successful development of a drone gas analyser, illustrating its potential for enhancing smoke toxicity analysis and fire prediction capabilities. The integration of this technology into fire management strategies offers a promising solution for addressing the challenges associated with wildfires and other large-scale fire incidents. The project's methodology and results contribute to the growing body of knowledge in the field of environmental monitoring and safety, emphasizing the practical utility of drones for critical applications.

Keywords: fire prediction, drone, smoke toxicity, analyser, fire management

Procedia PDF Downloads 72
1689 In situ Immobilization of Mercury in a Contaminated Calcareous Soil Using Water Treatment Residual Nanoparticles

Authors: Elsayed A. Elkhatib, Ahmed M. Mahdy, Mohamed L. Moharem, Mohamed O. Mesalem

Abstract:

Mercury (Hg) is one of the most toxic and bio-accumulative heavy metals in the environment. However, cheap and effective in situ remediation technology is lacking. In this study, the effects of water treatment residual nanoparticles (nWTR) on the mobility, fractionation and speciation of mercury in an arid-zone soil from Egypt were evaluated. Water treatment residual nanoparticles with high surface area (129 m² g⁻¹) were prepared using a Fritsch planetary mono mill. Scanning and transmission electron microscopy revealed that the WTR nanoparticles are spherical in shape, with single-particle sizes in the range of 45 to 96 nm. The X-ray diffraction (XRD) results ascertained that amorphous iron and aluminum (hydr)oxides and silicon oxide dominate the nWTR, with no apparent crystalline iron-Al (hydr)oxides. Addition of nWTR greatly increased the Hg sorption capacities of the studied soils and greatly reduced the cumulative Hg released from the soils. Application of nWTR at rates of 0.10 and 0.30% reduced the Hg released from the soil by 50% and 85%, respectively. The power function and first-order kinetics models described the desorption process from the soils and nWTR-amended soils well, as evidenced by high coefficients of determination (R²) and low SE values. Application of nWTR at a rate of 0.3% greatly increased the association of Hg with the residual fraction (>93%) and significantly increased the most stable Hg species (amorphous Hg(OH)₂), which in turn enhanced Hg immobilization in the studied soils. Fourier transform infrared spectroscopy analysis indicated the involvement of nWTR in the retention of Hg(II) through OH groups, which suggests inner-sphere adsorption of Hg ions to surface functional groups on nWTR. These results demonstrate the feasibility of using low-cost nWTR as a best management practice to immobilize excess Hg in contaminated soils.
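A hedged sketch of fitting the two release-kinetics models named above, the power function and a first-order model, to cumulative-release data with SciPy (the data points are synthetic placeholders, not the paper's measurements):

```python
# Fit power-function (q = a * t^b) and first-order (q = q_max * (1 - exp(-k*t)))
# release models to synthetic cumulative Hg-release data and report R2 and SE.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 4, 8, 16, 24], dtype=float)          # h
q = np.array([2.1, 3.0, 4.2, 5.8, 7.6, 9.1, 9.8])             # cumulative Hg released, mg/kg (synthetic)

def power_fn(t, a, b):
    return a * t ** b

def first_order(t, q_max, k):
    return q_max * (1.0 - np.exp(-k * t))

for name, fn, p0 in [("power", power_fn, (2, 0.5)), ("first-order", first_order, (10, 0.2))]:
    params, _ = curve_fit(fn, t, q, p0=p0)
    resid = q - fn(t, *params)
    r2 = 1 - resid.var() / q.var()
    print(f"{name:12s} params={np.round(params, 3)}  R2={r2:.3f}  SE={resid.std(ddof=1):.3f}")
```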

Keywords: release kinetics, Fourier transform infrared spectroscopy, Hg fractionation, Hg species

Procedia PDF Downloads 219
1688 Desulphurization of Waste Tire Pyrolytic Oil (TPO) Using Photodegradation and Adsorption Techniques

Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng

Abstract:

The nature of tires makes them extremely challenging to recycle due to their chemically cross-linked polymer structure; therefore, they are neither fusible nor soluble and consequently cannot be remolded into other shapes without serious degradation. Open dumping of tires pollutes the soil, contaminates underground water and provides ideal breeding grounds for disease-carrying vermin. The thermal decomposition of tires by pyrolysis produces char, gases and oil. Oils derived from waste tires have properties in common with commercial diesel fuel. The problem associated with the light oil derived from pyrolysis of waste tires is that it has a high sulfur content (> 1.0 wt.%) and therefore emits harmful sulfur oxide (SOx) gases to the atmosphere when combusted in diesel engines. Desulphurization of TPO is necessary due to increasingly stringent environmental regulations worldwide. Hydrodesulphurization (HDS) is the commonly practiced technique for the removal of sulfur species in liquid hydrocarbons. However, the HDS technique fails in the presence of complex sulfur species such as dibenzothiophene (DBT) present in TPO. This study aims to investigate the viability of photodegradation (photocatalytic oxidative desulphurization) and adsorptive desulphurization technologies for the efficient removal of complex and non-complex sulfur species in TPO. This study focuses on optimizing the cleaning (removal of impurities and asphaltenes) process by varying the process parameters: temperature, stirring speed, acid/oil ratio and time. The treated TPO will then be sent for vacuum distillation to attain the desired diesel-like fuel. The effects of temperature, pressure and time will be determined for vacuum distillation of both raw TPO and the acid-treated oil for comparison purposes. Polycyclic sulfides present in the distilled (diesel-like) light oil will be oxidized predominantly to the corresponding sulfoxides and sulfones via a photocatalyzed system using TiO₂ as a catalyst and hydrogen peroxide as an oxidizing agent; finally, acetonitrile will be used as an extraction solvent. Adsorptive desulphurization will then be used to adsorb traces of sulfurous compounds which remain after the photocatalytic desulphurization step. This desulphurization sequence is expected to give high desulphurization efficiency with reasonable oil recovery.

Keywords: adsorption, asphaltenes, photocatalytic oxidation, pyrolysis

Procedia PDF Downloads 258
1687 Interstellar Mission to Wolf 359: Possibilities for the Future

Authors: Rajasekar Anand Thiyagarajan

Abstract:

One of the driving forces of mankind is “le rêve d'étoiles”, or the “dream of stars”, which has been a dynamo of our civilization. Since the dawn of civilization, mankind has looked upon the heavens with wonder and tried to understand the meaning of those twinkling lights. As human history has progressed, the understanding of those twinkling lights has progressed as well, and we now know a great deal about stars. However, the dream of reaching those stars remains one of mankind's aspirations. In fact, the needs of civilization constantly drive the search for better knowledge, and the capability of reaching those stars is one way in which that knowledge and exaltation can be achieved. This paper takes a futuristic case study of an interstellar mission to Wolf 359, which is approximately 8.3 light years away from us. In terms of galactic distances, 8.3 light years is not much, but as far as present space technology capabilities are concerned, it is next to impossible for us to cover such distances. Several studies have been conducted on various missions to Alpha Centauri and other nearby stars such as Barnard's star and Wolf 359. However, taking a more distant star such as Wolf 359 will help test mankind's drive for interstellar exploration, as exotic means of travel are needed. This paper presents a futuristic case study of such a mission, and various possibilities for space travel are discussed in detail. Comprehensive tables and graphs depict the amount of time that would pass for each mode of travel, and, more importantly, the cost in terms of energy and money is discussed in today's context. In addition, the prerequisites for an interstellar mission to Wolf 359 are given in detail, along with a sample mission to that destination. Even though the possibility of such a mission is probably nonexistent for the 21st century, it is essential to do these exercises so that mankind's understanding of the universe is increased. In addition, this paper hopes to establish some general guidelines for such an interstellar mission.
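As a worked example of the kind of travel-time table the paper proposes, the sketch below computes one-way Earth-frame and ship-frame times to Wolf 359 (taken as 8.3 light years) at several constant cruise speeds, ignoring acceleration phases:

```python
# One-way travel times to Wolf 359 at constant speed, with special-relativistic
# proper time on board (time dilation); acceleration phases are ignored.
import math

DISTANCE_LY = 8.3

def travel_times(beta: float) -> tuple[float, float]:
    """Return (Earth-frame years, ship-frame years) for constant speed beta = v/c."""
    t_earth = DISTANCE_LY / beta
    t_ship = t_earth * math.sqrt(1.0 - beta ** 2)   # proper time on board
    return t_earth, t_ship

for beta in (0.0001, 0.01, 0.1, 0.5, 0.9, 0.99):
    t_e, t_s = travel_times(beta)
    print(f"v = {beta:6.4f} c : {t_e:10.1f} yr Earth frame, {t_s:10.1f} yr on board")
```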

Keywords: wolf 359, interstellar mission, alpha centauri, core diameter, core length, reflector thickness enrichment, gas temperature, reflector temperature, power density, mass of the space craft, acceleration of the space craft, time expansion

Procedia PDF Downloads 410
1686 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks

Authors: Van Trieu, Shouhuai Xu, Yusheng Feng

Abstract:

Tracking attack trajectories can be difficult when information about the nature of the attack is limited. It is even more difficult when the attack information is collected by Intrusion Detection Systems (IDSs), because current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events but do not show how the events relate to each other or which event may have caused another event to happen. Because of this, it is important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given the time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect the relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework’s guidance, these insights could not be discovered by existing tools such as IDSs, and would cost expert human analysts significant time, if they could be found at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, for more than 85% of causal pairs, the average time difference between the cause and effect events in both the computed and observed data is within 5 minutes. This result can be used as a preventive measure against future attacks. Although the forecast may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
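An illustrative toy of the reported 5-minute time-lag pattern: flag event pairs as candidate cause-effect pairs when one follows the other on the same port within the window. The real framework uses conditional independence tests; this sketch only counts lagged co-occurrences on a hypothetical IDS-style log:

```python
# Candidate causal-pair detection by lagged co-occurrence on a toy event log.
from datetime import datetime, timedelta
from itertools import product
from collections import Counter

events = [  # (timestamp, event_type, port) -- hypothetical IDS alerts
    (datetime(2024, 1, 1, 10, 0, 5), "port_scan", 22),
    (datetime(2024, 1, 1, 10, 2, 10), "brute_force", 22),
    (datetime(2024, 1, 1, 10, 6, 0), "port_scan", 80),
    (datetime(2024, 1, 1, 10, 7, 30), "sql_injection", 80),
    (datetime(2024, 1, 1, 11, 0, 0), "brute_force", 22),
]
WINDOW = timedelta(minutes=5)

pairs = Counter()
for (t1, e1, p1), (t2, e2, p2) in product(events, repeat=2):
    if e1 != e2 and p1 == p2 and timedelta(0) < t2 - t1 <= WINDOW:
        pairs[(e1, e2)] += 1

for (cause, effect), count in pairs.most_common():
    print(f"candidate causal pair: {cause} -> {effect} (observed {count}x within 5 min)")
```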

Keywords: causality, multilevel graph, cyber-attacks, prediction

Procedia PDF Downloads 147
1685 Ultrasound as an Aid to Predict the Onset of Leaking in Dengue Haemorrhagic Fever: Experience of a Dengue Treatment Facility in South Asia

Authors: Hasn Perera, Is Almeida, Hnk Perera, Mzf Mohammed, Ade Silva, H. Wijesinghe, Ajal Fernando

Abstract:

Introduction: Dengue is a major public health burden comprising two clinical entities, dengue fever and dengue haemorrhagic fever (DHF). The vast majority of dengue deaths occur in DHF patients, in whom the diagnosis hinges on the presence of fluid leakage. Limited ultrasound scans (USS) of the chest and abdomen are used widely at the Centre for Clinical Management of Dengue and Dengue Haemorrhagic Fever (CCMDDHF) as the primary method for detecting fluid leakage in DHF. This study analyses the relationship between haematological and USS findings at the onset of leaking and further determines the usefulness of ultrasound in diagnosing DHF. Methods: Eighty serologically confirmed dengue patients, initially admitted to general medical and paediatric wards and subsequently transferred to the CCMDDHF from March to September 2017, were prospectively analysed. In addition to repeated blood counts and capillary haematocrits, serial USS were performed by three competent and experienced doctors at the CCMDDHF to detect the onset of fluid leakage. Results: 80 patients (male:female 38:42) with a mean age of 20 years (SD ±16.8, range 3-74) were evaluated. The drop of platelet counts below 100,000 and the haematocrit rise towards 20% started on day 4±1.3 of fever, with a mean platelet value of 69x10³ (range 17-98x10³). Gallbladder wall thickening was the commonest (98.7%) USS finding, followed by fluid in the hepato-renal pouch (95%), pelvic fluid (58.7%), right-sided pleural effusion (35%) and bilateral effusions (7.5%). USS evidence of plasma leakage was detected in 11.25% (n=9) of DHF cases from one day before a significant haematocrit rise was noted. Thirty-five (43.7%) patients with falling platelets and a haematocrit rise showed no objective evidence of plasma leakage on ultrasound scan. Conclusion: This outbreak underscores the importance of USS as a useful, sensitive and cost-effective tool for the early diagnosis of suspected DHF cases, facilitating the tracking of the progress of leaking and the management of epidemics.

Keywords: dengue, ultrasound, plasma leaking, South Asia

Procedia PDF Downloads 214
1684 Evaluation of Commercial Back-analysis Package in Condition Assessment of Railways

Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman

Abstract:

Over the years, increased demands on railways, the emergence of high-speed trains and heavy axle loads, and the ageing and deterioration of existing tracks have been imposing costly maintenance actions on the railway sector. The need to develop a fast and cost-efficient non-destructive assessment method for the structural evaluation of railway tracks is therefore critically important. The layer modulus is the main parameter used in the structural design and evaluation of the railway track substructure (foundation). Among the many recently developed NDTs, the Falling Weight Deflectometer (FWD) test, widely used in pavement evaluation, has shown promising results for railway track substructure monitoring. The surface deflection data collected by the FWD are used to estimate the moduli of the substructure layers through back-analysis. Although different commercially available back-analysis programs are used for pavement applications, only a limited number of research-based techniques have so far been developed for railway track evaluation. In this paper, the suitability, accuracy, and reliability of the BAKFAA software are investigated. The main rationale for selecting BAKFAA is that it has a relatively straightforward user interface, is freely available, and is widely used in highway and airport pavement evaluation. As part of the study, a finite element (FE) model of a railway track section near Leominster station, Herefordshire, UK, subjected to the FWD test was developed and validated against available field data. Then, a virtual experimental database (including 218 sets of FWD testing data) was generated using the FE model and employed as the measured database for the BAKFAA software. This database was generated by considering various layer moduli for each layer of the track substructure over a predefined range. The BAKFAA predictions were compared against cone penetration test (CPT) data (available from the literature; conducted near Leominster station at the same section where the FWD test was performed). The results reveal that BAKFAA overestimates the moduli of each substructure layer. To adjust BAKFAA to the CPT data, this study introduces a correlation model to make BAKFAA applicable to railway applications.
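A heavily simplified sketch of what a back-analysis engine does: iteratively adjust stiffness until a forward model reproduces the measured FWD deflection. Here the forward model is a one-layer Boussinesq half-space (w = 2(1-nu^2)pa/E at the load centre), standing in for BAKFAA's multi-layer model purely for illustration; the pressure, radius and deflection values are hypothetical:

```python
# Toy back-calculation of an equivalent half-space modulus from a single FWD
# centre deflection, by least-squares matching of a simple forward model.
from scipy.optimize import least_squares

p = 700e3              # contact pressure under the FWD plate, Pa (assumed)
a = 0.15               # plate radius, m (assumed)
nu = 0.35              # Poisson's ratio (assumed)
w_measured = 0.45e-3   # measured centre deflection, m (hypothetical)

def forward(E_pa: float) -> float:
    """Centre surface deflection of a uniform circular load on an elastic half-space."""
    return 2.0 * (1.0 - nu ** 2) * p * a / E_pa

def residual(x):
    return forward(x[0]) - w_measured

sol = least_squares(residual, x0=[100e6], bounds=(1e6, 10e9))
print(f"back-calculated equivalent modulus ~{sol.x[0]/1e6:.0f} MPa")
```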

Keywords: back-analysis, bakfaa, railway track substructure, falling weight deflectometer (FWD), cone penetration test (CPT)

Procedia PDF Downloads 119
1683 Marine Ecosystem Mapping of Taman Laut Labuan: The First Habitat Mapping Effort to Support Marine Parks Management in Malaysia

Authors: K. Ismail, A. Ali, R. C. Hasan, I. Khalil, Z. Bachok, N. M. Said, A. M. Muslim, M. S. Che Din, W. S. Chong

Abstract:

The marine ecosystem in Malaysia holds invaluable potential in terms of economics, food security, pharmaceutical components and protection from natural hazards. Although oil and gas exploration and fisheries are active within Malaysian waters, knowledge of the seascape and of the ecological functioning of benthic habitats in the marine parks around Malaysia is still extremely poor owing to the lack of detailed seafloor information. Consequently, it is difficult to manage marine resources effectively, protect ecologically important areas and set legislation to safeguard the marine parks. This limited baseline data hinders the scientific linkage needed to support effective marine spatial management in Malaysia, and it became the main driver behind the first seabed mapping effort at the national level. Taman Laut Labuan (TLL) is located off the west coast of Sabah, in the eastern South China Sea. The park covers approximately 158.15 km², comprises three islands, namely Pulau Kuraman, Rusukan Besar and Rusukan Kecil, and is characterised by shallow fringing reefs with a few submerged shallow reefs. The unfamiliar rocky shorelines limited the multibeam echosounder survey to areas deeper than 10 m, whereas singlebeam and side scan sonar systems were used to acquire data in areas shallower than 10 m. By integrating multibeam bathymetry and backscatter with singlebeam bathymetry and side scan sonar images, we produced a substrate map and a coral coverage map for the TLL using i) a marine landscape mapping technique and ii) the RSOBIA ArcGIS toolbar (developed by T. Le Bas). We also explored the ability of aerial drone imagery and satellite imagery (WorldView-3) to derive depth and substrate type within the intertidal and subtidal zones that are not accessible to acoustic mapping. Although the coverage was limited, the outcome showed a promising technique to be incorporated into a guideline establishing a standard practice for efficient marine spatial management in Malaysia.
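A minimal sketch of the data-integration step described above: a shallow-water grid (singlebeam or satellite-derived depths) fills the gaps where the multibeam system had no coverage. The grid layout and the NaN-for-no-data convention are assumptions for illustration only.

```python
# Combine a multibeam bathymetry grid with a shallow-water grid, preferring
# multibeam wherever it has coverage.
import numpy as np

multibeam = np.array([[np.nan, -12.3, -15.1],
                      [np.nan, -11.8, -14.6]])   # NaN = no coverage (shallower than ~10 m)
shallow   = np.array([[-4.2,  -9.7, -14.9],
                      [-3.8,  -9.1, -14.2]])     # singlebeam / WorldView-3 derived depths

merged = np.where(np.isnan(multibeam), shallow, multibeam)
print(merged)                                    # gap-free bathymetry for mapping
```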

Keywords: habitat mapping, marine spatial management, South China Sea, national seabed mapping

Procedia PDF Downloads 206
1682 Molecular Detection of Acute Virus Infection in Children Hospitalized with Diarrhea in North India during 2014-2016

Authors: Ali Ilter Akdag, Pratima Ray

Abstract:

Background: Acute gastroenteritis viruses such as rotavirus, astrovirus, and adenovirus are mainly responsible for diarrhea in children below 5 years of age. Molecular detection of these viruses is crucially important for understanding the disease and developing an effective cure. This study aimed to determine the prevalence of these common viruses in children under 5 years of age presenting with diarrhea at the Lala Lajpat Rai Memorial Medical College (LLRM) centre (Meerut), North India. Methods: A total of 312 fecal samples were collected over 3 years from children under 5 years of age who presented with acute diarrhea at the LLRM centre: 118 in 2014, 128 in 2015 and 66 in 2016. All samples were tested by EIA/RT-PCR for rotavirus, adenovirus and astrovirus. Results: Among the 312 samples from children with acute diarrhea, rotavirus A was the most frequently identified virus (57 cases; 18.2%), followed by astrovirus in 28 cases (8.9%) and adenovirus in 21 cases (6.7%). Mixed infections were found in 14 cases, all of which presented with acute diarrhea (14/312; 4.48%). Conclusions: These viruses are a major cause of diarrhea in children under 5 years of age in North India. Rotavirus A is the most common etiological agent, followed by astrovirus. This surveillance is important for vaccine development for the population. Year-to-year variation in virus detection reflects differences in the season of sampling, sampling method, hygiene conditions, socioeconomic level of the population, enrolment criteria, and virus detection methods. Astrovirus was detected more frequently than rotavirus in 2015, but over the full three-year study rotavirus A was mainly responsible for severe diarrhea in children under 5 years of age in North India. This emphasizes the need for cost-effective diagnostic assays for rotaviruses, which would help to determine the disease burden.
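A quick check of the prevalence arithmetic quoted above: positives per virus divided by the 312 stool samples tested over 2014-2016 (the small differences from the quoted figures are rounding).

```python
# Recompute prevalence percentages from the counts reported in the abstract.
positives = {"rotavirus A": 57, "astrovirus": 28, "adenovirus": 21, "mixed infection": 14}
total_samples = 312

for virus, count in positives.items():
    print(f"{virus}: {count}/{total_samples} = {100 * count / total_samples:.1f}%")
```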

Keywords: adenovirus, astrovirus, hospitalized children, rotavirus

Procedia PDF Downloads 124
1681 Immiscible Polymer Blends with Controlled Nanoparticle Location for Excellent Microwave Absorption: A Compartmentalized Approach

Authors: Sourav Biswas, Goutam Prasanna Kar, Suryasarathi Bose

Abstract:

In order to obtain better materials, precise control of nanoparticle location is indispensable. It is shown here that an ordered arrangement of nanoparticles possessing different characteristics (electrical/magnetic dipoles) in the blend structure can result in excellent microwave absorption. This is manifested by a high reflection loss of ca. -67 dB for the best blend structure designed here. To attenuate electromagnetic radiation, the key parameters, i.e., high electrical conductivity and large dielectric/magnetic loss, are targeted here using a conducting inclusion [multiwall carbon nanotubes, MWNTs]; a ferroelectric nanostructured material with associated relaxations in the GHz frequency range [barium titanate, BT]; and lossy ferromagnetic nanoparticles [nickel ferrite, NF]. In this study, bi-continuous structures were designed using 50/50 (by wt) blends of polycarbonate (PC) and polyvinylidene fluoride (PVDF). The MWNTs were modified using an electron acceptor molecule, a derivative of perylenediimide, which facilitates π-π stacking with the nanotubes and stimulates efficient charge transport in the blends. The nanoscopic materials have a specific affinity towards the PVDF phase; hence, by introducing surface-active groups, an ordered arrangement can be tailored. To accomplish this, both BT and NF were first hydroxylated and amine-terminal groups were then introduced on their surfaces. The latter facilitated a nucleophilic substitution reaction with PC and resulted in their precise location. In this study, we show for the first time that superior EM attenuation can be achieved by a compartmentalized approach. For instance, when the nanoparticles were localized exclusively in the PVDF phase or in both phases, the minimum reflection loss was ca. -18 dB (for the MWNT/BT mixture) and -29 dB (for the MWNT/NF mixture), and the shielding was primarily through reflection. Interestingly, by adopting the compartmentalized approach, wherein the lossy materials were in the PC phase and the conducting inclusion (MWNT) in PVDF, an outstanding reflection loss of ca. -57 dB (for the BT and MWNT combination) and -67 dB (for the NF and MWNT combination) was noted, and the shielding was primarily through absorption. Thus, the approach demonstrates that nanoscopic structuring in the blends can be achieved under macroscopic processing conditions, and this strategy can be further explored to design microwave absorbers.
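Reflection-loss values such as those quoted above are commonly computed with the standard metal-backed single-layer transmission-line model; the sketch below illustrates that calculation with hypothetical complex permittivity and permeability, not the measured PC/PVDF blend parameters or the authors' exact procedure.

```python
# Standard transmission-line estimate of reflection loss for a metal-backed layer.
import numpy as np

def reflection_loss_db(eps_r, mu_r, thickness_m, freq_hz):
    c = 3e8                                    # speed of light, m/s
    z0 = 1.0                                   # normalised free-space impedance
    arg = 1j * (2 * np.pi * freq_hz * thickness_m / c) * np.sqrt(mu_r * eps_r)
    z_in = z0 * np.sqrt(mu_r / eps_r) * np.tanh(arg)   # input impedance of the layer
    return 20 * np.log10(abs((z_in - z0) / (z_in + z0)))

# Example: hypothetical absorber, 3 mm thick, evaluated at 12 GHz
print(reflection_loss_db(eps_r=8 - 2j, mu_r=1.2 - 0.4j, thickness_m=3e-3, freq_hz=12e9))
```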

Keywords: barium titanate, EMI shielding, MWNTs, nickel ferrite

Procedia PDF Downloads 432
1680 An Evaluation and Guidance for mHealth Apps

Authors: Tareq Aljaber

Abstract:

The number of mobile health apps is growing rapidly, having nearly doubled between 2015 and 2016. However, there is a lack of an effective evaluation framework to verify the usability and reliability of mobile health education applications, which would save time and effort for the numerous user groups. This abstract describes a framework for evaluating mobile applications, specifically mobile health education applications, along with a guided selection tool to assist different users in choosing the most suitable mobile health education apps. The framework is intended to meet the requirements and needs of the different stakeholder groups and to enhance the development of mobile health education applications with software engineering approaches, by producing new and more effective techniques to evaluate such software. This abstract highlights the significance and consequences of mobile health education apps before focusing on the need for an effective evaluation framework for them. The framework is explained together with its specific evaluation metrics: an efficient hybrid of selected heuristic evaluation (HE) and usability evaluation (UE) metrics that enables the determination of the usefulness and usability of health education mobile apps. Moreover, qualitative and quantitative outcomes of the framework were obtained using the Epocrates mobile app in addition to some other mobile apps. The proposed framework, An Evaluation Framework for Mobile Health Education Apps, consists of a hybrid of 5 metrics selected from a larger set of usability evaluation and heuristic evaluation metrics, informed by 15 unstructured interviews with software developers (SD), health professionals (HP) and patients (P). These five metrics correspond to explicit facets of usability recognised through a requirements analysis of typical stakeholders of mobile health apps. The five selected metrics were distributed across 24 specific questionnaire questions, which are available on request from the first author. This questionnaire was sent to 81 participants across three sets of stakeholders, software developers (SD), health professionals (HP) and patients/general users (P/GU), for the purpose of ranking three sets of mobile health education applications. Finally, the questionnaire data helped us achieve our aims: profiling the different stakeholders, profiling the different mobile health education application packages, ranking the different mobile health education applications, and guiding the construction of the selection guidance tool that complements the Evaluation Framework for Mobile Health Education Apps.
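A minimal sketch of the kind of aggregation the ranking step implies: each app's questionnaire scores are averaged per stakeholder group across the five metrics and then combined into an overall rank. The app names, score scale and data are hypothetical, and the actual framework may weight groups or metrics differently.

```python
# Hypothetical aggregation of stakeholder questionnaire scores into an app ranking.
from statistics import mean

scores = {  # app -> stakeholder group -> scores on the five metrics (1-5 scale, assumed)
    "AppA": {"SD": [4, 3, 4, 5, 4], "HP": [3, 4, 4, 4, 3], "P/GU": [4, 4, 5, 4, 4]},
    "AppB": {"SD": [3, 3, 3, 4, 3], "HP": [4, 3, 3, 3, 4], "P/GU": [3, 3, 4, 3, 3]},
}

overall = {app: mean(mean(group) for group in groups.values())
           for app, groups in scores.items()}

for app, score in sorted(overall.items(), key=lambda kv: kv[1], reverse=True):
    print(app, round(score, 2))   # apps ranked by overall mean score
```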

Keywords: evaluation framework, heuristic evaluation, usability evaluation, metrics

Procedia PDF Downloads 389
1679 Performance Evaluation of Production Schedules Based on Process Mining

Authors: Kwan Hee Han

Abstract:

The external environment of enterprises is changing rapidly, driven mainly by global competition, cost-reduction pressures, and new technology. In this situation, the production scheduling function plays a critical role in meeting customer requirements and attaining operational efficiency. It deals with short-term decision making in the production process of the whole supply chain. The major task of production scheduling is to seek a balance between customer orders and limited resources. In manufacturing companies, this task is difficult because it must efficiently utilize resource capacity while carefully considering many interacting constraints. At present, many computerized software solutions are used in enterprises to generate realistic production schedules and overcome the complexity of schedule generation. However, most production scheduling systems do not provide sufficient information about the validity of the generated schedule beyond limited statistics. Process mining has only recently emerged as a sub-discipline of both data mining and business process management. Process mining techniques enable the useful analysis of a wide variety of processes, including process discovery, conformance checking, and bottleneck analysis. In this study, the performance of a generated production schedule is evaluated by mining the event log data of the production scheduling software system using process mining techniques, since every software system generates event logs for further use such as security investigation, auditing and error debugging. An application of the process mining approach is proposed for validating the goodness of production schedules generated by scheduling software systems. Using process mining techniques, major evaluation criteria such as workstation utilization, the existence of bottleneck workstations, critical process route patterns, and the workload balance of each machine over time are measured, and finally the goodness of the production schedule is evaluated. By using the proposed process mining approach to evaluate the performance of generated production schedules, the quality of production schedules in manufacturing enterprises can be improved.
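A minimal sketch of one of the evaluation criteria named above, workstation utilization computed from scheduling-system event logs; the log format (workstation, start, end) and shift length are assumptions for illustration, not the study's actual log schema.

```python
# Compute per-workstation utilization from a simple event log.
from collections import defaultdict
from datetime import datetime, timedelta

event_log = [  # (workstation, start, end) - illustrative records
    ("WS1", "2024-01-08 08:00", "2024-01-08 10:30"),
    ("WS1", "2024-01-08 11:00", "2024-01-08 12:00"),
    ("WS2", "2024-01-08 08:00", "2024-01-08 09:00"),
]
shift_hours = 8.0
fmt = "%Y-%m-%d %H:%M"

busy = defaultdict(timedelta)
for station, start, end in event_log:
    busy[station] += datetime.strptime(end, fmt) - datetime.strptime(start, fmt)

for station, duration in busy.items():
    utilisation = duration.total_seconds() / 3600 / shift_hours
    print(f"{station}: {utilisation:.0%} utilised")  # large gaps between stations hint at imbalance
```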

Keywords: data mining, event log, process mining, production scheduling

Procedia PDF Downloads 266
1678 Phenotypic Characterization of Desi Naked Neck Chicken and Its Association with Insulin-Like Growth Factor-I (IGF-I) Gene Polymorphism in Pakistan

Authors: Akbar Nawaz Khan, Abdul Ghaffar, Muhammad Naeem Riaz

Abstract:

The study was conducted to investigate the phenotypic features, morphometry and production potential of the indigenous naked neck (NN) chicken of Pakistan under intensive management conditions. A total of 35 NN chicks were randomly selected, and the experiment was performed at the Poultry and Wildlife Research Section, NARC, Islamabad, for a period of 22 weeks. The predominant plumage colours were black and golden, while the skin colour was white. The average shank length, leg length, thigh length, keel length, chest breadth, head width, wing space, wing length, body length, body girth, body height and pubic bone width in adult males and females were 69.19 ± 3.34 mm, 117.93 ± 4.42 mm, 117.93 ± 4.42 mm, 90.87 ± 6.53 mm, 95.03 ± 4.56 mm, 49.77 ± 2.53 mm, 30.63 ± 1.50 cm, 27.24 ± 2.71 cm, 18.88 ± 0.65 cm, 17.77 ± 1.01 cm, 25.96 ± 0.56 cm, 47.81 ± 1.41 cm and 35.69 ± 4.09 mm, respectively. The average age and live body weight of NN chicken at sexual maturity were recorded as 165.85 days and 1269.38 g, while hen-day egg production was 45%. The study also investigated polymorphism in the IGF-I gene of the indigenous naked neck chicken through PCR-based restriction fragment length polymorphism. Based on restriction analysis using the Hinf I restriction enzyme, three genotypes were detected, designated AA, AC and CC. Restriction analysis of the PCR-amplified product showed the presence of DNA fragments of 622, 378, 244 and 191 bp across these genotypes. PCR-RFLP analysis is an easy, cost-effective method that permits straightforward characterization of the IGF-I gene. This indicates that the investigated IGF-I gene can serve as a good molecular marker for marker-assisted selection (MAS) of growth-related traits in chicken.
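An illustrative genotype-calling sketch for a PCR-RFLP assay like the one described: observed Hinf I fragment sizes are matched against expected band patterns. The mapping of band patterns to AA/AC/CC below is an assumption for demonstration only; the abstract does not state which fragments belong to which genotype.

```python
# Hypothetical band-pattern to genotype mapping (NOT the study's reported assignment).
EXPECTED_PATTERNS = {
    frozenset({622}): "AA",            # assumed: no Hinf I site, uncut amplicon
    frozenset({378, 244}): "CC",       # assumed: amplicon fully digested
    frozenset({622, 378, 244}): "AC",  # assumed: heterozygote shows both patterns
}

def call_genotype(observed_fragments_bp, tolerance_bp=10):
    """Return the genotype whose expected bands all match the gel within tolerance."""
    for pattern, genotype in EXPECTED_PATTERNS.items():
        if len(pattern) == len(observed_fragments_bp) and all(
            any(abs(band - obs) <= tolerance_bp for obs in observed_fragments_bp)
            for band in pattern
        ):
            return genotype
    return "undetermined"

print(call_genotype([620, 380, 245]))  # -> "AC" under the assumed patterns
```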

Keywords: Desi chicken, naked neck, morphology, morphometry, production potential, egg traits, egg geometry, IGF-I, growth, PCR-RFLP, chicken

Procedia PDF Downloads 372