Search results for: expensive computations
81 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market, envisioning more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted to grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing and pattern of residuals. Good regression models were obtained, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to a good fat segmentation, making the visual approach for the quantification of the different fat fractions in dry-cured ham slices simple, accurate and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious and time-consuming chemical determinations. As future perspectives, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method of dry-cured hams based on fat distribution. Therefore, the system will be able not only to predict the actual fat content but will also reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
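As an illustration of the pipeline this abstract describes — Canny edge enhancement plus histogram-based segmentation — the following Python sketch estimates a fat fraction from a scanned slice image. The file name and threshold values are assumptions for illustration, not the authors' parameters:

```python
import cv2
import numpy as np

# Load a scanned ham-slice image and convert to grey-scale,
# mirroring the flatbed-scanner acquisition described above.
img = cv2.imread("ham_slice.png")          # hypothetical file name
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Noise reduction followed by Canny edge detection (gradient computation,
# non-maximum suppression, hysteresis thresholding).
blurred = cv2.GaussianBlur(grey, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)        # thresholds are illustrative

# Intensity-histogram segmentation: fat appears brighter than lean tissue.
_, fat_mask = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, slice_mask = cv2.threshold(grey, 10, 255, cv2.THRESH_BINARY)

# Fat fraction as a percentage of the total slice area.
fat_pct = 100.0 * np.count_nonzero(fat_mask) / np.count_nonzero(slice_mask)
print(f"Total fat fraction: {fat_pct:.1f}% of slice area")
```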
Procedia PDF Downloads 177
80 The Lived Experiences and Coping Strategies of Women with Attention Deficit and Hyperactivity Disorder (ADHD)
Authors: Oli Sophie Meredith, Jacquelyn Osborne, Sarah Verdon, Jane Frawley
Abstract:
PROJECT OVERVIEW AND BACKGROUND: Over one million Australians are affected by ADHD, at an economic and social cost of over $20 billion per annum. Despite their health outcomes being significantly worse compared with men, women have historically been overlooked in ADHD diagnosis and treatment. While research suggests physical activity and other non-prescription options can help with ADHD symptoms, the frontline response to ADHD remains expensive stimulant medications that can have adverse side effects. By interviewing women with ADHD, this research will examine women’s self-directed approaches to managing symptoms, including alternatives to prescription medications. It will investigate barriers and affordances to potentially helpful approaches and identify any concerning strategies pursued in lieu of diagnosis. SIGNIFICANCE AND INNOVATION: Despite the economic and societal impact of ADHD on women, research investigating how women manage their symptoms is scant. This project is significant because although women’s ADHD symptoms are markedly different to those of men, mainstream treatment has been based on the experiences of men. Further, it is thought that in developing nuanced coping strategies, women may have masked their symptoms. Thus, this project will highlight strategies which women deem effective in ‘thriving’ rather than just ‘hiding’. By investigating the health service use, self-care and physical activity of women with ADHD, this research aligns with priority research areas identified by the November 2023 senate ADHD inquiry report. APPROACH AND METHODS: Semi-structured interviews will be conducted with up to 20 women with ADHD. Interviews will be conducted in person and online to capture experience across rural and metropolitan Australia. Participants will be recruited in partnership with the peak representative body, ADHD Australia. The research will use an intersectional framework, and data will be analysed thematically. This project is led by an interdisciplinary and cross-institutional team of women with ADHD. Reflexive interviewing skills will be employed to help interviewees feel more comfortable disclosing their experiences, especially where they share common ground. ENGAGEMENT, IMPACT AND BENEFIT: This research will benefit women with ADHD by increasing knowledge of strategies and alternative treatments to prescription medications, reducing the social and economic burden of ADHD on Australia and on individuals. It will also benefit women by identifying risks involved with some self-directed approaches taken in lieu of medical advice. The project has an accessible impact plan to directly benefit end-users, which includes the development of a podcast and a PDF resource translating findings. The resources will reach a wide audience through ADHD Australia’s extensive national networks. We will collaborate with Charles Sturt’s Accessibility and Inclusion Division of Safety, Security and Well-being to create a targeted resource for students with ADHD.
Keywords: ADHD, women's health, self-directed strategies, health service use, physical activity, public health
Procedia PDF Downloads 76
79 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame
Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin
Abstract:
The operation of the power grid is becoming more and more complex and difficult due to its rapid development towards high voltage, long distance, and large capacity. For instance, many large-scale wind farms have connected to the power grid, where their fluctuation and randomness are very likely to affect the stability and safety of the grid. Fortunately, many new types of equipment based on power electronics have been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), STATCOM (Static Synchronous Compensator) and so on, which can help to deal with the problem above. Compared with traditional equipment such as generators, new-type controllable devices, represented by the FACTS (Flexible AC Transmission System), have more accurate control ability and respond faster, but they are too expensive to use widely. Therefore, on the basis of a comparison and analysis of the controlling characteristics of traditional control equipment and new-type controllable equipment in both time and space scales, a coordinated optimizing control method within a multi-time-space frame is proposed in this paper to bring both kinds of advantages into play, which can improve both control ability and economic efficiency. Firstly, the coordination of different space sizes of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the power grid. With generators, FSC (Fixed Series Compensation) and TCSC, the coordination method between a two-layer regional power grid and its sub-grid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. By analysis, the interface power flow can be controlled by the generators, and the specific line power flow between the two-layer regions can be adjusted by FSC and TCSC. The smaller the interface power flow adjusted by the generators, the bigger the control margin of the TCSC, but the total consumption of the generators is much higher. Secondly, the coordination of different time sizes is studied to further reduce the total consumption of the generators and increase the control margin of the TCSC, so that the minimum control cost can be acquired. The coordination method between two-layer ultra short-term correction and AGC (Automatic Generation Control) is studied with generators, FSC and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned method within the multi-time-space scale is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. The correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to the decrease of control cost and will provide a reference for the following studies in this field.
Keywords: FACTS, multi-space-time frame, optimal control, TCSC
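The abstract selects a genetic algorithm to minimize control cost. The sketch below shows the bare mechanics of such a search on a toy two-variable (generator, TCSC) set-point problem; the cost model, penalty, and GA settings are invented assumptions, not the authors' formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def control_cost(x):
    # Hypothetical cost: generator output adjustment x[0] is expensive,
    # TCSC compensation x[1] is cheap but has a limited control margin.
    gen, tcsc = x
    return 10.0 * gen**2 + 1.0 * tcsc**2 + 100.0 * max(0.0, abs(tcsc) - 0.3)

# Initial population of candidate (generator, TCSC) set-points.
pop = rng.uniform(-1.0, 1.0, size=(50, 2))
for generation in range(100):
    costs = np.array([control_cost(x) for x in pop])
    parents = pop[np.argsort(costs)[:25]]            # selection of the fittest
    children = parents[rng.integers(0, 25, 25)] + rng.normal(0, 0.05, (25, 2))  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([control_cost(x) for x in pop])]
print("Best (generator, TCSC) set-point:", best)
```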
Procedia PDF Downloads 267
78 The Location of Park and Ride Facilities Using the Fuzzy Inference Model
Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas
Abstract:
Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of the park and ride (P&R) system is an increasingly popular way of limiting vehicular traffic. Determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are formulated generally and descriptively. Research outsourced to specialists is expensive and time-consuming, and most of the focus is on the examination of a few selected places. Practice has shown that choosing the location of these sites in an intuitive way, without a detailed analysis of all the circumstances, often gives negative results: the facilities are then not used as expected. Location methods are also widely taken up as a research topic in the scientific literature, but the mathematical models built often do not treat the problem comprehensively, e.g. assuming that the city is linear, developed along one important communications corridor. The paper presents a new method in which expert knowledge is applied to a fuzzy inference model. With such a system, even a less experienced person, e.g. an urban planner or official, could benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can be verified quickly. The proposed method is intended for testing car park locations in a city. The paper will show selected examples of locations of P&R facilities in cities planning to introduce the P&R system. The analysis of existing facilities will also be shown and confronted with the opinions of the system users, with particular emphasis on unpopular locations. The research is executed using the fuzzy inference model which was built and described in more detail in an earlier paper by the authors. The results of the analyses are compared to documents on P&R facility locations commissioned by the city and to opinions of existing facilities' users expressed on social networking sites. The research on existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method proves to be good, yet does not require the involvement of a large team of experts or large financial contributions for complicated research. The method also provides an opportunity to show alternative locations of P&R facilities. The performed studies confirm the method. It can be applied in urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are not worse than the results of analyses by employed experts. The advantage of this method is its ease of use, which simplifies the professional expert analysis. The ability to analyze a large number of alternative locations gives a broader view of the problem. It is valuable that the arduous analysis of a team of people can be replaced by the model's calculation. According to the authors, the proposed method is also suitable for implementation on a GIS platform.
Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location
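As a minimal illustration of how expert knowledge can be encoded as fuzzy rules, the following sketch scores a candidate P&R site from two inputs. The membership functions, rules, and variable choices are invented for illustration and are not the authors' model:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pr_location_score(distance_to_centre_km, transit_frequency_per_h):
    # Rule 1: IF distance is moderate AND transit is frequent THEN score is high.
    mu_moderate = tri(distance_to_centre_km, 3, 7, 12)
    mu_frequent = tri(transit_frequency_per_h, 4, 10, 16)
    high = min(mu_moderate, mu_frequent)
    # Rule 2: IF distance is very short THEN score is low
    # (drivers are already in the centre, so parking there defeats the purpose).
    low = tri(distance_to_centre_km, 0, 0, 3)
    # Weighted-average defuzzification over two output singletons (low=0.2, high=0.9).
    if high + low == 0.0:
        return 0.0
    return (0.9 * high + 0.2 * low) / (high + low)

print(pr_location_score(6.0, 12.0))   # a promising P&R candidate site -> 0.9
```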
Procedia PDF Downloads 325
77 Statistical Analysis to Compare between Smart City and Traditional Housing
Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh
Abstract:
Smart cities are playing important roles in real life. Integration and automation between different features of modern cities and information technologies improve smart city efficiency, energy management, human and equipment resource management, life quality and utilization of resources for the customers. One of the difficulties in this path is the use of, interfacing with, and linking between software, hardware, and other IT technologies to develop and optimize processes in various business fields such as construction, supply chain management and transportation, in parallel with cost-effectiveness and resource-reduction impacts. Also, smart cities are intended to demonstrate a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological matters. Energy management is one of the most important matters within smart houses in smart cities and communities, because of the sensitivity of energy systems, the reduction of energy wastage and the maximization of utilizing the required energy. In particular, the consumption of energy in smart houses is important and considerable in the economic balance and energy management of a smart city, as it causes a significant increase in energy saving and reduction in energy wastage. This research paper develops the features and concept of a smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study to improve the overall efficiency of the smart city, firstly through increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, smart meters and other major elements, by interfacing between software and hardware devices as well as IT technologies, and secondly by enhancing energy management through energy saving within the smart house via efficient variables. The main objective of the smart city and smart houses is to reproduce energy and increase its efficiency through the selected variables, with a comfortable and harmless atmosphere for the customers, in combination with control over the energy consumption in the smart house using developed IT technologies. Initially, a comparison between traditional housing and smart city samples is conducted to indicate the more efficient system. Moreover, the main variables involved in measuring the overall efficiency of the system are analyzed through various processes to identify and prioritize the variables in accordance with their influence over the model. The result analysis of this model can be used for comparison and benchmarking against the traditional lifestyle to demonstrate the privileges of smart cities. Furthermore, due to the expensive and expected shortage of natural resources in the near future, insufficient research in the region, and available potential due to climate and governmental vision, the results and analysis of this study can be used as a key indicator to select the most effective variables or devices during the construction phase and design.
Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving
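A minimal sketch of the kind of two-sample comparison the study describes, using Welch's t-test from SciPy; the per-dwelling energy figures below are synthetic placeholders, not study data:

```python
import numpy as np
from scipy import stats

# Hypothetical monthly energy use (kWh) for the two samples.
smart = np.array([210, 195, 220, 205, 188, 199, 215, 191, 202, 208])
traditional = np.array([260, 275, 251, 268, 280, 255, 270, 262, 258, 266])

# Welch's t-test: does the smart-house sample use significantly less energy?
t_stat, p_value = stats.ttest_ind(smart, traditional, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"mean saving: {traditional.mean() - smart.mean():.1f} kWh/month")
```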
Procedia PDF Downloads 113
76 The Inverse Problem in the Process of Heat and Moisture Transfer in Multilayer Walling
Authors: Bolatbek Rysbaiuly, Nazerke Rysbayeva, Aigerim Rysbayeva
Abstract:
Relevance: Energy saving has been elevated to public policy in almost all developed countries. One of the areas for energy efficiency is improving and tightening design standards. In line with state standards, high demands are made on the thermal protection of buildings. The constructive arrangement of layers should ensure normal operation, in which the humidity of the construction materials does not exceed a certain level. Elevated moisture levels in the walls can be attributed to a defective condition, as moisture significantly reduces the physical, mechanical and thermal properties of materials. The absence, at the design stage, of modeling of the processes occurring in the construction and of prediction of the behavior of structures during their work in the real world leads to increased heat loss and premature aging of structures. Method: To solve this problem, the method of mathematical modeling of heat and mass transfer in materials is widely used. In the mathematical modeling of heat and mass transfer, the interconnected equations of the layers are taken into account [1]. In winter, the thermal and hydraulic conductivity characteristics of the materials are nonlinear and depend on the temperature and moisture in the material. In this case, the experimental method of determining the coefficients of a freezing or thawing material becomes much more difficult. Therefore, in this paper we propose an approximate method for calculating the thermal conductivity and moisture permeability characteristics of freezing or thawing material. Questions: The developed methods for solving the inverse problem of mathematical modeling allow us to answer questions that are closely related to the rational design of fences: where is the zone of condensation in the body of the multilayer fencing; how and where to rationally apply insulation; what constructive activities are necessary to provide for the removal of moisture from the structure; what should the temperature and humidity conditions be for the normal operation of the premises' enclosing structure; and what is the longevity of the structure in terms of the frost resistance of its component materials. Tasks: The proposed mathematical model solves the following problems: to assess the thermo-physical condition of the designed structures at different operating conditions and select appropriate material layers; to calculate the temperature field in structurally complex multilayer structures; when measuring temperature and moisture at characteristic points, to determine the thermal characteristics of the materials constituting the surveyed construction; to significantly reduce laboratory testing time, eliminating climatic-chamber experiments and expensive instrumentation; and to simulate real-life situations that arise in multilayer enclosing structures associated with freezing, thawing, drying and cooling of any layer of the building material.
Keywords: energy saving, inverse problem, heat transfer, multilayer walling
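The inverse problem rests on a forward model of heat transfer through the layered wall, which the identification procedure evaluates repeatedly. The following sketch is an explicit finite-difference forward model for 1-D transient conduction through a two-layer wall; all material properties, geometry, and boundary temperatures are illustrative assumptions:

```python
import numpy as np

# Two-layer wall: 0.12 m brick + 0.08 m insulation, 100 nodes.
L1, L2 = 0.12, 0.08                         # layer thicknesses (m)
pos = np.linspace(0, L1 + L2, 101)[:-1]     # node positions
k = np.where(pos < L1, 0.7, 0.04)           # conductivity, W/(m·K)
rho_c = np.where(pos < L1, 1.6e6, 0.05e6)   # volumetric heat capacity, J/(m³·K)

dx = (L1 + L2) / 100
dt = 0.2 * dx**2 * rho_c.min() / k.max()    # conservative, stability-limited step
T = np.full(100, 15.0)                      # initial temperature (°C)

for _ in range(200000):                     # ~3 h of simulated winter exposure
    q = -k[:-1] * np.diff(T) / dx           # heat flux between adjacent nodes
    T[1:-1] += dt / (rho_c[1:-1] * dx) * (q[:-1] - q[1:])
    T[0], T[-1] = 20.0, -10.0               # indoor / outdoor boundary temps

print("Temperature at the layer interface: %.1f °C" % T[60])
```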
Procedia PDF Downloads 399
75 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management for the war field, situational awareness, effective planning, monitoring, and more. For example, a 3D visualization of war field data contributes to intelligence analysis, evaluation of post-mission outcomes, and creation of predictive models to enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth values based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images. It introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEM) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, together with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks. One network is trained with radar and optical bands, while the other is trained with DEM features, to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on non-annotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3.0, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method provides fast and accurate decision-making with GIS for localization of troops, position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and distribute resources proficiently.
Keywords: depth, deep learning, geovisualisation, satellite images
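A minimal PyTorch sketch of the two-branch encoder-decoder fusion idea described above; the layer sizes, channel counts, and band configuration are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class FusionDepthNet(nn.Module):
    """Two encoders (optical/radar bands, DEM features) fused at the
    bottleneck, then decoded to a dense depth map — an illustrative
    stand-in for the imagery-DEM fusion strategy described above."""
    def __init__(self):
        super().__init__()
        self.enc_img = nn.Sequential(nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc_dem = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(64, 16, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, img, dem):
        fused = torch.cat([self.enc_img(img), self.enc_dem(dem)], dim=1)
        return self.dec(fused)              # dense depth map

net = FusionDepthNet()
depth = net(torch.randn(1, 4, 128, 128), torch.randn(1, 1, 128, 128))
print(depth.shape)                          # torch.Size([1, 1, 128, 128])
```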
Procedia PDF Downloads 13
74 The Influence of Age and Education on Patients' Attitudes Towards Contraceptives in Rural California
Authors: Shivani Thakur, Jasmin Dominguez Cervantes, Ahmed Zabiba, Fatima Zabiba, Sandhini Agarwal, Kamalpreet Kaur, Hussein Maatouk, Shae Chand, Omar Madriz, Tiffany Huang, Saloni Bansal
Abstract:
Contraceptives are an effective public health achievement, allowing for family planning and reducing the risk of sexually transmitted diseases (STDs). California’s rural Central Valley has high rates of teenage pregnancy and STDs. Factors affecting contraceptive usage here may include religious concerns, financial issues, and regional variations in the accessibility and availability of contraceptives. The increasing population and diversity of the Central Valley make the understanding of the determinants of unintended pregnancy and STDs increasingly nuanced. Patients in California’s Central Valley were surveyed at 6 surgical clinics to assess attitudes toward contraceptives. The questionnaire consisted of demographics and 14 Likert-scale statements investigating patients’ feelings regarding contraceptives. Parametric and non-parametric analysis was performed on the Likert statements. A correlation matrix for the Likert-scale statements was used to evaluate the strength of the relationship between each question. 76 patients aged 18-75 years completed the questionnaire. 90% of the participants were female, 76% Hispanic, 36% married, 44% with an income range between 30-60K, and 83% were between childbearing ages. 60% of participants stated they are currently using or had used some type of contraceptive. 25% of participants had at least one unplanned pregnancy. The most common types of contraceptives used were oral contraceptives (28%) and condoms (38%). The top reasons for patients’ contraceptive usage were: prevention of pregnancy (72%), safe sex/prevention of STDs (32%), and regulation of the menstrual cycle (19%). Further analysis of Likert responses revealed that contraception usage increased due to approval of contraceptives (x̄ = 3.98, σ = 1.02), partner approval of contraceptives (x̄ = 3.875, σ = 1.16), and reduced anxiety about pregnancy (x̄ = 3.875, σ = 1.23). Younger females (18-34 years old) agreed more with the statement that the cost of contraceptive supplies is too expensive than older females (35-75 years old) (x̄ = 3.2, σ = 1.4 vs x̄ = 2.8, σ = 1.3, p < 0.05). Younger females (44%) were also more likely to use short-acting contraceptive methods (oral and male condoms) compared to older females (64%), who use long-acting methods (implants/intrauterine devices). 51% of Hispanic females were using some type of contraceptive. Of those Hispanic females who do not use contraceptives, 33% stated having no children, and all plan to have at least one child in the future. 35% of participants had a bachelor's degree. Those with bachelor’s degrees were more likely to use contraceptives, 58% vs 51%, p < 0.05, and less likely to have an unplanned pregnancy, 50% vs. 12%, p < 0.01. There is increasing use and awareness among patients in rural settings concerning contraceptives. Our findings show that younger women and women with higher educational attainment tend to have more positive attitudes towards the use of contraceptives. This work gives physicians an understanding of patients’ concerns about contraceptive methods and offers insight into culturally competent intervention programs that respect individual values.
Keywords: contraceptives, public health, rural California, women of child-bearing age
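For ordinal Likert responses such as these, a non-parametric two-group comparison can be sketched as follows; the response values are synthetic placeholders, not survey data:

```python
import numpy as np
from scipy import stats

# Illustrative Likert responses (1-5) to the cost statement, split by age group.
younger = np.array([4, 3, 5, 4, 2, 4, 3, 5, 3, 4])   # 18-34 years
older   = np.array([3, 2, 3, 2, 4, 2, 3, 3, 2, 3])   # 35-75 years

# Mann-Whitney U test: appropriate for ordinal Likert data, no normality assumed.
u_stat, p_value = stats.mannwhitneyu(younger, older, alternative="greater")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
print(f"means: younger {younger.mean():.2f}, older {older.mean():.2f}")
```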
Procedia PDF Downloads 60
73 Modeling and Design of a Solar Thermal Open Volumetric Air Receiver
Authors: Piyush Sharma, Laltu Chandra, P. S. Ghoshdastidar, Rajiv Shekhar
Abstract:
Metals processing operations such as melting and heat treatment of metals are energy-intensive, requiring temperatures greater than 500 °C. The desired temperature in these industrial furnaces is attained by circulating electrically-heated air. In most of these furnaces, electricity produced from captive coal-based thermal power plants is used. Solar thermal energy could be a viable heat source in these furnaces. A retrofitted solar convective furnace (SCF) concept, which uses solar thermal generated hot air, has been proposed. Critical to the success of an SCF is the design of an open volumetric air receiver (OVAR), which can heat air in excess of 800 °C. The OVAR is placed on top of a tower and receives concentrated solar radiation from a heliostat field. The absorbers, the mixer assembly, and the return air flow chamber (RAFC) are the major components of an OVAR. The absorber is a porous structure that transfers heat from concentrated solar radiation to ambient air, referred to as primary air. The mixer ensures uniform air temperature at the receiver exit. Flow of the relatively cooler return air in the RAFC ensures that the absorbers do not fail by overheating. In an earlier publication, the detailed design basis, fabrication, and characterization of a 2 kWth open volumetric air receiver (OVAR) based laboratory solar air tower simulator was presented. Development of an experimentally-validated, CFD-based mathematical model, which can ultimately be used for the design and scale-up of an OVAR, has been the major objective of this investigation. In contrast to the published literature, where flow and heat transfer have been modeled primarily in a single absorber module, the present study has modeled the entire receiver assembly, including the RAFC. Flow and heat transfer calculations have been carried out in ANSYS using the LTNE model. The complex return air flow pattern in the RAFC requires complicated meshes and is computationally and time intensive. Hence a simple, realistic 1-D mathematical model, which circumvents the need for carrying out detailed flow and heat transfer calculations, has also been proposed. Several important results have emerged from this investigation. Circumferential electrical heating of absorbers can mimic frontal heating by concentrated solar radiation reasonably well in testing and characterizing the performance of an OVAR. Circumferential heating, therefore, obviates the need for expensive high solar concentration simulators. Predictions suggest that the ratio of power on aperture (POA) and mass flow rate of air (MFR) is a normalizing parameter for characterizing the thermal performance of an OVAR. Increasing POA/MFR increases the maximum temperature of air, but decreases the thermal efficiency of an OVAR. Predictions of the 1-D mathematical model are within 5% of the ANSYS predictions, and computation time is reduced from ~5 hours to a few seconds.
Keywords: absorbers, mixer assembly, open volumetric air receiver, return air flow chamber, solar thermal energy
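The role of POA/MFR as a normalizing parameter can be illustrated with a steady-state energy balance: outlet air temperature rises with POA/MFR while losses cap the efficiency. The receiver efficiency and flow rates below are assumptions for illustration, not results of the CFD or 1-D models:

```python
# Illustrative steady-state energy balance for an open volumetric air receiver.
def ovar_outlet_temperature(poa_w, mfr_kg_s, t_in_c=25.0, efficiency=0.75):
    """Outlet air temperature from power on aperture (POA) and air mass
    flow rate (MFR); cp of air and the efficiency value are assumptions."""
    cp_air = 1005.0                         # J/(kg*K), near-constant for air
    return t_in_c + efficiency * poa_w / (mfr_kg_s * cp_air)

for mfr in (0.002, 0.004, 0.008):           # kg/s, illustrative flow rates
    t_out = ovar_outlet_temperature(poa_w=2000.0, mfr_kg_s=mfr)
    print(f"POA/MFR = {2000.0 / mfr:9.0f} J/kg -> T_out = {t_out:6.1f} degC")
```

With the assumed 75% efficiency, a 2 kWth input at the lowest flow rate already drives the outlet above 750 °C, consistent with the stated trend that higher POA/MFR raises the maximum air temperature.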
Procedia PDF Downloads 201
72 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis
Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel
Abstract:
Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Then, chosen colonies are grown on media containing antibiotic(s), using micro-diffusion discs (the second culturing time is also 24 h), in order to determine bacterial susceptibility. Other methods, such as genotyping methods, the E-test and automated methods, were also developed for testing antimicrobial susceptibility. Most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. The new modern infrared (IR) spectrometers with high spectral resolution enable measuring unprecedented biochemical information from cells at the molecular level. Moreover, the development of new bioinformatics analyses combined with IR spectroscopy has become a powerful technique, which enables the detection of structural changes associated with resistivity. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics in a time span of a few minutes. The UTI E. coli bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories in Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using the infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for identification of antibiotic susceptibility.
Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility, UTI
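One common multivariate pipeline for classifying spectra — PCA for dimensionality reduction followed by an SVM — can be sketched as below. The spectra here are synthetic stand-ins, and this is not necessarily the authors' statistical method:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for pre-processed FTIR absorbance spectra of E. coli
# isolates (rows) over wavenumbers (columns); 0 = sensitive, 1 = resistant.
rng = np.random.default_rng(0)
X = rng.normal(size=(700, 900))
y = rng.integers(0, 2, size=700)
X[y == 1, 400:420] += 0.5        # a fake resistance-associated band shift

# PCA followed by an RBF-kernel SVM, cross-validated over 5 folds.
model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```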
Procedia PDF Downloads 174
71 Cost Based Analysis of Risk Stratification Tool for Prediction and Management of High Risk Choledocholithiasis Patients
Authors: Shreya Saxena
Abstract:
Background: Choledocholithiasis is a common complication of gallstone disease. Risk scoring systems exist to guide the need for further imaging or endoscopy in managing choledocholithiasis. We completed an audit to review the American Society for Gastrointestinal Endoscopy (ASGE) scoring system for prediction and management of choledocholithiasis against the current practice at a tertiary hospital to assess its utility in resource optimisation. We have now conducted a cost-focused sub-analysis on patients categorized as high-risk for choledocholithiasis according to the guidelines to determine any associated cost benefits. Method: Data collection from our prior audit was used to retrospectively identify thirteen patients considered high-risk for choledocholithiasis. Their ongoing management was mapped against the guidelines. Individual costs for the key investigations were obtained from our hospital financial data. Total costs for the different management pathways identified in clinical practice were calculated and compared against predicted costs associated with the recommendations in the guidelines. We excluded the cost of laparoscopic cholecystectomy and considered a set figure for per-day hospital admission related expenses. Results: Based on our previous audit data, we identified a 77% positive predictive value for the ASGE risk stratification tool to determine patients at high risk of choledocholithiasis. 47% (6/13) had a magnetic resonance cholangiopancreatography (MRCP) prior to endoscopic retrograde cholangiopancreatography (ERCP), whilst 53% (7/13) went straight to ERCP. The average length of stay in the hospital was 7 days, with an additional day and cost of £328.00 (£117 for ERCP) for patients awaiting an MRCP prior to ERCP. Per-day hospital admission was valued at £838.69. When calculating total cost, we assumed all patients had admission bloods and ultrasound done as the gold standard. In doing an MRCP prior to ERCP, there was a 130% increase in cost incurred (£580.04 vs £252.04) per patient. When also considering hospital admission and the average length of stay, it was an additional £1166.69 per patient. We then calculated the exact costs incurred by the department, over a three-month period, for all patients, for key investigations or procedures done in the management of choledocholithiasis. This was compared to an estimated cost derived from the recommended pathways in the ASGE guidelines. Overall, an 81% (£2048.45) saving was associated with following the guidelines compared to clinical practice. Conclusion: MRCP is the most expensive test associated with the diagnosis and management of choledocholithiasis. The ASGE guidelines recommend endoscopy without an MRCP in patients stratified as high-risk for choledocholithiasis. Our audit, which focused on assessing the utility of the ASGE risk scoring system, showed it to be relatively reliable for identifying high-risk patients. Our cost analysis has shown significant cost savings per patient with direct endoscopy rather than an additional MRCP, partly because of the increased average length of stay associated with waiting for an MRCP. The above data support the ASGE guidelines for the management of patients at high risk for choledocholithiasis from a cost perspective. The only caveat is our small data set, which may impact the validity of our average length of hospital stay figures and hence the total cost calculations.
Keywords: cost-analysis, choledocholithiasis, risk stratification tool, general surgery
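The per-patient cost arithmetic quoted above can be reproduced directly from the stated figures:

```python
# Reproducing the per-patient cost figures quoted in the abstract.
direct_ercp = 252.04            # £: admission bloods + ultrasound + ERCP pathway
mrcp_cost = 328.00              # £: additional MRCP-related cost quoted above
per_day_stay = 838.69           # £: hospital admission per day

mrcp_pathway = direct_ercp + mrcp_cost
print(f"MRCP-first pathway: £{mrcp_pathway:.2f} "
      f"({100 * (mrcp_pathway / direct_ercp - 1):.0f}% increase)")   # ~130%

# The MRCP wait adds one inpatient day, so the full per-patient penalty is:
print(f"extra cost incl. added stay: £{mrcp_cost + per_day_stay:.2f}")  # £1166.69
```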
Procedia PDF Downloads 98
70 Soybean Oil Based Phase Change Material for Thermal Energy Storage
Authors: Emre Basturk, Memet Vezir Kahraman
Abstract:
In many developing countries, with the rapid economic improvements, energy shortage and environmental issues have become a serious problem. Therefore, it has become a very critical issue to improve energy usage efficiency and also protect the environment. A thermal energy storage system is an essential approach to match thermal energy demand and supply. Thermal energy can be stored by heating, cooling or melting a material, with the energy then becoming available when the procedure is reversed. Overall, thermal energy storage techniques are sorted into latent heat or sensible heat thermal energy storage technology segments. Among these methods, latent heat storage is the most effective method of collecting thermal energy. Latent heat thermal energy storage depends on the storage material emitting or discharging heat as it undergoes a solid-to-liquid, solid-to-solid or liquid-to-gas phase change or vice versa. Phase change materials (PCMs) are promising materials for latent heat storage applications due to their capacity to accumulate high latent heat storage per unit volume by phase change at an almost constant temperature. PCMs are utilized to absorb, collect and discharge thermal energy during the cycle of melting and freezing, converting from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. Organic PCMs are rather expensive, have average latent heat storage per unit volume and also have low density. Most organic PCMs are combustible in nature and have a wide range of melting points. Organic PCMs can be categorized into two major categories: non-paraffinic and paraffin materials. Paraffin materials have been extensively used due to their high latent heat and suitable thermal characteristics, such as minimal supercooling, varying phase change temperature, low vapor pressure while melting, good chemical and thermal stability, and self-nucleating behavior. Ultraviolet (UV)-curing technology has been generally used because it has many advantages, such as low energy consumption, high speed, high chemical stability, room-temperature operation, low processing costs and environmental friendliness. For many years, PCMs have been used for heating and cooling industrial applications including textiles, refrigerators, construction, transportation packaging for temperature-sensitive products, a few solar energy based systems, and biomedical and electronic materials. In this study, UV-curable, fatty alcohol containing soybean oil based phase change materials (PCMs) were obtained and characterized. The phase transition behaviors and thermal stability of the prepared UV-cured biobased PCMs were analyzed by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). The heating process phase change enthalpy is measured between 30 and 68 J/g, and the freezing process phase change enthalpy is found between 18 and 70 J/g. The decomposition of UV-cured PCMs started at 260 °C and reached a maximum of 430 °C.
Keywords: fatty alcohol, phase change material, thermal energy storage, UV curing
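Phase change enthalpy from a DSC trace is obtained by integrating the baseline-corrected heat-flow peak over time and dividing by sample mass. The sketch below does this for a synthetic endotherm; all numbers are illustrative, not the study's data:

```python
import numpy as np

# Synthetic baseline-corrected DSC endotherm: a Gaussian heat-flow peak.
time_s = np.linspace(0, 300, 601)                         # s
heat_flow_mw = 8.0 * np.exp(-((time_s - 150) / 30) ** 2)  # mW
sample_mass_mg = 10.0

# mW integrated over s gives mJ; dividing by mg gives mJ/mg = J/g.
enthalpy_j_per_g = np.trapz(heat_flow_mw, time_s) / sample_mass_mg
print(f"phase change enthalpy: {enthalpy_j_per_g:.1f} J/g")
```

The synthetic peak integrates to roughly 42 J/g, which happens to fall inside the 30-68 J/g heating range reported above, a useful order-of-magnitude check.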
Procedia PDF Downloads 384
69 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures
Authors: Francesca Marsili
Abstract:
The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is represented by Bayesian statistics, which uses Bayes' Theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments based on the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and the determination of action and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. The results of the updating depend on the engineer's previous experience; 2. The updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation; furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve those problems, an interesting research path is represented by investigating Artificial Intelligence (AI) techniques that can be useful for the automation of the modeling of variables and for the updating of material parameters without performing destructive tests. Among the others, one that raises particular attention in relation to the object of this study is constituted by Case-Based Reasoning (CBR). In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will then be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem that will be solved is the definition of PDFs for material parameters involved in the reliability assessment of the considered structure. A CBR system represents a good candidate for automating the modelling of variables because: 1. Engineers already draw an estimation of the material properties based on the experience collected during the assessment of similar structures, or based on similar cases collected in the literature or in databases; 2. Material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. The system will provide the user with a reliable probabilistic description of the variables involved in the assessment that will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help in spreading probabilistic reliability assessment of existing buildings in common engineering practice, and in targeting the best interventions and further tests on the structure; CBR represents a technique which may help to achieve this.
Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures
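The Bayesian updating step that each stored case would carry can be sketched with a conjugate normal model (known test variance); the prior values and test results below are illustrative assumptions, not data from any real structure:

```python
import numpy as np

# Conjugate normal update of a material-strength prior with known test variance.
mu0, sigma0 = 30.0, 5.0        # prior mean/std (MPa), e.g. from design documents
sigma_test = 3.0               # assumed std of an individual core test (MPa)
tests = np.array([25.1, 27.4, 26.2, 28.0])   # illustrative core-test results

n = len(tests)
post_var = 1.0 / (1.0 / sigma0**2 + n / sigma_test**2)
post_mean = post_var * (mu0 / sigma0**2 + tests.sum() / sigma_test**2)
print(f"posterior strength: {post_mean:.1f} +/- {np.sqrt(post_var):.1f} MPa")
```

In a CBR setting, the stored case would pair a qualitative material description with such a posterior, so that a similar new structure can reuse it without destructive testing.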
Procedia PDF Downloads 339
68 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor
Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar
Abstract:
Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for it. Custom-made orthoses provide more comfort and can correct issues better than those available over-the-counter. However, they are expensive and require intricate modelling of the limb. Traditional methods of modelling involve creating a plaster of Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using the Microsoft Kinect2 sensor. The Kinect can capture RGB and depth frames simultaneously, at up to 30 fps, with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees. The RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920 px x 1080 px, while the resolution of the depth frame is 512 px x 424 px. As the resolutions of the frames are not equal, RGB pixels are mapped onto the depth pixels to make sure data is not lost even though the resolution is lower. The resulting RGB-D frames are collected, and using the depth coordinates, a three-dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consisted of 8 coloured cubes, connected by rods to form a skeleton-cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing the Kinect. The point clouds are merged by considering one of the cubes as the origin of a reference system. Depending on the relative distance from each cube, the three-dimensional coordinate points from each point cloud are aligned to the reference frame to give a complete point cloud. The RGB data is used to correct for any errors in depth data for the point cloud. A triangular mesh is generated from the point cloud by applying Delaunay triangulation, which generates the rough surface of the limb. This technique forms an approximation of the surface of the limb. The mesh is smoothened to obtain a smooth outer layer, giving an accurate model of the limb. The model of the limb is used as a base for designing the custom orthotic brace or cast. It is transferred to a CAD/CAM design file to design the brace above the surface of the limb. The proposed system would be more cost-effective than current systems that use MRI or CT scans for generating 3D models, and would be quicker than traditional plaster of Paris cast modelling; the overall setup time is also low. Preliminary results indicate that the accuracy of the Kinect2 is satisfactory for performing modelling.
Keywords: 3d scanning, mesh generation, Microsoft kinect, orthotics, registration
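Generating a point cloud from a depth frame reduces to back-projecting each pixel through the pinhole camera model. In the sketch below, the intrinsics are typical published Kinect v2 depth-camera values used here as assumptions, and the depth frame is a random stand-in:

```python
import numpy as np

# Assumed Kinect v2 depth-camera intrinsics (typical published values).
fx, fy, cx, cy = 365.0, 365.0, 256.0, 212.0
depth_m = np.random.uniform(0.5, 2.0, (424, 512))   # stand-in 512x424 depth frame

# Pinhole back-projection: pixel (u, v) with depth z maps to camera coordinates.
v, u = np.mgrid[0:424, 0:512]
x = (u - cx) * depth_m / fx
y = (v - cy) * depth_m / fy
points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)  # point cloud
print(points.shape)   # (217088, 3) — one 3D point per depth pixel
```

Per-view clouds produced this way would then be aligned into the common cube-based reference frame described above before Delaunay meshing.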
Procedia PDF Downloads 191
67 The Artificial Intelligence Driven Social Work
Authors: Avi Shrivastava
Abstract:
Our world continues to grapple with a lot of social issues. Economic growth and scientific advancements have not completely eradicated poverty, homelessness, discrimination and bias, gender inequality, health issues, mental illness, addiction, and other social issues. So, how do we improve the human condition in a world driven by advanced technology? The answer is simple: we will have to leverage technology to address some of the most important social challenges of the day. AI, or artificial intelligence, has emerged as a critical tool in the battle against issues that deprive marginalized and disadvantaged groups of the right to enjoy benefits that a society offers. Social work professionals can transform their lives by harnessing it. The lack of reliable data is one of the reasons why a lot of social work projects fail. Social work professionals continue to rely on expensive and time-consuming primary data collection methods, such as observation, surveys, questionnaires, and interviews, instead of tapping into AI-based technology to generate useful, real-time data and necessary insights. By leveraging AI’s data-mining ability, we can gain a deeper understanding of how to solve complex social problems and change lives of people. We can do the right work for the right people and at the right time. For example, AI can enable social work professionals to focus their humanitarian efforts on some of the world’s poorest regions, where there is extreme poverty. An interdisciplinary team of Stanford scientists, Marshall Burke, Stefano Ermon, David Lobell, Michael Xie, and Neal Jean, used AI to spot global poverty zones – identifying such zones is a key step in the fight against poverty. The scientists combined daytime and nighttime satellite imagery with machine learning algorithms to predict poverty in Nigeria, Uganda, Tanzania, Rwanda, and Malawi. In an article published by Stanford News, “Stanford researchers use dark of night and machine learning,” Ermon explained that they provided the machine-learning system, an application of AI, with the high-resolution satellite images and asked it to predict poverty in the African region. “The system essentially learned how to solve the problem by comparing those two sets of images [daytime and nighttime].” This is one example of how AI can be used by social work professionals to reach regions that need their aid the most. It can also help identify sources of inequality and conflict, which could reduce inequalities, according to Nature’s study, titled The role of artificial intelligence in achieving the Sustainable Development Goals, published in 2020. The report also notes that AI can help achieve 79 percent of the United Nation’s (UN) Sustainable Development Goals (SDG). AI is impacting our everyday lives in multiple amazing ways, yet some people do not know much about it. If someone is not familiar with this technology, they may be reluctant to use it to solve social issues. So, before we talk more about the use of AI to accomplish social work objectives, let’s put the spotlight on how AI and social work can complement each other.
Keywords: social work, artificial intelligence, AI based social work, machine learning, technology
Procedia PDF Downloads 104
66 Seafloor and Sea Surface Modelling in the East Coast Region of North America
Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk
Abstract:
Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information – National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unidentified, as there are still many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. The solution is raster bathymetric models shared by The General Bathymetric Chart of the Oceans. The offered products are a compilation of different sets of data – raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g. seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, Sea Surface Height and marine gravity anomalies can be estimated, and based on the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America – a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms used, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and knowledge of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. Depending on the location of the point, the greater the depth, the lower the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
Keywords: seafloor, sea surface height, bathymetry, satellite altimetry
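Gridding scattered ship-track soundings into a raster model is the core interpolation step; a minimal sketch with SciPy's griddata, on synthetic track data, looks like this:

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic placeholders for scattered ship-track soundings (lon, lat, depth).
rng = np.random.default_rng(1)
lon = rng.uniform(-70.0, -60.0, 500)
lat = rng.uniform(35.0, 45.0, 500)
depth = -3000.0 + 800.0 * np.sin(lon / 2.0) * np.cos(lat / 2.0)

# Interpolate the scattered soundings onto a regular 200x200 grid.
grid_lon, grid_lat = np.meshgrid(np.linspace(-70, -60, 200),
                                 np.linspace(35, 45, 200))
grid_depth = griddata((lon, lat), depth, (grid_lon, grid_lat), method="cubic")
print(np.nanmin(grid_depth), np.nanmax(grid_depth))
```

Comparing such grids produced with different interpolation methods ("nearest", "linear", "cubic") is one way to carry out the evaluation of interpolation algorithms the abstract mentions.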
Procedia PDF Downloads 81
65 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR
Authors: Ionut Vintu, Stefan Laible, Ruth Schulz
Abstract:
Agricultural robotics has been developing steadily over recent years, with the goal of reducing and even eliminating pesticides used in crops and to increase productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots, capable of driving in fields and performing crop-handling tasks, is for robots to robustly detect the rows of plants. Recent work done towards autonomous driving between plant rows offers big robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability when it comes to the height of plants or distance between rows. This paper proposes instead an algorithm that makes use of cheaper sensors and has a higher variability. The main application is in tree nurseries. Here, plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated pose of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and deal with exception handling, like row gaps, which are falsely detected as an end of rows. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as an input. Comparing the field coverage and number of damaged plants, the method that uses a local map around the robot proved to perform the best, with 68% covered rows and 25% damaged plants. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot’s position inside the greater field. Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% and an accuracy of 27% damaged plants. Future work would focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges that the algorithm needs to overcome are fields where the height of the plants is too small for the plants to be detected and fields where it is hard to distinguish between individual plants when they are overlapping. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection
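A single plant row can be extracted from ground-plane LiDAR points with a RANSAC line fit, as sketched below; the thresholds and synthetic data are illustrative, and this is a simplified stand-in for the four detection methods compared in the paper:

```python
import numpy as np

def ransac_row_line(points_xy, n_iter=200, inlier_dist=0.05, rng=None):
    """Fit one plant row as a 2D line through ground-plane points by
    repeatedly sampling point pairs and keeping the best inlier set."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_inliers = None
    for _ in range(n_iter):
        p1, p2 = points_xy[rng.choice(len(points_xy), 2, replace=False)]
        d = p2 - p1
        n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-12)  # unit normal
        dist = np.abs((points_xy - p1) @ n)      # point-to-line distances
        inliers = dist < inlier_dist
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points_xy[best_inliers]

# Synthetic row along y = 0.5x with noise, plus uniform clutter.
xs = np.linspace(0, 5, 80)
row = np.c_[xs, 0.5 * xs] + np.random.normal(0, 0.02, (80, 2))
clutter = np.random.uniform(0, 5, (40, 2))
pts = np.vstack([row, clutter])
row_points = ransac_row_line(pts)
print(f"{len(row_points)} of {len(pts)} points assigned to the row")
```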
Procedia PDF Downloads 140
64 Review of the Nutritional Value of Spirulina as a Potential Replacement of Fishmeal in Aquafeed
Authors: Onada Olawale Ahmed
Abstract:
As the intensification of aquaculture production increases on a global scale, the growing concern of fish farmers around the world is related to the cost of fish production, of which feeding takes a substantial percentage. Fishmeal (FM) is one of the most expensive ingredients, and high dependence on it in aqua-feed production translates to a high cost of feeding stocked fish. However, to reach a sustainable aquaculture, new alternative protein sources, including cheaper proteins of plant or animal origin, need to be introduced for stable aqua-feed production. Spirulina is a cyanobacterium with a good nutrient profile that could be useful in aquaculture. This review therefore emphasizes the nutritional value of Spirulina as a potential replacement of FM in aqua-feed. Spirulina is a planktonic photosynthetic filamentous cyanobacterium that forms massive populations in tropical and subtropical bodies of water with high levels of carbonate and bicarbonate. Spirulina grows naturally in nutrient-rich alkaline lakes with high water salinity (> 30 g/l) and high pH (8.5–11.0). Its artificial production requires luminosity (photoperiod 12/12, 4 luxes), temperature (30 °C), inoculum, a water stirring device, dissolved solids (10–60 g/litre), pH (8.5–10.5), good water quality, and macro- and micronutrient presence (C, N, P, K, S, Mg, Na, Cl, Ca and Fe, Zn, Cu, Ni, Co, Se). Spirulina has also been reported to grow on agro-industrial waste such as sugar mill waste effluent, poultry industry waste, fertilizer factory waste, urban waste and organic matter. The chemical composition of Spirulina indicates that it has high nutritional value due to its content of 55–70% protein, 14–19% soluble carbohydrate, and a high amount of polyunsaturated fatty acids (PUFAs), at 1.5–2.0 percent of the 5–6 percent total lipid. All the essential minerals are available in Spirulina, contributing about 7 percent (average range 2.76–3.00 percent of total weight) under laboratory conditions; β-carotene, B-group vitamins, vitamin E, iron, potassium and chlorophyll are also present. Spirulina protein has a balanced composition of amino acids, with concentrations of methionine, tryptophan and other amino acids almost similar to those of casein, although this depends upon the culture medium used. Positive effects of Spirulina on growth, feed utilization, and stress and disease resistance of cultured fish have been reported in earlier studies. Spirulina was reported to replace up to 40% of fishmeal protein in tilapia (Oreochromis mossambicus) diets, and even higher replacement of fishmeal was possible in common carp (Cyprinus carpio); partial replacement of fishmeal with Spirulina in diets for parrot fish (Oplegnathus fasciatus) and tilapia (Oreochromis niloticus) has also been conducted. Spirulina has considerable potential for development, especially as a small-scale crop for nutritional enhancement and health improvement of fish. It is therefore important that more research be conducted on its production, inclusion levels in aqua-feed, and its potential use in aquaculture.
Keywords: aquaculture, spirulina, fish nutrition, fish feed
Procedia PDF Downloads 523
63 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section
Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert
Abstract:
Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets, since characterization measurements can be expensive and time-consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell's equations. These methods are very accurate but are computationally very intensive and time-consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate, but require fewer computational resources and less time. Asymptotic techniques can therefore be very valuable for the prediction of the bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques for predicting bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models, instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz. Fixed bistatic angles of β = 30.8°, 45° and 90° were used. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate the simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results, together with the measured data, were used as the reference for validation. The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences relating to amplitude, lobing structure and null positions were observed between the asymptotic, full-wave and measured data. PO and GO were more accurate at angles close to the specular scattering directions, and the accuracy seemed to decrease as the bistatic angle increased. At large bistatic angles PO did not perform well, due to the shadow regions not being treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction, but these inaccuracies tended to decrease as the electrical size of the objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even for targets that did not meet the electrically-large criterion. It was evident that the bistatic RCS prediction performance of PO and GO depends on the incident angle, frequency, target shape and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement in the accuracy of these asymptotic techniques.Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics
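For reference, the quantity being predicted is the standard far-field power ratio; the definition and the flat-plate specular peak below are textbook relations rather than results from this paper:

    \sigma = \lim_{R \to \infty} 4\pi R^2 \, \frac{|E_s|^2}{|E_i|^2}, \qquad \sigma_{\mathrm{plate,\,max}} = \frac{4\pi A^2}{\lambda^2}

The A^2/\lambda^2 scaling of the specular peak of a plate of area A illustrates why asymptotic solvers such as PO fare best for electrically large targets near specular directions, consistent with the trends reported above.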
Procedia PDF Downloads 260
62 Development of a Miniature Laboratory Lactic Goat Cheese Model to Study the Expression of Spoilage by Pseudomonas Spp. In Cheeses
Authors: Abirami Baleswaran, Christel Couderc, Loubnah Belahcen, Jean Dayde, Hélène Tormo, Gwénaëlle Jard
Abstract:
Cheeses are often reported to be spoiled by Pseudomonas spp., responsible for defects in appearance, texture, taste, and smell, leading to their non-marketing and even their destruction. Despite preventive actions, problems linked to Pseudomonas spp. are difficult to control, owing to the lack of knowledge and control of these contaminants during cheese manufacturing. Lactic goat cheese producers are not spared by this problem and are looking for solutions to decrease the number of spoiled cheeses. To explore different hypotheses, experiments are needed. However, cheese-making experiments at the pilot scale are expensive and time-consuming. Thus, there is a real need to develop a miniature cheese model system under controlled conditions. In a previous study, several miniature cheese models corresponding to different types of commercial cheeses were developed for different purposes. The models were, for example, used to study the influence of milk, starter cultures, pathogen-inhibiting additives, enzymatic reactions, microflora, and freezing processes on cheese. Nevertheless, no miniature model has been described for lactic goat cheese. The aim of this work was to develop a miniature cheese model system under controlled laboratory conditions which resembles commercial lactic goat cheese, in order to study Pseudomonas spp. spoilage during the manufacturing and ripening process. First, a protocol for the preparation of miniature cheeses (3.5 times smaller than a commercial one) was designed based on the cheese factory manufacturing process. The process was adapted from the "Rocamadour" technology and involves maturation of pasteurized milk, coagulation, removal of whey by centrifugation, moulding, and ripening in a small-scale cellar. Microbiological (total bacterial count, yeasts, moulds) and physicochemical (pH, salt-in-moisture, moisture in the fat-free basis) analyses were performed at four key stages of the process (before salting, after salting, first day of ripening, and end of ripening). Factory and miniature cheese volatilomes were also obtained after full-scan SIFT-MS cheese analysis. Then, Pseudomonas spp. strains isolated from contaminated cheeses were selected on the basis of their origin, their ability to produce pigments, and their enzymatic activities (proteolytic, lecithinase, and lipolytic). Factory and miniature curds were inoculated by spotting selected strains on the cheese surface. The expression of cheese spoilage was evaluated by counting the level of Pseudomonas spp. during ripening and by visual observation, both directly and under a UV lamp. The physicochemical and microbiological compositions of the miniature cheeses made it possible to assess that the miniature process resembles the factory process. As expected, differences in volatilomes were observed, probably because the miniature cheeses are made using pasteurized milk to better control the microbiological conditions, and also because the small format of the cheese probably induces differences during ripening, even if the humidity and temperature in the cellar were quite similar. The spoilage expression of Pseudomonas spp. was observed in both miniature and factory cheeses. This confirms that the proposed model is suitable for the preparation of miniature cheese specimens for the study of spoilage by Pseudomonas spp. in lactic cheeses. This kind of model could be deployed for other applications and other types of cheese.Keywords: cheese, miniature, model, pseudomonas spp, spoilage
Procedia PDF Downloads 134
61 Development of Alternative Fuels Technologies for Transportation
Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej
Abstract:
Currently, almost exclusively hydrocarbon-based fuels are used to power vehicles in automotive transport. Due to the increase in hydrocarbon fuel consumption, quality parameters are being tightened to protect the environment. At the same time, efforts are being undertaken to develop alternative fuels. The reasons for seeking alternatives to petrol and diesel are: to increase vehicle efficiency, to reduce the environmental impact, to reduce greenhouse gas emissions, and to save on the consumption of limited oil resources. Significant progress has been made in the development of alternative fuels such as methanol, ethanol, natural gas (CNG/LNG), LPG, dimethyl ether (DME) and biodiesel. In addition, the biggest vehicle manufacturers are working on fuel cell vehicles and their introduction to the market. Alcohols such as methanol and ethanol make excellent fuels for spark-ignition engines. Their advantages are a high antiknock value, which determines their application as an additive (10%) to unleaded petrol, and the relative purity of the produced exhaust gases. Ethanol is produced by the distillation of plant products whose value as food can make this use irrational. Ethanol production can also be costly for the entire economy of a country, because it requires large, complex distillation plants, large amounts of biomass and, finally, a significant amount of fuel to sustain the process. At the same time, the fermentation of plants releases large quantities of carbon dioxide into the atmosphere. Natural gas cannot be directly converted into liquid fuels, although such arrangements have been proposed in the literature; going through a stage of intermediates is as yet inevitable. The most popular route is conversion to methanol, which can be processed further to dimethyl ether (DME) or olefins (ethylene and propylene) for the petrochemical sector. Methanol production uses natural gas as a raw material but requires expensive and advanced production processes. In relation to pollution emissions, the optimal vehicle fuel is LPG, which is used in many countries as an engine fuel. The production of LPG is inextricably linked with the production and processing of oil and gas, of which it represents a small percentage; its potential as an alternative to traditional fuels is therefore proportionately limited. Biogas may also be an excellent engine fuel; however, it is subject to the same limitations as ethanol, since the same production process and raw materials are used. The most essential fuel in the campaign to protect the environment against pollution is natural gas. Natural gas as a fuel may be either compressed (CNG) or liquefied (LNG). Natural gas can also be used for hydrogen production by steam reforming. Hydrogen can be used as a basic feedstock for the chemical industry, an important raw material in refinery processes, as well as a fuel for vehicle transportation. Natural gas used as CNG represents an excellent compromise, given the availability of a technology that is proven and relatively cheap to use in many areas of the automotive industry. Natural gas can also be seen as an important bridge to other alternative sources of energy that are derived from fuels and harmless to the environment. For these reasons, CNG as a fuel attracts considerable interest worldwide.Keywords: alternative fuels, CNG (Compressed Natural Gas), LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles)
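For reference, the natural-gas-to-methanol-to-DME route described above proceeds through well-known reactions (standard chemistry, not specific to this paper):

    \mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2} \quad \text{(steam reforming)}
    \mathrm{CO + 2\,H_2 \rightarrow CH_3OH} \quad \text{(methanol synthesis)}
    \mathrm{2\,CH_3OH \rightarrow CH_3OCH_3 + H_2O} \quad \text{(dehydration to DME)}

The first reaction is also the hydrogen-production step mentioned near the end of the abstract.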
Procedia PDF Downloads 183
60 Nanoporous Activated Carbons for Fuel Cells and Supercapacitors
Authors: A. Volperts, G. Dobele, A. Zhurinsh, I. Kruusenberg, A. Plavniece, J. Locs
Abstract:
Nowadays, energy consumption constantly increases, and the development of effective and cheap electrochemical power sources, such as fuel cells and electrochemical capacitors, is topical. Due to their high specific power, high charge and discharge rates and long working lifetime, supercapacitor-based energy accumulation systems are more and more extensively used in mobile and stationary devices. Lignocellulosic materials are widely used as precursors and account for around 45% of the total raw materials used for the manufacture of activated carbon, which is the most suitable material for supercapacitors. The first part of our research is devoted to the study of the influence of the parameters of the main stages of wood thermochemical activation on the formation of the porous structure of activated carbons. It was found that the main factors governing the properties of carbon materials are specific surface area, volume and pore size distribution, particle dispersity, ash content and the content of oxygen-containing groups. The influence of activated carbon attributes on the capacitance and working properties of supercapacitors is demonstrated. The correlation between the porous structure indices of the activated carbons and the electrochemical specifications of supercapacitors with electrodes made from these materials has been determined. It is shown that if the synthesized activated carbons are used in supercapacitors, high specific capacitances can be reached: more than 380 F/g in 4.9 M sulfuric acid based electrolyte and more than 170 F/g in 1 M tetraethylammonium tetrafluoroborate in acetonitrile electrolyte. The power specifications and minimum price of H₂-O₂ fuel cells are limited by the expensive platinum-based catalysts. The main direction in the development of non-platinum catalysts for oxygen reduction is the study of cheap porous carbonaceous materials, which can be obtained by the pyrolysis of polymers, including renewable biomass. It is known that nitrogen atoms in carbon materials to a high degree determine the properties of doped activated carbons, such as high electrochemical stability, hardness, electric resistance, etc. The lack of sufficient knowledge on the doping of carbon materials calls for ongoing research into the properties and structure of the modified carbon matrix. In the second part of this study, highly porous activated carbons were synthesized by alkali thermochemical activation from wood, cellulose and cellulose production residues (kraft lignin and sewage sludge). Activated carbon samples were doped with dicyandiamide and melamine for application as fuel cell cathodes. The conditions of nitrogen introduction (solvent, treatment temperature) and its content in the carbonaceous material, as well as porous structure characteristics such as specific surface area and pore size distribution, were studied. It was found that the efficiency of the doping reaction depends on the elemental oxygen content in the activated carbon. Relationships between nitrogen content, porous structure characteristics and electrode electrochemical properties are demonstrated.Keywords: activated carbons, low-temperature fuel cells, nitrogen doping, porous structure, supercapacitors
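The specific capacitances quoted above are conventionally extracted from galvanostatic charge-discharge curves; as a reminder (a standard relation, not one detailed in the abstract), for an electrode of mass m discharged at constant current I over time \Delta t across a voltage window \Delta V:

    C_{sp} = \frac{I \, \Delta t}{m \, \Delta V}

For instance, with purely illustrative numbers, a 5 mg electrode sustaining 10 mA for 190 s over a 1 V window yields 380 F/g, matching the sulfuric-acid figure above.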
Procedia PDF Downloads 121
59 Strength Properties of Ca-Based Alkali Activated Fly Ash System
Authors: Jung-Il Suh, Hong-Gun Park, Jae-Eun Oh
Abstract:
Recently, the use of long-span precast concrete (PC) construction has increased in modular construction such as storage buildings and parking facilities. When applying long-span PC members, their weight should be reduced in consideration of the lifting capacity of the crane and the self-weight of the member, and the use of structural lightweight concrete made with lightweight aggregate (LWA) can be considered. In the production of lightweight concrete, segregation and bleeding can occur due to the difference in specific gravity between cement (3.3) and lightweight aggregate (1.2–1.8), so reducing the weight of the binder is needed to prevent segregation between binder and aggregate. Lightweight precast concrete made with cementitious materials such as fly ash and ground granulated blast-furnace slag (GGBFS), whose specific gravities are lower than that of cement, has also been studied as a substitute for cement. When only fly ash is used as a cementless binder, alkali-activation of the fly ash is the most important chemical process, in which the original fly ash is dissolved by a strong alkaline medium during high-temperature steam curing. Because this curing condition is similar to the environment of precast member production, no additional process is needed. Na-based activators, generally used as strong alkali activators, have practical problems such as high pH toxicity and high manufacturing cost. Instead of Na-based alkali activators, calcium hydroxide [Ca(OH)2] and sodium carbonate [Na2CO3] might be used, because they have a lower pH and are less expensive than Na-based alkali activators. This study explored the Ca(OH)2-Na2CO3-activated fly ash system in its microstructural aspects, strength and permeability, using powder X-ray diffraction (XRD), thermogravimetry (TGA) and mercury intrusion porosimetry (MIP). On the basis of the microstructural analysis, the following conclusions are made. An increase in Ca(OH)2/FA wt.% did not improve the compressive strength. Also, Ca(OH)2/FA wt.% and Na2CO3/FA wt.% had little effect on the specific gravities in the saturated surface dry (SSD) and absolute dry (AD) conditions used to calculate water absorption. The binder is especially appropriate for structural lightweight concrete because the specific gravity of the hardened paste does not differ from that of the lightweight aggregate. The XRD and TGA/DTG results did not present considerable differences in the types and quantities of hydration products depending on the w/b ratio, Ca(OH)2 wt.%, and Na2CO3 wt.%. In the case of a higher molar quantity of Ca(OH)2 relative to Na2CO3, the XRD pattern indicated unreacted Ca(OH)2, while no DTG peak was presented because of its small quantity. Thus, the unreacted Ca(OH)2 is present in too small a quantity to affect the mechanical performance. The MIP results show that the pore volume related to capillary pores depends on the w/b ratio. For the same w/b ratio, the quantities of Ca(OH)2 and Na2CO3 have more influence on the pore size distribution than on the total porosity. While the average pore size decreased as Na2CO3/FA wt.% increased, it increased beyond 20 nm as Ca(OH)2/FA wt.% increased; pore size is inversely related to mechanical properties such as compressive strength and water permeability.Keywords: Ca(OH)2, compressive strength, microstructure, fly ash, Na2CO3, water absorption
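The water absorption mentioned above follows from the two gravimetric states in the usual way (the standard relation, stated here for clarity rather than taken from the paper): with m_{SSD} the saturated-surface-dry mass and m_{AD} the absolute-dry mass,

    W_{abs}\,(\%) = \frac{m_{SSD} - m_{AD}}{m_{AD}} \times 100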
Procedia PDF Downloads 226
58 Construction Port Requirements for Floating Wind Turbines
Authors: Alan Crowle, Philipp Thies
Abstract:
As the floating offshore wind turbine industry continues to develop and grow, the capabilities of established port facilities need to be assessed as to their ability to support the expanding construction and installation requirements. This paper assesses current infrastructure requirements and projected changes to port facilities that may be required to support the floating offshore wind industry. Understanding the infrastructure needs of the floating offshore renewable industry will help to identify the port-related requirements. Floating offshore wind turbines can be installed further out to sea and in deeper waters than traditional fixed offshore wind arrays, meaning that they can take advantage of stronger winds. Separate ports are required for substructure construction, fit-out of the turbines, moorings, subsea cables and maintenance. Large areas are required for the laydown of mooring equipment, inter-array cables, turbine blades and nacelles. The capabilities of established port facilities to support floating wind farms are assessed by evaluating the size of the substructures, the height of the wind turbine with regard to the cranes for fitting the blades, the distance to the offshore site and the offshore installation vessel characteristics. The paper discusses the advantages and disadvantages of using large land-based cranes, inshore floating crane vessels or offshore crane vessels at the fit-out port for the installation of the turbine. Water depth requirements for the import of materials and the export of the completed structures are considered. There are additional costs associated with any emerging technology; however, part of the popularity of floating offshore wind turbines stems from the cost savings compared with permanent structures like fixed wind turbines. Floating offshore wind turbine developers can benefit from lighter, more cost-effective equipment which can be assembled in port and towed to the site, rather than relying on large, expensive installation vessels to transport and erect fixed-bottom turbines. The ability to assemble floating offshore wind turbine equipment onshore means minimizing highly weather-dependent operations like offshore heavy lifts and assembly, saving time and costs and reducing safety risks for offshore workers. Maintenance of barges and semi-submersibles might take place in safer onshore conditions. Offshore renewables, such as floating wind, can take advantage of the wealth of experience accumulated offshore by the oil and gas sector, while oil and gas operators can deploy this experience as they enter the renewables space. The floating offshore wind industry is in the early stages of development, and port facilities are required for substructure fabrication, turbine manufacture, turbine construction and maintenance support. The paper discusses the potential floating wind substructures, as this provides a snapshot of the requirements at the present time, and the technological developments required for commercial development. Scaling effects of demonstration-scale projects will be addressed; however, the primary focus will be on commercial-scale (30+ unit) floating wind energy farms.Keywords: floating wind, port, marine construction, offshore renewables
Procedia PDF Downloads 294
57 Plasmonic Biosensor for Early Detection of Environmental DNA (eDNA) Combined with Enzyme Amplification
Authors: Monisha Elumalai, Joana Guerreiro, Joana Carvalho, Marta Prado
Abstract:
The popularity of DNA biosensors has been increasing over the past few years. Traditional analytical techniques tend to require complex steps and expensive equipment; DNA biosensors, by contrast, have the advantage of being simple, fast and economic. Additionally, the combination of DNA biosensors with nanomaterials offers the opportunity to improve the selectivity, sensitivity and overall performance of the devices. DNA biosensors are based on oligonucleotides as sensing elements. These oligonucleotides are highly specific to complementary DNA sequences, resulting in the hybridization of the strands. DNA biosensors are an advantage not only in the clinical field but also in numerous research areas such as food analysis and environmental control. Zebra mussels (ZM, Dreissena polymorpha) are an invasive species responsible for enormous negative impacts on the environment and ecosystems. Generally, ZM are detected through the observation of adults or macroscopic larvae; however, at this stage it is too late to avoid the harmful effects. Therefore, there is a need to develop an analytical tool for the early detection of ZM. Here, we present a portable plasmonic biosensor for the detection of environmental DNA (eDNA) released to the environment by this invasive species. The plasmonic DNA biosensor uses gold nanoparticles as transducer elements, due to their great optical properties and high sensitivity. The detection strategy is based on the immobilization of a short base-pair DNA sequence on the nanoparticle surface, followed by specific hybridization in the presence of a complementary target DNA. The hybridization events are tracked by the optical response provided by the nanospheres and their surrounding environment. The DNA sequences (synthetic target and probes) to detect zebra mussel were designed using the Geneious software in order to maximize specificity. Moreover, to increase the optical response, enzyme amplification of the DNA might be used. The gold nanospheres were synthesized and characterized by UV-visible spectrophotometry and transmission electron microscopy (TEM). The obtained nanospheres present a maximum localized surface plasmon resonance (LSPR) peak position around 519 nm and a diameter of 17 nm. The DNA probes, modified with a sulfur group at one end of the sequence, were then loaded on the gold nanospheres at different ionic strengths and DNA probe concentrations. The optimal DNA probe loading will be selected based on the stability of the optical signal, followed by the hybridization study. The hybridization process leads to either nanoparticle dispersion or aggregation, depending on the presence or absence of the target DNA. Finally, this detection system will be integrated into an optical sensing platform. Considering that the developed device will be used in the field, it should be inexpensive and portable. Sensing devices based on specific DNA detection hold great potential and can be exploited for sensing applications in-loco.Keywords: ZM DNA, DNA probes, nicking enzyme, gold nanoparticles
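Probe design of the kind described above balances specificity against melting temperature; as a toy illustration, the Wallace rule gives a quick Tm estimate for short oligonucleotides. Geneious relies on more rigorous nearest-neighbour models, and the sequence below is hypothetical, not an actual ZM eDNA probe.

    def wallace_tm(seq: str) -> float:
        # Rough melting temperature (deg C), valid only for short oligos.
        s = seq.upper()
        at = s.count("A") + s.count("T")
        gc = s.count("G") + s.count("C")
        return 2.0 * at + 4.0 * gc

    probe = "ATGCGTCCATGA"  # hypothetical 12-mer, for illustration only
    print(wallace_tm(probe), "degrees C")  # -> 36.0 degrees C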
Procedia PDF Downloads 248
56 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution
Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko
Abstract:
Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which results in the mapping of the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based Light Detection and Ranging systems (LIDARs) suffer from two main disadvantages: a slow and unreliable mechanical spatial scan, and the rather wide linewidth of conventional lasers, which limits speed measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing for in-flight sampling of gun projectiles moving at 150 meters per second, have previously been demonstrated. Nevertheless, pump lasers with EDFA amplifiers made such devices bulky and expensive. An alternative approach is the direct coupling of the laser to a reference microring cavity. Backscattering can tune the laser to the eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on the self-injection locking of semiconductor lasers to high-quality integrated Si3N4 microresonators. We managed to obtain robust 1400-1700 nm comb generation with a 150 GHz or 1 THz line spacing, and measured Lorentzian widths of less than 1 kHz for stable, MHz-spaced beat notes in a GHz band, using two separate chips, each pumped by its own self-injection-locked laser. A deep investigation of the SIL dynamics allowed us to identify a turn-key operation regime, even for affordable Fabry-Perot multifrequency lasers used as a pump. Importantly, such lasers are usually more powerful than DFB ones, which were also tested in our experiments. In order to test the advantages of the proposed technique, we experimentally measured the minimum detectable speed of a reflective object. It has been shown that the narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, showing velocity resolution down to 16 nm/s, while the diode laser without SIL only allowed 160 nm/s with good accuracy. The results obtained are in agreement with the estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking
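Two back-of-envelope numbers help situate these results. In the Python sketch below, the wavelength, the ~1 MHz repetition-rate offset and the use of the simple two-way Doppler relation are illustrative assumptions, not the paper's exact parameters.

    lam = 1550e-9                            # assumed wavelength within the 1400-1700 nm band, m

    # Two-way Doppler shift that must be resolved for the quoted velocities
    for v in (16e-9, 160e-9):                # m/s: the SIL and no-SIL results above
        df = 2.0 * v / lam                   # Doppler shift, Hz
        print(f"v = {v * 1e9:5.0f} nm/s -> Doppler shift {df * 1e3:6.1f} mHz")

    # Dual-comb spectral compression: optical line spacing mapped into RF
    f_rep = 150e9                            # 150 GHz comb line spacing, as in the text
    d_frep = 1e6                             # assumed ~1 MHz repetition-rate offset
    print(f"compression factor f_rep / delta_f_rep = {f_rep / d_frep:.0e}")

The sub-hertz Doppler shifts implied by nm/s-scale velocities illustrate why the sub-kilohertz locked-laser linewidth, together with long coherent averaging, is essential for the reported speed resolution.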
Procedia PDF Downloads 73
55 Characterization of Potato Starch/Guar Gum Composite Film Modified by Ecofriendly Cross-Linkers
Authors: Sujosh Nandi, Proshanta Guha
Abstract:
Synthetic plastics are preferred for food packaging due to their high strength, stretchability, good water vapor and gas barrier properties, transparency and low cost. However, the environmental pollution generated by these synthetic plastics is a major concern of modern human civilization. Therefore, the use of biodegradable polymers as substitutes for synthetic non-biodegradable polymers is encouraged, even after considering the drawbacks related to the mechanical and barrier properties of such films. Starch, considered one of the potential raw materials for biodegradable polymers, suffers from poor water barrier and mechanical properties due to its hydrophilic nature. Apart from that, recrystallization of starch molecules occurs during aging, which decreases the flexibility and increases the elastic modulus of the film. The recrystallization process can be minimized by blending other hydrocolloids with similar structural compatibility into the starch matrix. Therefore, the incorporation of guar gum, which has a similar structural backbone, into the starch matrix can introduce a promising film into the realm of biodegradable polymers. However, owing to the hydrophilic nature of both starch and guar gum, the water barrier property of the film is low. One of the prospective solutions to enhance it could be modification of the potato starch/guar gum (PSGG) composite film using a cross-linker. Over the years, several cross-linking agents such as phosphorus oxychloride, sodium trimetaphosphate, etc. have been used to improve the water vapor permeability (WVP) of films. However, these chemical cross-linking agents are toxic, expensive and take a long time to degrade. Therefore, naturally available carboxylic acids (tartaric acid, malonic acid, succinic acid, etc.) have been used as cross-linkers, and the water barrier property was found to be enhanced substantially. To our knowledge, no work has been reported with tartaric acid and succinic acid as cross-linking agents blended into PSGG films. Therefore, the objective of the present study was to examine the changes in the water vapor barrier property and mechanical properties of PSGG films after cross-linking with tartaric acid (TA) and succinic acid (SA). The cross-linkers were blended with the PSGG film-forming solution at four different concentrations (4, 8, 12 and 16%) and cast on a Teflon plate at 37°C for 20 h. In the Fourier-transform infrared spectroscopy (FTIR) study of the developed films, a band at 1720 cm⁻¹ was observed, which is attributed to the formation of ester groups in the films. On the other hand, it was observed that the tensile strength (TS) of the cross-linked films decreased compared to the non-cross-linked films, whereas the strain at break increased severalfold. Moreover, the results showed that the tensile strength diminished with increasing concentration of TA or SA, and the lowest TS (1.62 MPa) was observed for 16% SA. The maximum strain at break was also observed for TA at 16%, and the reason behind this could be the lower degree of crystallinity of the TA cross-linked films compared to SA. However, the water vapor permeability of the succinic acid cross-linked film was reduced significantly, whereas it was enhanced significantly by the addition of tartaric acid.Keywords: cross linking agent, guar gum, organic acids, potato starch
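For context, the WVP values discussed above are normally measured gravimetrically; the standard cup-method relation (not a formula given in the abstract) for a film of thickness L and exposed area A transmitting a mass \Delta m of water vapor in time \Delta t under a partial-pressure difference \Delta p is

    \mathrm{WVP} = \frac{\Delta m}{\Delta t} \cdot \frac{L}{A \, \Delta p}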
Procedia PDF Downloads 115
54 Renegotiating the Filipino Bakla Culture: A Semiotic Analysis of Drag Performance in Eat Bulaga’s Kalye Serye
Authors: Ruepert Jiel Cao
Abstract:
This study explores the renegotiation of bakla culture in Philippine media in the context of the Kalye Serye segment of the popular Filipino noontime variety show Eat Bulaga. Although the term "bakla" is usually translated as "gay" or "homosexual male" in English, the two do not mean the same. The western notion of a gay man refers to a male person attracted to another male person who still retains masculine physical attributes. The bakla, however, embodies loudness, femininity, and transvestitism. Hence, a bakla is a gay man aspiring to be a woman by assuming feminine actions and appearance, a definition much closer to that of a transgender person. The Philippine media usually employs bakla culture in comedy programs. The bakla nowadays is usually associated with people of the lower economic strata and carries a pathological connotation. The Filipino television program Eat Bulaga, which has been airing for more than 36 years, is fond of using the bakla in comedy. However, the recently launched segment entitled Kalye Serye (literally "Street [Television] Series"), while still employing drag performance to incorporate bakla culture into comedy, renegotiates bakla culture by deviating from the stereotypical notion of the bakla. In this study, this researcher asks: (1) How does Kalye Serye renegotiate the Filipino concept of bakla in terms of economic aspirations and social norms? (2) How does Kalye Serye reappropriate bakla culture to fit non-comedic performances? The study examines 15 purposively selected Kalye Serye episodes: seven from the Thursday episodes, seven from the Saturday episodes, and the Lenten special episode. These were selected to cover as many characters and different character roles as possible. Data were constructed by identifying and coding the roles, physical appearance and gestures, and key dialogues of the characters. A total of six female characters played by three different male actors were examined. Semiotic analysis following Roland Barthes was performed to produce a reading of the characters. Findings show that, through physical appearance, the characters associate the bakla with economic affluence through the use of expensive-looking clothes, jewelry, cars, and elaborate gestures. This represents a new economic but old western aspiration of the bakla. In terms of social norms, the characters try to revive traditional concepts of femininity, courtship, and respect, values which are touted to be lost in the current generation of Filipinos. This is quite ironic, because while there is a seemingly tolerant attitude towards all forms of queerness, the bakla is considered immoral, and yet the bakla is used to teach about morality and values. Finally, the characters break the traditional association of the bakla with slapstick comedy, and their roles are reappropriated to suit dramatic roles. By refraining from portraying the bakla in a ridiculous manner (physically and in terms of roles), the bakla lends itself well to the performance of dramatic roles, with its ridiculous and pathological associations removed. Future research may include other Filipino or Asian portrayals of queerness to gain a better understanding of how queerness is incorporated in contemporary popular culture.Keywords: bakla, drag performance, popular culture, queer representation
Procedia PDF Downloads 302
53 Controlled Nano Texturing in Silicon Wafer for Excellent Optical and Photovoltaic Properties
Authors: Deb Kumar Shah, M. Shaheer Akhtar, Ha Ryeon Lee, O-Bong Yang, Chong Yeal Kim
Abstract:
Crystalline silicon (Si) solar cells are a highly renowned photovoltaic technology and are well-established as the commercial solar technology. Most solar panels installed globally are based on crystalline Si solar modules. In the present scenario, the major photovoltaic (PV) market share is held by c-Si solar cells, but the cost of c-Si panels is still very high compared with other PV technologies. In order to reduce the cost of Si solar panels, a few necessary steps, such as low-cost Si manufacturing, cheap antireflection coating materials, and inexpensive solar panel manufacturing, are to be considered. It is known that the antireflection (AR) layer in a c-Si solar cell is an important component for reducing Fresnel reflection and thereby improving the overall conversion efficiency. Generally, a Si wafer exhibits about 30% reflection because it possesses two major intrinsic drawbacks: the spectral mismatch loss and the high Fresnel reflection loss due to the high contrast of refractive indices between air and the silicon wafer. In recent years, researchers and scientists have been highly devoted to the search for effective and low-cost AR materials. Silicon nitride (SiNx) is a well-known AR material in commercial c-Si solar cells due to its good deposition behavior and interaction with passivated Si surfaces. However, the deposition of SiNx AR layers is usually performed by the expensive plasma-enhanced chemical vapor deposition (PECVD) process, which has several demerits, such as difficult handling and plasma damage to the Si substrate when secondary electrons collide with the wafer surface during AR coating. It is thus very important to explore new, low-cost and effective AR deposition processes to cut the manufacturing cost of c-Si solar cells. It can also be realized that a nano-texturing process, such as the growth of nanowires, nanorods, nanopyramids, nanopillars, etc., on the Si wafer can provide low reflection at the surface of Si wafer based solar cells. Such nanostructures can enhance the antireflection property by providing a larger surface area and effective light trapping. In this work, we report on the development of crystalline Si solar cells without an AR layer. The silicon wafer was modified by growing nanowire-like Si nanostructures using a controlled wet etching method and was directly used for the fabrication of Si solar cells without an AR layer. The nanostructures on the Si wafer were optimized in terms of size, length, and density by changing the etching conditions. Well-defined and aligned wire-like structures were achieved when the etching time was 20 to 30 min. The prepared Si nanostructures displayed a minimum reflectance of ~1.64% at 850 nm, with an average reflectance of ~2.25% in the wavelength range from 400-1000 nm. The nanostructured Si wafer based solar cells achieved power conversion efficiency comparable to c-Si solar cells with a SiNx AR layer. From this study, it is confirmed that the reported method (controlled wet etching) is an easy, facile method for the preparation of wire-like nanostructures on Si wafers with low reflectance in the whole visible region, which has great prospects for developing c-Si solar cells without an AR layer at low cost.Keywords: chemical etching, conversion efficiency, silicon nanostructures, silicon solar cells, surface modification
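The roughly 30% reflection quoted above follows directly from the normal-incidence Fresnel relation; the calculation below is a textbook estimate using a representative visible-range refractive index for Si, not a value from the paper:

    R = \left( \frac{n_{\mathrm{Si}} - n_{\mathrm{air}}}{n_{\mathrm{Si}} + n_{\mathrm{air}}} \right)^2 \approx \left( \frac{3.9 - 1}{3.9 + 1} \right)^2 \approx 0.35

Bare polished Si therefore reflects roughly a third of incident visible light, which the nanowire texturing reduces to the ~2.25% average reported here.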
Procedia PDF Downloads 125
52 Enhanced Functional Production of a Crucial Biomolecule Human Serum Albumin in Escherichia coli
Authors: Ashima Sharma
Abstract:
Human Serum Albumin (HSA), one of the most in-demand therapeutic proteins with immense biotechnological applications, is a large multidomain protein containing 17 disulfide bonds. The current source of HSA is human blood plasma, which is a limited and unsafe source. Thus, there exists an indispensable need to promote non-animal-derived recombinant HSA (rHSA) production. Escherichia coli is one of the most convenient hosts, having contributed to the production of more than 30% of the FDA-approved recombinant pharmaceuticals. It grows rapidly and reaches high cell density using inexpensive and simple substrates. E. coli-derived recombinant products have more economic potential, as the fermentation processes are cheaper compared to other expression hosts. The major bottleneck in exploiting E. coli as a host for a disulfide-rich multidomain protein is the formation of aggregates of the overexpressed protein. The majority of the expressed HSA forms inclusion bodies (more than 90% of the total expressed rHSA) in the E. coli cytosol. Recovery of functional rHSA from inclusion bodies is not preferred, because it is difficult to obtain a large, multidomain, disulfide-bond-rich protein like rHSA in its functional native form; purification is tedious, time-consuming, laborious and expensive. Because of such limitations, the E. coli host system was neglected for rHSA production for the past few decades despite its numerous advantages. In the present work, we have exploited the capabilities of E. coli as a host for the enhanced functional production of rHSA (~60% of the total expressed rHSA in the soluble fraction). Parameters like the intracellular environment, temperature, induction type, duration of induction, cell lysis conditions, etc., which play an important role in enhancing the level of production of the desired protein in its native form in vivo, have been optimized. We have studied the effect of the assistance of different types of exogenously employed chaperone systems on the functional expression of rHSA in the E. coli host system. Different aspects of the cell growth parameters during the production of rHSA in the presence and absence of molecular chaperones in E. coli have also been studied. Having overcome the difficulties of producing functional rHSA in E. coli by engineering the protein-folding machinery of the cell, it has been possible to produce significant levels of functional protein, and the E. coli-derived rHSA has been purified to homogeneity. Its detailed physicochemical characterization has been performed by monitoring its conformational properties, secondary and tertiary structure elements, surface properties, ligand binding properties, stability, etc. These parameters of the recombinant protein have been compared with those of the naturally occurring protein from the human source. The outcome of the comparison reveals that the recombinant protein is essentially identical to the natural one. Hence, we propose that the E. coli-derived rHSA is an ideal biosimilar for human blood plasma-derived serum albumin. Therefore, in the present study, we have introduced and promoted the E. coli-derived rHSA as an alternative to the preparation from a human source, pHSA.Keywords: recombinant human serum albumin, Escherichia coli, biosimilar, chaperone assisted protein folding
Procedia PDF Downloads 210