Search results for: inverse distance weighted method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20893

13123 Important Factors Affecting the Effectiveness of Quality Control Circles

Authors: Sogol Zarafshan

Abstract:

The present study aimed to identify important factors affecting the effectiveness of quality control circles in a hospital and to rank them using a combination of fuzzy VIKOR and Grey Relational Analysis (GRA). The study population consisted of five academic members and five experts in the field of nursing working in a hospital, selected using a purposive sampling method. In addition, a sample of 107 nurses was selected through simple random sampling using their employee codes and a random-number table. The required data were collected using a researcher-made questionnaire consisting of 12 factors. The validity of this questionnaire was confirmed by the opinions of the experts and academic members who participated in the study, as well as by confirmatory factor analysis; its reliability was also verified (α=0.796). The collected data were analyzed using SPSS 22.0 and LISREL 8.8, together with the VIKOR–GRA and IPA methods. Ranking the factors affecting the effectiveness of quality control circles showed that the highest and lowest ranks belonged to ‘Managers’ and supervisors’ support’ and ‘Group leadership’, respectively. The hospital performed best on factors such as ‘Clear goals and objectives’ and ‘Group cohesiveness and homogeneity’, and worst on ‘Reward system’ and ‘Feedback system’. The results also showed that although ‘Training the members’, ‘Using the right tools’ and ‘Reward system’ were of great importance, the organization’s performance on these factors was poor. The managers of the studied hospital should therefore pay more attention to these factors and improve them as soon as possible.
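The GRA ranking step described above can be illustrated with a small numerical sketch. The decision matrix below is entirely hypothetical (it is not the study's data), the factor names are illustrative, and the fuzzy VIKOR stage is omitted; the sketch only shows how grey relational grades rank factors against an ideal reference series:

```python
import numpy as np

# Hypothetical scores: 4 factors rated on 3 criteria (values are illustrative).
X = np.array([
    [0.9, 0.8, 0.7],   # e.g. managers' and supervisors' support
    [0.6, 0.7, 0.5],   # e.g. reward system
    [0.8, 0.6, 0.9],   # e.g. clear goals and objectives
    [0.5, 0.4, 0.6],   # e.g. group leadership
])

# 1. Normalize each criterion to [0, 1] (benefit-type criteria).
norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# 2. Deviation from the ideal reference series (all ones after normalization).
delta = np.abs(1.0 - norm)

# 3. Grey relational coefficients with distinguishing coefficient rho = 0.5.
rho = 0.5
coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

# 4. Grey relational grade: mean coefficient per factor; higher grade ranks higher.
grades = coeff.mean(axis=1)
ranking = np.argsort(-grades)
```

Factors with larger grey relational grades sit closer to the ideal reference series and therefore rank higher; rho = 0.5 is the conventional default for the distinguishing coefficient.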

Keywords: Quality control circles, Fuzzy VIKOR, Grey Relational Analysis, Importance–Performance Analysis

Procedia PDF Downloads 133
13122 Delamination Fracture Toughness Benefits of Inter-Woven Plies in Composite Laminates Produced through Automated Fibre Placement

Authors: Jayden Levy, Garth M. K. Pearce

Abstract:

An automated fibre placement method has been developed to build through-thickness reinforcement into carbon fibre reinforced plastic laminates during their production, with the goal of increasing delamination fracture toughness while circumventing the additional costs and defects imposed by post-layup stitching and z-pinning. Termed ‘inter-weaving’, the method uses custom placement sequences of thermoset prepreg tows to distribute regular fibre link regions in traditionally clean ply interfaces. Inter-weaving’s impact on mode I delamination fracture toughness was evaluated experimentally through double cantilever beam tests (ASTM standard D5528-13) on [±15°]9 laminates made from Park Electrochemical Corp. E-752-LT 1/4” carbon fibre prepreg tape. Unwoven and inter-woven automated fibre placement samples were compared to those of traditional laminates produced from standard uni-directional plies of the same material system. Unwoven automated fibre placement laminates were found to suffer a mostly constant 3.5% decrease in mode I delamination fracture toughness compared to flat uni-directional plies. Inter-weaving caused significant local fracture toughness increases (up to 50%), though these were offset by a matching overall reduction. These positive and negative behaviours of inter-woven laminates were respectively found to be caused by fibre breakage and matrix deformation at inter-weave sites, and the 3D layering of inter-woven ply interfaces providing numerous paths of least resistance for crack propagation.
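For readers unfamiliar with the DCB data reduction behind these toughness comparisons, ASTM D5528 computes the mode I strain energy release rate point by point from load, opening displacement, crack length and specimen width. Below is a minimal sketch of the modified beam theory expression; the input values in the test are illustrative, not the paper's measurements:

```python
def mode_i_toughness(P, delta, a, b, corr=0.0):
    """Mode I strain energy release rate via modified beam theory (ASTM D5528):
        G_I = 3 * P * delta / (2 * b * (a + |corr|))
    P: load (N), delta: load-point opening displacement (mm),
    a: delamination length (mm), b: specimen width (mm),
    corr: crack-length correction Delta from the compliance fit (mm).
    With these units the result is in kJ/m^2."""
    return 3.0 * P * delta / (2.0 * b * (a + abs(corr)))
```

The correction term accounts for the DCB arms not being perfectly built-in; it is obtained from a linear fit of the cube root of compliance against crack length.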

Keywords: AFP, automated fibre placement, delamination, fracture toughness, inter-weaving

Procedia PDF Downloads 181
13121 Regret-Regression for Multi-Armed Bandit Problem

Authors: Deyadeen Ali Alshibani

Abstract:

In the literature, the multi-armed bandit problem is treated as a statistical decision model of an agent trying to optimize its decisions while improving its information at the same time. Several algorithmic models and their applications to this problem have been proposed. In this paper, we evaluate the regret-regression approach by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
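As a point of reference for the kind of comparison discussed above, the cumulative regret of a simple bandit policy can be simulated in a few lines. The sketch below uses a basic epsilon-greedy agent on Bernoulli arms; this is a stand-in illustration of the regret notion, not the regret-regression or Q-learning method evaluated in the paper:

```python
import random

def epsilon_greedy_regret(means, steps=5000, eps=0.1, seed=0):
    """Simulate an epsilon-greedy agent on Bernoulli arms and return the
    cumulative expected regret. 'means' are the arm success probabilities,
    which are unknown to the agent."""
    rng = random.Random(seed)
    n_arms = len(means)
    counts = [0] * n_arms
    values = [0.0] * n_arms              # running mean reward per arm
    best = max(means)
    regret = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)  # explore a random arm
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        regret += best - means[arm]      # expected per-step regret
    return regret
```

Plotting this cumulative regret against time is the standard way to compare bandit policies: a better policy's regret curve flattens sooner.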

Keywords: optimal, bandit problem, optimization, dynamic programming

Procedia PDF Downloads 450
13120 Optimum Dewatering Network Design Using Firefly Optimization Algorithm

Authors: S. M. Javad Davoodi, Mojtaba Shourian

Abstract:

A groundwater table close to the ground surface causes major problems in construction and mining operations. One method of controlling groundwater in such cases is to use pumping wells, which remove excess water from the project site and lower the water table to a desirable level. Although the efficiency of this method is acceptable, it is expensive to apply, so even a small improvement in the design of the pumping wells can lead to substantial cost savings. To minimize the total cost of the pumping-well method, a simulation-optimization approach is applied. The proposed model integrates MODFLOW as the simulation model with the Firefly algorithm as the optimizer: MODFLOW computes the drawdown due to pumping in the aquifer, and the Firefly algorithm finds the optimum values of the design parameters, namely the number, pumping rates and layout of the wells. The developed Firefly-MODFLOW model is applied to minimize the cost of the dewatering project for the ancient mosque of Kerman city in Iran. Repeated runs of the Firefly-MODFLOW model indicate that drilling two wells with a total pumping rate of 5503 m3/day solves the minimization problem. The results show that implementing this solution produces at least 1.5 m of drawdown in the aquifer beneath the mosque region, while the subsidence due to groundwater depletion is less than 80 mm. Sensitivity analyses indicate that the desired groundwater drawdown has an enormous impact on the total cost of the project. Moreover, in a hypothetical aquifer, decreasing the hydraulic conductivity decreases the total water extraction required for dewatering.
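The optimization side of such a simulation-optimization loop can be sketched compactly. Below is a minimal firefly algorithm minimizing a toy objective; in the paper's setting the objective evaluation would instead call MODFLOW to score a candidate well design, and all parameter values here are illustrative defaults, not the study's settings:

```python
import math
import random

def firefly_minimize(f, dim=2, n=20, iters=200, beta0=1.0, gamma=0.01,
                     alpha=0.25, seed=1):
    """Minimal firefly algorithm sketch minimizing f over [-5, 5]^dim.
    Brightness is inverse cost: dimmer fireflies move toward brighter ones."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        fit = [f(x) for x in X]
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:  # firefly j is "brighter" (lower cost)
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness
                    X[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
        alpha *= 0.98  # shrink the random walk over time
    best = min(X, key=f)
    return best, f(best)
```

Brighter fireflies attract dimmer ones with strength beta0*exp(-gamma*r^2), while the decaying random step alpha trades exploration for convergence; the cost function would encode drilling and pumping costs plus drawdown constraints.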

Keywords: groundwater dewatering, pumping wells, simulation-optimization, MODFLOW, firefly algorithm

Procedia PDF Downloads 292
13119 Some Extreme Halophilic Microorganisms Produce Extracellular Proteases with Long Lasting Tolerance to Ethanol Exposition

Authors: Cynthia G. Esquerre, Amparo Iris Zavaleta

Abstract:

Extremophiles constitute a potentially valuable source of proteases for the development of biotechnological processes; however, the number of available studies in the literature is limited compared to their mesophilic counterparts. Therefore, in this study, Peruvian halophilic microorganisms were characterized to select suitable proteolytic strains that produce active proteases under demanding conditions. Proteolysis was screened using the streak plate method with gelatin or skim milk as substrates. Proteolytic microorganisms were then selected for phenotypic characterization and screened by a semi-quantitative proteolytic test using a modified agar diffusion method. Finally, proteolysis was evaluated using extracts partially purified by ice-cold ethanol precipitation and dialysis. All analyses were carried out over a wide range of NaCl concentrations, pH values, temperatures and substrates. Of a total of 60 strains, 21 proteolytic strains were selected; of these, 19 were extreme halophiles and 2 were moderate halophiles. Most proteolytic strains showed differences in their biochemical patterns, particularly in sugar fermentation. A total of 14 microorganisms produced extracellular proteases: 13 were neutral and one was alkaline, showing activity up to pH 9.0. The proteases hydrolyzed gelatin as the most specific substrate. In general, catalytic activity was efficient over a wide range of NaCl concentrations (1 to 4 M) and temperatures (37 to 55 °C), and after ethanol exposure at -20 °C for 24 hours. In conclusion, this study identified 14 extremely halophilic candidates producing extracellular proteases that remain stable and active over a wide range of NaCl concentrations and temperatures, and even after long-lasting ethanol exposure.

Keywords: biotechnological processes, ethanol exposition, extracellular proteases, extremophiles

Procedia PDF Downloads 280
13118 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also attracted great scientific interest in terms of its regulation. Recent studies have experimentally demonstrated that LAC is dominantly composed of traffic and wood-burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can easily be measured on-line with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method has been proposed for the apportionment of wood-burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Angström Exponent (AAE). In this approach, the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. Recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter, and therefore also open up novel possibilities for source apportionment through the measurement of light absorption.
In this study, we demonstrate an in-situ spectral characterization method for the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate source apportionment study was performed on ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood-burning aerosol had been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064nm)/AOC(266nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment assumes that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as traffic and wood-burning aerosols. The method offers the possibility of replacing laborious chemical analysis with a simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
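The core of such a two-component optical apportionment reduces to a small linear system: at each measurement wavelength, the measured absorption is the sum of two power laws with different Angström exponents. The sketch below uses entirely illustrative numbers; the AAE values, wavelengths and absorption coefficients are assumptions, not the study's data:

```python
import numpy as np

# Assumed Angstrom exponents for fossil fuel (traffic) and wood burning.
AAE_FF, AAE_WB = 1.0, 2.0
LAM = np.array([470.0, 950.0])     # measurement wavelengths (nm)
LAM0 = 470.0                       # reference wavelength (nm)

# Synthetic "measured" absorption built from known components (Mm^-1),
# so the solver's answer can be checked against the ground truth.
b_ff_true, b_wb_true = 8.0, 4.0    # component absorption at LAM0
b_meas = (b_ff_true * (LAM / LAM0) ** -AAE_FF
          + b_wb_true * (LAM / LAM0) ** -AAE_WB)

# Each measured wavelength contributes one equation in two unknowns:
# b_meas(lam) = b_ff * (lam/lam0)^-AAE_ff + b_wb * (lam/lam0)^-AAE_wb
A = np.column_stack([(LAM / LAM0) ** -AAE_FF, (LAM / LAM0) ** -AAE_WB])
b_ff, b_wb = np.linalg.solve(A, b_meas)
```

With two wavelengths the system is exactly determined; more wavelengths would turn it into a least-squares fit, and the AAE values themselves are what the abstract's correlation with the N100/N20 size ratio supplies.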

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 223
13117 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay

Authors: Anderson Braga Mendes

Abstract:

The Itaipu Hydroelectric Power Plant sits on the Paraná River, which forms a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world and supplies clean, renewable electrical energy for 17% of Brazil and 76% of Paraguay. The plant started generating in 1984. It has 20 Francis turbines and an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016, the plant generated a total of 2,415,789,823 MWh. The distinct sedimentologic characteristics of the drainage area of the Itaipu Power Plant, from its upstream stretch (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase or decrease in sediment yield using data from 2001 to 2016. These data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been made, the first assessed by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the last 44 years, aiming to confer more precision upon the estimates based on more up-to-date data sets. The analysis of each monitoring station clearly revealed strongly increasing sediment-yield tendencies over the last 14 years, mainly in the Iguatemi, Ivaí, São Francisco Falso and Carapá Rivers, the latter situated in Paraguay, whereas the others lie entirely in Brazilian territory.
Five lifespan scenarios considering different sediment-yield tendencies were simulated with the aid of the software packages SEDIMENT and DPOSIT, both developed by the author of the present work. These packages follow the Borland and Miller methodology (the empirical area-reduction method). The soundest of the five scenarios under analysis indicated a lifespan of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. In addition, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted today. Owing to the convergence of both results, which were obtained using different methodologies and independent input data, it can be concluded that the mathematical modeling is satisfactory and calibrated, lending credibility to this most recent lifespan estimate.

Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance

Procedia PDF Downloads 272
13116 Research of Seepage Field and Slope Stability Considering Heterogeneous Characteristics of Waste Piles: A Less Costly Way to Reduce High Leachate Levels and Avoid Accidents

Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong, Gan Lei, Xu Liqun

Abstract:

Due to their high, large-volume piles, complex waste layering and high leachate levels, landfills readily produce environmental pollution and slope instability. It is therefore of great significance to research the heterogeneous seepage field and stability of landfills. This paper focuses on the heterogeneous characteristics of landfill waste piles and analyzes the seepage field and slope stability of a landfill using statistical and numerical analysis methods. The calculated results are compared with field measurements and literature data to verify the reliability of the model, which may provide a basis for the design and the safe, eco-friendly operation of landfills. The main innovations are as follows: (1) The saturated-unsaturated seepage equation for heterogeneous soil is derived theoretically. The heterogeneous landfill is regarded as being composed of infinitely many layers of homogeneous waste, and a method for establishing the heterogeneous seepage model is proposed. The formation law of the stagnant water level in heterogeneous landfills is then studied. It is found that the maximum stagnant water level is higher when the heterogeneous seepage characteristics are considered, which harms the stability of the landfill. (2) Considering the heterogeneity of the unit weight and strength characteristics of the waste, a method for establishing a heterogeneous stability model is proposed and extended to a three-dimensional stability study. It is found that the distribution of heterogeneous characteristics has a great influence on the stability of the landfill slope. During the operation and management of the landfill, the stability of the reservoir bank should be considered alongside the capacity of the landfill.

Keywords: heterogeneous characteristics, leachate levels, saturated-unsaturated seepage, seepage field, slope stability

Procedia PDF Downloads 241
13115 The Application of Raman Spectroscopy in Olive Oil Analysis

Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli

Abstract:

Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of such constituents is generally the result of a complex combination of genetic, agronomic and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in its biochemical composition needs to be investigated. By selecting fruits from four different cultivars similarly grown and harvested, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the different cultivars, also as a function of the harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to cultivar and maturation stage, was obtained. Moreover, using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that predicted the relative amounts of the main fatty acids and the main carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters such as light exposure, distance from the sea, temperature, and amount of precipitation could have a strong influence on the EVOO composition of both major and minor compounds. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of the environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed.
In particular, a dual approach was used, combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOOs based on their regional geographical origin was obtained. Raman spectra were acquired with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Stable isotope content ratios were analyzed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that Raman spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes the assessment of specific sample contents possible and allows oils to be classified according to their geographical and varietal origin.

Keywords: authentication, chemometrics, olive oil, raman spectroscopy

Procedia PDF Downloads 328
13114 Reaching a Mobile and Dynamic Nose after Rhinoplasty: A Pilot Study

Authors: Guncel Ozturk

Abstract:

Background: Rhinoplasty is one of the most commonly performed cosmetic operations in plastic surgery. The maneuvers used in rhinoplasty lead to a firm and stiff nasal tip in the early postoperative months. This unnatural stability of the nose may easily cause distortion of the reshaped nose after severe trauma. Moreover, a firm nasal tip may cause difficulties in activities such as touching, hugging, or kissing. Decreasing the stability and increasing the mobility of the nasal tip would help rhinoplasty patients avoid these small but relatively important problems. Methods: We used a delivery approach with closed rhinoplasty and changed the positions of the intranasal incisions to achieve a dynamic and mobile nose. A total of 203 patients who had undergone primary closed rhinoplasty in private practice were reviewed retrospectively. A posterior strut flap connected to the connective tissues of the caudal septum and the medial crura was formed. The cartilage of the posterior strut graft was left 2 mm thick in the distal part of the septum, cut vertically, and the connective tissue in the distal part was preserved. Results: The median patient age was 24 (range 17-42) years. The median follow-up period was 15.2 (range 12-26) months. Patient satisfaction was assessed with the 'Rhinoplasty Outcome Evaluation' (ROE) questionnaire. Twelve months after surgery, 87.5% of patients reported excellent outcomes according to the ROE. Conclusion: With this method, the soft tissue connections between that segment and the surrounding structures should be preserved to maintain tip support while keeping the tip mobile. These modifications provide a mobile, non-stiff, dynamic nasal tip in the early postoperative months. Further prospective studies should be performed to support this method.

Keywords: closed rhinoplasty, dynamic, mobile, tip

Procedia PDF Downloads 129
13113 Crop Leaf Area Index (LAI) Inversion and Scale Effect Analysis from Unmanned Aerial Vehicle (UAV)-Based Hyperspectral Data

Authors: Xiaohua Zhu, Lingling Ma, Yongguang Zhao

Abstract:

Leaf Area Index (LAI) is a key structural characteristic of crops and plays a significant role in precision agricultural management and farmland ecosystem modeling. However, LAI retrieved from data of different resolutions contains a scaling bias due to spatial heterogeneity and model non-linearity; that is, a scale effect arises in multi-scale LAI estimation. In this article, a typical farmland in the semi-arid regions of Chinese Inner Mongolia is taken as the study area, and, based on the combination of the PROSPECT and SAIL models, a multi-dimensional Look-Up Table (LUT) is generated for estimating the LAI of multiple crops from unmanned aerial vehicle (UAV) hyperspectral data. Based on a Taylor expansion method and a computational geometry model, a scale transfer model considering both inter-class and intra-class differences is constructed for analyzing the scale effect of LAI inversion over an inhomogeneous surface. The results indicate that: (1) the LUT method, based on classification and parameter sensitivity analysis, is useful for LAI retrieval of corn, potato, sunflower and melon on the typical farmland, with a coefficient of determination R² of 0.82 and a root mean square error (RMSE) of 0.43 m²/m². (2) The scale effect of LAI becomes more obvious as image resolution decreases, with a maximum scale bias of more than 45%. (3) The inter-class scale effect is larger than the intra-class one and can be corrected efficiently by the scale transfer model based on Taylor expansion and computational geometry; after correction, the maximum scale bias is reduced to 1.2%.
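The LUT inversion itself is conceptually simple: simulate candidate spectra over a grid of LAI values with a forward canopy model, then select the grid entry closest to the measured spectrum. The sketch below replaces PROSPECT+SAIL with a toy saturating forward model, so everything except the LUT logic is an illustrative assumption:

```python
import numpy as np

def toy_canopy_reflectance(lai, wavebands):
    """Stand-in for the PROSPECT+SAIL forward model used in the paper:
    a saturating exponential response of reflectance to LAI per band."""
    return 0.05 + 0.45 * (1.0 - np.exp(-0.5 * lai)) * wavebands

def lut_invert(measured, lai_grid, wavebands):
    """Pick the LUT entry with the smallest RMSE against the measured spectrum."""
    lut = np.array([toy_canopy_reflectance(l, wavebands) for l in lai_grid])
    rmse = np.sqrt(((lut - measured) ** 2).mean(axis=1))
    return lai_grid[int(np.argmin(rmse))]

wavebands = np.linspace(0.5, 1.0, 50)              # relative band weights (hypothetical)
lai_grid = np.arange(0.0, 6.01, 0.05)              # candidate LAI values
measured = toy_canopy_reflectance(3.0, wavebands)  # synthetic "observation"
lai_hat = lut_invert(measured, lai_grid, wavebands)
```

In practice the LUT is built per crop class (hence the classification step in the abstract), the grid spans several biophysical parameters at once, and the cost function may weight bands by their sensitivity.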

Keywords: leaf area index (LAI), scale effect, UAV-based hyperspectral data, look-up-table (LUT), remote sensing

Procedia PDF Downloads 439
13112 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for monitoring the Earth's surface. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to the geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to fully polarimetric data therefore aims to take advantage of the full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature.
Although the improvements achieved by these recently investigated reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem, focusing on the different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated on real data sets and compared with a well-known, standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 84
13111 Spray Drying: An Innovative and Sustainable Method of Preserving Fruits

Authors: Adepoju Abiola Lydia, Adeyanju James Abiodun, Abioye A. O.

Abstract:

Spray drying, an innovative and sustainable preservation method, is increasingly gaining recognition for its potential to enhance food security by extending the shelf life of fruits. This technique involves the atomization of fruit pulp into fine droplets, followed by rapid drying with hot air, resulting in a powdered product that retains much of the original fruit's nutritional value, flavor, and color. By encapsulating sensitive bioactive compounds within a dry matrix, spray drying mitigates nutrient degradation and extends product usability. This technology aligns with sustainability goals by reducing post-harvest losses, minimizing the need for preservatives, and lowering energy consumption compared to conventional drying methods. Furthermore, spray drying enables the use of imperfect or surplus fruits, contributing to waste reduction and providing a continuous supply of nutritious fruit-based ingredients regardless of seasonal variations. The powdered form enhances versatility, allowing incorporation into various food products, thus broadening the scope of fruit utilization. Innovations in spray drying, such as the use of novel carrier agents and optimization of processing parameters, enhance the quality and functionality of the final product. Moreover, the scalability of spray drying makes it suitable for both industrial applications and smaller-scale operations, supporting local economies and food systems. In conclusion, spray drying stands out as a key technology in enhancing food security by ensuring a stable supply of high-quality, nutritious food ingredients while fostering sustainable agricultural practices.

Keywords: spray drying, sustainable, process parameters, carrier agents, fruits

Procedia PDF Downloads 9
13110 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution (up to 1 Mframe/s) and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity; to be more specific, this sensor only captures pixels whose intensity changes. In other words, there is no signal in areas without any intensity change. This makes the sensor more energy efficient than conventional sensors such as RGB cameras, because redundant data are removed. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include intensity values such as RGB. Therefore, existing algorithms cannot be used straightforwardly, and a new processing algorithm has to be designed to cope with DVS data. To solve the difficulties caused by these format differences, most prior work builds frame data and feeds them to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. However, even when the data can be fed this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, polarity information is clearly not rich enough. In this context, we propose to use the timestamp information as the data representation fed to deep learning.
Concretely, we first build frames by dividing the event stream into fixed time periods, then assign each pixel an intensity according to the timestamps of its events in that frame; for example, more recent signals receive higher values. We expected this representation to capture features, especially of moving objects, because timestamps encode movement direction and speed. Using the proposed method, we built our own dataset with a DVS fixed on a parked car, targeting a surveillance application that detects persons around the car. We consider the DVS one of the ideal sensors for surveillance because it can run for a long time with low energy consumption in a mostly static scene. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and our method on the same dataset. As a result, our method exceeded the benchmark by up to 7 points in F1 score.
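The timestamp-based representation described above can be sketched as follows. The exponential-decay recency kernel and the function signature are illustrative assumptions, since the abstract does not specify its exact intensity mapping:

```python
import numpy as np

def events_to_time_surface(events, width, height, t_end, tau):
    """Collapse a list of DVS events into one frame whose pixel value encodes
    recency: an event at time t contributes exp(-(t_end - t) / tau), so recent
    events are bright and pixels with no event stay 0. Each event is a tuple
    (x, y, polarity, timestamp); polarity is ignored in this sketch."""
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, _polarity, t in sorted(events, key=lambda e: e[3]):
        # Later events overwrite earlier ones at the same pixel.
        frame[y, x] = np.exp(-(t_end - t) / tau)
    return frame
```

Feeding such frames to a CNN preserves the movement direction and speed cues that polarity-only frames lose.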

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 90
13109 Telemedicine in Physician Assistant Education: A Partnership with Community Agency

Authors: Martina I. Reinhold, Theresa Bacon-Baguley

Abstract:

A core challenge of physician assistant education is preparing professionals for lifelong learning. While this conventionally has encompassed scientific advances, students must also embrace new care delivery models and technologies. Telemedicine, the provision of care via two-way audio and video, is an example of a technological advance reforming health care. During a three-semester sequence of Hospital Community Experiences, physician assistant students were assigned experiences with Answer Health on Demand, a telemedicine collaborative. Preceding the experiences, the agency lectured on the application of telemedicine. Students were then introduced to the technology and partnered with a provider. Prior to observing the patient-provider interaction, patient consent was obtained. Afterwards, students completed a reflection paper on lessons learned and the potential impact of telemedicine on their careers. Thematic analysis was completed on the students’ reflection papers (n=13). Preceding the lecture and experience, over 75% of students (10/13) were unaware of telemedicine. Several stated they were 'skeptical' about the effectiveness of 'impersonal' health care appointments. After the experience, all students remarked that telemedicine will play a large role in the future of healthcare and will provide benefits by improving access in rural areas, decreasing wait time, and saving cost. More importantly, 30% of students (4/13) commented that telemedicine is a technology they can see themselves using in their future practice. Initial results indicate that collaborative interaction between students and telemedicine providers enhanced student learning and exposed students to technological advances in the delivery of care. Further, results indicate that students perceived telemedicine more favorably as a viable delivery method after the experience.

Keywords: collaboration, physician assistant education, teaching innovative health care delivery method, telemedicine

Procedia PDF Downloads 193
13108 Self-Supervised Attributed Graph Clustering with Dual Contrastive Loss Constraints

Authors: Lijuan Zhou, Mengqi Wu, Changyong Niu

Abstract:

Attributed graph clustering can utilize the graph topology and node attributes to uncover hidden community structures and patterns in complex networks, aiding in the understanding and analysis of complex systems. Utilizing contrastive learning for attributed graph clustering can effectively exploit meaningful implicit relationships between data. However, existing attributed graph clustering methods based on contrastive learning suffer from the following drawbacks: 1) Complex data augmentation increases computational cost, and inappropriate data augmentation may lead to semantic drift. 2) The selection of positive and negative samples neglects the intrinsic cluster structure learned from graph topology and node attributes. Therefore, this paper proposes a method called self-supervised Attributed Graph Clustering with Dual Contrastive Loss constraints (AGC-DCL). Firstly, Siamese Multilayer Perceptron (MLP) encoders are employed to generate two views separately to avoid complex data augmentation. Secondly, the neighborhood contrastive loss is introduced to constrain node representation using local topological structure while effectively embedding attribute information through attribute reconstruction. Additionally, clustering-oriented contrastive loss is applied to fully utilize clustering information in global semantics for discriminative node representations, regarding the cluster centers from two views as negative samples to fully leverage effective clustering information from different views. Comparative clustering results with existing attributed graph clustering algorithms on six datasets demonstrate the superiority of the proposed method.
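As a concrete illustration, the clustering-oriented part of the loss, in which cluster centers from the other view serve as negatives, can be sketched as an InfoNCE-style objective. The function name, temperature value, and exact form are assumptions, not the paper's implementation:

```python
import numpy as np

def clustering_contrastive_loss(z1, z2, centers2, tau=0.5):
    """Sketch of a clustering-oriented contrastive loss: for each node, the
    positive pair is its embedding in the other view, and the negatives are
    the cluster centers of the other view. All rows are L2-normalized so dot
    products are cosine similarities."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    c2 = centers2 / np.linalg.norm(centers2, axis=1, keepdims=True)
    pos = np.exp((z1 * z2).sum(axis=1) / tau)   # agreement with the other view
    neg = np.exp(z1 @ c2.T / tau).sum(axis=1)   # similarity to cluster centers
    return float(-np.log(pos / (pos + neg)).mean())
```

The loss decreases as the two views agree on each node and as nodes move away from the opposing view's cluster centers, which is the discriminative effect the abstract describes.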

Keywords: attributed graph clustering, contrastive learning, clustering-oriented, self-supervised learning

Procedia PDF Downloads 44
13107 Effects of Ultraviolet Treatment on Microbiological Load and Phenolic Content of Vegetable Juice

Authors: Kubra Dogan, Fatih Tornuk

Abstract:

Due to increasing consumer demand for high-quality food products and growing awareness of the health benefits of different nutrients, minimal processing has become more popular in modern food preservation. To date, heat treatment has often been used to inactivate spoilage microorganisms in foods; however, it may cause significant changes in the quality and nutritional properties of food. To overcome the detrimental effects of heat treatment, several non-thermal microbial inactivation processes have been investigated as alternatives. Ultraviolet (UV) inactivation is a promising and feasible alternative to heat treatment for better quality and longer shelf life; in vegetable juice production, it aims to inhibit spoilage and pathogenic microorganisms and to inactivate enzymes. UV-C is the sub-class of UV treatment with the highest microbicidal effect, between 250 and 270 nm; the wavelength of 254 nm is used for the surface disinfection of certain liquid food products such as vegetable juice. The effects of UV-C treatment on the microbiological load and quality parameters of a vegetable juice mix of celery, carrot, lemon and orange were investigated. Our results showed that after three months of storage, the UV-C-treated vegetable juice had a TMAB count 3.5 log cfu/g lower and a yeast-mold count 2 log cfu/g lower than the control sample. Total phenolic content was found to be 514.3 ± 0.6 mg gallic acid equivalent/L, not significantly different from the control. The present work suggests that UV-C treatment is an alternative method for the disinfection of vegetable juice, since it enables adequate microbial inactivation and longer shelf life while having minimal effect on the quality parameters of the juice.

Keywords: heat treatment, phenolic content, shelf life, ultraviolet (UV-C), vegetable juice

Procedia PDF Downloads 205
13106 The Design Method of Artificial Intelligence Learning Picture: A Case Study of DCAI's New Teaching

Authors: Weichen Chang

Abstract:

To create a guided teaching method for AI generative drawing design, this paper develops a set of teaching models for AI generative drawing (DCAI), combining learning modes such as problem-solving, thematic inquiry, phenomenon-based learning, task-oriented learning, and DFC. Through guided programs and content built around an information-security AI picture book, participatory action research (PAR) and interviews were applied to explore how the dual knowledge of Context and ChatGPT (DCAI) can guide the development of students' AI learning skills. In the interviews, the students highlighted five main learning outcomes (self-study, critical thinking, knowledge generation, cognitive development, and presentation of work) as well as the challenges of implementing the model. Through the use of DCAI, students enhance their shared understanding of generative drawing analysis and group cooperation, and gain knowledge that can strengthen AI capabilities in DCAI inquiry and in future life. The conclusions are: (1) good use of DCAI can assist students in exploring the value of their knowledge through the power of stories and in finding the meaning of knowledge communication; (2) analyzing the integrity and coherence of a story through its context reveals its transformative power and achieves the tension of ‘starting and ending’; (3) ChatGPT can be used to extract inspiration, arrange story compositions, and craft prompts that communicate with people and convey emotions. Therefore, new methods of knowledge construction will be among the effective approaches to AI learning in the age of artificial intelligence, providing new thinking and new expressions for interdisciplinary design and design education practice.

Keywords: artificial intelligence, task-oriented, contextualization, design education

Procedia PDF Downloads 26
13105 Coupled Hydro-Geomechanical Modeling of Oil Reservoir Considering Non-Newtonian Fluid through a Fracture

Authors: Juan Huang, Hugo Ninanya

Abstract:

Oil has been used for many years as a source of energy and as a feedstock for materials such as asphalt and rubber, which is why new technologies have been implemented over time. However, research must continue to expand, as engineers face new challenges every day, such as unconventional reservoirs. Various numerical methodologies have been applied in petroleum engineering as tools to optimize reservoir production before drilling a wellbore, although not all of them are equally effective for studying fracture propagation. Analytical methods, such as those based on linear elastic fracture mechanics, fail to give reasonable predictions when simulating fracture propagation in ductile materials, whereas numerical methods based on the cohesive zone method (CZM) can represent the elastoplastic behavior of a reservoir through a constitutive model; therefore, predictions in terms of displacements and pressure are more reliable. In this work, a coupled hydro-geomechanical model of horizontal wells in fractured rock was developed using ABAQUS; both the extended finite element method and cohesive elements were used to represent predefined fractures in a 2-D model. A power law was adopted to represent the rheological behavior of the fluid (shear-thinning, power index < 1) through fractures, together with a leak-off rate permeating into the matrix. Results are presented in terms of fracture aperture and length, pressure within the fracture, and fluid loss. A higher infiltration rate into the matrix was observed as the power index decreased. Finally, a sensitivity analysis was performed to identify the most influential factor in fluid loss.
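The power-law rheology mentioned above can be made concrete: for an Ostwald-de Waele fluid, the apparent viscosity is mu_app = K * gamma_dot^(n-1). A minimal sketch, with illustrative parameter values:

```python
def apparent_viscosity(K, n, shear_rate):
    """Ostwald-de Waele power-law fluid: shear stress tau = K * gamma**n, so
    the apparent viscosity is mu_app = K * gamma**(n - 1). For a
    shear-thinning fluid (power index n < 1, as in the study) viscosity
    falls as shear rate rises; n = 1 recovers a Newtonian fluid."""
    return K * shear_rate ** (n - 1.0)
```

Lower apparent viscosity at a given shear rate is consistent with the higher leak-off into the matrix reported as the power index decreases.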

Keywords: fracture, hydro-geomechanical model, non-Newtonian fluid, numerical analysis, sensitivity analysis

Procedia PDF Downloads 200
13104 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia

Authors: Yonas Shuke Kitawa

Abstract:

Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study aimed to develop a predictive model that helps identify the spatio-temporal variation in malaria risk from multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating a neighborhood matrix that better represents the spatial correlation, by treating the areal units as the vertices of a graph and the neighbor relations as its edges. Furthermore, we used aggregated malaria counts in southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with malaria risk in the area, while the enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk from either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risk is less sensitive to environmental factors than that of P. falciparum.
Conclusion: Improved inference was gained by employing the proposed approach compared to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks of both species were observed in districts in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
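The neighborhood matrix the algorithm estimates plays the role sketched below: areal units are vertices and neighbor relations are edges of a graph. The helper name and the toy edge set are illustrative, not the paper's optimized solution:

```python
import numpy as np

def neighborhood_matrix(n_areas, edges):
    """Build the binary neighborhood (adjacency) matrix W used by a CAR
    prior: areal units are vertices and each neighbor relation is an edge.
    Under the border-sharing rule, `edges` would simply be the pairs of
    districts sharing a border; the graph-based optimization instead
    searches for the edge set that best captures the spatial correlation."""
    W = np.zeros((n_areas, n_areas), dtype=int)
    for i, j in edges:
        W[i, j] = W[j, i] = 1   # symmetric: j neighbors i and vice versa
    return W
```

The CAR random effects then smooth each district's risk toward the average of the districts marked as its neighbors in W.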

Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weighting matrix

Procedia PDF Downloads 69
13103 Agrowastes to Edible Hydrogels through Bio Nanotechnology Interventions: Bioactive from Mandarin Peels

Authors: Niharika Kaushal, Minni Singh

Abstract:

Citrus fruits contain an abundance of phytochemicals that can promote health. A substantial amount of agrowaste, primarily peels and seeds, is produced by the juice processing industries. This leftover agrowaste is a reservoir of nutraceuticals, particularly bioflavonoids, which render it antioxidant and potentially anticancerous. It is, therefore, favorable to utilize this biomass and contribute towards sustainability by deriving value-added products from it, nutraceuticals in this study. However, pre-systemic metabolism of flavonoids in the gastric phase limits the effectiveness of bioflavonoids derived from mandarin biomass. In this study, ‘kinnow’ mandarin (Citrus nobilis X Citrus deliciosa) biomass was explored for its flavonoid profile. This work entails supercritical fluid extraction and identification of bioflavonoids from mandarin biomass. Furthermore, to overcome the limitations of these flavonoids in the gastrointestinal tract, a double-layered vehicular mechanism comprising the fabrication of nanoconjugates and edible hydrogels was adopted. Total flavonoids in the mandarin peel extract, estimated by the aluminum chloride complexation method, amounted to 47.3 ± 1.06 mg/ml rutin equivalents. Mass spectral analysis revealed an abundance of polymethoxyflavones (PMFs), with nobiletin and tangeretin as the major flavonoids in the extract, followed by hesperetin and naringenin. Furthermore, the antioxidant potential was analyzed by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) method, which showed an IC50 of 0.55 μg/ml. Nanoconjugates were fabricated via the solvent evaporation method and then impregnated into hydrogels. Additionally, the release characteristics of the nanoconjugate-laden hydrogels in a simulated gastrointestinal environment were studied. The PLGA-PMF nanoconjugates exhibited a particle size between 200 and 250 nm with a smooth, spherical shape, as revealed by FE-SEM. The impregnated alginate hydrogels offered a dense network that ensured retention of the PLGA-PMF nanoconjugates, as confirmed by Cryo-SEM images. Rheological studies revealed the shear-thinning behavior of the hydrogels and their high resistance to deformation. Gastrointestinal studies showed a negligible 4.0% release of flavonoids in the gastric phase, followed by sustained release over the subsequent hours in the intestinal environment. Therefore, the enormous potential of recovering nutraceuticals from agro-processing wastes, further augmented by nanotechnological interventions that enhance the bioefficacy of these compounds, lays the foundation for developing value-added products and thereby contributes towards the sustainable use of agrowaste.

Keywords: agrowaste, gastrointestinal, hydrogel, nutraceuticals

Procedia PDF Downloads 90
13102 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste

Authors: Rajeev Ravindran, Amit K. Jaiswal

Abstract:

Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation in large quantities throughout the year creates a major environmental problem worldwide. The chemical composition of these wastes (up to 75% polysaccharide) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose and enzymes. In order to use lignocellulose as a raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which releases reducing sugars such as glucose and xylose. However, inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups and crystalline cellulose, contribute to its recalcitrance, leading to poor sugar yields upon enzymatic hydrolysis. A pre-treatment is therefore generally applied before enzymatic treatment to remove the recalcitrant components of the biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical and physico-chemical pre-treatments, and a sequential, combinatorial pre-treatment strategy combining two or more pre-treatments was also applied to attain maximum sugar yield. All pre-treated samples were analysed for total reducing sugars, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. In addition, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was monitored. Results showed that ultrasound treatment (31.06 mg/L) was the best pre-treatment method in terms of total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide in the pre-treated SPW. Finally, the results were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis with a cellulase-hemicellulase consortium. The sequential, combinatorial treatment performed better in terms of total reducing sugar yield and produced lower levels of inhibitory compounds, which may be because this mode of pre-treatment combines several mild treatments rather than relying on a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.

Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound

Procedia PDF Downloads 361
13101 Bulk Transport in Strongly Correlated Topological Insulator Samarium Hexaboride Using Hall Effect and Inverted Resistance Methods

Authors: Alexa Rakoski, Yun Suk Eo, Cagliyan Kurdak, Priscila F. S. Rosa, Zachary Fisk, Monica Ciomaga Hatnean, Geetha Balakrishnan, Boyoun Kang, Myungsuk Song, Byungki Cho

Abstract:

Samarium hexaboride (SmB6) is a strongly correlated mixed-valence material and Kondo insulator. In the resistance-temperature curve, SmB6 exhibits activated behavior from 4 to 40 K, after the Kondo gap forms. Below 4 K, however, the resistivity is temperature independent or only weakly temperature dependent due to the appearance of a topologically protected surface state. Current research suggests that the surface of SmB6 is conductive while the bulk is truly insulating, unlike conventional 3D topological insulators (TIs) such as Bi₂Se₃, which are plagued by bulk conduction due to impurities. To better understand why the bulk of SmB6 is so different from that of conventional TIs, this study employed a new method, called inverted resistance, to explore the lowest temperatures, together with standard Hall measurements for the rest of the temperature range. In the inverted resistance method, current flows from an inner contact to an outer ring, and voltage is measured outside of this outer ring. This geometry confines the surface current and allows measurement of the bulk resistivity even when the conductive surface dominates transport (below 4 K). The results confirm that the bulk of SmB6 is truly insulating down to 2 K. Hall measurements on a number of samples show consistent bulk behavior from 4 to 40 K but widely varying behavior above 40 K. This is attributed to a combination of the growth process and the purity of the starting material; the relationship between the high- and low-temperature behaviors is still being explored.

Keywords: bulk transport, Hall effect, inverted resistance, Kondo insulator, samarium hexaboride, topological insulator

Procedia PDF Downloads 157
13100 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms

Authors: Ramin Mansouri

Abstract:

Because river discharge regimes contrast with water demands, one of the best ways to use water resources is to regulate the natural flow of rivers by constructing dams to supply water needs. In optimal reservoir operation, considering multiple important goals simultaneously is essential. To study this method, statistical data for the Bakhtiari and Roudbar dams over 46 years (1955 to 2001) were used. Initially, an appropriate objective function was specified, and the rule curve was developed using the differential evolution (DE) algorithm. The operation policy based on rule curves was then compared to the standard operating policy. The proposed method distributed the deficit across the whole year, so the least damage was inflicted on the system. The standard deviation of the monthly shortfall in each year was smaller with the proposed algorithm than with the other two methods. The results show that median values of the coefficients F and Cr provide the optimum situation and prevent the DE algorithm from being trapped in a local optimum. The most optimal values are 0.6 and 0.5 for the F and Cr coefficients, respectively. After finding the best combination of F and Cr values, the algorithm was examined with independent population sizes; populations of 4, 25, 50, 100, 500 and 1000 members were studied over two generation counts (G = 50 and 100). The results indicate that a generation number of 200 is suitable for optimization. Runtime increases almost linearly with population size, which indicates the effect of population on the algorithm's runtime; hence, specifying a suitable population size to obtain optimal results is very important. The standard operating policy had a better reliability percentage but inflicted severe vulnerability on the system. The results obtained in low-rainfall years were very good compared to the other comparative methods.
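A minimal DE/rand/1/bin sketch with the reported optimal coefficients (F = 0.6, Cr = 0.5) illustrates the algorithm; the objective below is a toy sphere function, not the paper's reservoir operation objective:

```python
import random

def differential_evolution(f, bounds, pop_size=25, F=0.6, Cr=0.5, gens=100, seed=1):
    """Minimal DE/rand/1/bin: mutate with three distinct random vectors,
    binomially cross over with the target, then keep the better of trial
    and target (greedy one-to-one selection)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            j_rand = rng.randrange(dim)          # guarantee one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < Cr or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    v = min(max(v, bounds[j][0]), bounds[j][1])  # clamp to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

# Illustrative run on the 2-D sphere function
best = differential_evolution(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

Too small an F or Cr slows exploration and risks stagnation in a local optimum, which is the trade-off the study probes.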

Keywords: reservoirs, differential evolution, dam, optimal operation

Procedia PDF Downloads 73
13099 Impact of Ethnoscience-Based Teaching Approach: Thinking Relevance, Effectiveness and Learner Retention in Physics Concepts of Optics

Authors: Rose C.Anamezie, Mishack T. Gumbo

Abstract:

Physics learners’ poor retention, which culminates in poor achievement due to teaching approaches unrelated to learners’ cultures in non-Western contexts, warranted this study. Its tenet was to determine the effectiveness of the ethnoscience-based teaching (EBT) approach on learners’ retention of the Physics concept of Optics in the Awka Education zone of Anambra State, Nigeria. Two research questions and three null hypotheses, tested at a 0.05 level of significance, guided the study. A quasi-experimental design was adopted, specifically a non-equivalent control group design. The population for the study was 4,825 SS2 Physics learners in the zone, from which 160 SS2 learners were sampled using purposive and random sampling. The experimental group was taught rectilinear propagation of light (RPL) using the EBT approach, while the control group was taught the same topic using the lecture method. The instrument for data collection was the 50-item Physics Retention Test (PRT), which was validated by three experts and tested for reliability using the Kuder-Richardson formula 20, yielding a coefficient of 0.81. The data were analysed using means, standard deviations and analysis of covariance (p < .05). The results showed higher retention with the EBT approach than with the lecture method, while gender had no significant effect on learners’ retention in Physics. It is recommended that the EBT approach, which bridged the gender gap in Physics retention, be adopted in secondary school teaching and learning, since it could transform science teaching, enhance learners’ construction of new science concepts from their existing knowledge, and bridge the gap between Western science and learners’ worldviews.
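The reliability coefficient reported above comes from the Kuder-Richardson formula 20. A minimal sketch for dichotomous item scores; the function name and toy data are illustrative, not the study's data:

```python
def kr20(item_scores):
    """Kuder-Richardson formula 20 for dichotomous (0/1) item responses:
    r = (k / (k - 1)) * (1 - sum(p_j * q_j) / var_total),
    where p_j is the proportion answering item j correctly, q_j = 1 - p_j,
    and var_total is the (population) variance of examinees' total scores.
    item_scores: one list of 0/1 item scores per examinee."""
    k = len(item_scores[0])                       # number of items
    n = len(item_scores)                          # number of examinees
    totals = [sum(person) for person in item_scores]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    pq = 0.0
    for j in range(k):
        p = sum(person[j] for person in item_scores) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)
```

Values near 1 (such as the study's 0.81) indicate that the test items consistently rank the same examinees high or low.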

Keywords: ethnoscience-based teaching, optics, rectilinear propagation of light, retention

Procedia PDF Downloads 78
13098 Sea Protection: Using Marine Algae as a Natural Method of Absorbing Dye Textile Waste

Authors: Ariana Kilic, Serena Arapyan

Abstract:

Water pollution is a serious concern in seas around the world, and one major cause is textile dye waste mixing with seawater. This common occurrence alters aquatic life, endangering organisms and deteriorating the nature of the water. There is a significant need for a natural approach to reduce the amount of textile dye waste in seawater and ensure the safety of marine organisms. Using marine algae is a viable solution, since the algae can eliminate the excess waste by absorbing the dye. Marine algae are also non-vascular organisms that absorb water and nutrients directly, so using them as absorbers is a natural process, and no inorganic matter that could cause further pollution is added to the seawater. To test the efficiency of this approach, the optical absorbance of seawater samples was measured before and after the addition of marine algae using colorimetry. A colorimeter finds the concentration of a chemical compound in a solution by measuring the absorbance of the compound at a specific wavelength. Seawater samples with equal volumes of water and added textile dye served as the constant variables. The initial and final absorbances (the dependent variable) were measured before and after the addition of the marine algae (the independent variable). A lower absorbance indicated a lower dye concentration and therefore showed that the marine algae had absorbed the dye. The same experiment was repeated with the same amount of water but different dye concentrations in order to determine the maximum concentration of dye the marine algae can completely absorb. The diminished dye concentration demonstrated that pollution caused by factory dye waste could be mitigated by the natural method of marine algae. The involvement of marine algae is thus an optimal, organic strategy for absorbing dye wastes in seas and obstructing water pollution.
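The colorimetric reasoning above rests on the Beer-Lambert law, A = εlc, under which a drop in absorbance maps directly to a drop in dye concentration. A minimal sketch; the function names and numbers are illustrative assumptions:

```python
def dye_concentration(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert law: A = epsilon * l * c, so c = A / (epsilon * l).
    epsilon is the molar absorptivity at the measurement wavelength and
    path_cm the cuvette path length."""
    return absorbance / (epsilon * path_cm)

def percent_removed(a_before, a_after):
    """Dye removal efficiency from absorbance readings at the dye's peak
    wavelength, assuming absorbance is proportional to concentration."""
    return 100.0 * (a_before - a_after) / a_before
```

Comparing readings before and after the algae are introduced gives the removal efficiency without any chemical analysis of the water.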

Keywords: water pollution, dye textile waste, marine algae, absorbance, colorimetry

Procedia PDF Downloads 15
13097 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in Cubesat and Satellites

Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan

Abstract:

All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Because heat absorption, transfer, and emission are concentrated on one side, the satellite surface presents an asymmetric temperature distribution and causes a change in momentum, which can manifest in spinning and non-spinning satellites in different manners. This problem can cause orbital decay, which, if not corrected, will interfere with the satellite's primary objective. The thermal analysis of any satellite requires data from the power budget for each component used, because each component has different power requirements and is used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating for the satellite, and the way it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun absorbs heat (a net gain), and any part facing away radiates heat (a net loss). We can use state-of-the-art foldable hybrid insulator/radiator panels: when a panel is opened, that side acts as a radiator for dissipating heat. Here the insulator, in our case aerogel, is sandwiched with solar cells and radiator fins (solar cells outside, radiator fins inside). Each insulated side panel can be opened and closed with actuators depending on the CubeSat's telemetry data. The opening and closing of the panels are controlled by code designed for this particular application, in which the computer calculates where the Sun is relative to the satellite. According to the data obtained from the sensors, the computer decides which panel to open and by how many degrees. For example, if a panel opens 180 degrees, its solar cells face the Sun directly, in turn increasing the current generated by that particular panel. When a corner of the CubeSat faces the Sun, or when more than one side receives a considerable amount of incident sunlight, the code analyzes the optimum opening angle for each panel and adjusts accordingly. Another means of cooling is passive cooling, the most suitable system for a CubeSat because of its limited power budget, low mass requirements, and less complex design; it also has advantages in terms of reliability and cost. One passive means is to make the whole chassis act as a heat sink. For this, the entire chassis can be made of heat pipes, with the heat source connected to the chassis by a thermal strap that transfers the heat.
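A hypothetical sketch of the panel-angle decision described above: each panel opens in proportion to the cosine of the Sun's incidence angle on its face. The proportional rule and the function name are assumptions for illustration, not the authors' actual control code:

```python
import math

def panel_open_angles(sun_vec, face_normals, max_open=180.0):
    """For each insulated side panel, open toward the Sun in proportion to
    the cosine of the incidence angle on that face; faces pointing away
    stay closed (0 degrees) so their aerogel layer keeps insulating.
    sun_vec is the Sun direction in the body frame; face_normals are unit
    outward normals of the panels."""
    sx, sy, sz = sun_vec
    norm = math.sqrt(sx * sx + sy * sy + sz * sz)
    angles = []
    for nx, ny, nz in face_normals:
        cos_inc = (sx * nx + sy * ny + sz * nz) / norm
        angles.append(max_open * max(cos_inc, 0.0))
    return angles
```

With the Sun over a corner, two adjacent faces get intermediate cosines, so both panels open partway, matching the multi-face case discussed above.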

Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite

Procedia PDF Downloads 93
13096 Investigation of Electrospun Composites Nanofiber of Poly (Lactic Acid)/Hazelnut Shell Powder/Zinc Oxide

Authors: Ibrahim Sengor, Sumeyye Cesur, Ilyas Kartal, Faik Nuzhet Oktar, Nazmi Ekren, Ahmet Talat Inan, Oguzhan Gunduz

Abstract:

In recent years, many researchers have focused on nano-size fiber production. Nanofibers have been studied due to their distinctive and superior physical, chemical, and mechanical properties. Poly (lactic acid) (PLA) is a biodegradable thermoplastic polyester derived from renewable sources and used in biomedical applications owing to its biocompatibility and biodegradability. In addition, zinc oxide is an antibacterial material, and hazelnut shell powder is a filling material. In this study, nanofibers were obtained by adding zinc oxide (ZnO) at different ratios and hazelnut shell powder at different concentrations into poly (lactic acid) (PLA) using electrospinning, the most common method for obtaining nanofibers. Granulated polylactic acid was dissolved at 1%, 2%, 3%, and 4% in chloroform solvent, homogenized with Tween and hazelnut shell powder at different ratios, and then electrospun into nanofibers. Scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC), physical analyses such as density, electrical conductivity, surface tension, and viscosity measurements, and an antimicrobial test were carried out after the production process. The resulting nanofiber structures possess antimicrobial, non-toxic, self-cleaning, and rigid properties, which are attractive for biomedical applications.

Keywords: electrospinning, hazelnut shell powder, nanofibers, poly (lactic acid), zinc oxide

Procedia PDF Downloads 158
13095 Passive Voice in SLA: Armenian Learners’ Case Study

Authors: Emma Nemishalyan

Abstract:

It is believed that learners’ mother tongue (L1 hereafter) has a huge impact on their second language acquisition (L2 hereafter). This hypothesis has been exposed to both positive and negative criticism. Based on research results from a wide range of learners’ corpora (Chinese, Japanese, and Spanish, among others), the hypothesis has either been proved or disproved. However, no such study has been conducted on Armenian learners. The aim of this paper is to understand the implications of the hypothesis for the Armenian learners’ corpus in terms of the use of the passive voice. To this end, the method of Contrastive Interlanguage Analysis (hereafter CIA) has been applied to a native speakers’ corpus (the Louvain Corpus of Native English Essays, LOCNESS) and an Armenian learners’ corpus compiled by the author in compliance with the International Corpus of Learner English (ICLE) guidelines. CIA compares the interlanguage (the language produced by learners) with the language produced by native speakers. With the help of this method, it is possible not only to highlight the mistakes that learners make but also to identify underuses and overuses. The choice of the grammar issue (the passive voice) is conditioned by the fact that Armenian and English are typologically drastically different, as they belong to different branches. Moreover, the passive voice is considered one of the most problematic grammar topics for learners of English to acquire. Based on this difference, we hypothesized that Armenian learners would either overuse or underuse some types of the passive voice. With the help of the Lancsbox software, we identified the frequency rates of passive voice usage in LOCNESS and the Armenian learners’ corpus to understand whether the latter have the same usage pattern of the passive voice as native speakers. Secondly, we identified the types of the passive voice used by the Armenian learners, trying to track down the reasons in their mother tongue.
The results of the study showed that Armenian learners underused the passive voice in contrast to native speakers. Furthermore, the hypothesis that learners’ L1 has an impact on their L2 acquisition and production was confirmed.
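The frequency comparison underlying CIA can be illustrated with a minimal sketch: raw passive-voice counts are normalized to occurrences per 10,000 tokens so that corpora of different sizes (LOCNESS versus the learner corpus) are directly comparable. All counts below are invented for illustration and are not the study's actual data:

```python
# Normalize raw corpus counts to a common base so that corpora of
# different sizes can be compared, as in CIA over/underuse analysis.

def per_10k(hits, corpus_size):
    """Relative frequency: occurrences per 10,000 tokens."""
    return hits / corpus_size * 10_000

# Hypothetical counts: (passive-voice hits, total tokens).
native = (1_500, 300_000)   # e.g. a native corpus such as LOCNESS
learner = (600, 200_000)    # e.g. a learner corpus

f_native = per_10k(*native)    # 50.0 per 10k tokens
f_learner = per_10k(*learner)  # 30.0 per 10k tokens

# A ratio below 1.0 indicates underuse by the learners, the pattern
# reported for the Armenian corpus in this study.
print(f_learner / f_native)
```

On these invented counts the ratio is 0.6, i.e. the learners use the passive at 60% of the native rate.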

Keywords: corpus linguistics, applied linguistics, second language acquisition, corpus compilation

Procedia PDF Downloads 100
13094 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China

Authors: Feng Yue, Fei Dai

Abstract:

With the rapid development of the urbanization process in China, environmental protection is under severe pressure, so analyzing and optimizing the landscape pattern is an important measure to ease the pressure on the ecological environment. This paper takes the Wuhan Urban Development Zone as the research object and studies its landscape pattern evolution and quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using Erdas software. Next, the landscape pattern indices at the landscape, class, and patch levels were studied based on Fragstats. Then, five indicators of the ecological environment based on the National Environmental Protection Standard of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. In addition, the cost distance analysis of ArcGIS was applied to simulate wildlife migration, thus indirectly measuring the improvement of ecological environment quality. The results show that the area of construction land increased by 491%, while bare land, sparse grassland, forest, farmland, and water decreased by 82%, 47%, 36%, 25%, and 11%, respectively; they were mainly converted into construction land. At the landscape level, the landscape indices all showed a downward trend: the number of patches (NP), landscape shape index (LSI), connection index (CONNECT), Shannon's diversity index (SHDI), and aggregation index (AI) decreased by 2778, 25.7, 0.042, 0.6, and 29.2%, respectively, indicating that the number of patches, the degree of aggregation, and the landscape connectivity declined. At the class level, for construction land and forest, CPLAND, TCA, AI, and LSI ascended, but the distribution statistics of core area (CORE_AM) decreased. For farmland, water, sparse grassland, and bare land, CPLAND, TCA, DIVISION, patch density (PD), and LSI descended, yet patch fragmentation and CORE_AM increased.
At the patch level, the patch area, patch perimeter, and shape index of water, farmland, and bare land continued to decline. These three indices of forest patches increased overall, those of sparse grassland decreased as a whole, and those of construction land increased. It is obvious that urbanization greatly influenced the landscape evolution: the ecological diversity and landscape heterogeneity of ecological patches clearly dropped, and the Habitat Quality Index continuously declined by 14%. Therefore, an optimization strategy based on greenway network planning is proposed for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to research on the spatial layout of urbanization.
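As one concrete example of the landscape-level metrics computed in Fragstats, Shannon's diversity index (SHDI) can be sketched from the class proportions of a land-cover raster. The toy grid and class labels below are invented for illustration, not data from the study:

```python
import numpy as np

# SHDI = -sum(p_i * ln(p_i)), where p_i is the proportion of the
# landscape occupied by land-cover class i. Higher values indicate
# a more diverse landscape; 0 means a single uniform class.

def shdi(landcover):
    """Compute Shannon's diversity index over a land-cover grid."""
    _, counts = np.unique(np.asarray(landcover), return_counts=True)
    p = counts / counts.sum()          # class proportions
    return float(-(p * np.log(p)).sum())

# Toy 4x4 grid: 1 = construction land, 2 = forest, 3 = water.
grid = [[1, 1, 2, 2],
        [1, 1, 2, 3],
        [1, 2, 2, 3],
        [1, 1, 3, 3]]

print(round(shdi(grid), 3))
```

A declining SHDI over the 1990-2015 rasters would reflect the loss of landscape diversity reported above.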

Keywords: landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture

Procedia PDF Downloads 159