Search results for: information extraction evaluation method
30303 Validation of the X-Ray Densitometry Method for Radial Density Pattern Determination of Acacia seyal var. seyal Tree Species
Authors: Hanadi Mohamed Shawgi Gamal, Claus Thomas Bues
Abstract:
Wood density is a variable that influences many of the technological and quality properties of wood, and understanding the pattern of its radial variation is important for end-use decisions. The X-ray densitometry technique has traditionally been applied to softwood species to assess wood quality properties, owing to their simple and relatively uniform wood structure. Very limited information is available, however, on the validity of this technique for hardwood species. Establishing its suitability for hardwood density determination has special significance in countries like Sudan, where only a few timbers are well characterized. It would not only save the time consumed by traditional methods but also support the investigation of the large number of lesser-known species, helping to fill the considerable gap in information on hardwoods growing in Sudan. The current study aimed to evaluate the validity of the X-ray densitometry technique for determining the radial variation of wood density in Acacia seyal var. seyal. To this end, a total of thirty trees were collected randomly from four states in Sudan. The radial trend of wood density was determined using basic density as well as density obtained by X-ray densitometry, in order to assess the validity of the X-ray technique for determining radial density variation. The results showed that the radial density trend obtained by the X-ray technique is very similar to that obtained from basic density. These results confirm the validity of the X-ray technique for determining the radial density trend of Acacia seyal var. seyal and support its suitability for other hardwood species.
Keywords: x-ray densitometry, wood density, Acacia seyal var. seyal, radial variation
Procedia PDF Downloads 152
30302 Feasibility of Chicken Feather Waste as a Renewable Resource for Textile Dyeing Processes
Authors: Belayihun Missaw
Abstract:
Cotton cationization is an emerging approach to the environmental problems associated with the reactive dyeing of cotton. In this study, a keratin hydrolysate cationizing agent was extracted from chicken feathers and optimized to eliminate the use of salt during dyeing. Cotton was cationized with the extracted keratin hydrolysate and then dyed without salt. The effects of extraction conditions such as caustic soda concentration, temperature, and time on the protein yield from chicken feathers and on the colour strength (K/S) values were studied, and these process conditions were optimized. The optimum extraction conditions were 25 g/l caustic soda at 50 °C for 105 minutes, giving an average yield of 91.2% and a colour strength value of 4.32. The effects of salt addition, pH, and cationizing agent concentration on yield and colour strength were also studied and optimized. A slightly acidic condition with 4% (owf) cationizing agent gave better dyeability than conventional reactive dyeing of cotton. The physical properties of the cationized-dyed fabric were assessed, and the results reveal that cationization has an effect similar to normal dyeing of cotton. The cationization of cotton with keratin extract was found to be successful and economically viable.
Keywords: cotton materials, cationization, reactive dye, keratin hydrolysate
Procedia PDF Downloads 63
30301 Accessible Sustainability Assessment Tools and Approach of the University level Academic Programs
Authors: S. K. Ashiquer Rahman
Abstract:
The innovative knowledge threshold has significantly shifted education from traditional to online delivery, an emergent state of the art for academic programs at any higher education institution; this situation raises the importance of deliberately integrating education, knowledge, technology, and sustainability through knowledge platforms such as ePLANETe. This paper presents the concept of 'ePLANETe', an innovative knowledge platform, and its functionalities as an experimental digitized platform contributing to the sustainability assessment of academic programs of higher education institutions (HEI). It also assesses and defines common sustainable development challenges of higher education (HE) and identifies effective tools and approaches of 'ePLANETe' that enable sustainability assessment of academic programs through deliberation methodologies. To investigate the effectiveness of the knowledge tools and approach of 'ePLANETe', sustainability challenges, digitized pedagogical content, and the evaluation of academic programs of two public universities in France were studied through the 'ePLANETe' evaluation space. The investigation indicated that the tools and approach of 'ePLANETe' fit well the quality assessment of academic programs, the implementation of sustainability challenges, and the dynamic balance of the ecosystem within university communities and academic programs through the 'ePLANETe' evaluation process and space. The study suggests that relevant higher education authorities and policymakers could use this approach and these tools to assess sustainability and enhance the sustainability competencies of academic programs for quality education.
Keywords: ePLANETe, deliberation, evaluation, competencies
Procedia PDF Downloads 113
30300 Performance Evaluation of Routing Protocols in Vehicular Adhoc Networks
Authors: Salman Naseer, Usman Zafar, Iqra Zafar
Abstract:
This study explores the implications of Vehicular Adhoc Networks (VANET), one domain of Mobile Adhoc Networks (MANET), in rural and urban scenarios. VANET provides wireless communication between vehicles and with roadside units. The Federal Communications Commission of the United States has allocated 75 MHz of spectrum in the 5.9 GHz band for dedicated short-range communications (DSRC), specifically designed to support road safety and entertainment/information applications. Several vehicular projects, such as California PATH, the Car 2 Car Communication Consortium, ETSI, and the IEEE 1609 working group, have already been conducted to improve overall road safety and traffic management. Following a critical literature review, routing protocols were selected and their performance considered in urban and rural scenarios. Several routing protocols for VANET were applied in the current research, and their evaluation was carried out through simulation using the performance metrics of throughput and packet drop. Excel and Google graph API tools were used to plot graphs from the simulation results in order to compare the selected routing protocols with each other. In addition, the sum of the outputs from each scenario was computed to clearly present the divergence in results. The findings show that DSR gives enhanced performance, with low packet drop and high throughput, compared to AODV and DSDV in congested urban areas and in rural environments. In low-density areas, on the other hand, AODV gives better results than DSR. The worth of the current study lies in the fact that the information exchanged between vehicles is useful for comfort, safety, and entertainment. Furthermore, communication system performance depends on how routing is done in the network and on the routing protocols implemented. The results presented above lead to policy implications and develop our understanding of the broader spectrum of VANET.
Keywords: AODV, DSDV, DSR, Adhoc network
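The throughput and packet-drop comparison described above can be reproduced on any parsed simulation trace with a few lines of bookkeeping. The sketch below is a minimal illustration, assuming a simplified (event, packet id, size) trace format rather than the actual simulator log used in the study; the trace values are made up.

```python
# Minimal sketch of the throughput / packet-drop evaluation described above.
# The trace format and values are assumptions for illustration only.

def evaluate_protocol(events, sim_time_s):
    """events: list of (event, packet_id, size_bytes) tuples,
    where event is 'sent', 'received' or 'dropped'."""
    sent = sum(1 for e, _, _ in events if e == "sent")
    received_bytes = sum(size for e, _, size in events if e == "received")
    dropped = sum(1 for e, _, _ in events if e == "dropped")
    throughput_kbps = received_bytes * 8 / 1000.0 / sim_time_s
    drop_ratio = dropped / sent if sent else 0.0
    return throughput_kbps, drop_ratio

# Toy trace standing in for a parsed simulator log (hypothetical values)
trace_dsr = [("sent", 1, 512), ("received", 1, 512),
             ("sent", 2, 512), ("dropped", 2, 512)]
tp, dr = evaluate_protocol(trace_dsr, sim_time_s=900)
print(f"DSR: throughput={tp:.3f} kbps, packet drop ratio={dr:.0%}")
```

The same function can be applied to the AODV and DSDV traces of each scenario, and the per-scenario sums mentioned in the abstract are then simple aggregations of these two metrics.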
Procedia PDF Downloads 286
30299 The Repetition of New Words and Information in Mandarin-Speaking Children: A Corpus-Based Study
Authors: Jian-Jun Gao
Abstract:
Repetition serves a variety of functions in conversation. When young children first learn to speak, they often repeat words from the adult's most recent utterance, with both a learning and a social function. The objective of this study was to ascertain whether repetitions are equivalent in indicating attention to new words and to the initial mention of new information in conversation. Based on observations of naturally occurring language use in the Taiwan Corpus of Child Mandarin (TCCM), the results of this study provide empirical support for previous findings that children are more likely to repeat new words they are offered than to repeat new information. As children get older, the repetition of both new words and new information drops.
Keywords: acquisition, corpus, mandarin, new words, new information, repetition
Procedia PDF Downloads 149
30298 Steel Bridge Coating Inspection Using Image Processing with Neural Network Approach
Authors: Ahmed Elbeheri, Tarek Zayed
Abstract:
Steel bridge deterioration has been a problem in North America for years, mainly attributable to difficult weather conditions. Steel bridges suffer fatigue cracks and corrosion, which necessitate immediate inspection. Visual inspection is the most common technique for steel bridge inspection, but it depends on the inspector's experience, the conditions, and the work environment. Many Non-destructive Evaluation (NDE) models have therefore been developed that use non-destructive technologies to be more accurate, reliable, and less dependent on human judgment. Non-destructive techniques such as the eddy current method, the radiographic method (RT), the ultrasonic method (UT), infrared thermography, and laser technology have been used. In this work, digital image processing is used for corrosion detection as an alternative to visual inspection. Previous models have used grey-level or colored digital images for processing; however, color images proved better, as the color of rust distinguishes it from different backgrounds. Detecting rust is an important process, as it is the first warning of corrosion and a sign of coating erosion. To decide which steel elements should be repainted and how urgently, the percentage of rust should be calculated. In this paper, an image processing approach is developed to detect corrosion and its severity. Two models were developed: the first to detect rust and the second to estimate the rust percentage.
Keywords: steel bridge, bridge inspection, steel corrosion, image processing
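As a rough illustration of the color-based rust-detection idea, the sketch below thresholds reddish-brown hues in HSV space and reports the rust percentage. The threshold values and the image path are assumptions for illustration, not the calibrated model developed in the paper.

```python
# Illustrative color-based rust detection; the HSV thresholds and file name
# are rough assumptions, not the ranges or data used in the study.
import cv2
import numpy as np

def rust_percentage(image_path):
    img = cv2.imread(image_path)                     # BGR color image
    if img is None:
        raise FileNotFoundError(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Reddish-brown hues typical of rust (illustrative bounds)
    lower = np.array([0, 60, 40])
    upper = np.array([25, 255, 200])
    mask = cv2.inRange(hsv, lower, upper)
    # Remove small speckles so isolated pixels are not counted as rust
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return 100.0 * cv2.countNonZero(mask) / mask.size

print(f"Estimated rust coverage: {rust_percentage('girder.jpg'):.1f}%")
```

The resulting percentage is the kind of severity measure that can feed a repainting-priority decision or serve as a label for training the neural network model.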
Procedia PDF Downloads 306
30297 Evaluation of Aquifer Protective Capacity and Soil Corrosivity Using Geoelectrical Method
Authors: M. T. Tsepav, Y. Adamu, M. A. Umar
Abstract:
A geoelectric survey was carried out in parts of Angwan Gwari, on the outskirts of Lapai Local Government Area of Niger State, which belongs to the Nigerian Basement Complex, with the aim of evaluating the soil corrosivity, aquifer transmissivity, and protective capacity of the area, from which an aquifer characterisation was made. A G41 resistivity meter was employed to obtain fifteen Schlumberger Vertical Electrical Sounding (VES) data sets along profiles in a square grid network. The data were processed using Interpex 1-D sounding inversion software, which gives vertical electrical sounding curves with a layered model comprising apparent resistivities, overburden thicknesses, and depths. This information was used to evaluate the longitudinal conductance and transmissivities of the layers. The results show generally low resistivities across the survey area, with the average longitudinal conductance varying from 0.0237 Siemens at VES 6 to 0.1261 Siemens at VES 15, and almost the entire area giving values less than 1.0 Siemens. The average transmissivity values range from 96.45 Ω.m² at VES 4 to 299070 Ω.m² at VES 1. All but VES 4 and VES 14 had average overburden values greater than 400 Ω.m². These results suggest that the aquifers are highly permeable to fluid movement, leading to the possibility of enhanced migration and circulation of contaminants in the groundwater system, and that the area is generally corrosive.
Keywords: geoelectric survey, corrosivity, protective capacity, transmissivity
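The layer parameters returned by the inversion can be turned into the quantities quoted above with the standard Dar Zarrouk formulas. The sketch below is a minimal example with hypothetical layer resistivities and thicknesses, not values from the survey.

```python
# Dar Zarrouk parameters from a layered VES model (hypothetical layer values).
def longitudinal_conductance(layers):
    """layers: list of (resistivity_ohm_m, thickness_m) for the overburden."""
    return sum(h / rho for rho, h in layers)           # Siemens

def transverse_resistance(layers):
    return sum(h * rho for rho, h in layers)           # ohm*m^2

ves_model = [(45.0, 2.5), (120.0, 8.0), (80.0, 15.0)]  # illustrative only
S = longitudinal_conductance(ves_model)
T = transverse_resistance(ves_model)
print(f"Longitudinal conductance S = {S:.4f} S")
print(f"Transverse resistance  T = {T:.1f} ohm*m^2")
# A protective-capacity rating can then be assigned from S
# (e.g. S below about 0.1 S is often classed as poor, above 1 S as good).
```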
Procedia PDF Downloads 339
30296 Effect of Acetic Acid Fermentation on Bioactive Components and Anti-Xanthine Oxidase Activities in Vinegar Brewed from Monascus-Fermented Soybeans
Authors: Kyung-Soon Choi, Ji-Young Hwang, Young-Hee Pyo
Abstract:
Vinegars have been used as an alternative remedy for treating gout, but the scientific basis remains to be elucidated. In this study, acetic acid fermentation was applied for the first time to Monascus-fermented soybeans to examine its effect on the bioactive components and the xanthine oxidase inhibitory (XOI) activity of the resulting soy vinegar. The contents of total phenols (0.47-0.97 mg gallic acid equivalents/mL) and flavonoids (0.18-0.39 mg quercetin equivalents/mL) were determined spectrophotometrically, and the contents of organic acids (10.22-59.76 mg/mL) and isoflavones (6.79-7.46 mg/mL) were determined using HPLC-UV. The analytical method for ubiquinones (0.079-0.276 μg/mL) employed saponification before solvent extraction and quantification using LC-MS. Soy vinegar also showed significant XOI activity (95.3%) after 20 days of acetic acid fermentation at 30 °C. The results suggest that soy vinegar has potential as a novel medicinal food.
Keywords: acetic acid fermentation, bioactive component, soy vinegar, xanthine oxidase inhibitory activity
Procedia PDF Downloads 383
30295 Challenges in Early Diagnosis of Enlarged Vestibular Aqueduct (EVA) in Pediatric Population: A Single Case Report
Authors: Asha Manoharan, Sooraj A. O, Anju K. G
Abstract:
Enlarged vestibular aqueduct (EVA) refers to the presence of congenital sensorineural hearing loss with an enlarged vestibular aqueduct. The audiological symptoms of EVA are fluctuating and progressive in nature, and the diagnosis can be confirmed only with radiological evaluation. It is therefore difficult to differentiate EVA from conditions such as Meniere's disease or semicircular canal dehiscence on the basis of audiological findings alone. EVA in adults is easy to identify due to distinct vestibular symptoms. In children, EVA can remain unidentified or misdiagnosed until the vestibular symptoms become evident. Motor developmental delay, especially delay involving changes of body alignment, has been reported in the pediatric population with EVA, so it should be made mandatory to recommend radiological evaluation in young children with fluctuating hearing loss who present with motor developmental delay. This single case study of a baby with enlarged vestibular aqueduct (EVA) primarily aimed to address the following: a) challenges in diagnosing young patients with EVA and fluctuating hearing loss, b) the importance of radiological evaluation in audiological diagnosis in the pediatric population, c) the need to regularly monitor hearing, hearing aid performance, and cochlear implant mapping for potential fluctuations in such populations, and d) the importance of reviewing developmental and language milestones in very young children with fluctuating hearing loss.
Keywords: enlarged vestibular aqueduct (EVA), motor delay, radiological evaluation, fluctuating hearing loss, cochlear implant
Procedia PDF Downloads 167
30294 An Online Adaptive Thresholding Method to Classify Google Trends Data Anomalies for Investor Sentiment Analysis
Authors: Duygu Dere, Mert Ergeneci, Kaan Gokcesu
Abstract:
Google Trends data has gained increasing popularity in applications of behavioral finance, decision science, and risk management. Because of Google's wide range of use, Trends statistics provide significant information about investor sentiment and intention, which can be used as decisive factors in corporate and risk management. However, an anomaly, that is, a significant increase or decrease in a certain query, cannot be detected by state-of-the-art computational applications due to the random baseline noise of the Trends data, which is modelled as additive white Gaussian noise (AWGN). Since the baseline noise power changes gradually over time, an adaptive thresholding method is required to track and learn the baseline noise for correct classification. To this end, we introduce an online method to classify meaningful deviations in Google Trends data. Through extensive experiments, we demonstrate that our method successfully classifies various anomalies across many different data sets.
Keywords: adaptive data processing, behavioral finance, convex optimization, online learning, soft minimum thresholding
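A generic way to track a slowly varying noise baseline is an exponentially weighted running estimate of its mean and power, with the detection threshold scaled to the current noise level. The sketch below illustrates that idea on synthetic data; it is a simplified stand-in, not the soft minimum thresholding method proposed in the paper, and all parameter values are assumptions.

```python
# Generic online adaptive-threshold sketch: an exponentially weighted estimate
# of the baseline noise level sets a time-varying detection threshold.
import numpy as np

def classify_anomalies(series, warmup=20, alpha=0.05, k=3.0):
    mean = float(np.mean(series[:warmup]))
    var = float(np.var(series[:warmup]))
    flags = [False] * warmup
    for x in series[warmup:]:
        sigma = np.sqrt(var) + 1e-9
        is_anomaly = abs(x - mean) > k * sigma      # threshold adapts with sigma
        flags.append(is_anomaly)
        if not is_anomaly:                          # learn baseline from normal samples only
            mean = (1 - alpha) * mean + alpha * x
            var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return flags

rng = np.random.default_rng(0)
trend = 50 + rng.normal(0, 2, 300)                  # synthetic Trends-like query volume
trend[200] += 25                                    # injected anomaly
print([i for i, f in enumerate(classify_anomalies(trend)) if f])
```

Because the threshold follows the learned noise level rather than a fixed cutoff, the same code flags the injected spike while tolerating gradual drifts in the baseline.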
Procedia PDF Downloads 167
30293 Top-Down Construction Method in Concrete Structures: Advantages and Disadvantages of This Construction Method
Authors: Hadi Rouhi Belvirdi
Abstract:
The construction of underground structures using the traditional method, which begins with excavation and the execution of the foundation of the underground structure, continues with the construction of the main structure from the ground up, and concludes with the completion of the final ceiling, is known as the bottom-up method. In contrast, there is an advanced technique called the top-down method, which in recent years has practically replaced the traditional construction method in large projects in industrialized countries. Unlike the traditional approach, this method starts with the construction of the surrounding walls, the columns, and the final ceiling, and is completed with the excavation and construction of the foundation of the underground structure. Some of the most significant advantages of this method include the elimination or minimization of formwork surfaces, the removal of temporary bracing during excavation, the provision of some traffic facilities during construction, and the possibility of use in confined and high-traffic urban spaces. Despite these numerous advantages, awareness of this method in our country is still insufficient, to the extent that it can be confidently stated that most stakeholders in the construction industry are unaware of its existence, although it can serve as a very important execution option alongside other conventional methods for constructing underground structures. Given the extensive practical capabilities of this method, this article aims to present a methodology for constructing underground structures based on it to the scientific community of the country, examine its advantages and limitations and their impacts on time and cost, and discuss its application in urban spaces. Finally, some underground structures on the Ahvaz urban rail that, to the best of our knowledge, are being implemented using this advanced method will be introduced.
Keywords: top-down method, bottom-up method, underground structure, construction method
Procedia PDF Downloads 12
30292 Performance Improvement of Information System of a Banking System Based on Integrated Resilience Engineering Design
Authors: S. H. Iranmanesh, L. Aliabadi, A. Mollajan
Abstract:
Integrated resilience engineering (IRE) is capable of returning banking systems to the normal state under extensive economic circumstances. In this study, the information system of a large bank (with several branches) is assessed and optimized under severe economic conditions. Data envelopment analysis (DEA) models are employed to achieve the objective of this study. Nine IRE factors are considered as the outputs, and a dummy variable is defined as the input of the DEA models. A standard questionnaire was designed and distributed among executive managers, who are considered the decision-making units (DMUs). The reliability and validity of the questionnaire are examined based on Cronbach's alpha and a t-test. The most appropriate DEA model is determined based on average efficiency and a normality test. It is shown that the proposed integrated design provides higher efficiency than the conventional RE design. Results of the sensitivity and perturbation analysis indicate that self-organization, fault tolerance, and reporting culture compose about 50 percent of the total weight.
Keywords: banking system, Data Envelopment Analysis (DEA), Integrated Resilience Engineering (IRE), performance evaluation, perturbation analysis
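With a single dummy input equal to one for every DMU, the output-oriented CCR envelopment program reduces to maximizing the output expansion factor subject to the outputs of a convex combination of DMUs dominating the expanded outputs. The sketch below sets this up with scipy's linear-programming solver; the manager scores are made-up placeholders, and only three of the nine IRE factors are shown.

```python
# Output-oriented CCR DEA with a single dummy input, as described above.
# The scores below are illustrative placeholders, not the study's data.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(Y):
    """Y: (n_dmus, n_outputs) matrix of IRE factor scores; dummy input = 1 for all DMUs.
    Returns CCR output-oriented efficiency scores in (0, 1]."""
    n, m = Y.shape
    scores = []
    for o in range(n):
        c = np.zeros(n + 1)
        c[0] = -1.0                                   # maximize phi
        A_ub, b_ub = [np.r_[0.0, np.ones(n)]], [1.0]  # sum(lambda) <= 1 (dummy input)
        for r in range(m):                            # phi*y_ro - sum(lambda_j*y_rj) <= 0
            A_ub.append(np.r_[Y[o, r], -Y[:, r]])
            b_ub.append(0.0)
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(1.0 / res.x[0])                 # efficiency = 1/phi
    return np.array(scores)

Y = np.array([[4.1, 3.8, 4.5], [3.2, 4.0, 3.9], [4.4, 4.2, 4.1]])  # 3 DMUs x 3 factors
print(dea_efficiency(Y).round(3))
```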
Procedia PDF Downloads 188
30291 Tool Development for Assessing Antineoplastic Drugs Surface Contamination in Healthcare Services and Other Workplaces
Authors: Benoit Atge, Alice Dhersin, Oscar Da Silva Cacao, Beatrice Martinez, Dominique Ducint, Catherine Verdun-Esquer, Isabelle Baldi, Mathieu Molimard, Antoine Villa, Mireille Canal-Raffin
Abstract:
Introduction: Healthcare workers' exposure to antineoplastic drugs (AD) is a burning issue for occupational medicine practitioners. Biological monitoring of occupational exposure (BMOE) is an essential tool for assessing the AD contamination of healthcare workers. In addition to BMOE, surface sampling is a useful tool for understanding how workers get contaminated, identifying sources of environmental contamination, verifying the effectiveness of surface decontamination procedures, and ensuring the monitoring of these surfaces. The objective of this work was to develop a complete tool comprising a surface sampling kit and an analytical method for quantifying AD traces. The development addressed three criteria: the capacity of the kit to sample in every professional environment (healthcare services, veterinary practices, etc.), the detection of very low AD traces with a validated analytical method, and the ease of use of the sampling kit regardless of the person in charge of sampling. Material and method: The ADs most used in terms of quantity and frequency were identified through an analysis of the literature and of the consumption of different hospitals, veterinary services, and home care settings. The type of adsorbent device, the surface moistening solution, and the mix of solvents for extracting AD from the adsorbent device were tested for maximal yield. AD quantification was achieved by ultra-high-performance liquid chromatography coupled with tandem mass spectrometry (UHPLC-MS/MS). Results: Based on their high frequency of use and their good coverage of the diverse activities across healthcare, 15 ADs (cyclophosphamide, ifosfamide, doxorubicin, daunorubicin, epirubicin, 5-FU, dacarbazine, etoposide, pemetrexed, vincristine, cytarabine, methotrexate, paclitaxel, gemcitabine, mitomycin C) were selected. The analytical method was optimized and adapted to obtain high sensitivity with very low limits of quantification (25 to 5000 ng/mL), equivalent to or lower than those previously published (for 13 of the 15 ADs). The sampling kit is easy to use and is provided with didactic support (online video and protocol paper). It proved effective, without inter-individual variation (n=5 samples/person; n=5 persons; p=0.85; ANOVA), regardless of the person in charge of sampling. Conclusion: This validated tool (sampling kit plus analytical method) is very sensitive, easy to use, and very didactic for controlling the chemical risk posed by AD. Moreover, BMOE permits targeted prevention. Used routinely, this tool is available for every occupational health intervention.
Keywords: surface contamination, sampling kit, analytical method, sensitivity
Procedia PDF Downloads 132
30290 The Isolation and Performance Evaluation of Yeast (Saccharomyces cerevisiae) from Raffia Palm (Raphia hookeri) Wine Used at Different Concentrations for Proofing of Bread Dough
Authors: Elizabeth Chinyere Amadi
Abstract:
Yeast (Saccharomyces cerevisiae) was isolated from the fermenting sap of raffia palm (Raphia hookeri) wine. Different concentrations of the yeast isolate were used to produce bread samples B, C, D, E, and F containing 2, 3, 4, 5, and 6 g of the yeast isolate, respectively; other ingredients were kept constant. Sample A, containing 2 g of commercial baker's yeast, served as the control. The proof heights, weights, volumes, and specific volumes of the dough and bread samples were determined. The bread samples were also subjected to sensory evaluation using a 9-point hedonic scale. Results showed that proof height increased with increasing concentration of the yeast isolate, that is, in direct proportion. Sample B, with the least concentration of the yeast isolate, had the least loaf height and volume of 2.80 cm and 200 cm³, respectively, but exhibited the highest loaf weight of 205.50 g. Sample A (commercial baker's yeast), however, had the highest loaf height and volume of 5.00 cm and 400 cm³, respectively. The sensory evaluation results showed that sample D compared favorably with sample A in all the organoleptic attributes tested (appearance, taste, crumb texture, crust colour, and overall acceptability) (p < 0.05). It is recommended that 4 g of compressed yeast isolate per 100 g of flour could be used to proof dough as a substitute for commercial baker's yeast and produce acceptable bread loaves.
Keywords: isolation of yeast, performance evaluation of yeast, Raffia palm wine, used at different concentrations, proofing of bread dough
Procedia PDF Downloads 318
30289 The Bidirectional Effect between Parental Burnout and the Child’s Internalized and/or Externalized Behaviors
Authors: Aline Woine, Moïra Mikolajczak, Virginie Dardier, Isabelle Roskam
Abstract:
Background information: Becoming a parent is said to be the happiest event one can ever experience in one's life. This popular (and almost absolute) truth, which no reasonable and decent human being would dare question for fear of being singled out as a bad parent, contrasts with the nuances that reality offers. Indeed, while many parents thrive in their parenting role, others falter and become progressively overwhelmed by it, ineluctably caught in a spiral of exhaustion. Parental burnout (henceforth PB) sets in when parental demands (stressors) exceed parental resources. While it is now generally acknowledged that PB affects the parent's behavior in terms of neglect of and violence toward their offspring, little is known about the impact the syndrome might have on children's internalized (anxious and depressive symptoms, somatic complaints, etc.) and/or externalized (irritability, violence, aggressiveness, conduct disorder, oppositional disorder, etc.) behaviors. Furthermore, at the time of writing and to the best of our knowledge, no research has yet tested the reverse effect, namely that of the child's internalized and/or externalized behaviors on the onset and/or maintenance of parental burnout symptoms. Goals and hypotheses: The present pioneering research proposes to fill an important gap in the existing PB literature by investigating the bidirectional effect between PB and the child's internalized and/or externalized behaviors. Relying on a cross-lagged longitudinal study with three waves of data collection (4 months apart), our study tests a transactional model with bidirectional and recursive relations between the observed variables at the three waves, as well as autoregressive paths and cross-sectional correlations. Methods: As we write this, wave-two data are being collected via Qualtrics, and we expect a final sample of about 600 participants composed of French-speaking (snowball sample) and English-speaking (Prolific sample) parents. Structural equation modeling is employed using Stata version 17. In order to retain as much statistical power as possible, we use all available data and therefore apply maximum likelihood with missing values (mlmv) as the estimation method to compute the parameter estimates. To limit (insofar as possible) shared method variance bias in the evaluation of the child's behavior, the study relies on a multi-informant evaluation approach. Expected results: We expect our three-wave longitudinal study to show that PB symptoms (measured at T1) raise the occurrence/intensity of the child's externalized and/or internalized behaviors (measured at T2 and T3). We further expect the occurrence/intensity of the child's externalized and/or internalized behaviors (measured at T1) to augment the risk of PB (measured at T2 and T3). Conclusion: Should our hypotheses be confirmed, our results will make an important contribution to the understanding of both PB and children's behavioral issues, thereby opening interesting theoretical and clinical avenues.
Keywords: exhaustion, structural equation modeling, cross-lagged longitudinal study, violence and neglect, child-parent relationship
Procedia PDF Downloads 73
30288 Epileptic Seizure Prediction Focusing on Relative Change in Consecutive Segments of EEG Signal
Authors: Mohammad Zavid Parvez, Manoranjan Paul
Abstract:
Epilepsy is a common neurological disorder characterized by sudden recurrent seizures. The electroencephalogram (EEG) is widely used to diagnose possible epileptic seizures, and many research works have been devoted to predicting epileptic seizures by analyzing EEG signals. Seizure prediction from EEG signals is a challenging task due to the variation of brain signals across patients. In this paper, we propose a new approach to feature extraction based on phase correlation in EEG signals. In phase correlation, we calculate the relative change between two consecutive segments of an EEG signal and then combine the changes with neighboring signals to extract features. These features are then used to classify preictal/ictal and interictal EEG signals for seizure prediction. Experimental results show that the proposed method achieves a good prediction rate with greater consistency across different brain locations for the benchmark data set, compared to existing state-of-the-art methods.
Keywords: EEG, epilepsy, phase correlation, seizure
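The relative-change idea can be sketched as follows: phase-correlate each EEG window with the next one and summarize the resulting change values as features. The window length, the summary statistics, and the random stand-in signal below are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of the consecutive-segment feature idea: phase correlation between
# adjacent EEG windows as a measure of relative change (illustrative settings).
import numpy as np

def phase_correlation_peak(seg_a, seg_b):
    """Peak of the normalized cross-power spectrum between two segments."""
    fa, fb = np.fft.rfft(seg_a), np.fft.rfft(seg_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.fft.irfft(cross, n=len(seg_a))
    return float(np.max(corr))                # high peak = little relative change

def segment_features(signal, win=256):
    segments = [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]
    changes = [1.0 - phase_correlation_peak(a, b)
               for a, b in zip(segments, segments[1:])]
    return np.array([np.mean(changes), np.std(changes), np.max(changes)])

eeg = np.random.randn(4096)                   # stand-in for one EEG channel
print(segment_features(eeg))                  # feature vector for a classifier
```

The resulting per-channel feature vectors can then be fed to any standard classifier to separate preictal/ictal from interictal segments.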
Procedia PDF Downloads 309
30287 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is suspended dust that primarily has a negative effect on the respiratory system; it is responsible for attacks of coughing and wheezing, asthma, and acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland cannot boast of good air quality, in particular due to high PM concentration levels. Therefore, based on the dense network of Airly sensors, we address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNN) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was based mostly on the mean square error criterion; however, a number of other quantitative evaluation methods were taken into account during model validation. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense, distributed network of Airly measurement devices provides access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, and external forecasts of temperature and wind for the next 24 h served as input data. Due to the specificity of CNN-type networks, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers, in which convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study; the best-performing ones were selected, and a comparison was made with models based on linear regression. The numerical tests, carried out on real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than the currently used methods based on linear regression. Moreover, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
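A minimal model of the kind described, 24 hours of multichannel sensor features in and a 24-element hour-by-hour PM10 forecast out, can be sketched as follows. The layer sizes and the feature count are assumptions for illustration and do not correspond to any of the 1000+ architectures tested in the study.

```python
# Minimal 1-D CNN sketch: 24 hourly time steps of n_features sensor inputs,
# a 24-element PM10 forecast as output. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class PM10Net(nn.Module):
    def __init__(self, n_features=6, horizon=24):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 6, 128), nn.ReLU(),
            nn.Linear(128, horizon),             # hour-by-hour PM10 forecast
        )

    def forward(self, x):                        # x: (batch, n_features, 24)
        return self.head(self.conv(x))

model = PM10Net()
x = torch.randn(8, 6, 24)                        # batch of 8 sensor windows
print(model(x).shape)                            # torch.Size([8, 24])
loss = nn.MSELoss()(model(x), torch.randn(8, 24))  # mean-square-error criterion
```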
Procedia PDF Downloads 149
30286 Information Requirements for Vessel Traffic Service Operations
Authors: Fan Li, Chun-Hsien Chen, Li Pheng Khoo
Abstract:
Operators of a vessel traffic service (VTS) center provide three types of services to vessels: information service, navigational assistance, and traffic organization. To provide these services, operators monitor vessel traffic through a computer interface and provide navigational advice based on information integrated from multiple sources, including the automatic identification system (AIS), the radar system, and the closed-circuit television (CCTV) system. This information is therefore crucial to VTS operation. However, it is unclear what information the VTS operator actually needs to offer these services efficiently and properly. The aim of this study is to investigate the information requirements for VTS operation. To achieve this aim, field observation was carried out to elicit the information requirements. The study revealed that the most frequent and important tasks were handling arrival vessel reports, potential conflict control, and abeam vessel reports. Current location and vessel name were used in all tasks. Hazardous cargo information was particularly required when operators handle arrival vessel reports. The speed, course, and distance of two or several vessels were used only in potential conflict control. The information requirements identified in this study can be utilized in designing a human-computer interface that takes into consideration what information should be displayed and when, and might further be used to build the foundation of a decision support system for VTS.
Keywords: vessel traffic service, information requirements, hierarchy task analysis, field observation
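The task-by-task information requirements identified above translate naturally into a small data model for interface or decision-support design. The sketch below is illustrative only; the field names paraphrase the abstract, and the conflict-check distance threshold is an arbitrary assumption.

```python
# Illustrative data model for the information requirements described above;
# field names and the threshold are assumptions, not the study's interface.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VesselInfo:
    name: str                           # used in all tasks
    location: tuple                     # current position (lat, lon), used in all tasks
    hazardous_cargo: bool = False       # needed for arrival vessel reports
    speed_kn: Optional[float] = None    # needed only for potential conflict control
    course_deg: Optional[float] = None  # needed only for potential conflict control

def needs_conflict_check(a: VesselInfo, b: VesselInfo, distance_nm: float) -> bool:
    """Flag a vessel pair for the conflict-control task when kinematic data exist
    and the pair is close (the 2 nm threshold is purely illustrative)."""
    kinematics_known = None not in (a.speed_kn, a.course_deg, b.speed_kn, b.course_deg)
    return kinematics_known and distance_nm < 2.0

v1 = VesselInfo("MV Example", (1.265, 103.82), speed_kn=12.0, course_deg=85.0)
v2 = VesselInfo("MT Sample", (1.270, 103.83), hazardous_cargo=True,
                speed_kn=9.5, course_deg=260.0)
print(needs_conflict_check(v1, v2, distance_nm=1.1))
```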
Procedia PDF Downloads 251
30285 Evaluation and Selection of Drilling Technologies: An Application of Portfolio Analysis Matrix in South Azadgan Oilfield
Authors: M. Maleki Sadabad, A. Pointing, N. Marashi
Abstract:
Given the role and increasing importance of technology for national development, technology development has received systematic attention in recent decades. Markets nowadays face highly complicated and competitive conditions abroad; therefore, evaluating and selecting effective technologies and formulating a technology strategy have become vital for some organizations. This study introduces criteria for evaluating technological capability and technology attractiveness, especially for strategic technologies, and explains how technologies are evaluated and selected and how a suitable technology strategy is finally formulated for drilling in the South Azadegan oil field. The study first identifies the key challenges of the oil field in order to evaluate drilling technologies in South Azadegan, through interviews with industry experts, and then prioritizes them. Existing and new technologies were then identified to address the challenges of the South Azadegan oil field. In order to explore the capability, availability, and attractiveness of each technology, a questionnaire based on Julie indices was designed and distributed among industry experts. After determining the ability, availability, and attractiveness scores of each technology, obtained as the average of expert opinions, the technology portfolio was constructed using Morin's model. The matrix includes four areas, each of which follows a specific strategy. Finally, by analysing this matrix, technology options are suggested for selection and investment.
Keywords: technology, technology identification, drilling technologies, technology capability
Procedia PDF Downloads 143
30284 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverage close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays, and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI) problem. We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
Keywords: model selection inference, generalized information criteria, post model selection, asymptotic theory
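The upper-quantile computation for the minimum of jointly Gaussian GIC statistics can be reproduced outside R as well; the sketch below is a Python analogue of the mvtnorm calculation, with a made-up mean vector and covariance matrix standing in for the asymptotic parameters.

```python
# Upper quantile of the minimum of jointly normal GIC statistics.
# The mean and covariance below are placeholders for the asymptotic parameters.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq

mu = np.array([0.0, 0.3, 0.8])                   # asymptotic means of the GICs
Sigma = np.array([[1.0, 0.6, 0.4],
                  [0.6, 1.0, 0.5],
                  [0.4, 0.5, 1.0]])              # asymptotic covariance

def cdf_min(q):
    """P(min_i X_i <= q) = 1 - P(all X_i > q), computed via the CDF of -X."""
    return 1.0 - multivariate_normal.cdf(-q * np.ones(len(mu)), mean=-mu, cov=Sigma)

alpha = 0.05
upper_q = brentq(lambda q: cdf_min(q) - (1 - alpha), -10, 10)
print(f"95% upper quantile of the minimum GIC: {upper_q:.3f}")
```

The same quantile is what the bootstrap approximates by resampling; here it is obtained directly from one multivariate Gaussian integral and a one-dimensional root search.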
Procedia PDF Downloads 89
30283 Value Co-Creation Model for Relationships Management
Authors: Kolesnik Nadezda A.
Abstract:
The research aims to elaborate a model for managing inter-organizational network relationships so as to maximize value co-creation. We propose a network management framework that requires evaluating network partners with respect to their position and role in the network and elaborating an appropriate relationship development strategy with partners in the network. The empirical research and validation are based on the case study method, including structured in-depth interviews with companies from the B2B market.
Keywords: inter-organizational networks, value co-creation, model, B2B market
Procedia PDF Downloads 456
30282 Evaluating Factors Influencing Information Quality in Large Firms
Authors: B. E. Narkhede, S. K. Mahajan, B. T. Patil, R. D. Raut
Abstract:
Information quality is a major performance measure for the Enterprise Resource Planning (ERP) system of any firm. This study identifies various critical success factors for information quality. The effect of critical success factors such as project management, reengineering efforts, and interdepartmental communication on information quality is analyzed using a multiple regression model. Quantitative data were collected from respondents in various firms through a structured questionnaire assessing information quality, project management, reengineering efforts, and interdepartmental communication. The validity and reliability of the data were ensured using techniques such as factor analysis and computation of Cronbach's alpha. The study gives the relative importance of each of the critical success factors. The findings suggest that, among the various factors influencing information quality, careful reengineering efforts are the most influential. This paper gives managers and practitioners clear insight into the relative importance of the critical success factors influencing information quality so that they can formulate a strategy at the beginning of an ERP system implementation.
Keywords: Enterprise Resource Planning (ERP), information systems (IS), multiple regression, information quality
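The two analysis steps named above, Cronbach's alpha for scale reliability and a multiple regression of information quality on the candidate factors, can be sketched as follows. The variable names and synthetic data are illustrative, not the study's questionnaire data.

```python
# Sketch of the reliability check and multiple regression described above,
# using synthetic placeholder data.
import numpy as np
import statsmodels.api as sm

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of Likert scores for one construct."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
n = 120
proj_mgmt = rng.normal(3.5, 0.6, n)
reengineering = rng.normal(3.2, 0.7, n)
interdept_comm = rng.normal(3.8, 0.5, n)
info_quality = (0.2 * proj_mgmt + 0.5 * reengineering
                + 0.3 * interdept_comm + rng.normal(0, 0.3, n))

X = sm.add_constant(np.column_stack([proj_mgmt, reengineering, interdept_comm]))
model = sm.OLS(info_quality, X).fit()
print(model.params)      # relative weights of the three factors
# Alpha on uncorrelated random items will be near zero; real scale items correlate.
print(cronbach_alpha(rng.integers(1, 6, size=(n, 5))))
```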
Procedia PDF Downloads 333
30281 Ionometallurgy for Recycling Silver in Silicon Solar Panel
Authors: Emmanuel Billy
Abstract:
This work is part of the CABRISS project (an H2020 project), which aims at developing innovative, cost-effective methods for the extraction of materials from the different sources of PV waste: Si-based panels, thin-film panels, and water-diluted Si slurries. Aluminum, silicon, indium, and silver, in particular, will be extracted from these wastes in order to constitute a materials feedstock that can later be used in a closed-loop process. The extraction of metals from silicon solar cells is often an energy-intensive process. It requires either smelting or leaching at elevated temperature, or the use of large quantities of strong acids or bases that require energy to produce. The energy input equates to a significant cost and an associated CO2 footprint, both of which it would be desirable to reduce. There is thus a need to develop more energy-efficient and environmentally compatible processes, and 'ionometallurgy' could offer such a set of environmentally benign processes for metallurgy. This work demonstrates that ionic liquids provide one such method, since they can be used to dissolve and recover silver. The overall process combines leaching, recovery, and the possibility of re-using the solution in a closed-loop process. This study aims to evaluate and compare different ionic liquids for leaching and recovering silver. An electrochemical analysis is first implemented to define the best system for Ag dissolution. The effects of temperature, concentration, and oxidizing agent are evaluated by this approach. A comparative study between the conventional approach (nitric acid, thiourea) and the ionic liquids (Cu and Al), focused on leaching efficiency, is then conducted. Particular attention has been paid to the selection of the ionic liquids. Electrolytes composed of chelating anions (Cl, Br, I) are used to facilitate the lixiviation, avoiding the solubility issues of metallic species and of classical additional ligands. This approach reduces the cost of the process and facilitates the re-use of the leaching medium. To define the most suitable ionic liquids, electrochemical experiments were carried out to evaluate the oxidation potential of the silver included in crystalline solar cells. Chemical dissolution of metals from crystalline solar cells was then performed for the most promising ionic liquids. After the chemical dissolution, electrodeposition was performed to recover silver in metallic form.
Keywords: electrodeposition, ionometallurgy, leaching, recycling, silver
Procedia PDF Downloads 247
30280 Multimodal Ophthalmologic Evaluation Can Detect Retinal Injuries in Asymptomatic Patients With Primary Antiphospholipid Syndrome
Authors: Taurino S. R. Neto, Epitácio D. S. Neto, Flávio Signorelli, Gustavo G. M. Balbi, Alex H. Higashi, Mário Luiz R. Monteiro, Eloisa Bonfá, Danieli C. O. Andrade, Leandro C. Zacharias
Abstract:
Purpose: To perform a multimodal evaluation, including the use of Optical Coherence Angiotomography (OCTA), in patients with primary antiphospholipid syndrome (PAPS) without ocular complaints and to compare them with healthy individuals. Methods: A complete structural and functional ophthalmological evaluation, using OCTA and a microperimetry (MP) exam, was performed in patients with PAPS followed at a tertiary rheumatology outpatient clinic. All ophthalmologic manifestations were recorded, and statistical analysis was then performed for comparative purposes; p < 0.05 was considered statistically significant. Results: 104 eyes of 52 subjects (26 patients with PAPS without ocular complaints and 26 healthy individuals) were included. Among the PAPS patients, 21 were female (80.8%) and 21 (80.8%) were Caucasian. Thrombotic PAPS was the main clinical criteria manifestation (100%); 65.4% had venous and 34.6% had arterial thrombosis. Obstetrical criteria were present in 34.6% of all thrombotic PAPS patients. Lupus anticoagulant was present in all patients. 19.2% of PAPS patients presented ophthalmologic findings, versus none of the healthy individuals. The most common retinal change was paracentral acute middle maculopathy (PAMM) (3 patients, 5 eyes), followed by drusen-like deposits (1 patient, 2 eyes) and pachychoroid pigment epitheliopathy (1 patient, 1 eye). Systemic hypertension and hyperlipidemia were both present in 100% of the PAPS patients with PAMM, while only six patients (26.1%) with PAPS without PAMM presented these two risk factors together. In the quantitative OCTA evaluation, we found significant differences between PAPS patients and controls in both the superficial vascular complex (SVC) and the deep vascular complex (DVC) in the high-speed protocol, as well as in the SVC in the high-resolution protocol. In the analysis of the foveal avascular zone (FAZ) parameters, the PAPS group had a larger FAZ area in the DVC using the high-speed method compared to the control group (p = 0.047). In the quantitative analysis of the MP, the PAPS group had lower central (p = 0.041) and global (p < 0.001) retinal sensitivity compared to the control group, as well as in the sector analysis, with the exception of the inferior sector. In the quantitative evaluation of fixation stability, there was a trend toward worse stability in the PAPS subgroup with PAMM in both methods studied. Conclusions: PAMM was observed in 11.5% of PAPS patients with no previous ocular complaints. Systemic hypertension concomitant with hyperlipidemia was the risk factor most commonly associated with PAMM in patients with PAPS. PAPS patients present lower vascular density and retinal sensitivity than the control group, even in patients without PAMM.
Keywords: antiphospholipid syndrome, optical coherence angio tomography, optical coherence tomography, retina
Procedia PDF Downloads 80
30279 The Role of Management Information Systems in the Strategic Management of Institutions of Higher Education
Authors: Szilvia Vincze, Zoltán Bács
Abstract:
It has become increasingly important for institutions of higher education to use available resources as effectively as possible to implement the institution's strategic plans and, at the same time, to ensure a stable future. This is the responsibility of the management and administration of the institution. Access to complete and comprehensive information is indispensable for making dynamic and well-founded decisions that treat the realization of objectives as primary and that manage possibly emerging risks. The present paper introduces the role of Management Information Systems (MIS) at the University of Debrecen, one of the largest institutions of higher education in Hungary, and discusses the utilization of this and associated information systems in management functions.
Keywords: management information system (MIS), higher education, Hungary, strategy formulation
Procedia PDF Downloads 505
30278 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data
Authors: M. A. Meslem
Abstract:
For precise geoid determination, a reference field is used to subtract the long and medium wavelengths of the gravity field from the observational data when the remove-compute-restore technique is applied. A comparison study between candidate models should therefore be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models have been selected for this comparison study over Northern Algeria: the Earth Gravitational Model (EGM2008) and the Global Gravity Model (GECO), the latter conceived as a combination of the first model with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the study area were used to compute residual data with both gravity field models, and a Digital Terrain Model (DTM) was used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions and to fit them to a closed form in order to compare their statistical behavior in both cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled benchmark points using least squares adjustment. The results described in detail in this paper regarding these two models point to a slight overall advantage of the GECO global model, through the comparison of error degree variances and ground-truth evaluation.
Keywords: quasigeoid, gravity anomalies, covariance, GGM
Procedia PDF Downloads 137
30277 The Use of Geographically Weighted Regression for Deforestation Analysis: Case Study in Brazilian Cerrado
Authors: Ana Paula Camelo, Keila Sanches
Abstract:
Geographically Weighted Regression (GWR) was proposed in the geography literature to allow the relationships in a regression model to vary over space. In Brazil, the agricultural exploitation of the Cerrado biome is the main cause of deforestation. In this study, we propose a methodology using geostatistical methods to characterize the spatial dependence of deforestation in the Cerrado based on agricultural production indicators. The set of exploratory spatial data analysis (ESDA) tools was therefore used, together with confirmatory analysis using GWR. A non-spatial model was first calibrated, with evaluation of the nature of the regression curve, selection of the variables by a stepwise process, and multicollinearity analysis. After evaluation of the non-spatial model, the spatial regression model was processed, with statistical evaluation of the intercept and verification of its effect on the calibration. A Spearman's correlation analysis gave results of +0.783 between deforestation and livestock and +0.405 with soybeans. The model presented R² = 0.936 and showed a strong spatial dependence of agricultural activity on soybeans associated with maize and cotton crops. GWR is a very effective tool, presenting results closer to the reality of deforestation in the Cerrado when compared with other analyses.
Keywords: deforestation, geographically weighted regression, land use, spatial analysis
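The core of GWR is a locally weighted least-squares fit at each location, with weights from a distance kernel. The sketch below is a minimal illustration with a Gaussian kernel, a fixed bandwidth, and toy data; real applications calibrate the bandwidth (e.g., by cross-validation or AICc) and use the actual municipal indicators.

```python
# Minimal GWR sketch: locally weighted least squares with a Gaussian kernel.
# Bandwidth and data are illustrative placeholders only.
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """coords: (n, 2); X: (n, p) with intercept column; y: (n,).
    Returns one coefficient vector per location."""
    n = len(y)
    betas = np.empty((n, X.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)         # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas

rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, size=(50, 2))              # municipality centroids
livestock = rng.uniform(0, 1, 50)
soy = rng.uniform(0, 1, 50)
deforestation = 0.6 * livestock + 0.3 * soy + rng.normal(0, 0.05, 50)
X = np.column_stack([np.ones(50), livestock, soy])
local_betas = gwr_coefficients(coords, X, deforestation, bandwidth=25.0)
print(local_betas.mean(axis=0))                         # average local coefficients
```

The spatial variation of the local coefficients is what distinguishes GWR from the single global fit of an ordinary regression.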
Procedia PDF Downloads 363
30276 Interoperability Model Design of Smart Grid Power System
Authors: Seon-Hack Hong, Tae-Il Choi
Abstract:
Interoperability is the ability of systems, components, and devices developed by different entities to exchange information smoothly and function organically without mutual consultation: to communicate with each other and with computer systems of the same or different types, and to exchange information and use the information exchanged without extra effort. Shortcomings such as the duplication of functions when developing systems and applications, caused by the lack of interoperability in the electric power system, and the low efficiency caused by the lack of a mutual information transmission system between application programs are addressed, and the seamless linkage of newly developed systems is improved. Since interoperability must be secured for this purpose, we design a smart grid-based interoperability standard model in this paper.
Keywords: interoperability, power system, common information model, SCADA, IEEE2030, Zephyr
Procedia PDF Downloads 124
30275 Cultural and Natural Heritage Conservation by GIS Tourism Inventory System Project
Authors: Gamze Safak, Umut Arslanoglu
Abstract:
Cultural and tourism conservation and development zones and tourism centers are boundaries declared for the purpose of protecting, using, and evaluating areas where historical and cultural values are heavily concentrated and/or where tourism potential is high, with a view to sectoral and planned development. The most rapidly changing regions in Turkey are tourism areas, especially coastal areas. Planning these regions is not only about economic gain but also about the natural and physical environment, and it is a complex process. If the tourism sector is not well controlled, excessive use of natural resources and wrong location choices may damage natural areas, historical values, and the socio-cultural structure. The strategic decisions taken in the environmental order and zoning plans, through which the Ministry of Culture and Tourism, the authority empowered to make plans in tourism centers, guides the physical environment, are transformed into plan decisions with spatial expression; this requires a comprehensive evaluation of all kinds of data that follows historical development and is based on correct and current information. In addition, the authority has a number of competences in tourism promotion as well as the authority to plan, which makes it necessary to take part in applications requiring complex analysis, such as the management and integration of the country's economic, political, social, and cultural resources. For this purpose, the Tourism Inventory System (TES) project, which consists of a series of subsystems, was developed in order to solve complex planning and methodological problems in the management of site-related information. The scope of the project is based on the integration of numerical and verbal data in the regions within the jurisdiction of the authority and on monitoring the historical development of urban planning studies, making the spatial data of the institution easily accessible, shareable, queryable, and traceable to international standards. A dynamic and continuous system design has been put into practice, exploiting the use of Geographical Information Systems in the planning process to support the right decisions and to reveal tools for social, economic, and cultural development and for the preservation of natural and cultural values. This paper, prepared by the project team members of TES (Tourism Inventory System), presents a study on the applicability of GIS in cultural and natural heritage conservation.
Keywords: cultural conservation, GIS, geographic information system, tourism inventory system, urban planning
Procedia PDF Downloads 119
30274 Measuring the Quality of Business Education: Employment Readiness Assessment
Authors: Gulbakhyt Sultanova
Abstract:
Business education institutions assess the progress of their students by giving them grades for completed courses and calculating a Grade Point Average (GPA). Whether participation in these courses has led to the development of competences enabling graduates to compete successfully in the labor market should be measured using a new index: the Employment Readiness Assessment (ERA). The higher the ERA, the higher the quality of education at a business school. This is applied, empirical research conducted using a linear optimization method. The aim of the research is to identify factors that lead to the minimization of the deviation of GPA from ERA as well as to the maximization of ERA. ERA is composed of three components resulting from a test of proficiency in Business English, a test of work and personal skills, and a job interview simulation. The quality of education is improving if GPA approximates ERA and ERA increases. Factors that have had a positive effect on quality enhancement are the academic mobility of students and staff, practice-oriented courses taught by staff with work experience, and research-based courses taught by staff with research experience. ERA is a better index for measuring the quality of business education than traditional indexes such as GPA, due to its greater accuracy in assessing the level of graduates' competences demanded in the labor market. In optimizing the educational process in pursuit of quality enhancement, ERA has to be used in parallel with GPA to find out which changes worked and resulted in improvement.
Keywords: assessment and evaluation, competence evaluation, education quality, employment readiness
Procedia PDF Downloads 445