Search results for: abundance estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2403

783 IPO Valuation and Profitability Expectations: Evidence from the Italian Exchange

Authors: Matteo Bonaventura, Giancarlo Giudici

Abstract:

This paper analyses the valuation process of companies listed on the Italian Exchange in the period 2000-2009 at their Initial Public Offering (IPO). One of the most common valuation techniques declared in the IPO prospectus to determine the offer price is the Discounted Cash Flow (DCF) method. We develop a ‘reverse engineering’ model to discover the short-term profitability implied in the offer prices. We show that there is a significant optimistic bias in the estimation of future profitability compared to ex-post actual realization, and the mean forecast error is substantially large. Yet we show that a similar error also characterizes the estimates carried out by analysts evaluating non-IPO companies. The forecast error is larger the faster the recent growth of the company, the higher the leverage of the IPO firm, and the more companies have issued equity on the market. IPO companies generally exhibit better operating performance before the listing with respect to comparable listed companies, while after the flotation they do not perform significantly differently in terms of return on invested capital. Pre-IPO book building activity plays a significant role in partially reducing the forecast error and revising expectations, while the market price on the first day of trading does not contain information for further reducing forecast errors.
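The ‘reverse engineering’ idea can be illustrated with a toy calculation: invert a one-stage perpetuity DCF to recover the profit level implied by the offer price, then compare it with the ex-post realization. This is only a hedged sketch of the general approach, not the authors' model; the figures, WACC and growth rate below are hypothetical.

```python
def implied_profit(equity_value, wacc, growth):
    """Invert a perpetuity-style DCF: value = profit / (wacc - growth),
    so the profit implied by the offer price is value * (wacc - growth)."""
    if wacc <= growth:
        raise ValueError("WACC must exceed the growth rate")
    return equity_value * (wacc - growth)

def forecast_error(implied, realized):
    """Relative optimistic bias: positive when the implied profit
    exceeds the ex-post realization."""
    return (implied - realized) / abs(realized)

# Illustrative numbers (hypothetical): equity valued at 100 MEUR at the IPO
implied = implied_profit(100.0, wacc=0.08, growth=0.02)  # 6.0 MEUR implied profit
err = forecast_error(implied, realized=4.0)              # 0.5 -> 50% optimistic bias
```

The same inversion, applied prospectus by prospectus, yields the implied-profitability series against which realized profits are benchmarked.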

Keywords: initial public offerings, DCF, book building, post-IPO profitability drop

Procedia PDF Downloads 352
782 Jamun Juice Extraction Using Commercial Enzymes and Optimization of the Treatment with the Help of Physicochemical, Nutritional and Sensory Properties

Authors: Payel Ghosh, Rama Chandra Pradhan, Sabyasachi Mishra

Abstract:

Jamun (Syzygium cuminii L.) is one of the important indigenous minor fruits with high medicinal value. Jamun cultivation is unorganized, and a huge amount of the fruit is lost every year. The perishable nature of the fruit makes its postharvest management further difficult. Due to the strong cell wall structure of pectin-protein bonds and hard seeds, extraction of juice becomes difficult. Enzymatic treatment has been commercially used to improve juice quality with high yield. The objective of the study was to optimize the best treatment method for juice extraction. Enzymes (pectinase and tannase) from different strains were used, and for each enzyme the best result was obtained using response surface methodology. Optimization was carried out on the basis of physicochemical properties, nutritional properties, sensory quality and cost estimation. According to quality aspects, cost analysis and sensory evaluation, the optimal enzymatic treatment was pectinase from an Aspergillus aculeatus strain. The optimum condition for the treatment was 44 °C for 80 minutes at a concentration of 0.05% (w/w). At these conditions, a yield of 75% with turbidity of 32.21 NTU, clarity of 74.39 %T, polyphenol content of 115.31 mg GAE/g and protein content of 102.43 mg/g was obtained, with a significant difference in overall acceptability.
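The response-surface step can be sketched as follows: fit a second-order model of yield against temperature and time, then solve for the stationary point of the fitted surface. The data below are illustrative placeholders, not the paper's measurements, and the two-factor model is a simplification of the full optimization.

```python
import numpy as np

# Hypothetical screening data: (temperature degC, time min) -> juice yield (%).
X = np.array([[35, 60], [35, 100], [53, 60], [53, 100], [44, 80],
              [44, 60], [44, 100], [35, 80], [53, 80]], dtype=float)
y = np.array([62, 65, 64, 63, 75, 70, 71, 66, 67], dtype=float)

t, m = X[:, 0], X[:, 1]
# Second-order response-surface design matrix: 1, t, m, t^2, m^2, t*m
A = np.column_stack([np.ones_like(t), t, m, t**2, m**2, t * m])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point: set the gradient of the quadratic surface to zero
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
t_opt, m_opt = np.linalg.solve(H, g)
```

With a negative-definite Hessian the stationary point is the fitted optimum; here it lands near the centre point (about 44 °C, 80 min), mirroring how the reported optimum sits inside the experimental region.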

Keywords: enzymatic treatment, Jamun, optimization, physicochemical property, sensory analysis

Procedia PDF Downloads 296
781 Modelling Home Appliances for Energy Management System: Comparison of Simulation Results with Measurements

Authors: Aulon Shabani, Denis Panxhi, Orion Zavalani

Abstract:

This paper presents the modelling and development of a simulator for residential electrical appliances. The simulator is developed in MATLAB, providing the possibility to analyze and simulate the energy consumption of frequently used home appliances in Albania. The modelling of devices considers the impact of different factors, notably occupant behavior and climatic conditions. Most devices are modeled as an electric circuit, and the electric energy consumption is estimated from the solutions of the governing differential equations. The provided models refer to devices such as a dishwasher, oven, water heater, air conditioner, light bulbs, television, refrigerator, and water pump. The proposed model allows us to simulate in advance the energy behavior of the home devices with the largest consumption, in order to estimate peak consumption and support its reduction. Simulated home prototype results are compared to real measurements of a typical home. The results obtained from the simulator framework, compared to a typical household monitored using EmonTxV3, show the effectiveness of the proposed simulation. This conclusion will help future simulation of a large group of typical households for a better understanding of peak consumption.
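As a flavor of the circuit-style device models, the following is a minimal sketch of a thermostatically controlled water heater: a first-order thermal balance integrated with the forward Euler method. All parameters (tank capacitance, loss coefficient, set points) are hypothetical, not the paper's calibrated values.

```python
# Hypothetical water-heater parameters (illustrative only)
C = 150_000.0     # thermal capacitance of the tank [J/K]
U = 2.0           # heat-loss coefficient to ambient [W/K]
P = 2000.0        # heater electrical power [W]
T_amb, T_low, T_high = 20.0, 55.0, 60.0   # ambient and thermostat band [degC]

def simulate(hours, dt=1.0, T0=50.0):
    """Forward-Euler integration of C*dT/dt = q_in - U*(T - T_amb) with a
    thermostat; returns total electrical energy consumed [kWh]."""
    T, heater_on, energy_J = T0, False, 0.0
    for _ in range(int(hours * 3600 / dt)):
        if T <= T_low:
            heater_on = True
        elif T >= T_high:
            heater_on = False
        q_in = P if heater_on else 0.0
        energy_J += q_in * dt
        T += dt * (q_in - U * (T - T_amb)) / C
    return energy_J / 3.6e6

kwh = simulate(24)   # daily consumption of this single appliance
```

Summing such per-device simulations over an occupancy schedule gives the household load profile from which the peak is read off.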

Keywords: electrical appliances, energy management, modelling, peak estimation, simulation, smart home

Procedia PDF Downloads 164
780 Whole Coding Genome Inter-Clade Comparisons to Predict Global Cancer-Protecting Variants

Authors: Lamis Naddaf, Yuval Tabach

Abstract:

We identified missense genetic variants with the potential to enhance resistance against cancer. This field has not been widely explored, as researchers tend to investigate the mutations that cause diseases, in response to the suffering of patients, rather than the mutations that protect from them. In conjunction with the genomic revolution and the advances in genetic engineering and synthetic biology, identifying the protective variants will increase the power of genotype-phenotype predictions and have significant implications for improved risk estimation, diagnostics, prognosis, and even personalized therapy and drug discovery. To approach our goal, we systematically investigated the sites of the coding genomes and selected the alleles that showed a correlation with the species’ cancer resistance. Interestingly, we found several amino acids that are generally preferred (like proline) or avoided (like cysteine) by the resistant species. Furthermore, cancer resistance in mammals and reptiles is significantly predicted by the number of predicted protecting variants (PVs) a species has. Moreover, PV-enriched genes are enriched in pathways relevant to tumor suppression. For example, they are enriched in the Hedgehog signaling and silencing pathways, whose improper activation is associated with the most common form of cancer malignancy. We also showed that the PVs are mostly more abundant in healthy people compared to cancer patients across different human populations.

Keywords: cancer resistance, protecting variant, naked mole rat, comparative genomics

Procedia PDF Downloads 111
779 Monitoring Key Biomarkers Related to the Risk of Low Breastmilk Production in Women, Leading to a Positive Impact in Infant’s Health

Authors: R. Sanchez-Salcedo, N. H. Voelcker

Abstract:

Currently, low breast milk production in women is one of the leading health complications in infants. It has been demonstrated that exclusive breastfeeding, especially up to a minimum of 6 months, significantly reduces respiratory and gastrointestinal infections, which are the main causes of death in infants. However, current data show that a high percentage of women stop breastfeeding their children because they perceive an inadequate supply of milk, and only 45% of children are breastfed under 6 months. There is, therefore, a clear need to design and develop a biosensor sensitive and selective enough to identify and validate a panel of milk biomarkers that allow the early diagnosis of this condition. In this context, electrochemical biosensors could be a powerful tool for meeting all the requirements in terms of reliability, selectivity, sensitivity, cost efficiency and potential for multiplex detection. Moreover, they are suitable for the development of point-of-care (POC) devices and wearable sensors. In this work, we report the development of two types of sensing platforms towards several biomarkers, including miRNAs and hormones present in breast milk and dysregulated in this pathological condition. The first sensing platform consists of an enzymatic sensor for the detection of lactose, one of the main components of milk. In this design, we used a gold surface as the electrochemical transducer due to its several advantages, such as the variety of strategies available for its rapid and efficient functionalization with bioreceptors or capture molecules. For the second sensing platform, a nanoporous silicon (pSi) film was chosen as the electrode material for the design of DNA sensors and aptasensors targeting miRNAs and hormones, respectively.
The pSi matrix offers a large surface area with an abundance of active sites for the immobilization of bioreceptors, along with tunable characteristics that increase selectivity and specificity, making it an ideal alternative material. The analytical performance of the designed biosensors was not only characterized in buffer but also validated in minimally treated breastmilk samples. We have demonstrated the potential of electrochemical transducers on pSi and gold surfaces for monitoring clinically relevant biomarkers associated with a heightened risk of low milk production in women. This approach, in which the nanofabrication techniques and the functionalization methods were optimized to increase the efficacy of the biosensor, provides a foundation for further research and development of targeted diagnosis strategies.

Keywords: biosensors, electrochemistry, early diagnosis, clinical markers, miRNAs

Procedia PDF Downloads 18
778 Estimation of Lungs Physiological Motion for Patient Undergoing External Lung Irradiation

Authors: Yousif Mohamed Y. Abdallah

Abstract:

This experimental study deals with the detection, measurement and analysis of periodic physiological organ motion during external beam radiotherapy, in order to improve the accuracy of radiation field placement and to reduce the exposure of healthy tissue during radiation treatments. The importance of this study is to detect the maximum path of mobile structures during radiotherapy delivery, to define the planning target volume (PTV) and irradiated volume during both the inspiration and expiration periods, and to verify the target volume. It also highlights the importance of applying Image-Guided Radiotherapy (IGRT) methods in the field of radiotherapy. The results showed that body contour displacement was 3.17 ± 0.23 mm, left lung displacement 2.56 ± 0.99 mm, and right lung displacement 2.42 ± 0.77 mm, which allows the radiation oncologist to take suitable countermeasures in case of significant errors. In addition, the image registration technique used for automatic position control predicted potential motion. The motion ranged between 2.13 mm and 12.2 mm (low and high). In conclusion, individualized assessment of tumor mobility can improve the accuracy of target area definition in patients undergoing stereotactic radiotherapy for stage I, II and III non-small-cell lung cancer (NSCLC). Definition of the target volume based on a single CT scan with a margin of 10 mm is clearly inappropriate.

Keywords: respiratory motion, external beam radiotherapy, image processing, lung

Procedia PDF Downloads 536
777 Exploration of in-situ Product Extraction to Increase Triterpenoid Production in Saccharomyces Cerevisiae

Authors: Mariam Dianat Sabet Gilani, Lars M. Blank, Birgitta E. Ebert

Abstract:

Plant-derived lupane-type pentacyclic triterpenoids are biologically active compounds that are highly interesting for applications in the medical, pharmaceutical, and cosmetic industries. Due to the low abundance of these valuable compounds in their natural sources and the environmentally harmful downstream process, alternative production methods, such as microbial cell factories, are investigated. Engineered Saccharomyces cerevisiae strains harboring the heterologous genes for betulinic acid synthesis can produce up to 2 g L⁻¹ triterpenoids, showing high potential for large-scale production of triterpenoids. One limitation of the microbial synthesis is the intracellular product accumulation. It not only makes cell disruption a necessary step in the downstream processing but also limits productivity and product yield per cell. To overcome these restrictions, the aim of this study is to develop an in-situ extraction method that extracts triterpenoids into a second, organic phase. Such continuous or sequential product removal from the biomass keeps the cells in an active state and enables extended production time or biomass recycling. After screening twelve different solvents, selected based on product solubility, biocompatibility, and environmental and health impact, isopropyl myristate (IPM) was chosen as a suitable solvent for in-situ product removal from S. cerevisiae. Impedance-based single-cell analysis and off-gas measurement of carbon dioxide emission showed that cell viability and physiology were not affected by the presence of IPM. Initial experiments demonstrated that after the addition of 20 vol % IPM to cultures in the stationary phase, 40 % of the total produced triterpenoids were extracted from the cells into the organic phase.
In future experiments, the application of IPM in a repeated-batch process will be tested, where IPM is added at the end of each batch run to remove triterpenoids from the cells, allowing the same biocatalysts to be used in several sequential batch steps. Due to its high biocompatibility, the amount of IPM added to the culture can also be increased beyond 20 vol % to extract more than 40 % of the triterpenoids into the organic phase, allowing the cells to produce more triterpenoids. This highlights the potential for the development of a continuous large-scale process in which biocatalysts produce intracellular products continuously, without the necessity of cell disruption and without limitation by cell capacity.

Keywords: betulinic acid, biocompatible solvent, in-situ extraction, isopropyl myristate, process development, secondary metabolites, triterpenoids, yeast

Procedia PDF Downloads 153
776 Factors Affecting the Profitability of Commercial Banks: An Empirical Study of Indian Banking Sector

Authors: Neeraj Gupta, Jitendra Mahakud

Abstract:

The banking system plays a major role in the Indian economy. It is the payment gateway for most financial transactions. Banking has undergone a major transition that is still in progress. Banking reforms after liberalization in 1991 led to the establishment of foreign banks in the country. The foreign banks are not listed on the Indian stock markets and have increased the competition, capturing a significant share of the revenue from the public sector banks, which are still the major players in the Indian banking sector. The performance of the banking sector depends on internal (bank-specific) as well as external (market-specific and macroeconomic) factors. The present study examines the internal and external factors which are likely to affect the profitability of Indian banks. The sample consists of a panel dataset of 64 commercial banks in India, comprising 1088 observations over the years 1998 to 2016. The dynamic panel GMM estimation proposed by Arellano and Bond has been used. The study revealed that capital adequacy ratio, deposits, age, labour productivity, non-performing assets, inflation and concentration have a significant effect on the performance measures.
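The intuition behind the Arellano-Bond approach can be sketched on simulated data: first-differencing removes the bank fixed effect, and lagged levels instrument the endogenous lagged difference. This minimal single-instrument version (the simplest moment condition underlying the full GMM estimator) is illustrative only and is not the estimation machinery used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, rho = 2000, 7, 0.5

# Simulate a dynamic panel with bank fixed effects eta_i:
#   y_it = rho * y_i,t-1 + eta_i + eps_it
eta = rng.normal(size=N)
y = np.zeros((N, T))
y[:, 0] = eta + rng.normal(size=N)
for s in range(1, T):
    y[:, s] = rho * y[:, s - 1] + eta + rng.normal(size=N)

# Pooled OLS in levels ignores eta_i and is biased upward
rho_ols = (y[:, 1:] * y[:, :-1]).sum() / (y[:, :-1] ** 2).sum()

# First-difference out the fixed effect; instrument the lagged
# difference with the second lag in levels
dy  = y[:, 3:] - y[:, 2:-1]    # delta y_it
dy1 = y[:, 2:-1] - y[:, 1:-2]  # delta y_i,t-1 (endogenous regressor)
z   = y[:, 1:-2]               # instrument: y_i,t-2
rho_iv = (z * dy).sum() / (z * dy1).sum()
```

The IV estimate recovers the true persistence parameter (0.5 here) while the naive levels regression does not, which is why dynamic bank-profitability panels are estimated this way.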

Keywords: banks in India, bank performance, bank productivity, banking management

Procedia PDF Downloads 272
775 Production and Leftovers Usage Policies to Minimize Food Waste under Uncertain and Correlated Demand

Authors: Esma Birisci, Ronald McGarvey

Abstract:

One of the common problems in the food service industry is demand uncertainty. This research presents a multi-criteria optimization approach to identify the efficient frontier of points lying between the minimum-waste and minimum-shortfall solutions in an uncertain demand environment. It also addresses correlation across demands for items (e.g., hamburgers are often demanded with french fries). Reducing overproduction food waste (and its corresponding environmental impacts) and an aversion to shortfalls (leaving some customers hungry) need to be considered as two contradictory objectives in an all-you-care-to-eat food service operation. We identify optimal production adjustments relative to demand forecasts, demand thresholds for utilization of leftovers, and percentages of demand to be satisfied by leftovers, considering two alternative metrics for overproduction waste: mass and greenhouse gas emissions. Demand uncertainty and demand correlations are addressed using a kernel density estimation approach. A statistical analysis of the changes in decision variable values across each of the efficient frontiers can then be performed to identify the key variables that could be modified to reduce the amount of wasted food at a minimal increase in shortfalls. We illustrate our approach with an application to empirical data from Campus Dining Services operations at the University of Missouri.
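The kernel density estimation step can be sketched as a smoothed bootstrap: resample historical demand days, then add Gaussian kernel noise whose covariance is a scaled copy of the sample covariance, so correlated items (burgers with fries) stay correlated in the generated scenarios. The data and bandwidth rule below are illustrative assumptions, not the study's dataset or implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical historical daily demands for two correlated items
# (e.g., burgers and fries), in servings
hist = np.array([[120, 180], [150, 210], [90, 140], [200, 260],
                 [170, 230], [110, 160], [140, 200], [180, 250]], float)

def sample_demand(data, n, rng):
    """Draw n demand scenarios from a Gaussian kernel density estimate:
    resample historical days, then add kernel noise scaled by a
    Silverman-style bandwidth, preserving cross-item correlation."""
    d, cov = data.shape[1], np.cov(data, rowvar=False)
    h = (4 / (d + 2)) ** (1 / (d + 4)) * len(data) ** (-1 / (d + 4))
    idx = rng.integers(len(data), size=n)
    noise = rng.multivariate_normal(np.zeros(d), (h ** 2) * cov, size=n)
    return data[idx] + noise

scen = sample_demand(hist, 5000, rng)   # scenarios for the optimization
```

Feeding such scenarios into the production model is what lets the frontier trade expected waste against expected shortfall rather than relying on a single forecast.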

Keywords: environmental studies, food waste, production planning, uncertain and correlated demand

Procedia PDF Downloads 372
774 Effects of Conversion of Indigenous Forest to Plantation Forest on the Diversity of Macro-Fungi in Kereita Forest, Kikuyu Escarpment, Kenya

Authors: Susan Mwai, Mary Muchane, Peter Wachira, Sheila Okoth, Muchai Muchane, Halima Saado

Abstract:

Tropical forests harbor a wide range of biodiversity and a richer macro-fungi diversity than temperate regions. However, this biodiversity faces the threat of extinction as forest loss outpaces the proper study and documentation of macro-fungi. The present study was undertaken to determine the effect of converting indigenous habitat to plantation forest on macro-fungi diversity. To achieve this objective, an inventory of macro-fungi diversity was conducted within the Kereita block of the Kikuyu Escarpment forest, on the southern side of the Aberdare mountain range. The inventory covered the indigenous forest and a more than 15-year-old Patula plantation forest, during the wet (long rain season, December 2014) and dry (short rain season, May 2015) periods. In each forest type, 15 permanent (20 m × 20 m) sampling plots distributed across three (3) forest blocks were used. Field and laboratory methods involved recording the abundance of fruiting bodies and the taxonomic identity of species, and analyzing diversity indices and measures in terms of species richness, density and diversity. The R statistical program was used to analyze species diversity, and Canoco 4.5 software for species composition. A total of 76 genera in 28 families and 224 species were encountered in both forest types. The most represented taxa belonged to the families Agaricaceae (16%), Polyporaceae (12%), Marasmiaceae and Mycenaceae (7% each). Most of the recorded macro-fungi were saprophytic, mostly colonizing litter-based (38%) and wood-based (34%) substrates, followed by soil organic matter dwelling species (17%). Ectomycorrhizal fungi (5%) and parasitic fungi (2%) were the least encountered.
The data established that the indigenous forest (native ecosystem) hosts a wider macro-fungi assemblage in terms of density (2.6 individual fruit bodies/m²), species richness (8.3 species/plot) and species diversity (1.49/plot) than the plantation forest. The conversion of native forest to plantation forest also altered species composition, though it did not alter species diversity. Seasonality was also shown to significantly affect the diversity of macro-fungi, with 61% of the total species present during the wet season. Based on the present findings, forested ecosystems in Kenya hold a diverse macro-fungi community which warrants conservation measures.
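The diversity measures reported above can be computed directly from plot counts; a minimal sketch with hypothetical counts (not the survey's data):

```python
import math

# Hypothetical fruit-body counts per species in one 20 m x 20 m plot
counts = {"Agaricus sp.": 12, "Marasmius sp.": 7, "Mycena sp.": 5,
          "Trametes sp.": 4, "Ganoderma sp.": 2}

def richness(counts):
    """Species richness: number of species observed in the plot."""
    return len(counts)

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species proportions."""
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def density(counts, area_m2=400.0):
    """Fruit bodies per square metre of plot (20 m x 20 m = 400 m^2)."""
    return sum(counts.values()) / area_m2

S, H, D = richness(counts), shannon(counts), density(counts)
```

Averaging these per-plot values within each forest type, and comparing them across types and seasons, gives the richness, density and diversity contrasts reported in the abstract.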

Keywords: diversity, Indigenous forest, macro-fungi, plantation forest, season

Procedia PDF Downloads 214
773 Effect of Pollutions on Mangrove Forests of Nayband National Marine Park

Authors: Esmaeil Kouhgardi, Elaheh Shakerdargah

Abstract:

The mangrove ecosystem is a complex of inter-related elements in the land-sea interface zone which is linked with other natural systems of the coastal region such as corals, sea-grass, coastal fisheries and beach vegetation. The mangrove ecosystem consists of water, muddy soil, trees, shrubs, and their associated flora, fauna and microbes. It is a very productive ecosystem sustaining various forms of life. Its waters are nursery grounds for fish, crustaceans and mollusks and also provide habitat for a wide range of aquatic life, while the land supports a rich and diverse flora and fauna; pollution, however, may affect these characteristics. Although Iran has the lowest share of Persian Gulf pollution among the eight littoral states, environmental experts are still deeply concerned about the serious consequences of pollution in the oil-rich gulf. The prolongation of critical conditions in the Persian Gulf has endangered its aquatic ecosystem. Water purification equipment, refineries, wastewater emitted by onshore installations, especially petrochemical plants, urban sewage, population density and the extensive oil operations of Arab states are factors contaminating Persian Gulf waters. Population density has been the major cause of pollution and environmental degradation in the Persian Gulf. The Persian Gulf is a semi-enclosed marine environment connected to open waters through only one waterway. It usually takes between three and four years for the gulf's water to be completely replaced. Therefore, any pollution entering the water will remain there for a relatively long time. Presently, the high temperature and excessive salt level in the water have exposed marine creatures to extra threats, which means they have to survive very tough conditions.
The natural environment of the Persian Gulf is very rich, with good fishing grounds, extensive coral reefs and pearl oysters in abundance, but it has come increasingly under pressure due to heavy industrialization and, in particular, the repeated major oil spillages associated with the various recent wars fought in the region. Pollution may cause the mortality of mangrove forests through effects on the roots, leaves and soil of the area. The study showed a high correlation between industrial pollution and mangrove forest health in southern Iran, and the increase in population, coupled with economic growth, inevitably caused the use of mangrove lands for various purposes such as the construction of roads, ports and harbors, industries and urbanization.

Keywords: Mangrove forest, pollution, Persian Gulf, population, environment

Procedia PDF Downloads 399
772 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image-classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate archiving of historical and novel shark observations. The base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species.
All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to become the dominant method for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. The prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 121
771 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, like the underestimation of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimate of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these equations should make it possible to improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
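The aggregation effect is easy to reproduce numerically: the annual maximum depth computed from fixed ta-length blocks can never exceed, and usually falls below, the true sliding-window maximum. The synthetic 5-minute series below is an illustrative stand-in for continuous station records.

```python
import numpy as np

rng = np.random.default_rng(0)

# One year of synthetic 5-minute rainfall depths [mm per 5 min]
# (illustrative, not observed data)
rain = rng.gamma(shape=0.05, scale=1.5, size=365 * 24 * 12)

def annual_max_depth(series, window):
    """True H_d: maximum depth over ANY window of `window` samples (sliding)."""
    csum = np.concatenate([[0.0], np.cumsum(series)])
    return (csum[window:] - csum[:-window]).max()

def annual_max_depth_aggregated(series, window):
    """H_d estimated from data pre-aggregated into fixed blocks of the same
    length (t_a = d): the sliding window can no longer be applied."""
    n = len(series) // window * window
    return series[:n].reshape(-1, window).sum(axis=1).max()

d = 12  # one-hour duration at 5-minute resolution
true_hd = annual_max_depth(rain, d)
aggr_hd = annual_max_depth_aggregated(rain, d)
underestimate = 1.0 - aggr_hd / true_hd
```

Since a burst can straddle at most two adjacent blocks, the single-value underestimation error is bounded by 50%, matching the worst case quoted for d = ta.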

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 191
770 Qualitative and Quantitative Traits of Processed Farmed Fish in N. W. Greece

Authors: Cosmas Nathanailides, Fotini Kakali, Kostas Karipoglou

Abstract:

The filleting yield and the chemical composition of farmed sea bass (Dicentrarchus labrax), rainbow trout (Oncorhynchus mykiss) and meagre (Argyrosomus regius) were investigated in fish farmed in NW Greece. The results provide an estimate of the quantity of fish required to produce one kilogram of fillet, an estimate required for the operational management of fish processing companies. Furthermore, in this work the ratio of feed input required to produce one kilogram of fish fillet (fillet feed conversion ratio, FFCR) is presented for the first time as a useful indicator of the ecological footprint of consuming farmed fish. The lowest lipid content appeared in meagre (1.7%) and the highest in trout (4.91%). The lowest fillet yield and fillet feed conversion ratio occurred in meagre (FY = 42.17%, FFCR = 2.48), while the best fillet yield (FY = 53.8%) and FFCR (2.10) were exhibited by farmed rainbow trout. This research has been co-financed by the European Union (European Social Fund, ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) Research Funding Program ARCHIMEDES III: Investing in knowledge society through the European Social Fund.
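The fillet-based indicators reduce to simple ratios; a hedged sketch follows, where the whole-fish feed conversion ratio is an assumed illustrative value rather than a figure from the study.

```python
def fish_per_kg_fillet(fillet_yield_pct):
    """Kilograms of whole fish needed to produce one kilogram of fillet."""
    return 100.0 / fillet_yield_pct

def ffcr(fcr, fillet_yield_pct):
    """Feed per kilogram of fillet: whole-fish feed conversion ratio
    divided by the filleting yield fraction."""
    return fcr / (fillet_yield_pct / 100.0)

# Illustrative: a species with an assumed FCR of 1.1 and 53.8% filleting yield
fish_needed = fish_per_kg_fillet(53.8)   # ~1.86 kg of whole fish per kg fillet
feed_needed = ffcr(1.1, 53.8)            # ~2.04 kg of feed per kg fillet
```

Because the yield enters as a divisor, a species with a low filleting yield carries a proportionally larger feed (and hence ecological) footprint per kilogram of edible product.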

Keywords: farmed fish, flesh quality, filleting yield, lipid

Procedia PDF Downloads 309
769 An Approximate Formula for Calculating the Fundamental Mode Period of Vibration of Practical Building

Authors: Abdul Hakim Chikho

Abstract:

Most international codes allow the use of an equivalent lateral load method for designing practical buildings to withstand earthquake actions. This method requires calculating an approximation of the fundamental mode period of vibration of these buildings. Several empirical equations have been suggested to approximate the fundamental periods of different types of structures. Most of these equations are known to provide only a crude approximation of the required fundamental periods, and repeating the calculation with a more accurate formula is usually required. In this paper, a new formula to calculate a satisfactory approximation of the fundamental period of a practical building is proposed. This formula takes into account the mass and the stiffness of the building and is therefore more soundly based than the conventional empirical equations. In order to verify the accuracy of the proposed formula, several examples have been solved, in which the fundamental mode periods of several framed buildings were calculated using the proposed formula and the conventional empirical equations. Comparing the obtained results with those obtained from a dynamic computer analysis has shown that the proposed formula provides a more accurate estimate of the fundamental periods of practical buildings. Since the proposed method is still simple to use and requires only minimal computing effort, it is believed to be ideally suited for design purposes.
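For reference, the 'exact' fundamental period that such formulas approximate can be computed from story masses and stiffnesses by solving the generalized eigenvalue problem of a shear-building model. The sketch below is a generic lumped-mass calculation, not the paper's proposed formula, and the story values are hypothetical.

```python
import numpy as np

def fundamental_period(masses, stiffnesses):
    """Fundamental period [s] of a shear building with lumped story masses
    [kg] and story stiffnesses [N/m], from K x = w^2 M x."""
    n = len(masses)
    K = np.zeros((n, n))
    for i, k in enumerate(stiffnesses):
        K[i, i] += k                      # story spring below floor i
        if i > 0:                         # couples floors i-1 and i
            K[i - 1, i - 1] += k
            K[i - 1, i] -= k
            K[i, i - 1] -= k
    m_inv_sqrt = np.diag(1.0 / np.sqrt(masses))
    w2 = np.linalg.eigvalsh(m_inv_sqrt @ K @ m_inv_sqrt)  # ascending
    return 2.0 * np.pi / np.sqrt(w2[0])

# Single story: reduces to the textbook T = 2*pi*sqrt(m/k)
T1 = fundamental_period([1.0e3], [4.0e6])
# Hypothetical three-story frame, uniform mass and stiffness per story
T3 = fundamental_period([1.0e3] * 3, [4.0e6] * 3)
```

A mass-and-stiffness-based approximate formula is judged by how closely it tracks this eigenvalue solution across building heights, which is the comparison the paper carries out against dynamic analysis.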

Keywords: earthquake, fundamental mode period, design, building

Procedia PDF Downloads 284
768 Verification of Simulated Accumulated Precipitation

Authors: Nato Kutaladze, George Mikuchadze, Giorgi Sokhadze

Abstract:

Precipitation forecasts are one of the most demanding applications in numerical weather prediction (NWP). Georgia, like the whole Caucasus region, is characterized by very complex topography. The country's territory is prone to flash floods and mudflows, so quantitative precipitation estimation (QPE) and quantitative precipitation forecasting (QPF) at any lead time are very important for Georgia. In this study, the skill of the Advanced Research WRF model in QPF is investigated over Georgia's territory. We have analyzed several combinations of convection parameterizations and microphysical schemes for different rain episodes and heavy rain events. We estimate errors and biases in accumulated 6 h precipitation at different spatial resolutions, verifying model performance for 12-hour and 24-hour lead times against corresponding rain gauge observations and satellite data. Various statistical parameters have been calculated for the 8-month comparison period, and several aspects of the model's skill have been evaluated. Our focus is on the formation and organization of convective precipitation systems in a low-mountain region. Several problems in connection with QPF have been identified for mountain regions, including the overestimation and underestimation of precipitation on the windward and lee sides of the mountains, respectively, and a phase error in the diurnal cycle of precipitation leading to the onset of convective precipitation in model forecasts several hours too early.
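Verification against gauge observations typically reduces to a 2×2 contingency table per threshold; the following is a minimal sketch computing the hit rate, false alarm ratio, and the extremal dependence index (EDI) named in the keywords. The ten gauge values are made up for illustration.

```python
import numpy as np

def contingency(obs, fcst, threshold):
    """2x2 contingency counts for exceeding a precipitation threshold."""
    o, f = obs >= threshold, fcst >= threshold
    hits = np.sum(o & f); misses = np.sum(o & ~f)
    fa = np.sum(~o & f); cn = np.sum(~o & ~f)
    return hits, misses, fa, cn

def scores(hits, misses, fa, cn):
    """Hit rate, false alarm ratio, and extremal dependence index
    EDI = (ln F - ln H) / (ln F + ln H), with F the false alarm RATE."""
    H = hits / (hits + misses)
    F = fa / (fa + cn)
    far = fa / (hits + fa)
    edi = (np.log(F) - np.log(H)) / (np.log(F) + np.log(H))
    return H, far, edi

# Illustrative 6-hour accumulations [mm] at ten gauges
obs  = np.array([0.0, 2.0, 15.0, 7.0, 0.5, 22.0, 1.0, 9.0, 0.0, 3.0])
fcst = np.array([1.0, 6.0, 11.0, 2.0, 0.0, 30.0, 0.5, 12.0, 4.0, 1.0])
H, far, edi = scores(*contingency(obs, fcst, threshold=5.0))
```

EDI is useful for rare heavy-rain events because, unlike many traditional scores, it does not degenerate as the event base rate shrinks.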

Keywords: extremal dependence index, false alarm, numerical weather prediction, quantitative precipitation forecasting

Procedia PDF Downloads 147
767 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution

Authors: Saleem Z. Ramadan

Abstract:

This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. The results also show that the choice between direct and indirect priors affects the precision of the test.

Keywords: reliability, accelerated life testing, cumulative exposure model, Bayesian estimation, progressive type-I censoring, Weibull distribution

Procedia PDF Downloads 505
766 Compression Index Estimation by Water Content and Liquid Limit and Void Ratio Using Statistics Method

Authors: Lizhou Chen, Abdelhamid Belgaid, Assem Elsayed, Xiaoming Yang

Abstract:

The compression index is essential in foundation settlement calculations. The traditional method for determining the compression index is the consolidation test, which is expensive and time-consuming. Many researchers have used regression methods to develop empirical equations for predicting the compression index from soil properties. Based on a large number of compression index data collected from consolidation tests, the accuracy of some popular empirical equations was assessed. It was found that the primary compression index is significantly overestimated by some equations and underestimated by others. Sensitivity analyses of soil parameters including water content, liquid limit, and void ratio were performed. The results indicate that the compression index obtained from the void ratio is the most accurate. An ANOVA (analysis of variance) demonstrates that equations with multiple soil parameters cannot provide better predictions than equations with a single soil parameter; in other words, it is not necessary to develop relationships between the compression index and multiple soil parameters. Meanwhile, it was noted that the secondary compression index is approximately 0.7-5.0% of the primary compression index, with an average of 2.0%. Finally, prediction equations developed using the power regression technique are proposed that provide more accurate predictions than existing equations.
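The fitted equations themselves are not given in the abstract, but the power regression approach it describes can be sketched as follows: a relationship of the form Cc = a·e0^b is linearized by taking logarithms and fitted by ordinary least squares. The void-ratio/compression-index pairs below are hypothetical, chosen only to make the example self-contained.

```python
import math

def fit_power(x, y):
    """Least-squares fit of y = a * x**b via linear regression on logs:
    ln y = ln a + b * ln x."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# hypothetical consolidation-test data: initial void ratio e0 vs
# primary compression index Cc
e0 = [0.6, 0.8, 1.0, 1.2, 1.5, 1.8]
cc = [0.15, 0.22, 0.29, 0.37, 0.49, 0.62]
a, b = fit_power(e0, cc)
cc_pred = a * 1.0 ** b  # predicted Cc at e0 = 1.0
```

The abstract's observation that the secondary compression index averages about 2% of the primary index suggests a similarly simple companion estimate, e.g. Ca ≈ 0.02·Cc.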

Keywords: compression index, clay, settlement, consolidation, secondary compression index, soil parameter

Procedia PDF Downloads 163
765 The Cartometric-Geographical Analysis of Ivane Javakhishvili 1922: The Map of the Republic of Georgia

Authors: Manana Kvetenadze, Dali Nikolaishvili

Abstract:

The study revealed the territorial changes of Georgia across the pre-Soviet and Soviet periods. This includes the estimation of the country's borders, the changes in its administrative-territorial arrangement, and the establishment of territorial losses. Georgia's old and new borders marked on the map are of great interest. The new boundary shows the situation in 1922, at the start of the Soviet period. Neither on this map nor in his other works does Ivane Javakhishvili state what he means by the old borders, though it is evident that this is the pre-Soviet boundary before 1921, i.e., before the period when historical Tao, Zaqatala, Lore, and Karaia were parts of Georgia. In cartometric-geographical terms, the work presents a detailed analysis of Georgia's borders, and a comparison of research results has been carried out: 1) for the boundary line on Soviet topographic maps, maps at 1:100,000, 1:50,000, and 1:25,000 scales were used; 2) the boundary was compared against Ivane Javakhishvili's work ('The borders of Georgia in terms of historical and contemporary issues'). During the research, we used a multidisciplinary methodology and software. We used ArcGIS for georeferencing the maps and then compared all post-Soviet maps in order to determine how the borders have changed. In this work, we also drew on many historical sources. The features of the spatial distribution of the administrative-territorial units of Georgia, as well as of the objects depicted on the map, have been established. The results obtained are presented in the form of thematic maps and diagrams.

Keywords: border, GIS, Georgia, historical cartography, old maps

Procedia PDF Downloads 242
764 Identification Strategies for Unknown Victims from Mass Disasters and Unknown Perpetrators from Violent Crime or Terrorist Attacks

Authors: Michael Josef Schwerer

Abstract:

Background: The identification of unknown victims from mass disasters, violent crimes, or terrorist attacks is frequently facilitated through information from missing persons lists, portrait photos, old or recent pictures showing unique characteristics of a person such as scars or tattoos, or simply reference samples from blood relatives for DNA analysis. In contrast, the identification or at least the characterization of an unknown perpetrator from criminal or terrorist actions remains challenging, particularly in the absence of material or data for comparison, such as fingerprints, which had been previously stored in criminal records. In scenarios that result in high levels of destruction of the perpetrator’s corpse, for instance, blast or fire events, the chance for a positive identification using standard techniques is further impaired. Objectives: This study shows the forensic genetic procedures in the Legal Medicine Service of the German Air Force for the identification of unknown individuals, including such cases in which reference samples are not available. Scenarios requiring such efforts predominantly involve aircraft crash investigations, which are routinely carried out by the German Air Force Centre of Aerospace Medicine as one of the Institution’s essential missions. Further, casework by military police or military intelligence is supported based on administrative cooperation. In the talk, data from study projects, as well as examples from real casework, will be demonstrated and discussed with the audience. Methods: Forensic genetic identification in our laboratories involves the analysis of Short Tandem Repeats and Single Nucleotide Polymorphisms in nuclear DNA along with mitochondrial DNA haplotyping. Extended DNA analysis involves phenotypic markers for skin, hair, and eye color together with the investigation of a person’s biogeographic ancestry. 
Assessment of the biological age of an individual employs CpG-island methylation analysis using bisulfite-converted DNA. Forensic investigative genealogy allows the detection of an unknown person's blood relatives in reference databases. Technically, end-point PCR, real-time PCR, capillary electrophoresis, and pyrosequencing, as well as next-generation sequencing using flow-cell-based and chip-based systems, are used. Results and Discussion: Optimization of DNA extraction from various sources, including difficult matrices like formalin-fixed, paraffin-embedded tissues and degraded specimens from decomposed bodies or from decedents exposed to blast or fire events, provides the basis for successful PCR amplification and subsequent genetic profiling. For cases with extremely low yields of extracted DNA, whole-genome preamplification protocols are successfully used, particularly for genetic phenotyping. Improved primer design for CpG-methylation analysis, together with validated sampling strategies for the analyzed substrates from, e.g., lymphocyte-rich organs, allows successful biological age estimation even in bodies with highly degraded tissue material. Conclusions: Successful identification of unknown individuals, or at least their phenotypic characterization using pigmentation markers together with age-informative methylation profiles, possibly supplemented by a family tree search employing forensic investigative genealogy, can be provided in specialized laboratories. However, standard laboratory procedures must be adapted to work with difficult and highly degraded sample materials.

Keywords: identification, forensic genetics, phenotypic markers, CpG methylation, biological age estimation, forensic investigative genealogy

Procedia PDF Downloads 51
763 Enhanced Calibration Map for a Four-Hole Probe for Measuring High Flow Angles

Authors: Jafar Mortadha, Imran Qureshi

Abstract:

This research explains and compares the modern techniques used for measuring the flow angles of a flowing fluid with the traditional technique of using multi-hole pressure probes. In particular, the focus of the study is on four-hole probes, which offer great reliability and benefits in several applications where the use of modern measurement techniques is either inconvenient or impractical. Due to modern advancements in manufacturing, small multi-hole pressure probes can be made with high precision, which eliminates the need to calibrate every manufactured probe. This study aims to extend the range of calibration maps for a four-hole probe so that high flow angles can be measured accurately. The research methodology comprises a literature review of the calibration definitions that have been successfully implemented for five-hole probes. These definitions are then adapted and applied to a four-hole probe using a set of raw pressure data. A comparison of the different definitions is carried out in MATLAB, and the results are analyzed to determine the best calibration definition. Taking into account simplicity of implementation as well as the reliability of flow angle estimation, a technique adapted from a research paper published in 2002 offered the most promising outcome. Consequently, the method is seen as a good enhancement for four-hole probes, and it can substitute for existing calibration definitions that offer less accuracy.
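The calibration definitions compared in the paper are not reproduced in the abstract, but the general idea can be sketched: raw hole pressures are combined into non-dimensional coefficients that depend only on flow angle, and a calibration map from those coefficients back to yaw and pitch is built once and then inverted during measurement. The definitions below are adapted from common five-hole-probe practice and are illustrative only (published four-hole schemes differ in detail); the sketch is in Python rather than the paper's MATLAB for brevity.

```python
def calibration_coefficients(p1, p2, p3, p4):
    """Illustrative non-dimensional calibration coefficients for a
    four-hole probe, adapted from common five-hole-probe practice.
    p1 is the central/total hole; p2-p4 are the peripheral holes
    (p2/p3 the yaw-sensitive pair, p4 the pitch-sensitive hole)."""
    p_avg = (p2 + p3 + p4) / 3.0   # mean peripheral pressure
    denom = p1 - p_avg             # pseudo-dynamic pressure
    c_yaw = (p2 - p3) / denom      # sensitive mainly to yaw angle
    c_pitch = (p4 - p_avg) / denom # sensitive mainly to pitch angle
    return c_yaw, c_pitch

# hypothetical raw pressures (Pa, gauge) at one calibration point
cy, cp = calibration_coefficients(2000.0, 1500.0, 1400.0, 1300.0)
```

In practice, these coefficients are evaluated over a grid of known yaw/pitch settings in a calibration wind tunnel; the resulting map is stored and interpolated to recover unknown flow angles from measured pressures.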

Keywords: calibration definitions, calibration maps, flow measurement techniques, four-hole probes, multi-hole pressure probes

Procedia PDF Downloads 295
762 Well-Being Inequality Using Superimposing Satisfaction Waves: Heisenberg Uncertainty in Behavioral Economics and Econometrics

Authors: Okay Gunes

Abstract:

In this article, we propose, for the first time in the literature on this subject, a new method for measuring well-being inequality through a model composed of superimposing satisfaction waves. The displacement of a household's satisfactory state (i.e., satisfaction) is defined on a satisfaction string. The duration of the satisfactory state over a given period of time is measured in order to determine the relationship between utility and total satisfactory time, itself dependent on the density and tension of each satisfaction string. Thus, individual cardinal total satisfaction values are computed by way of a one-dimensional scalar sinusoidal (harmonic) moving wave function, using satisfaction waves with varying amplitudes and frequencies, which allows us to measure well-being inequality. One advantage of using satisfaction waves is the ability to show that individual utility and consumption amounts would probably not commute; hence it is impossible to measure or know simultaneously the values of these observables from the dataset. Thus, we crystallize the problem by using a Heisenberg-type uncertainty resolution for self-adjoint economic operators. We propose to eliminate any estimation bias by correlating the standard deviations of selected economic operators; this is achieved by replacing the aforementioned observed uncertainties with households' perceived uncertainties (i.e., corrected standard deviations) obtained through the logarithmic psychophysical law proposed by Weber and Fechner.

Keywords: Heisenberg uncertainty principle, superimposing satisfaction waves, Weber–Fechner law, well-being inequality

Procedia PDF Downloads 441
761 Simulation of Improving the Efficiency of a Fire-Tube Steam Boiler

Authors: Roudane Mohamed

Abstract:

In this study, we are interested in improving the efficiency of a 4.5 t/h steam boiler and minimizing the flue gas discharge temperature by adding a counter-current heat exchanger to the energy system at the boiler outlet. The mathematical approach to the problem is based on the equations of heat transfer by convection and conduction. These equations were chosen because of their extensive use in a wide range of applications. Software was developed for solving the equations governing these phenomena and for estimating the thermal characteristics of the boiler through a study of the thermal characteristics of the heat exchanger by both the LMTD and NTU methods. Subsequently, an analysis of the thermal performance of the steam boiler was carried out by studying the influence of different operating parameters on heat flux densities, temperatures, exchanged power, and performance. The study showed that the behavior of the boiler is strongly influenced by these parameters. In the first regime (P = 3.5 bar), the boiler efficiency improved significantly, from 93.03% to 99.43%, i.e., at rates of 6.47% and 4.5%. For the maximum regime, the change is smaller, of the order of 1.06%. The results obtained in this study are of great interest to industrial facilities equipped with fire-tube boilers: preheating the combustion air makes it possible to calculate the actual gas temperature, so that the heat exchanged is increased and the flue gas discharge temperature is minimized. On the other hand, this work could be used as a computational model in the design process.
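Both exchanger-analysis methods mentioned (LMTD and NTU) have standard textbook forms that can be sketched directly; the operating temperatures and NTU value below are hypothetical figures for a counter-current flue-gas/air preheater of the kind the abstract describes.

```python
import math

def lmtd_counterflow(th_in, th_out, tc_in, tc_out):
    """Log-mean temperature difference for a counter-current exchanger:
    LMTD = (dT1 - dT2) / ln(dT1 / dT2), with the terminal differences
    taken between opposing ends of the exchanger."""
    dt1 = th_in - tc_out
    dt2 = th_out - tc_in
    if abs(dt1 - dt2) < 1e-12:   # limit dT1 -> dT2
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def effectiveness_counterflow(ntu, cr):
    """Effectiveness of a counterflow exchanger by the e-NTU method
    (cr = Cmin/Cmax, the heat-capacity-rate ratio)."""
    if abs(cr - 1.0) < 1e-12:    # balanced-flow limit
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# hypothetical flue-gas/air preheater conditions (deg C)
dtm = lmtd_counterflow(th_in=250.0, th_out=140.0, tc_in=20.0, tc_out=90.0)
eff = effectiveness_counterflow(ntu=1.5, cr=0.6)
```

The two methods are complementary: LMTD is convenient when all four terminal temperatures are known, while the effectiveness-NTU method is preferred when the outlet temperatures are the unknowns.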

Keywords: numerical simulation, efficiency, fire tube, heat exchanger, convection and conduction

Procedia PDF Downloads 218
760 Sidelobe Free Inverse Synthetic Aperture Radar Imaging of Non Cooperative Moving Targets Using WiFi

Authors: Jiamin Huang, Shuliang Gui, Zengshan Tian, Fei Yan, Xiaodong Wu

Abstract:

In recent years, with the rapid development of radio frequency technology, the differences between radar sensing and wireless communication in terms of receiving and transmitting channels, signal processing, and data management and control have gradually shrunk, and there has been a trend toward integrated communication and radar sensing. However, most existing radar imaging technologies based on communication signals build on synthetic aperture radar (SAR) imaging, which does not match the practical application case of integrated communication and radar. Therefore, this paper proposes a high-precision imaging method using communication signals based on the mechanism of inverse synthetic aperture radar (ISAR) imaging. This method makes full use of the structural characteristics of the orthogonal frequency division multiplexing (OFDM) signal, so that the sidelobe effect in range compression is removed, and combines the Radon transform with Fractional Fourier Transform (FrFT) parameter estimation to achieve ISAR imaging of non-cooperative targets. Simulation experiments and measured results verify the feasibility and effectiveness of the method and demonstrate its broad application prospects in the field of intelligent transportation.

Keywords: integration of communication and radar, OFDM, Radon transform, FrFT, ISAR

Procedia PDF Downloads 126
759 The Characteristics of Quantity Operation for 2nd and 3rd Grade Mathematics Slow Learners

Authors: Pi-Hsia Hung

Abstract:

The development of mathematical competency has individual benefits as well as benefits to the wider society. Children who begin school behind their peers in their understanding of number, counting, and simple arithmetic are at high risk of staying behind throughout their schooling. The development of effective strategies for improving the educational trajectory of these individuals will be contingent on identifying areas of early quantitative knowledge that influence later mathematics achievement. A computer-based quantity assessment was developed in this study to investigate the characteristics of 2nd and 3rd grade slow learners in quantity. The concept of quantification involves understanding measurements, counts, magnitudes, units, indicators, relative size, and numerical trends and patterns. Fifty-five tasks of quantitative reasoning, such as number sense, mental calculation, estimation, and assessment of the reasonableness of results, are included as quantity problem solving. Thus, quantity is defined in this study as applying knowledge of number and number operations in a wide variety of authentic settings. Around 1,000 students were tested and categorized into four performance levels. Students' quantity ability correlated more highly with their school math grades than with other subjects. Around 20% of the students were below the basic level. The intervention design implications of the preliminary item map constructed are discussed.

Keywords: mathematics assessment, mathematical cognition, quantity, number sense, validity

Procedia PDF Downloads 247
758 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast computation and high precision of modern computing systems. We construct the PINN based on a strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of introducing adjustable weight coefficients into the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics will be useful in studying various optical phenomena.
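The authors train a neural network on the FLE; as a framework-free illustration of the central idea, a weighted loss over equation-residual and initial-condition terms, the toy below fits a cubic ansatz (a stand-in for the network) to the much simpler problem u' = u, u(0) = 1 on [0, 1]. Because this ansatz is linear in its coefficients, the weighted least-squares problem can be solved exactly via the normal equations instead of ADAM/L-BFGS; the boundary weight plays the role of the adjustable loss coefficients discussed in the abstract.

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def pinn_style_fit(degree=3, n_col=20, w_boundary=100.0):
    """Fit u(x) = sum_j c_j x^j to the ODE u' = u, u(0) = 1 on [0, 1]
    by minimising a weighted loss: squared PDE residuals at collocation
    points plus a weighted initial-condition term."""
    rows, rhs = [], []
    for k in range(n_col):
        x = k / (n_col - 1)
        # residual u'(x) - u(x): coefficient of c_j is j*x^(j-1) - x^j
        rows.append([j * x ** (j - 1) - x ** j if j else -1.0
                     for j in range(degree + 1)])
        rhs.append(0.0)
    sw = math.sqrt(w_boundary)
    rows.append([sw] + [0.0] * degree)  # sqrt-weighted IC row: u(0) = 1
    rhs.append(sw)
    n = degree + 1
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    Atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(n)]
    return solve_linear(AtA, Atb)

c = pinn_style_fit()
u1 = sum(cj * 1.0 ** j for j, cj in enumerate(c))  # approximation of e at x = 1
```

Raising w_boundary pushes the fit to honour the initial condition more exactly, at a small cost in residual error elsewhere, which is precisely the trade-off the weighted PINN loss negotiates.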

Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation

Procedia PDF Downloads 70
757 Image Processing Techniques for Surveillance in Outdoor Environments

Authors: Jayanth C., Anirudh Sai Yetikuri, Kavitha S. N.

Abstract:

This paper explores the development and application of computer vision and machine learning techniques for real-time pose detection, facial recognition, and number plate extraction. Utilizing MediaPipe for pose estimation, the research presents methods for detecting hand raises and ducking postures through real-time video analysis. Complementarily, facial recognition is employed to compare and verify individual identities using the face recognition library. Additionally, the paper demonstrates a robust approach for extracting and storing vehicle number plates from images, integrating Optical Character Recognition (OCR) with a database management system. The study highlights the effectiveness and versatility of these technologies in practical scenarios, including security and surveillance applications. The findings underscore the potential of combining computer vision techniques to address diverse challenges and enhance automated systems for both individual and vehicular identification. This research contributes to the fields of computer vision and machine learning by providing scalable solutions and demonstrating their applicability in real-world contexts.

Keywords: computer vision, pose detection, facial recognition, number plate extraction, machine learning, real-time analysis, OCR, database management

Procedia PDF Downloads 26
756 Comparison of Verb Complementation Patterns in Selected Pakistani and British English Newspaper Social Columns: A Corpus-Based Study

Authors: Zafar Iqbal Bhatti

Abstract:

The present research aims to examine and evaluate the frequencies and practices of verb complementation patterns in English newspaper social columns published in Pakistan and Britain. The research will demonstrate that Pakistani English is a non-native variety of English with its own regular and systematic characteristics at the syntactic level, shaped by the native languages and culture, and that differences from British or American English which are systematic and regular are not erroneous forms, even if they are unique, but typical characteristics of the variety. The objectives are, first, to examine the verb complementation patterns that British and Pakistani social columnists use in relation to their syntactic categories and, second, to compare the verb complementation patterns used in Pakistani and British English newspaper social columns. This study will identify the various verb complementation patterns in these columns and their occurrence and distribution. Word classes express different functions of words, such as action, event, or state of being. The research will evaluate whether there are any appreciable differences in the verb complementation patterns used in the two varieties. The results will show the range of verb complementation patterns in the selected English newspaper social columns. This study will fill the gap left by previous studies in this field, which have explored little about the differences between Pakistani and British English newspapers, and will also capture the varieties of language used in Pakistani and British English journals, as well as regional and cultural values and variations. The researcher will use the AntConc software to extract the data for analysis, using its concordance tool to identify verb complementation patterns in the selected data; these will then be categorized manually, because the same form can sometimes serve various functions. A four-month written corpus of the social columns of Pakistani English and British English newspapers, covering 1st June 2022 to 30th September 2022, will be collected and analyzed. For the analysis of the research questions, 50 social columns will be selected from Pakistani newspapers and 50 from British newspapers, giving a representative sample of data from both varieties. The researcher will manually analyze the complementation patterns of each verb in each sentence and determine how frequently each pattern occurs, following the syntactic characteristics of the verb complementation elements described by Downing and Locke (2006). All verb complementation patterns in the data will be examined, and the frequency and distribution of each pattern will be evaluated using the software.

Keywords: verb complementation, syntactic categories, newspaper social columns, corpus

Procedia PDF Downloads 51
755 Quantitative Assessment of Soft Tissues by Statistical Analysis of Ultrasound Backscattered Signals

Authors: Da-Ming Huang, Ya-Ting Tsai, Shyh-Hau Wang

Abstract:

Ultrasound signals backscattered from soft tissues depend mainly on the size, density, distribution, and other elastic properties of the scatterers in the interrogated sample volume. Quantitative analysis of ultrasonic backscattering is frequently implemented using a statistical approach, because backscattered signals tend to behave as random variables. Thus, statistical models such as the Nakagami distribution have been applied to characterize the density and distribution of the scatterers in a sample. Yet the accuracy of the statistical analysis can be readily affected by the received signals, which depend on the nature of the incident ultrasound wave and the acoustical properties of the samples. In the present study, efforts were therefore made to explore the effects of the ultrasound operational mode and the attenuation of biological tissue on the estimation of the corresponding Nakagami statistical parameter (the m parameter). In vitro measurements were performed on healthy and pathological fibrotic porcine livers using single-element ultrasound transducers with center frequencies from 3.5 to 7.5 MHz and incident tone bursts with duty cycles from 10% to 50%. The results demonstrated that the estimated m parameter tends to be sensitively affected by the ultrasound operational mode as well as by tissue attenuation. Healthy and pathological tissues may be characterized quantitatively by the m parameter under fixed measurement conditions and proper calibration.
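The Nakagami m parameter referred to in the abstract is commonly estimated from envelope moments by the inverse-normalised-variance estimator. A minimal sketch follows, using stdlib-generated Rayleigh-distributed samples (the fully developed speckle case, for which m should be close to 1):

```python
import math
import random

def nakagami_m(envelope):
    """Moment-based (inverse normalised variance) estimate of the
    Nakagami m parameter from envelope samples R:
    m = E[R^2]^2 / (E[R^4] - E[R^2]^2)."""
    n = len(envelope)
    m2 = sum(r ** 2 for r in envelope) / n
    m4 = sum(r ** 4 for r in envelope) / n
    return m2 * m2 / (m4 - m2 * m2)

# Rayleigh envelope = magnitude of a complex Gaussian; this models fully
# developed speckle, so the estimate should land near m = 1
random.seed(0)
samples = [math.hypot(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0))
           for _ in range(20000)]
m_hat = nakagami_m(samples)
```

Estimates with m < 1 indicate pre-Rayleigh conditions (sparse or strongly varying scatterers) and m > 1 post-Rayleigh conditions, which is what makes the parameter useful for separating healthy from fibrotic tissue once operational-mode and attenuation effects are controlled.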

Keywords: ultrasound backscattering, statistical analysis, operational mode, attenuation

Procedia PDF Downloads 323
754 Managerial Overconfidence, Payout Policy, and Corporate Governance: Evidence from UK Companies

Authors: Abdullah AlGhazali, Richard Fairchild, Yilmaz Guney

Abstract:

We examine the effect of managerial overconfidence on UK firms' payout policy for the period 2000 to 2012. The analysis incorporates, in addition to common firm-specific factors, a wide range of corporate governance factors and managerial characteristics that have been documented to affect the relationship between overconfidence and payout policy. Our results are robust to several estimation considerations. The findings show that the influence of overconfident CEOs on the amount of, and the propensity to pay, dividends is significant within the UK context. Specifically, we detect a reduction in dividend payments in firms managed by overconfident managers compared to their non-overconfident counterparts. Moreover, we affirm that cash flows, firm size, and profitability are positively correlated, while leverage, firm growth, and investment are negatively correlated with the amount of, and the propensity to pay, dividends. Interestingly, we demonstrate that firms with the potential for undervaluation reduce dividend payments. Some of the corporate governance factors are shown to motivate firms to pay more dividends, while these factors appear to have no influence on the propensity to pay dividends. The results also show that, in general, higher overconfidence leads to more share repurchases but a lower total payout. Overall, managerial overconfidence should be considered an important factor influencing payout policy, in addition to other known factors.

Keywords: dividends, repurchases, UK firms, overconfidence, corporate governance, undervaluation

Procedia PDF Downloads 270