Search results for: time series feature extraction
19446 Identification of Coauthors in Scientific Database
Authors: Thiago M. R. Dias, Gray F. Moita
Abstract:
The analysis of scientific collaboration networks has contributed significantly to understanding how collaboration between researchers takes place and how the scientific production of researchers or research groups evolves. However, the identification of collaborations in large scientific databases is not a trivial task, given the high computational cost of the methods commonly used. This paper proposes a method for identifying collaboration in large databases of researchers' curricula. The proposed method has low computational cost and gives satisfactory results, proving to be an interesting alternative for the modeling and characterization of large scientific collaboration networks.
Keywords: extraction, data integration, information retrieval, scientific collaboration
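The low-cost identification the abstract describes can be illustrated with a simple sketch: rather than comparing every researcher against every other, a single pass over publication records counts coauthor pairs in a hash map. The record structure below is an assumption for illustration, not the authors' actual algorithm or data.

```python
# A minimal sketch of low-cost coauthorship identification: one pass over
# publication records, counting coauthor pairs with a hash map instead of
# pairwise comparison of all researchers. Records are illustrative.
from itertools import combinations
from collections import Counter

records = [
    {"title": "Paper A", "authors": ["Dias", "Moita", "Silva"]},
    {"title": "Paper B", "authors": ["Dias", "Moita"]},
    {"title": "Paper C", "authors": ["Silva", "Souza"]},
]

edges = Counter()
for rec in records:
    for pair in combinations(sorted(rec["authors"]), 2):
        edges[pair] += 1        # edge weight = number of joint publications

for (a, b), weight in edges.most_common():
    print(f"{a} -- {b}: {weight}")
```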
Procedia PDF Downloads 395
19445 FPGA Implementation of the BB84 Protocol
Authors: Jaouadi Ikram, Machhout Mohsen
Abstract:
The development of a quantum key distribution (QKD) system on a field-programmable gate array (FPGA) platform is the subject of this paper. A quantum cryptographic protocol is designed based on the properties of quantum information and the characteristics of FPGAs. The proposed protocol performs key extraction, reconciliation, error correction, and privacy amplification tasks to generate a perfectly secret final key. We modeled the presence of a spy in our system, with a strategy that reveals some of the exchanged information without being noticed. Using an FPGA card with a 100 MHz clock frequency, we demonstrated the evolution of the error rate, as well as the amounts of mutual information (between the two interlocutors, and that of the spy), passing from one step to another in the key generation process.
Keywords: QKD, BB84, protocol, cryptography, FPGA, key, security, communication
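For readers unfamiliar with BB84, the sifting and error-rate estimation steps the protocol performs can be simulated in a few lines. The sketch below is a software illustration with an intercept-resend eavesdropper standing in for the "spy"; it is not the paper's FPGA design.

```python
# A minimal sketch of BB84 sifting and error-rate estimation; an
# intercept-resend eavesdropper models the "spy" in the abstract.
import random

def bb84_round(n_bits=10000, eavesdrop=True):
    alice_bits = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("+x") for _ in range(n_bits)]
    bob_bases = [random.choice("+x") for _ in range(n_bits)]

    received = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:  # Eve measures in a random basis and re-sends
            e_basis = random.choice("+x")
            if e_basis != a_basis:
                bit = random.randint(0, 1)   # wrong basis randomizes the bit
            a_basis = e_basis                # photon continues in Eve's basis
        received.append(bit if b_basis == a_basis else random.randint(0, 1))

    # Sifting: keep only positions where Alice's and Bob's bases match
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, received, alice_bases, bob_bases) if ab == bb]
    return sum(a != b for a, b in sifted) / len(sifted)  # QBER

print(bb84_round(eavesdrop=False))  # ~0: clean channel
print(bb84_round(eavesdrop=True))   # ~0.25: the spy raises the error rate
```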
Procedia PDF Downloads 181
19444 Simulation Analysis of Wavelength/Time/Space Codes Using CSRZ and DPSK-RZ Formats for Fiber-Optic CDMA Systems
Authors: Jaswinder Singh
Abstract:
In this paper, a comparative analysis is carried out to study the performance of wavelength/time/space optical CDMA codes using two well-known modulation formats, CSRZ and DPSK-RZ, in RSoft's OptSIM. The analysis is carried out under a realistic scenario, considering the presence of various non-linear effects such as XPM, SPM, SRS, SBS, and FWM. Fiber dispersion and multiple access interference are also considered. The codes used in this analysis are 3-D wavelength/time/space codes. These are converted into 2-D wavelength-time codes so that their requirement for space couplers and fiber ribbons is eliminated. Under the conditions simulated, it is found that CSRZ performs better than DPSK-RZ for fiber-optic CDMA applications.
Keywords: optical CDMA, multiple access interference (MAI), CSRZ, DPSK-RZ
Procedia PDF Downloads 644
19443 Real-Time Control of Grid-Connected Inverter Based on LabVIEW
Authors: L. Benbaouche, H. E., F. Krim
Abstract:
In this paper, we propose a flexible and efficient real-time control of a grid-connected single-phase inverter. The first step is devoted to the study and design of the controller through simulation, conducted with the LabVIEW software on the 'host' computer. The second step is running the application from the PXI 'target'. LabVIEW software, combined with NI-DAQmx, provides the tools to easily build applications using the digital-to-analog converter to generate the PWM control signals. Experimental results show the effectiveness of LabVIEW software applied to power electronics.
Keywords: real-time control, LabVIEW, inverter, PWM
Procedia PDF Downloads 507
19442 Polymerization of Epsilon-Caprolactone Using Lipase Enzyme for Medical Applications
Authors: Sukanya Devi Ramachandran, Vaishnavi Muralidharan, Kavya Chandrasekaran
Abstract:
Polycaprolactone is a polymer of the polyester family with the notable characteristics of biodegradability and biocompatibility, which are essential for medical applications. Polycaprolactone is produced by the ring-opening polymerization of the monomer epsilon-caprolactone (ε-CL), a cyclic ester comprising a seven-membered ring. This process is normally catalysed by metallic compounds such as stannous octoate. It is difficult to remove these catalysts after the reaction, and they are also toxic to the human body. An alternative route using enzymes as catalysts is being employed to reduce the toxicity. Lipase is a subclass of esterase that can easily attack the ester bonds of ε-CL. This paper covers the extraction of lipase from germinating sunflower seeds and the activity of the biocatalyst in the polymerization of ε-CL. Germinating sunflower seeds were crushed with fine sand in a phosphate buffer of pH 6.5 into a fine paste, which was centrifuged at 5000 rpm for 10 minutes. The clear enzyme solution was tested for activity at pH values ranging from 5 to 7 and temperatures ranging from 40°C to 70°C. The enzyme was active at pH 6.0 and 60°C. Polymerization of ε-CL was carried out in toluene as solvent with the catalysis of the lipase enzyme, after which chloroform was added to terminate the reaction, and the product was washed in cold methanol to obtain the polymer. The polymerization time was varied from 72 hours to 6 days, and the product was tested for molecular weight and monomer conversion. The molecular weight obtained at 6 days is comparatively higher. This method is effective, economical, and eco-friendly, as the enzyme can be recovered as such at the end of the reaction and reused. The obtained polymers can be used for drug delivery and other medical applications.
Keywords: lipase, monomer, polycaprolactone, polymerization
Procedia PDF Downloads 295
19441 Anomaly Detection in Financial Markets Using Tucker Decomposition
Authors: Salma Krafessi
Abstract:
Financial markets are a multifaceted, intricate environment in which enormous volumes of data are produced every day. Accurate anomaly identification in this data is essential for finding investment opportunities, possible fraudulent activity, and market oddities. Conventional methods for detecting anomalies frequently fail to capture the complex organization of financial data. In order to improve the identification of abnormalities in financial time series data, this study presents Tucker decomposition as a reliable multi-way analysis approach. We start by gathering closing prices for the S&P 500 index across a number of decades. The information is converted to a three-dimensional tensor format, which contains internal characteristics and temporal sequences in a sliding window structure. The tensor is then broken down using Tucker decomposition into a core tensor and matching factor matrices, allowing latent patterns and relationships in the data to be captured. The reconstruction error from the Tucker decomposition is a possible sign of abnormalities. By setting a statistical threshold, we are able to identify large deviations that indicate unusual behavior. A thorough examination that contrasts the Tucker-based method with traditional anomaly detection approaches validates our methodology. The outcomes demonstrate the superiority of Tucker decomposition in identifying intricate and subtle abnormalities that are otherwise missed. This work opens the door for more research into multi-way data analysis approaches across a range of disciplines and emphasizes the value of tensor-based methods in financial analysis.
Keywords: tucker decomposition, financial markets, financial engineering, artificial intelligence, decomposition models
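The pipeline the abstract outlines, windowing a price series into a tensor, Tucker-decomposing it, and thresholding the reconstruction error, can be sketched with the tensorly library. The window size, ranks, feature choice, and 3-sigma threshold below are illustrative guesses, not the author's settings.

```python
# A minimal sketch of Tucker-based anomaly flagging on windowed price data,
# assuming the tensorly library; parameters are illustrative choices.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def tucker_anomaly_scores(prices, window=30, rank=(5, 3, 2)):
    # Build a 3-way tensor: (windows, time steps, features),
    # here using price and simple returns as the two features.
    returns = np.diff(prices, prepend=prices[0])
    stacked = np.stack([prices, returns], axis=-1)               # (T, 2)
    windows = np.stack([stacked[i:i + window]
                        for i in range(len(prices) - window)])   # (N, window, 2)
    tensor = tl.tensor(windows.astype(float))

    # Decompose into a core tensor and factor matrices, then reconstruct.
    core, factors = tucker(tensor, rank=rank)
    recon = tl.to_numpy(tl.tucker_to_tensor((core, factors)))

    # Per-window reconstruction error is the anomaly score.
    err = np.linalg.norm(windows - recon, axis=(1, 2))
    threshold = err.mean() + 3 * err.std()                       # statistical threshold
    return err, err > threshold

prices = 100 + np.cumsum(np.random.default_rng(1).normal(size=500))
err, flags = tucker_anomaly_scores(prices)
print(flags.sum(), "windows flagged")
```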
Procedia PDF Downloads 68
19440 Meitu and the Case of the AI Art Movement
Authors: Taliah Foudah, Sana Masri, Jana Al Ghamdi, Rimaz Alzaaqi
Abstract:
This research project explores the creative works of the app Meitu, which allows consumers to edit their photos and use the new and popular AI feature that turns any photo into a cartoon-like animated image with beautified enhancements. Studying this AI app demonstrates how AI can develop intricate designs that closely replicate the workings of the human mind. Our goal was to investigate the Meitu app by asking our audience questions about its functionality, their personal feelings about its credibility, and their beliefs as to how this app will add to the future of the AI generation, both positively and negatively. Their responses were analyzed thoroughly, and the results were presented in pie charts. Overall, it was concluded that the Meitu app is a powerful step forward for AI, replicating human intelligence and creativity in ways that can either benefit society or do the opposite.
Keywords: AI art, Meitu, application, photo editing
Procedia PDF Downloads 66
19439 Bivariate Time-to-Event Analysis with Copula-Based Cox Regression
Authors: Duhania O. Mahara, Santi W. Purnami, Aulia N. Fitria, Merissa N. Z. Wirontono, Revina Musfiroh, Shofi Andari, Sagiran Sagiran, Estiana Khoirunnisa, Wahyudi Widada
Abstract:
The use of multiple time-to-event outcomes is common in assessing interventions in numerous disease areas. An individual might experience two different events, called bivariate time-to-event data; the events may be correlated because they come from the same subject and are also influenced by individual characteristics. The bivariate time-to-event case can be handled by a copula-based bivariate Cox survival model, using the Clayton and Frank copulas to analyze the dependence structure of the events as well as the covariate effects. By applying this method to modeling the recurrent infection of hemodialysis insertion in chronic kidney disease (CKD) patients, we find from the AIC and BIC values that the Clayton copula model is the best model, with Kendall's tau τ = 0.02.
Keywords: bivariate Cox, bivariate event, copula function, survival copula
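The Clayton copula the authors select has a closed form that makes the dependence easy to illustrate. A minimal sketch, assuming the standard Clayton relation τ = θ/(θ+2) and placeholder marginal survival probabilities in place of the fitted Cox curves:

```python
# A minimal sketch of joining two marginal survival probabilities with a
# Clayton copula; theta is derived from Kendall's tau (tau = theta/(theta+2)
# for Clayton), so tau = 0.02 gives weak positive dependence. The marginals
# here are placeholders, not fitted Cox survival values.
import numpy as np

def clayton_joint_survival(s1, s2, tau=0.02):
    theta = 2 * tau / (1 - tau)          # Clayton parameter from Kendall's tau
    s1, s2 = np.asarray(s1), np.asarray(s2)
    return (s1 ** -theta + s2 ** -theta - 1) ** (-1 / theta)

# e.g. joint survival when each marginal survival is 0.8:
print(clayton_joint_survival(0.8, 0.8))  # slightly above 0.64 (independence)
```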
Procedia PDF Downloads 80
19438 Impact of Regulation on Trading in Financial Derivatives in Europe
Authors: H. Florianová, J. Nešleha
Abstract:
Financial derivatives are considered to be risky investment instruments which could possibly bring about another financial crisis. As prevention, the European Union and its member states have released new legal acts regulating this area of law in recent years. There have been several cases in the history of capital markets worldwide showing that legislation may affect the behavior of subjects on capital markets. In our paper, we analyze main events on selected European stock exchanges and apply them to three chosen markets - the Czech capital market represented by the Prague Stock Exchange, the German capital market represented by Deutsche Börse, and the Polish capital market represented by the Warsaw Stock Exchange. We follow time series of the number of listed derivatives on these three stock exchanges in order to evaluate their popularity. Afterwards, we compare newly listed derivatives in relation to the speed of development of these exchanges. We also compare trends in the development of derivatives and shares. We explain how legal regulation may affect the situation on capital markets. If the regulation is too strict, potential investors or traders are not willing to undertake it and move to other markets. On the other hand, if the regulation is too vague, trading scandals occur and the market is not reliable from the perspective of potential investors or issuers. We see that making the regulation stricter usually discourages subjects from staying on the market almost immediately, whereas making the regulation vaguer to attract more subjects is usually a much slower process.
Keywords: capital markets, financial derivatives, investors' behavior, regulation
Procedia PDF Downloads 268
19437 Application of a SubIval Numerical Solver for Fractional Circuits
Authors: Marcin Sowa
Abstract:
The paper discusses a subinterval-based numerical method for fractional derivative computations, now referred to by its acronym - SubIval. The basis of the method is briefly recalled. The ability of the method to be applied in time-stepping solvers is discussed, and the possibility of implementing a solver with an adaptive time step size is also mentioned. The solver is tested on a transient circuit example. In order to display the accuracy of the solver, the results have been compared with those obtained by means of a semi-analytical method called gcdAlpha. The adaptive-step solver applying SubIval has proven to be very accurate, as the results are very close to the reference solution. The solver is currently able to solve FDEs (fractional differential equations) with various derivative orders for each equation and any type of source time function.
Keywords: numerical method, SubIval, fractional calculus, numerical solver, circuit analysis
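SubIval itself is not specified in the abstract, but the underlying task, numerically evaluating a fractional derivative, can be illustrated with the classic Grünwald-Letnikov formula that subinterval methods build on. A minimal sketch:

```python
# A stand-in sketch (not SubIval): the Grünwald-Letnikov approximation of a
# fractional derivative of order alpha on a uniform grid with step h.
import numpy as np

def gl_fractional_derivative(f_samples, alpha, h):
    # GL weights via the recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k)
    n = len(f_samples)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (1 - (alpha + 1) / k)
    # D^alpha f(t_i) ~ h^-alpha * sum_k w_k * f(t_{i-k})
    return np.array([np.dot(w[:i + 1], f_samples[i::-1])
                     for i in range(n)]) / h ** alpha

# Example: D^0.5 of f(t) = t on [0, 1]; the exact result is 2*sqrt(t/pi).
t = np.linspace(0, 1, 201)
approx = gl_fractional_derivative(t, 0.5, t[1] - t[0])
print(approx[-1], 2 * np.sqrt(1 / np.pi))  # both ~1.128
```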
Procedia PDF Downloads 204
19436 An Assessment of Inland Transport Operator's Competitiveness in Phnom Penh, Cambodia
Authors: Savin Phoeun
Abstract:
During the long civil war, the economic, infrastructure, social, and political structures of Cambodia were destroyed, and everything had to start from zero. Transport and communication are key features of national economic growth; inland transport in particular, with other modes in a complementary role, is supported by the government and international organizations, both directly and indirectly, through the private sector and small and medium-sized enterprises. The objectives of this study are to examine the general characteristics, capacity, and competitiveness KPIs of Cambodian inland transport operators. A questionnaire and interviews were built from capacity and competitiveness key performance indicators for a survey of inland transport companies in Phnom Penh, the capital city of Cambodia, and descriptive statistics were applied to analyze the data. The results of this study fall into three distinct areas: 1) Management ability of transport operators - capital management, finances, and qualifications are at a similar level, so operators can compete with local rivals (moderate level). 2) Operational ability - customer service provision is better, but operations appear high-cost because most firms are family-sized. 3) Local Cambodian inland transport service providers are able to compete with each other because they operate at a similar level, while Thai competitors mostly operate at a higher level. Based on these results, it is suggested that inland transport companies gain access to new technology, improve strategic management, and build partnerships (joint ventures or corporations) to reach a larger capital and company size, in order to win customers' trust and customize services to satisfy them. Inland service providers should also shift their focus from cost competition alone to cost saving and service enhancement.
Keywords: assessment, competitiveness, inland transport, operator
Procedia PDF Downloads 261
19435 Advanced Technology for Natural Gas Liquids (NGL) Recovery Using Residue Gas Split
Authors: Riddhiman Sherlekar, Umang Paladia, Rachit Desai, Yash Patel
Abstract:
The competitive scenario of the oil and gas market challenges today's plant designers to achieve designs that meet client expectations despite shrinking budgets, safety requirements, and demands for operating flexibility. Natural gas liquids have three main industrial uses: they can be used as fuels, as petrochemical feedstock, or as refinery blends that can be further processed and sold as straight-run cuts, such as naphtha, kerosene, and gas oil. NGL extraction is not a chemical reaction. It involves the separation of heavier hydrocarbons from the main gas stream through pressure and temperature reduction, which, depending upon the degree of NGL extraction, may involve a cryogenic process. Previous technologies, i.e., short-cycle dry desiccant absorption, Joule-Thomson or low-temperature refrigeration, and lean oil absorption, have been giving ethane recoveries of only 40 to 45%, which is unsatisfactory in the current downturn market. Here, a new technology is suggested for boosting recoveries up to 95% for ethane+ and up to 99% for propane+ components. Cryogenic plants provide reboiling to demethanizers by using part of the inlet feed gas, or inlet feed split. If the two stream temperatures are not similar, there is lost work in the mixing operation unless the designer has access to some proprietary design. The concept introduced in this process consists of reboiling the demethanizer with the residue gas, or residue gas split. The innovation of this process is that it does not use the typical inlet gas feed split type of flow arrangement to reboil the demethanizer or deethanizer column, but instead uses an open heat pump scheme to that effect. The residue gas compressor provides the heat pump effect. The heat pump stream is then further cooled and enters the top section of the column as a cold reflux. Because of the nature of this design, the process offers the opportunity to operate at full ethane rejection or recovery. The scheme is also very adaptable to revamping existing facilities. This advancement not only enhances the results but also provides operational flexibility, optimizes heat exchange, reduces equipment cost, and opens a future for innovative designs while keeping execution costs low.
Keywords: deethanizer, demethanizer, residue gas, NGL
Procedia PDF Downloads 264
19434 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study
Authors: D. M. Samartsev, A. G. Copping
Abstract:
As in all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often personal and lacking in objective figures and measurements. This results in confusion and barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions in the area of design automation. This paper proposes a framework to quantify the progress of automation within the design process. A reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, as well as locations and projects. The methodology is informed by the design of this framework - taking on the aspects of a systematic review but compressed in time to allow for an initial set of data to verify the validity of the framework. Such a framework of quantification enables various practical uses, such as predicting the future of the architectural industry with regard to which tasks will be automated, as well as making more informed decisions on the subject of automation on multiple levels, ranging from individual decisions to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task that needs to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can then be recursively split further as required. Each task is then assigned a series of milestones that allow for the objective analysis of its automation progress. By combining these two approaches, it is possible to create a data structure that describes how much various parts of the architectural design process are automated. The data gathered in the paper serves the dual purposes of validating the framework and giving insights into the current situation of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process has been automated in some practical fashion at the time of writing, with the rate of progress slowly increasing over the years; the majority of tasks in the design process reach a new milestone in automation in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined in this paper, as well as further areas of study.
Keywords: analysis, architecture, automation, design process, technology
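The recursive work-breakdown structure described here maps naturally onto a tree of tasks, each either scoring its own automation milestone or aggregating its subtasks. A minimal sketch with illustrative task names and an assumed 0-1 scoring scale (not the paper's actual milestone system):

```python
# A minimal sketch of the recursive work-breakdown idea: each leaf task
# carries an automation milestone score; branches aggregate their subtasks.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    automation: float = 0.0          # milestone score in [0, 1] for leaf tasks
    subtasks: list["Task"] = field(default_factory=list)

    def automation_level(self) -> float:
        # Leaf: its own score; branch: mean of subtasks (unweighted here).
        if not self.subtasks:
            return self.automation
        return sum(t.automation_level() for t in self.subtasks) / len(self.subtasks)

design = Task("Design building", subtasks=[
    Task("Concept design", subtasks=[
        Task("Massing studies", automation=0.6),   # e.g. generative design tools
        Task("Client briefing", automation=0.1),
    ]),
    Task("Technical design", subtasks=[
        Task("Structural sizing", automation=0.7),
        Task("Detail drawings", automation=0.4),
    ]),
])
print(f"{design.automation_level():.0%} automated")  # 45% for this toy tree
```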
Procedia PDF Downloads 104
19433 A New Second Tier Screening for Congenital Adrenal Hyperplasia Utilizing One Dried Blood Spot
Authors: Engy Shokry, Giancarlo La Marca, Maria Luisa Della Bona
Abstract:
Newborn screening for congenital adrenal hyperplasia (CAH) relies on quantification of 17α-hydroxyprogesterone using enzyme immunoassays. These assays, in spite of being rapid, readily available, and easy to perform, are of questionable reliability due to their lack of selectivity and specificity, resulting in a large number of false positives and, consequently, family anxiety and associated hospitalization costs. To improve the specificity of conventional 17α-hydroxyprogesterone screening, which may show false transient elevation in preterm, low-birth-weight, or acutely ill neonates, steroid profiling by LC-MS/MS was implemented as a second-tier test. Previously applied LC-MS/MS methods have the disadvantage of requiring a relatively high number of blood drops; since the number of newborn screening tests is increasing, it is necessary to minimize the sample volume requirement to make maximum use of the blood samples collected on filter paper. The proposed new method requires just one 3.2 mm dried blood spot (DBS) punch. Extraction was done using methanol:water:formic acid (90:10:0.1, v/v/v) containing deuterium-labelled internal standards. Extracts were evaporated and reconstituted in 10% acetone in water. A column-switching strategy for on-line sample clean-up was applied to improve the chromatographic run. The first separative step retained the investigated steroids and passed through the majority of high-molecular-weight impurities. After the valve switching, the investigated steroids are back-flushed from the POROS® column onto the analytical column and separated using gradient elution. The quantitation limits were found to be 5, 10, and 50 nmol/L for 17α-hydroxyprogesterone, androstenedione, and cortisol, respectively, with mean recoveries between 98.31 and 103.24% and intra-/inter-assay CV% < 10%, except at the LLOQ. The method was validated using standard addition calibration and isotope dilution strategies. Reference ranges were determined by analysing samples from 896 infants of various ages at the time of sample collection. The method was also applied to patients with confirmed CAH. Our method represents an attractive combination of low sample volume requirement, minimal sample preparation time without derivatization, and quick chromatography (5 min). The three-steroid profile and the concentration ratio ((17OHP + androstenedione)/cortisol) allowed better screening outcomes for CAH, reducing false positives and the associated costs and anxiety.
Keywords: congenital adrenal hyperplasia (CAH), 17α-hydroxyprogesterone, androstenedione, cortisol, LC-MS/MS
Procedia PDF Downloads 437
19432 Old Swimmers Tire Quickly: The Effect of Time on Quality of Thawed versus Washed Sperm
Authors: Emily Hamilton, Adiel Kahana, Ron Hauser, Shimi Barda
Abstract:
BACKGROUND: In the male fertility and sperm bank unit of the Tel Aviv Sourasky Medical Center, women are treated with intrauterine insemination (IUI) using washed sperm from their partner or thawed sperm from a selected donor. In most cases, the women undergo the IUI treatment at Sourasky, but sometimes they ask to undergo the insemination procedure in another clinic with their own fertility doctor. In these cases, the sperm sample is prepared at the Sourasky lab and the patient is inseminated after arriving at her doctor. Our laboratory has previously found that time negatively affects several parameters of thawed sperm, and we hypothesized that this effect is more severe and significant than for washed sperm. AIM: To examine the effect of time on the quality of washed versus thawed sperm. METHODS: Sperm samples were collected from men referred for semen analysis. Each ejaculate was allowed to liquefy for at least 20 min at 37°C and analyzed for sperm motility and vitality percentages and DNA fragmentation index (Time 0). Subsequently, 1 ml of the sample was divided into two parts: the first part was washed only, and the second part was washed, frozen, and thawed. Time 1 analysis occurred immediately after sperm washing or thawing. Time 2 analysis occurred 75 minutes after Time 1. Statistical analysis was performed using Student's t-test; P values < 0.05 were considered significant. RESULTS: Preliminary data showed that time had a greater impact on the average percentages of sperm motility and vitality in thawed compared to washed sperm samples (26%±10% vs. 21%±10% and 21%±9% vs. 9%±10%, respectively). An additional trend towards increased average DNA fragmentation percentage in thawed samples compared to washed samples was observed (46%±18% vs. 25%±24%). CONCLUSION: Time negatively affects sperm quality, and the effect is greater in thawed samples than in fresh samples.
Keywords: ART, male fertility, sperm cryopreservation, sperm quality
Procedia PDF Downloads 192
19431 Determination of Selected Engineering Properties of Giant Palm Seeds (Borassus Aethiopum) in Relation to Its Oil Potential
Authors: Rasheed Amao Busari, Ahmed Ibrahim
Abstract:
The engineering properties of giant palm seeds are crucial for the rational design of processing and handling systems. This research investigated some engineering properties of giant palm seeds in relation to their oil potential. Ripe giant palm fruits were sourced from parts of Zaria in Kaduna State and Ado Ekiti in Ekiti State, Nigeria. The mesocarps of the collected fruits were removed to obtain the nuts, and the nuts were dried under ambient conditions for several days. The actual moisture content of the nuts at the time of the experiment was determined using a KT100S moisture meter and ranged from 17.9% to 19.15%. The physical properties determined were the axial dimensions, geometric mean diameter, arithmetic mean diameter, sphericity, true and bulk densities, porosity, angle of repose, and coefficient of friction. The nuts were measured using a vernier caliper for physical assessment of their sizes. The axial dimensions of 100 nuts were taken; the results show that the size ranges from 7.30 to 9.32 cm for the major diameter, 7.2 to 8.9 cm for the intermediate diameter, and 4.2 to 6.33 cm for the minor diameter. The mechanical properties determined were compressive force, compressive stress, and deformation, both at peak and at break, using an Instron hydraulic universal testing machine. The work also revealed that the giant palm seed can be classified as an oil-bearing seed: the seeds gave an oil yield of 18% using the solvent extraction method. The results obtained from the study will help in solving problems of equipment design, handling, and further processing of the seeds.
Keywords: giant palm seeds, engineering properties, oil potential, moisture content, giant palm fruit
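The geometric mean diameter, arithmetic mean diameter, and sphericity named in the abstract follow standard formulas from postharvest engineering. A minimal sketch using the midpoints of the reported axial ranges as illustrative inputs (not the study's computed values):

```python
# A minimal sketch of standard seed size descriptors: arithmetic and
# geometric mean diameters and sphericity, from the three axial dimensions.
def size_descriptors(major, intermediate, minor):
    amd = (major + intermediate + minor) / 3          # arithmetic mean diameter
    gmd = (major * intermediate * minor) ** (1 / 3)   # geometric mean diameter
    sphericity = gmd / major                          # Mohsenin's definition
    return amd, gmd, sphericity

amd, gmd, phi = size_descriptors(8.31, 8.05, 5.27)    # midpoint axial sizes, cm
print(f"AMD={amd:.2f} cm, GMD={gmd:.2f} cm, sphericity={phi:.2f}")
```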
Procedia PDF Downloads 74
19430 Assessment of the Egyptian Agricultural Foreign Trade with Common Market for Eastern and Southern Africa Countries
Authors: Doaa H. I. Mahmoud, El-Said M. Elsharkawy, Saad Z. Soliman, Soher E. Mustfa
Abstract:
The opening of new promising foreign markets is one of the objectives of Egypt's foreign trade policies, especially for agricultural exports. This study examines the commodity structure of Egyptian agricultural imports and exports with the COMESA countries. In addition, the surplus/deficit of the Egyptian commodity and agricultural balance with these countries is estimated. Time series data covering the period 2004-2016 are used. The growth function is estimated, and the annual growth rates of the study's variables are derived. The results for the study period include the following: (1) The average total Egyptian exports to the COMESA (Common Market for Eastern and Southern Africa) countries are estimated at 1,491 million dollars, with an annual growth rate of 14.4% (214.7 million dollars). (2) The average annual Egyptian agricultural exports to these economies are estimated at 555 million dollars, with an annual growth rate of 19.4% (107.7 million dollars). (3) The average annual value of agricultural imports from the COMESA countries is 289 million dollars, with an annual growth rate of 14.4% (41.6 million dollars). (4) The study shows a continuous surplus in the agricultural balance with these economies, whilst having a deficit in the raw-materials agricultural balance, as well as in the balance of input requirements with these countries.
Keywords: COMESA, Egypt, growth rates, trade balance
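Annual growth rates like those reported are conventionally derived by fitting an exponential growth function Y = a·e^(bt) via log-linear regression and converting the slope to a percentage. A minimal sketch on a synthetic series (the real trade data are not reproduced here):

```python
# A minimal sketch of growth-function estimation: regress ln(Y) on t and
# convert the slope b to a percentage annual growth rate. Synthetic data.
import numpy as np

def annual_growth_rate(years, values):
    b, _ = np.polyfit(years - years[0], np.log(values), 1)
    return (np.exp(b) - 1) * 100        # percent per year

years = np.arange(2004, 2017)
exports = 555 * 1.194 ** (years - 2010)  # synthetic series growing ~19.4%/yr
print(f"{annual_growth_rate(years, exports):.1f}% per year")  # ~19.4
```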
Procedia PDF Downloads 208
19429 Upon One Smoothing Problem in Project Management
Authors: Dimitri Golenko-Ginzburg
Abstract:
A CPM network project with deterministic activity durations, in which activities require homogeneous resources with fixed capacities, is considered. The problem is to determine the optimal schedule of starting times for all network activities within their maximal allowable limits (in order not to exceed the network's critical time) so as to minimize the maximum resources required by the project at any point in time. In the case when a non-critical activity may start only at discrete moments with a pregiven time span, the problem becomes NP-complete, and an optimal solution may be obtained via a look-over algorithm. For the case when a look-over requires too much computational time, an approximate algorithm is suggested. The algorithm's performance ratio, i.e., the relative accuracy error, is determined. Experimentation has been undertaken to verify the suggested algorithm.
Keywords: resource smoothing problem, CPM network, lookover algorithm, lexicographical order, approximate algorithm, accuracy estimate
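The objective being minimized, the peak resource requirement over the project horizon, is simple to state in code. A minimal sketch with a toy three-activity network (illustrating the quantity optimized, not the paper's algorithm):

```python
# A minimal sketch of the smoothing objective: for a candidate set of start
# times, accumulate the resource profile and report its peak.
def peak_resource_usage(starts, activities, horizon):
    # activities: list of (duration, resource_units)
    profile = [0] * horizon
    for start, (dur, res) in zip(starts, activities):
        for t in range(start, start + dur):
            profile[t] += res
    return max(profile)

acts = [(3, 2), (2, 4), (4, 1)]                 # (duration, resource demand)
print(peak_resource_usage([0, 0, 3], acts, 8))  # 6: activities 1 and 2 overlap
print(peak_resource_usage([0, 3, 3], acts, 8))  # 5: shifting activity 2 smooths the peak
```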
Procedia PDF Downloads 301
19428 Informal Land Subdivision and Its Implications for Infrastructural Development in Kano Metropolis, Nigeria
Authors: A. A. Yakub, Omavudu Ikogho
Abstract:
Land subdivision in most peri-urban areas of Kano metropolis is the entrenched prerogative of 'KAFADA', a group of informal plot partitioners who oversee the demarcation of mostly former farmland into residential plots, popularly called 'awon igiya', for those in need. With time, these areas are engulfed by the rapidly expanding urban landscape and form clusters of poorly planned settlements with a tendency to become future slums. This paper studies the practice of informal land subdivision in Kano metropolis with emphasis on the practitioners, the institutional framework, and the demand and supply scenario that sustains this trend, as well as the extent of infrastructural development in these areas. Using three selected informally planned settlements as case studies, a series of interviews and questionnaires were administered to 'KAFADA', residents, and state land officers to generate data on these areas. Another set of data was similarly generated in three government-subdivided residential layouts, and both sets were analysed comparatively. The findings identify varying levels of infrastructural deficit in the informal communities compared to the planned neighbourhoods, seen to be a result of the absence of government participation and of an informal subdivision process that did not provide for proper planning standards. This study recommends that the regulatory agencies concerned register and partner with KAFADA to ensure that minimal planning standards are maintained in future settlements.
Keywords: peri-urban, informal land markets, land subdivision, infrastructure
Procedia PDF Downloads 282
19427 Toward a Characteristic Optimal Power Flow Model for Temporal Constraints
Authors: Zongjie Wang, Zhizhong Guo
Abstract:
While the regular optimal power flow model focuses on a single time snapshot, the optimization of power systems is typically intended for a time duration with respect to a desired objective function. In this paper, a temporal optimal power flow model for a time period is proposed. To reduce the computational burden of calculating temporal optimal power flow, a characteristic optimal power flow model is proposed, which employs different characteristic load patterns to represent the objective function and security constraints. A numerical method based on the interior point method is also proposed for solving the characteristic optimal power flow model. Both the temporal optimal power flow model and the characteristic optimal power flow model can improve the system's desired objective function over the entire time period. Numerical studies are conducted on the IEEE 14- and 118-bus test systems to demonstrate the effectiveness of the proposed characteristic optimal power flow model.
Keywords: optimal power flow, time period, security, economy
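The idea of characteristic load patterns can be illustrated with a toy dispatch problem: instead of optimizing every time step, a single problem is solved over a few representative load levels weighted by their time shares. A minimal sketch with illustrative costs and limits, using a general-purpose solver rather than the paper's interior point method:

```python
# A minimal sketch of dispatch over characteristic load patterns: each
# pattern (load level, time-share weight) gets its own dispatch variables,
# and the weighted total cost is minimized. All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

cost_coeff = np.array([0.02, 0.035])        # quadratic cost per generator
p_max = np.array([300.0, 200.0])            # generation limits (MW)
patterns = [(250.0, 0.5), (400.0, 0.3), (150.0, 0.2)]  # (load MW, weight)

def total_cost(p):  # p holds one dispatch per pattern, flattened
    p = p.reshape(len(patterns), 2)
    return sum(w * cost_coeff @ (p[i] ** 2) for i, (_, w) in enumerate(patterns))

cons = [{"type": "eq", "fun": (lambda p, i=i, d=d: p.reshape(-1, 2)[i].sum() - d)}
        for i, (d, _) in enumerate(patterns)]            # balance per pattern
bounds = [(0, pm) for _ in patterns for pm in p_max]
res = minimize(total_cost, x0=np.full(6, 100.0), bounds=bounds, constraints=cons)
print(res.x.reshape(-1, 2))   # cheapest split of each characteristic load
```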
Procedia PDF Downloads 448
19426 Application of Model Free Adaptive Control in Main Steam Temperature System of Thermal Power Plant
Authors: Khaing Yadana Swe, Lillie Dewan
Abstract:
At present, cascade PID control is widely used to control the superheating temperature (main steam temperature). As the main steam temperature has the characteristics of large inertia, large time delay, and time-varying behavior, a conventional PID control strategy cannot achieve good control performance. In order to overcome the poor performance and deficiencies of the main steam temperature control system, a Model-Free Adaptive Control (MFAC)-P cascade control system is proposed in this paper. By substituting MFAC for PID in the main control loop of the main steam temperature control, the scheme can overcome time delays, non-linearity, disturbances, and time variation.
Keywords: model-free adaptive control, cascade control, adaptive control, PID
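One common realization of MFAC is the compact-form dynamic linearization scheme (after Hou and Jin), which estimates a pseudo-partial derivative online and needs no plant model. The sketch below assumes that scheme, with illustrative gains and a toy first-order plant; it is not the paper's tuned cascade design.

```python
# A minimal sketch of compact-form MFAC: estimate the pseudo-partial
# derivative phi from measured I/O data, then update the control input.
def mfac_step(y, y_prev, u_prev, du_prev, phi, setpoint,
              eta=0.5, mu=1.0, rho=0.6, lam=1.0):
    # 1) update the pseudo-partial-derivative estimate from the last I/O pair
    phi += eta * du_prev / (mu + du_prev ** 2) * ((y - y_prev) - phi * du_prev)
    # 2) control law driven only by measured data, no plant model required
    u = u_prev + rho * phi / (lam + phi ** 2) * (setpoint - y)
    return u, phi

# toy plant: y(k+1) = 0.8*y(k) + 0.3*u(k); track a setpoint of 1.0
y, y_prev, u, du, phi = 0.0, 0.0, 0.0, 0.0, 0.5
for _ in range(50):
    u_new, phi = mfac_step(y, y_prev, u, du, phi, setpoint=1.0)
    du, u = u_new - u, u_new
    y_prev, y = y, 0.8 * y + 0.3 * u
print(round(y, 3))  # settles near the 1.0 setpoint
```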
Procedia PDF Downloads 600
19425 Structure of the Working Time of Nurses in Emergency Departments in Polish Hospitals
Authors: Jadwiga Klukow, Anna Ksykiewicz-Dorota
Abstract:
An analysis of the distribution of nurses' working time constitutes vital information for management in planning employment. The objective of the study was to analyze the distribution of nurses' working time in an emergency department. The study was conducted in the emergency department of a teaching hospital in Lublin, in southeast Poland. The catalogue of activities performed by nurses was compiled by means of continuous observation. Identified activities were classified into four groups: direct care, indirect care, coordination of work in the department, and personal activities. The distribution of nurses' working time was determined by work sampling observation (Tippett) at random intervals. The research project was approved by the Research Ethics Committee of the Medical University of Lublin (Protocol 0254/113/2010). On average, nurses spent 31% of their working time on direct care, 47% on indirect care, 12% on coordinating work in the department, and 10% on personal activities. The most frequently performed direct care tasks were diagnostic activities - 29.23% - and treatment-related activities - 27.69%. The study has provided information on the complexity of performed activities and the utilization of nurses' working time. Enhancing the effectiveness of nursing actions requires working out a strategy for improved management of the time nurses spend at work. Increasing the involvement of auxiliary staff and optimizing communication processes within the team may reduce the time devoted to indirect care for the benefit of direct care.
Keywords: emergency nurses, nursing care, workload, work sampling
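The work sampling technique referenced (Tippett) rests on simple binomial arithmetic: the number of random observations needed to estimate an activity proportion to a given precision. A minimal sketch using the reported 31% direct-care share (the precision and confidence values are illustrative, not the study's design parameters):

```python
# A minimal sketch of the standard work-sampling sample-size formula:
# n = z^2 * p * (1 - p) / e^2 for proportion p, precision e, z-score z.
import math

def required_observations(p, precision=0.03, z=1.96):
    return math.ceil(z ** 2 * p * (1 - p) / precision ** 2)

print(required_observations(0.31))  # ~914 observations for +/-3% at 95% confidence
```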
Procedia PDF Downloads 332
19424 Comparative Study on Inhibiting Factors of Cost and Time Control in Nigerian Construction Practice
Authors: S. Abdulkadir, I. Y. Moh’d, S. U. Kunya, U. Nuruddeen
Abstract:
The basis of any contract formation between the client and the contractor is the budgeted cost and the estimated duration of the project. These variables are of paramount importance to a project's sponsor and in assessing the success or viability of construction projects. Despite the availability of various techniques of cost and time control, many projects fail to achieve their initial estimated cost and time. This paper evaluates the inhibiting factors of cost and time control in Nigerian construction practice and compares the results with United Kingdom practice as identified by an earlier researcher. The study population comprises construction professionals within Bauchi and Gombe states, Nigeria; judgmental sampling was employed in determining the number of respondents. Descriptive statistics were used in analyzing the data in SPSS. Design change, project fraud and corruption, and financing and payment for completed work were found to be common among the top five inhibiting factors of cost and time control in the study area. Furthermore, the results show some correspondence, with slight contrasts, with United Kingdom practice. The study recommends adopting the mitigation measures developed in the UK and then assessing their effectiveness, as well as developing mitigation measures for the other top factors not covered by those developed in United Kingdom practice. It also recommends a wider comparative assessment of the identified inhibiting factors of cost and time control, covering almost all parts of Nigeria.
Keywords: comparison, cost, inhibiting factor, United Kingdom, time
Procedia PDF Downloads 439
19423 Bacterial Causes of Cerebral Abscess and Impact on Long Term Patient Outcomes
Authors: Umar Rehman, Holly Roy, K. T. Tsang, D. S. Jeyaretna, W. Singleton, B. Fisher, P. A. Glew, J. Greig, Peter C. Whitfield
Abstract:
Introduction: A brain abscess is a life-threatening condition carrying significant mortality. It requires rapid identification and treatment. Management involves a combination of antibiotics and surgery. The aim of the current study was to identify the common bacteria responsible for cerebral abscesses, as well as the long-term functional and neurological outcomes of patients following treatment, in a retrospective series at a single UK neurosurgical centre. Methodology: We analysed patients who had received a diagnosis of 'cerebral abscess' or 'subdural empyema' between June 2002 and June 2018, in the form of a retrospective review. The search resulted in a total of 180 patients, with 37 patients excluded (spinal abscess, age below 18, or non-abscess-related admissions). Data were collected from medical case notes, including information about demographics, comorbidities, immunosuppression, presentation, size/location of lesions, pathogens, treatment, and outcomes. Results: In total, we analysed 143 patients between the ages of 18 and 90. Focal neurological deficit and headaches were seen in 84% and 68% of patients, respectively. In total, 108 positive brain cultures were seen, with the largest proportion, 59.2%, being gram-positive cocci; Streptococcus intermedius was the most common pathogen, identified in 13.9% of patients. Of the patients with positive blood cultures (n=11), 72.7% showed the same organism in both the blood and the brain cultures. Long-term outcomes (n=72) revealed that 48% of patients were seizure-free without requiring anti-epileptics, and 51.3% of patients had full recovery of their neurological symptoms. There was a mortality rate of 13.9% in the series. Conclusion: The largest bacterial cause of abscess within our population was gram-positive cocci. The majority of patients demonstrated full neurological recovery, with close to half of patients not requiring anti-epileptics following discharge.
Keywords: bacteria, cerebral abscess, long term outcome, neurological deficit
Procedia PDF Downloads 116
19422 Analysis of Kinetin Supramolecular Complex with Glycyrrhizic Acid by the Mass-Spectrometry Method
Authors: Bakhtishod Matmuratov, Sakhiba Madraximova, Rakhmat Esanov, Alimjan Matchanov
Abstract:
Studies have been performed to obtain complexes of glycyrrhizic acid and kinetin in a 2:1 ratio. The formation of a molecular complex of glycyrrhizic acid and kinetin in a 2:1 ratio was evidenced by determining the molecular masses using chromato-mass spectrometry and analyzing the IR spectra.
Keywords: monoammonium salt of glycyrrhizic acid, glycyrrhizic acid, supramolecular complex, isomolar series, IR spectroscopy
Procedia PDF Downloads 175
19421 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure
Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer
Abstract:
The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed based on video data, aimed at uncovering so-called 'multimodal gestalts': patterns of linguistic and embodied conduct that reoccur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using videos) have so far depended on time- and resource-intensive processes of manually transcribing each component from video materials. Automating these tasks requires advanced programming skills, which are often outside the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated on one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data. The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, while integrating different data source formats in a way that allows them to be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search across many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (which can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented in combination with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, and to correlate those different levels, perform queries, and run analyses.
Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition
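The kind of cross-layer query described, correlating gestures with lexical patterns, reduces to time-interval overlap between annotation layers. A minimal sketch with toy annotations; the layer format and demonstrative list are assumptions, not VIAN-DH's actual data model.

```python
# A minimal sketch of a parallel, chronologically aligned query: find where
# a gesture annotation overlaps a word annotation in time (e.g. pointing
# gestures co-occurring with demonstratives).
def overlapping(layer_a, layer_b):
    # each annotation: (start_sec, end_sec, label)
    return [(a, b) for a in layer_a for b in layer_b
            if a[0] < b[1] and b[0] < a[1]]          # interval overlap test

words = [(0.0, 0.4, "look"), (0.5, 0.9, "at"), (1.0, 1.3, "that")]
gestures = [(0.9, 1.6, "pointing")]
demonstratives = {"that", "this", "those", "these"}

for gesture, word in overlapping(gestures, words):
    if word[2] in demonstratives:
        print(f"'{word[2]}' at {word[0]}s co-occurs with {gesture[2]}")
```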
Procedia PDF Downloads 107
19420 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data; Impact of Image Format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretation and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out, as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as computed tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-Net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest area under the curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format
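The intensity preprocessing described, HU clipping to (−1000, 400), normalization, and zero-centering, can be sketched directly. Loading, resampling, and segmentation are omitted, and the random volume below is a stand-in for a real scan, not the study's data.

```python
# A minimal sketch of the stated CT preprocessing: clip to (-1000, 400) HU,
# normalize to [0, 1], and zero-center a volume already resampled to
# 128 x 128 x 60.
import numpy as np

def preprocess_ct(volume_hu):
    v = np.clip(volume_hu, -1000, 400).astype(np.float32)
    v = (v + 1000) / 1400          # map HU range to [0, 1]
    return v - v.mean()            # zero-center

volume = np.random.uniform(-2000, 1500, size=(128, 128, 60))  # stand-in scan
x = preprocess_ct(volume)
print(x.shape, round(float(x.mean()), 6))  # (128, 128, 60) ~0.0
```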
Procedia PDF Downloads 85
19419 Ionometallurgy for Recycling Silver in Silicon Solar Panel
Authors: Emmanuel Billy
Abstract:
This work is part of the CABRISS project (an H2020 project), which aims at developing innovative, cost-effective methods for the extraction of materials from different sources of PV waste: Si-based panels, thin-film panels, and Si water-diluted slurries. Aluminum, silicon, indium, and silver, in particular, will be extracted from these wastes in order to constitute a materials feedstock that can later be used in a closed-loop process. The extraction of metals from silicon solar cells is often an energy-intensive process. It requires either smelting or leaching at elevated temperature, or the use of large quantities of strong acids or bases that require energy to produce. The energy input equates to a significant cost and an associated CO2 footprint, both of which it would be desirable to reduce. There is thus a need to develop more energy-efficient and environmentally compatible processes. 'Ionometallurgy' could offer one such set of environmentally benign processes for metallurgy. This work demonstrates that ionic liquids provide one such method, since they can be used to dissolve and recover silver. The overall process combines leaching, recovery, and the possibility of reusing the solution in a closed-loop process. This study aims to evaluate and compare different ionic liquids for leaching and recovering silver. An electrochemical analysis is first implemented to define the best system for the dissolution of Ag. The effects of temperature, concentration, and oxidizing agent are evaluated by this approach. Further, a comparative study of leaching efficiency between the conventional approaches (nitric acid, thiourea) and the ionic liquids (Cu and Al) is conducted. Specific attention has been paid to the selection of the ionic liquids. Electrolytes composed of chelating anions (Cl, Br, I) are used to facilitate the lixiviation, avoid problems with the solubility of metallic species, and dispense with classical additional ligands. This approach reduces the cost of the process and facilitates the reuse of the leaching medium. To define the most suitable ionic liquids, electrochemical experiments have been carried out to evaluate the oxidation potential of the silver included in the crystalline solar cells. Then, chemical dissolution of metals from crystalline solar cells has been performed for the most promising ionic liquids. After the chemical dissolution, electrodeposition has been performed to recover the silver in metallic form.
Keywords: electrodeposition, ionometallurgy, leaching, recycling, silver
Procedia PDF Downloads 246
19418 Analysis of Fine Motor Skills in Chronic Neurodegenerative Models of Huntington's Disease and Amyotrophic Lateral Sclerosis
Authors: T. Heikkinen, J. Oksman, T. Bragge, A. Nurmi, O. Kontkanen, T. Ahtoniemi
Abstract:
Motor impairment is an inherent phenotypic feature of several chronic neurodegenerative diseases, and pharmacological therapies aimed at counterbalancing the motor disability have great market potential. Animal models of chronic neurodegenerative diseases display a deteriorating motor phenotype during disease progression. There is a wide array of behavioral tools to evaluate motor functions in rodents. However, currently existing methods to study motor functions in rodents are often limited to evaluating gross motor functions, and only at advanced stages of the disease phenotype. The most commonly applied traditional motor assays used in CNS rodent models lack the sensitivity to capture fine motor impairments or improvements. Fine motor skill characterization in rodents provides a more sensitive tool to capture subtler motor dysfunctions and therapeutic effects. Importantly, a similar approach, kinematic movement analysis, is also used in the clinic, applied both in diagnosis and in the determination of therapeutic response to pharmacological interventions. The aim of this study was to apply kinematic gait analysis, a novel and automated high-precision movement analysis system, to characterize phenotypic deficits in three different chronic neurodegenerative animal models: a transgenic mouse model (SOD1 G93A) of amyotrophic lateral sclerosis (ALS), and the R6/2 and Q175KI mouse models of Huntington's disease (HD). The readouts from walking behavior included gait properties with kinematic data and body movement trajectories, including analysis of various points of interest such as the movement and position of landmarks in the torso, tail, and joints. Mice (transgenic and wild-type) from each model were analyzed for fine motor kinematic properties at young ages, prior to the age when gross motor deficits are clearly pronounced. Fine motor kinematic evaluation was continued in the same animals until clear motor dysfunction was evident with conventional motor assays. Time course analysis revealed clear fine motor skill impairments in each transgenic model earlier than is seen with conventional gross motor tests. Motor changes were quantitatively analyzed for up to ~80 parameters, and the largest data sets of the HD models were further processed with principal component analysis (PCA) to transform the pool of individual parameters into a smaller and focused set of mutually uncorrelated gait parameters showing strong genotype differences. The kinematic fine motor analysis of transgenic animal models described in this presentation shows that this method is a sensitive, objective, and fully automated tool that allows earlier and more sensitive detection of progressive neuromuscular and CNS disease phenotypes. As a result of the analysis, a comprehensive set of fine motor parameters for each model is created; these parameters provide a better understanding of disease progression and enhance the sensitivity of this assay for therapeutic testing compared to classical motor behavior tests. In SOD1 G93A, R6/2, and Q175KI mice, the alterations in gait were evident already several weeks earlier than with traditional gross motor assays. Kinematic testing can be applied to a wider set of motor readouts beyond gait in order to study whole-body movement patterns, such as in relation to joints and various body parts, longitudinally, providing a sophisticated and translatable method for dissecting motor components in rodent disease models and evaluating therapeutic interventions.
Keywords: gait analysis, kinematics, motor impairment, inherent feature
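The PCA step described, compressing ~80 correlated gait parameters into a few uncorrelated components that separate genotypes, can be sketched with scikit-learn. The random data below merely mimics a genotype shift; it is not the study's data, and the group sizes are illustrative.

```python
# A minimal sketch of PCA over gait parameters: reduce 80 correlated
# measurements per animal to 3 uncorrelated components, then compare genotypes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wild_type = rng.normal(0.0, 1.0, size=(20, 80))   # 20 animals x 80 gait parameters
transgenic = rng.normal(0.5, 1.0, size=(20, 80))  # shifted to mimic a genotype effect

X = np.vstack([wild_type, transgenic])
scores = PCA(n_components=3).fit_transform(X)     # mutually uncorrelated components

# Genotype separation shows up as a mean difference on the leading component.
print(scores[:20, 0].mean(), scores[20:, 0].mean())
```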
Procedia PDF Downloads 354
19417 Physico-Chemical Characterization of the Essential Oil of Daucus carota
Authors: Nassima Behidj-Benyounes, Thoraya Dahmene, Khaled Benyounes, Nadjiba Chebouti, F/Zohra Bissaad
Abstract:
Essential oils have significant antimicrobial activity, and these oils can successfully replace antibiotics as microorganisms increasingly show resistance to them. For this reason, we studied the physicochemical properties and antimicrobial activity of the essential oil of Daucus carota. The extraction was done by steam distillation, which gave a very significant yield of 4.65%. The analysis of the essential oil was performed by GC/MS and allowed us to identify 32 compounds in the oil of D. carota flowering tops from Bouira. The three major compounds are α-pinene (22.3%), carotol (21.7%), and limonene (15.8%).
Keywords: Daucus carota, essential oil, α-pinene, carotol, limonene
Procedia PDF Downloads 387