1672 Design of Two-Channel Quadrature Mirror Filter Banks Using a Transformation Approach
Authors: Ju-Hong Lee, Yi-Lin Shieh
Abstract:
Two-dimensional (2-D) quadrature mirror filter (QMF) banks have been widely considered for high-quality coding of image and video data at low bit rates. To implement subband coding without distortion, a 2-D QMF bank is required to have an exactly linear-phase response and no magnitude distortion, i.e., the perfect reconstruction (PR) characteristic. The design of 2-D QMF banks with PR characteristics has been considered in the literature for many years. This paper presents a transformation approach for designing 2-D two-channel QMF banks. Under a suitable one-dimensional (1-D) to two-dimensional (2-D) transformation with a specified decimation/interpolation matrix, the analysis and synthesis filters of the QMF bank are composed of 1-D causal, stable digital allpass filters (DAFs) and possess the 2-D doubly complementary half-band (DC-HB) property. This reduces the design of the two-channel QMF bank to finding the real coefficients of the 1-D recursive DAFs. The design problem is formulated as a minimax phase approximation for the 1-D DAFs. A novel objective function is derived that casts the 1-D minimax phase approximation as an optimization problem, which can then be solved in the minimax (L∞) sense by the well-known weighted least-squares (WLS) algorithm. The novelty of the proposed method is that the design procedure is very simple, while the designed 2-D QMF bank achieves a perfect magnitude response and a satisfactory phase response. Simulation results show that the proposed method provides much better design performance and much lower design complexity than existing techniques.
Keywords: quincunx QMF bank, doubly complementary filter, digital allpass filter, WLS algorithm
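The minimax-via-WLS idea can be illustrated with Lawson-style iteratively reweighted least squares: an ordinary WLS problem is re-solved repeatedly with the weights inflated on the worst-fit points, so the solution drifts toward the L∞ optimum. The sketch below fits a line to toy data rather than allpass phase errors; all data values and iteration counts are invented for illustration and are not from the paper.

```python
def wls_line(xs, ys, w):
    # weighted least-squares fit of y ~ a + b*x via the 2x2 normal equations
    sw = sum(w)
    sx = sum(wi * x for wi, x in zip(w, xs))
    sy = sum(wi * y for wi, y in zip(w, ys))
    sxx = sum(wi * x * x for wi, x in zip(w, xs))
    sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    det = sw * sxx - sx * sx
    return ((sy * sxx - sx * sxy) / det, (sw * sxy - sx * sy) / det)

def lawson_minimax(xs, ys, iters=200):
    # Lawson's update: after each WLS solve, scale each point's weight by its
    # residual, so large errors dominate the next solve and the fit drifts
    # toward the minimax (L-infinity) solution
    w = [1.0] * len(xs)
    for _ in range(iters):
        a, b = wls_line(xs, ys, w)
        r = [abs(y - (a + b * x)) for x, y in zip(xs, ys)]
        w = [wi * (ri + 1e-12) for wi, ri in zip(w, r)]
        total = sum(w)
        w = [wi / total for wi in w]
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 2.0, 4.0]   # last point pulls the fit upward
a, b = lawson_minimax(xs, ys)
```

For this toy data the minimax line is y = -1/3 + (4/3)x, with an equal maximum error of 1/3 at three points, whereas the plain least-squares fit has a worst-case error of 0.4.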
Procedia PDF Downloads 225
1671 Research on Level Adjusting Mechanism System of Large Space Environment Simulator
Authors: Han Xiao, Zhang Lei, Huang Hai, Lv Shizeng
Abstract:
Space environment simulators are devices for spacecraft testing. The KM8 large space environment simulator built in Tianjin Space City is the largest and most advanced space environment simulator in China. A large deviation in spacecraft level leads to abnormal operation of the spacecraft's thermal control devices during thermal vacuum tests. To avoid thermal vacuum test failures, a level adjusting mechanism system was developed for the KM8 large space environment simulator as one of its most important subsystems. According to the level adjusting requirements of spacecraft thermal vacuum tests, a four-fulcrum adjusting model is established. Using data collected from level instruments and displacement sensors, PLC-controlled stepping motors drive the four supporting legs simultaneously. In addition, a PID algorithm controls the temperature of the supporting legs and level instruments, which operate for long periods in the cold, dark vacuum environment of the simulator during thermal vacuum tests. Based on these methods, data acquisition and processing, analysis and calculation, real-time adjustment, and fault alarming of the level adjusting mechanism system are implemented. The level adjusting accuracy reaches 1 mm/m, and the carrying capacity is 20 tons. Debugging showed that the level adjusting mechanism system of the KM8 simulator meets the thermal vacuum test requirements of the new generation of spacecraft. The performance and technical indicators of the system, which provides important support for spacecraft development in China, are ahead of similar equipment worldwide.
Keywords: space environment simulator, thermal vacuum test, level adjusting, spacecraft, parallel mechanism
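The temperature-control loop described above can be sketched as a textbook discrete PID controller driving a crude first-order thermal model of a heated supporting leg facing a cold shroud. All gains, the setpoint, and the plant constants below are illustrative assumptions, not values from the paper.

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def simulate(steps=5000, dt=0.1):
    # crude first-order plant: a supporting leg cooling toward the shroud,
    # with a heater driven by the PID output (clipped at zero: no cooling drive)
    temp, shroud = 20.0, -180.0
    pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=20.0)
    for _ in range(steps):
        heater = max(0.0, pid.update(temp, dt))
        temp += dt * (0.01 * (shroud - temp) + 0.05 * heater)
    return temp
```

The integral term is what removes the steady-state error here: a proportional-only controller would settle well below the setpoint because a constant heater output is needed just to balance the radiative loss to the shroud.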
Procedia PDF Downloads 247
1670 One Step Further: Pull-Process-Push Data Processing
Authors: Romeo Botes, Imelda Smit
Abstract:
In today’s modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices, which use different protocols, including TCP, UDP, and HTTP/S, to communicate with web servers and eventually with users. The data obtained from these devices may provide valuable information to users, but it is mostly in an unreadable format that needs to be processed to yield information and business intelligence. This data is not always current; it is mostly historical, and it is not subject to the consistency and redundancy measures that most other data usually is. Most important to users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since processing the data usually takes some time, this keeps the database busy and locked for the duration of the processing, which decreases the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps.
The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU performance, storage, and processing time.
Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list
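The three-step technique can be sketched as follows: pull the raw records into an in-memory list, decode them entirely outside the database, then push the results back in one batched write, so the database is locked only during the short pull and push phases. This is a minimal illustration using SQLite; the table, column names, and hex-payload decoding are invented for the example, not taken from the paper.

```python
import sqlite3

def pull(conn):
    # step 1: pull raw records into an in-memory list, releasing the cursor
    return list(conn.execute("SELECT id, raw FROM readings"))

def process(rows):
    # step 2: decode outside the database (here: hex payload -> ASCII text),
    # ordered as (decoded, id) pairs ready for the UPDATE below
    return [(bytes.fromhex(raw).decode("ascii"), rid) for rid, raw in rows]

def push(conn, decoded):
    # step 3: push all results back in one batched write
    conn.executemany("UPDATE readings SET decoded = ? WHERE id = ?", decoded)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, raw TEXT, decoded TEXT)")
conn.execute("INSERT INTO readings (raw) VALUES (?)", ("48656c6c6f",))
conn.commit()

push(conn, process(pull(conn)))
result = conn.execute("SELECT decoded FROM readings").fetchone()[0]
```

In the single-step variant criticized by the paper, the decoding in `process` would happen while a cursor and its locks were still held, tying up the server for the full duration.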
Procedia PDF Downloads 244
1669 Numerical Studies on 2D and 3D Boundary Layer Blockage and External Flow Choking at Wing in Ground Effect
Authors: K. Dhanalakshmi, N. Deepak, E. Manikandan, S. Kanagaraj, M. Sulthan Ariff Rahman, P. Chilambarasan, C. Abhimanyu, C. A. Akaash Emmanuel Raj, V. R. Sanal Kumar
Abstract:
In this paper, using a validated double-precision, density-based implicit standard k-ε model, detailed 2D and 3D numerical studies have been carried out to examine external flow choking at wing-in-ground (WIG) effect craft. The CFD code is calibrated using the exact solution based on the Sanal flow choking condition for adiabatic flows. We observed that under identical WIG effect conditions, the numerically predicted 2D boundary layer blockage is significantly higher than in the 3D case; as a result, the airfoil exhibited external flow choking earlier than the corresponding wing, which is corroborated by the exact solution. We concluded that, in lieu of the conventional 2D numerical simulation, it is invariably beneficial to carry out a realistic 3D simulation of the wing in ground effect, which captures the features of the real parametric flow. We inferred that under identical flying conditions, the chances of external flow choking at WIG effect are higher for a conventional aircraft than for an aircraft with a divergent channel effect at the bottom surface of the fuselage, as proposed herein. We concluded that integrated geometry optimization of the fuselage and wings can improve the overall aerodynamic performance of WIG craft. This study is a pointer for designers and/or pilots to perceive the zone of danger a priori, due to the anticipated external flow choking at WIG effect craft, for safe flying in close proximity to terrain and the dynamic marine surface.
Keywords: boundary layer blockage, chord dominated ground effect, external flow choking, WIG effect
Procedia PDF Downloads 271
1668 Dual Set Point Governor Control Structure with Common Optimum Temporary Droop Settings for both Islanded and Grid Connected Modes
Authors: Deepen Sharma, Eugene F. Hill
Abstract:
For nearly 100 years, hydro-turbine governors have operated with only a frequency set point. This natural governor action means that the governor responds to disturbances in system frequency with changing megawatt output. More and more, power system managers are demanding that governors operate with constant megawatt output. One way of doing this is to introduce a second set point into the control structure, called a power set point. The control structure investigated and analyzed in this paper is unique in that it utilizes a power reference set point in addition to the conventional frequency reference set point. An optimum set of temporary droop parameters, derived from the turbine-generator inertia constant and the penstock water start time for stable islanded operation, is shown to be equally applicable for a satisfactory rate of generator loading in grid connected mode. A theoretical development shows why this is the case. The performance of the control structure has been established through a simulation study in MATLAB/Simulink as well as by testing the real-time controller performance on a 15 MW Kaplan turbine and generator. Recordings were made using the LabVIEW data acquisition platform. The hydro-turbine governor control structure investigated in this paper thus eliminates the need for two separate sets of temporary droop parameters, one for islanded mode and the other for interconnected operation.
Keywords: frequency set point, hydro governor, interconnected operation, isolated operation, power set point
Procedia PDF Downloads 367
1667 The Use of Industrial Ecology Principles in the Production of Solar Cells and Solar Modules
Authors: Julius Denafas, Irina Kliopova, Gintaras Denafas
Abstract:
Three opportunities for implementing industrial ecology principles in the industrial production of c-Si solar cells and modules are presented in this study: material flow dematerialisation, product modification, and industrial symbiosis. Firstly, it is shown how collaboration between R&D institutes and industry helps achieve a significant reduction in material consumption by a) omitting the phosphosilicate glass cleaning process and b) shortening the SiNx coating production step. This work was performed in the frame of the Eco-Solar project, in which Soli Tek R&D collaborates with partners from the ISC Konstanz institute. Secondly, it is shown how modifying the solar module design can reduce the product's CO2 footprint and enhance waste prevention. This was achieved by implementing a frameless glass/glass solar module design instead of glass/backsheet with an aluminium frame. Such a design change is possible without purchasing new equipment and without loss of the main product properties, such as efficiency, rigidity, and longevity. Thirdly, industrial symbiosis in solar cell production is possible when manufacturing waste (silicon wafer and solar cell breakage) is collected, sorted, and supplied as raw material to other companies in the c-Si solar cell production chain. The results showed that solar cells produced from recycled silicon can have electrical parameters comparable to those of cells produced from standard commercial silicon wafers. The above-mentioned work was performed at the solar cell producer Soli Tek R&D in the frame of the H2020 projects CABRISS and Eco-Solar.
Keywords: solar cells and solar modules, manufacturing, waste prevention, recycling
Procedia PDF Downloads 213
1666 A Network Optimization Study of Logistics for Enhancing Emergency Preparedness in Asia-Pacific
Authors: Giuseppe Timperio, Robert De Souza
Abstract:
The combination of factors such as climate change, rampant urbanization of risk-exposed areas, and political and social instabilities is creating an alarming basis for further growth in the number and magnitude of humanitarian crises worldwide. Given the unique features of the humanitarian supply chain, such as unpredictability of demand in space, time, and geography, a spike in the number of requests for relief items in the first days after a calamity, the uncertain state of logistics infrastructures, and large volumes of unsolicited low-priority items, a proactive approach to the design of disaster response operations is needed to achieve high agility in mobilizing emergency supplies in the immediate aftermath of an event. This paper is an attempt in that direction, and it provides decision makers with crucial strategic insights for a more effective network design for disaster response. Decision sciences and ICT are integrated to analyse the robustness and resilience of a prepositioned network of emergency strategic stockpiles for a real-life case about Indonesia, one of the most vulnerable countries in Asia-Pacific, with the model built upon a rich set of quantitative data. To this end, a network optimization approach was implemented, with several what-if scenarios accurately developed and tested. The findings of this study support decision makers facing challenges related to disaster relief chain resilience, particularly the optimal configuration of supply chain facilities and the optimal flows across the nodes, considering the network structure from an end-to-end in-country distribution perspective.
Keywords: disaster preparedness, humanitarian logistics, network optimization, resilience
Procedia PDF Downloads 176
1665 A Pilot Study on Integration of Simulation in the Nursing Educational Program: Hybrid Simulation
Authors: Vesile Unver, Tulay Basak, Hatice Ayhan, Ilknur Cinar, Emine Iyigun, Nuran Tosun
Abstract:
The aim of this study is to analyze the effects of hybrid simulation, a simulation type in which standardized patients and task trainers are employed simultaneously; for instance, to teach IV interventions, standardized patients and IV arm models are used together. The study was designed as quasi-experimental research. Before the implementation, ethical approval was obtained from the local ethics committee and administrative permission was granted by the nursing school. The study population comprised second-grade nursing students (n=77). The participants were selected through simple random sampling, and a total of 39 nursing students were included. The views of the participants were collected through a 12-item feedback form developed by the authors and the Patient Intervention Self-Confidence/Competence Scale. Participants reported advantages of the hybrid simulation practice, including developing connections between the simulated scenario and real-life situations in clinical conditions, and recognition of the need to learn more about clinical practice. They all stated that the implementation was very useful for them. They also reported three major gains: improvement of critical thinking skills (94.7%), decision-making skills (97.3%), and feeling like a nurse (92.1%). The total mean score of the participants on the Patient Intervention Self-Confidence/Competence Scale was 75.23±7.76. The findings suggest that hybrid simulation has positive effects on the integration of theoretical and practical activities before clinical activities for nursing students.
Keywords: hybrid simulation, clinical practice, nursing education, nursing students
Procedia PDF Downloads 293
1664 Software-Defined Architecture and Front-End Optimization for DO-178B Compliant Distance Measuring Equipment
Authors: Farzan Farhangian, Behnam Shakibafar, Bobda Cedric, Rene Jr. Landry
Abstract:
Many air navigation technologies can increase aviation sustainability as well as accuracy in Alternative Positioning, Navigation, and Timing (APNT), especially avionics such as Distance Measuring Equipment (DME) and Very high-frequency Omnidirectional Range (VOR). The integration of these air navigation solutions could provide robust and efficient accuracy in air mobility, air traffic management, and autonomous operations. Designing a proper RF front-end, power amplifier, and software-defined transponder could pave the way to an optimized avionics navigation solution. In this article, the possibility of reaching an optimum front-end for use with a single low-cost Software-Defined Radio (SDR) has been investigated in order to reach a software-defined DME architecture. Our software-defined approach uses firmware capabilities to design a real-time software architecture compatible with a Multiple Input Multiple Output (MIMO) BladeRF, estimating an accurate time delay between the transmission (Tx) and reception (Rx) channels using synchronous scheduled communication. We designed a novel power amplifier for the transmission channel of the DME to satisfy the minimum transmission power requirement. This article also investigates the design of proper pulse pairs based on the DO-178B avionics standard. Various guidelines have been tested, and the possibility of passing the certification process for each standard term has been analyzed. Finally, the performance of the DME was tested in the laboratory environment using an IFR6000, which showed that the proposed architecture reached an accuracy of less than 0.23 nautical mile (Nmi) with 98% probability.
Keywords: avionics, DME, software defined radio, navigation
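As a sanity check on the Tx/Rx delay measurement, DME slant range follows from the measured round-trip time minus the ground transponder's fixed reply delay. A minimal worked sketch: the 50 microsecond figure is the standard X-channel reply delay, while the measured delay below is an invented example value, not a result from the paper.

```python
C = 299_792_458.0     # speed of light, m/s
REPLY_DELAY = 50e-6   # fixed X-channel ground-transponder reply delay, s
NMI = 1852.0          # metres per nautical mile

def slant_range_nmi(round_trip_s):
    # subtract the transponder's fixed delay, halve the remaining
    # propagation time, convert metres to nautical miles
    return (round_trip_s - REPLY_DELAY) * C / 2.0 / NMI

rng_nmi = slant_range_nmi(173.6e-6)  # invented example: 173.6 us total delay
```

This reproduces the familiar rule of thumb of roughly 12.36 microseconds of round-trip propagation per nautical mile of slant range.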
Procedia PDF Downloads 79
1663 Strategic Management Education: A Driver of Architectural Career Development in a Changing Environment
Authors: Rigved Chandrashekhar Nimkhedkar, Rajat Agrawal, Vinay Sharma
Abstract:
Architects face demand for an expanded skill set to effectively navigate a landscape of evolving opportunities and challenges in the dynamic realm of the architectural profession. This literature- and survey-based study investigates the reasons behind architects' career choices, as well as the effects of the evolving architectural scenario. The traditional role of architects in construction projects is evolving as they explore diverse career motivations, face financial constraints due to an oversupply of professionals, and experience specialisation and upskilling trends. Architects inherently participate in numerous value chains as more and more disciplines are introduced into the design-construction-operation supply chain. This insight emphasizes the importance of integrating management and entrepreneurial education into architectural education rather than keeping them separate. The study reveals the entrepreneurially challenging nature of the architectural profession, including cash flow management, market competition, environmental sustainability, and innovation opportunities. Loyal to their professional identity, architects express dissatisfaction while envisioning a future in which they play a more significant role in shaping reputable brands and contributing to education. In preparing graduates for the industry's changing nature, the study emphasises the need for real-world skills. This research contributes insights into the architectural profession's transformative trajectory, emphasising adaptability, upskilling, and educational enhancements as critical success factors.
Keywords: architects, career path, education, management, specialisation
Procedia PDF Downloads 66
1662 Buffer Allocation and Traffic Shaping Policies Implemented in Routers Based on a New Adaptive Intelligent Multi Agent Approach
Authors: M. Taheri Tehrani, H. Ajorloo
Abstract:
In this paper, an intelligent multi-agent framework is developed for each router, in which agents have two vital functionalities, traffic shaping and buffer allocation, and are positioned at the ports of the routers. With the traffic-shaping functionality, agents shape forwarded traffic by dynamic, real-time allocation of the token generation rate in a token bucket algorithm; with the buffer-allocation functionality, agents share their buffer capacity with each other based on their needs and the conditions of the network. This dynamic and intelligent framework allows some ports to work better under bursty and busier conditions. The agents act based on a Reinforcement Learning (RL) algorithm and consider the effective parameters in their decision process. As RL is limited in how many parameters it can consider due to the volume of calculations, we utilize our novel method, which applies Principal Component Analysis (PCA) to the RL algorithm and gives it the ability to consider as many parameters as needed in its decision process. When this implementation is compared to our previous work, where traffic shaping was done without any sharing or dynamic allocation of buffer size for each port, lower packet drop can be seen across the whole network, specifically in the source routers. These methods are implemented in our previously proposed intelligent simulation environment to allow a better comparison of the performance metrics. The results obtained from this simulation environment show efficient and dynamic utilization of resources in terms of bandwidth and the buffer capacities pre-allocated to each port.
Keywords: principal component analysis, reinforcement learning, buffer allocation, multi-agent systems
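The token bucket shaping that the agents tune can be sketched as follows: a minimal shaper whose refill rate is exposed through a `set_rate` hook, standing in for the rate an RL agent would assign at run time. All rates and sizes are illustrative assumptions, not values from the paper.

```python
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size, in tokens
        self.tokens = capacity
        self.last = 0.0

    def set_rate(self, rate):
        # hook through which a shaping agent could retune the port at run time
        self.rate = rate

    def allow(self, now, packet_size):
        # refill proportionally to elapsed time, capped at the bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True   # packet conforms: forward it
        return False      # packet exceeds the shaped rate: drop or queue it

tb = TokenBucket(rate=100.0, capacity=200.0)
# a burst of ten 50-token packets arriving at the same instant: only the
# first four fit in the bucket, the rest are rejected until tokens refill
sent = sum(tb.allow(now=0.0, packet_size=50) for _ in range(10))
```

Raising `rate` via `set_rate` lets a busier port drain its backlog faster, which is exactly the degree of freedom the paper's agents learn to exploit.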
Procedia PDF Downloads 518
1661 Revisiting Ecotourism Development Strategy of Cuc Phuong National Park in Vietnam: Considering Residents’ Perception and Attitudes
Authors: Bui Duc Sinh
Abstract:
Ecotourism in national parks is seen as one option for conserving natural resources and improving the living conditions of local communities. However, ecotourism development will be fruitless if it lacks the support and understanding of local communities and appropriate ecotourism strategies. The aims of this study were to measure residents' perception of and satisfaction with ecotourism impacts and their attitudes toward ecotourism development in Cuc Phuong National Park; to assess the current ecotourism strategies against ecotourism criteria; and then to provide recommendations on ecotourism development strategies. The primary data were collected through personal observations, in-depth interviews with residents and national park staff, and surveys of households in all five communes in Cuc Phuong National Park. The results showed that local communities were aware of ecotourism impacts, had positive attitudes toward ecotourism development, and were satisfied with it. However, higher perception rates were found among specific groups, such as the young, those with high income and education, and those with jobs related to ecotourism. The study revealed issues of concern about the current ecotourism development strategies in Cuc Phuong National Park. The major hindrances to ecotourism development were lack of local participation and unattractive ecotourism services. It was also suggested that Cuc Phuong National Park should use ecotourism criteria to implement ecotourism activities sustainably and to harmonize the sharing of benefits amongst the stakeholders. The approaches proposed were to create local employment through reengineering, improve ecotourism quality, apportion tourism benefits among the stakeholders, and carry out education and training programs.
Furthermore, the results of the study made tour operators and tourism promoters aware of the real concerns and issues regarding current ecotourism activities in Cuc Phuong National Park.
Keywords: ecotourism, ecotourism impact, local community, national park
Procedia PDF Downloads 339
1660 Learners’ Violent Behaviour and Drug Abuse as Major Causes of Tobephobia in Schools
Authors: Prakash Singh
Abstract:
Many schools throughout the world face constant pressure to cope with the violence and drug abuse of learners who show little or no respect for acceptable and desirable social norms. These delinquent learners tend to harbour feelings of being beyond reproach because they strongly believe that it is well within their rights to engage in violent and destructive behaviour. Knives, guns, and other weapons appear to be used on school premises more readily than before. It is known that learners smoke, drink alcohol, and use drugs during school hours; hence, their ability to concentrate, work, and learn is affected. They become violent and display disruptive behaviour in their classrooms as well as on the school premises, and this atrocious behaviour makes it possible for drug dealers and gangsters to gain access to the school premises. The primary purpose of this exploratory quantitative study was therefore to establish how tobephobia (TBP), caused by school violence and drug abuse, affects teaching and learning in schools. The findings of this study affirmed that poor discipline results in poor-quality education. Most of the teachers in this study agreed that educating learners who consume alcohol and other drugs on the school premises caused them to suffer from TBP. These learners are frequently abusive and disrespectful, and resort to violence to seek attention. As a result, teachers feel extremely demotivated and suffer from high levels of anxiety and stress. The word TBP will surely be regarded as a blessing by many teachers throughout the world because, finally, there is a word that will make people sit up and listen to the problems that cause real fear and anxiety in schools.
Keywords: aims and objectives of quality education, debilitating effects of tobephobia, fear of failure associated with education, learners' violent behaviour and drug abuse
Procedia PDF Downloads 278
1659 [Keynote Talk]: Uptake of Co(II) Ions from Aqueous Solutions by Low-Cost Biopolymers and Their Hybrid
Authors: Kateryna Zhdanova, Evelyn Szeinbaum, Michelle Lo, Yeonjae Jo, Abel E. Navarro
Abstract:
Alginate hydrogel beads (AB), spent peppermint leaf (PM), and a hybrid adsorbent of these two materials (ABPM) were studied as potential biosorbents of cobalt(II) ions from aqueous solutions. The cobalt ion is a commonly underestimated pollutant that is responsible for several health problems. Discontinuous batch experiments were conducted at room temperature to evaluate the effects of solution acidity and adsorbent mass on the adsorption of Co(II) ions. The interfering effects of salinity and of the presence of surfactants, an organic dye, and Pb(II) ions were also studied to resemble the application of these adsorbents to real wastewater. Equilibrium results indicate that Co(II) uptake is maximized at pH values higher than 5, with adsorbent doses of 200 mg, 200 mg, and 120 mg for AB, PM, and ABPM, respectively. Co(II) adsorption followed the trend AB > ABPM > PM, with adsorption percentages of 77%, 71%, and 64%, respectively. Salts had a strong negative effect on adsorption due to the increase in ionic strength and the competition for adsorption sites. The presence of Pb(II) ions, surfactant, and the dye BY57 had a slightly negative effect on adsorption, apparently due to their interaction with different adsorption sites that do not interfere with the removal of Co(II). A polar-electrostatic adsorption mechanism is proposed based on the experimental results. Scanning electron microscopy indicates that the adsorbents have appropriate morphological and textural properties, and also that ABPM encapsulated most of the PM inside the hydrogel beads. These experimental results reveal that AB, PM, and ABPM are promising adsorbents for the elimination of Co(II) ions from aqueous solutions under different experimental conditions. These biopolymers are proposed as eco-friendly alternatives for the removal of heavy metal ions at lower cost than conventional techniques.
Keywords: adsorption, Co(II) ions, alginate hydrogel beads, spent peppermint leaf, pH
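The adsorption percentages reported above follow from the standard batch mass-balance relations. A small worked sketch: the concentrations, solution volume, and adsorbent mass below are invented example values (chosen so the percentage matches AB's reported 77%), not measurements from the paper.

```python
def adsorption_percent(c0, ce):
    # removal efficiency from initial (C0) and equilibrium (Ce) concentrations
    return 100.0 * (c0 - ce) / c0

def capacity_mg_per_g(c0, ce, volume_l, mass_g):
    # adsorption capacity q = (C0 - Ce) * V / m, in mg of metal per g adsorbent
    return (c0 - ce) * volume_l / mass_g

pct = adsorption_percent(c0=10.0, ce=2.3)               # concentrations in mg/L
q = capacity_mg_per_g(10.0, 2.3, volume_l=0.05, mass_g=0.2)
```

The percentage measures how much of the metal left the solution, while q normalizes the uptake by adsorbent mass, which is the quantity usually compared across materials.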
Procedia PDF Downloads 128
1658 Entrepreneurship in Pakistan: Opportunities and Challenges
Authors: Bushra Jamil, Nudrat Baqri, Muhammad Hassan Saeed
Abstract:
Entrepreneurship is creating or setting up a business, not only to generate profit but also to provide job opportunities. Entrepreneurs are problem solvers and product developers. They use their financial assets to hire a professional team, and the combination of innovation, knowledge, and leadership leads to a successful startup or business. To be a successful entrepreneur, one should be people-oriented, persevere, believe in one's potential, and have the courage to move forward in all circumstances. Most importantly, one must be able to take risks and to assess them. For STEM students, entrepreneurship is of specific importance and relevance, as it helps them not just to solve existing real-life problems but to recognize and identify emerging needs and gaps. It is becoming increasingly apparent that in today's world, there is a need as well as a desire for STEM and entrepreneurship to work together. In Pakistan, entrepreneurship is slowly emerging, yet we are far behind. It is high time we introduced modern teaching methods and inculcated entrepreneurial initiative in students. A course on entrepreneurship can be included in the syllabus, and we must invite businessmen and policy makers to motivate young minds toward entrepreneurship. There must be pitching competitions, opportunities to win seed funding, and incubation facilities. In Pakistan, there are many good public sector research institutes, yet there is a wide gap in the private sector, where only a few institutes are devoted to research and development. BJ Micro Lab is one of them: an SECP-registered company working with academia to promote and facilitate research in STEM. BJ Micro Lab is a women-led initiative, and we are trying to promote research as a passion, not as an arduous burden. To this end, we continuously arrange training workshops and sessions; more than 100 students have been trained in ten different workshops arranged at BJ Micro Lab.
Keywords: entrepreneurship, STEM, challenges, opportunities
Procedia PDF Downloads 129
1657 An Agent-Based Model of Innovation Diffusion Using Heterogeneous Social Interaction and Preference
Authors: Jang kyun Cho, Jeong-dong Lee
Abstract:
The advent of the Internet, mobile communications, and social network services has stimulated social interactions among consumers, allowing people to affect one another's innovation adoptions by exchanging information more frequently and more quickly. Previous diffusion models, such as the Bass model, however, face limitations in reflecting such recent phenomena in society. These models are weak in their ability to model interactions between agents; they model aggregate-level behaviors only. The agent-based model, an alternative to the aggregate model, is good for individual modeling, but it is still not grounded in an economic perspective on social interactions. This study assumes the presence of social utility from other consumers in the adoption of innovation and investigates the effect of individual interactions on innovation diffusion by developing a new model called the interaction-based diffusion model. By comparing this model with previous diffusion models, the study also examines how the proposed model explains innovation diffusion from the perspective of economics. In addition, the study recommends the use of a small-world network topology instead of cellular automata to describe innovation diffusion. The model is based on individual preference and heterogeneous social interactions using a utility specification, which is expandable and thus able to encompass various issues in diffusion research, such as reservation price. Furthermore, the study proposes a new framework to forecast aggregate-level market demand from individual-level modeling. The model also exhibits a good fit to real market data. It is expected that the study will contribute to our understanding of the innovation diffusion process through its microeconomic theoretical approach.
Keywords: innovation diffusion, agent based model, small-world network, demand forecasting
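The small-world idea can be illustrated with a toy threshold-diffusion sketch on a Watts-Strogatz-style rewired ring, built here in pure Python. All parameters (network size, rewiring probability, adoption threshold, seed set) are illustrative assumptions, and the fractional-neighbour rule is only a crude stand-in for the paper's utility-based heterogeneous agents.

```python
import random

def small_world(n, k, p, rng):
    # ring lattice: each node linked to its k nearest neighbours on each side,
    # with each link's far endpoint rewired at random with probability p
    edges = {(i, (i + j) % n) for i in range(n) for j in range(1, k + 1)}
    nbrs = {i: set() for i in range(n)}
    for a, b in edges:
        if rng.random() < p:
            b = rng.randrange(n)
        if a != b:
            nbrs[a].add(b)
            nbrs[b].add(a)
    return nbrs

def diffuse(nbrs, seeds, threshold, steps):
    adopted = set(seeds)
    history = [len(adopted)]
    for _ in range(steps):
        for i in nbrs:
            if i not in adopted and nbrs[i]:
                # adopt when the fraction of adopting neighbours (a crude
                # stand-in for social utility) reaches the threshold
                frac = len(nbrs[i] & adopted) / len(nbrs[i])
                if frac >= threshold:
                    adopted.add(i)
        history.append(len(adopted))
    return history

rng = random.Random(7)
net = small_world(n=100, k=3, p=0.1, rng=rng)
curve = diffuse(net, seeds=range(10), threshold=0.3, steps=30)
```

The rewired shortcuts are what distinguish this from a cellular automaton: adoption can jump across the ring instead of only creeping along contiguous neighbourhoods, which is the structural argument for preferring small-world topologies.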
Procedia PDF Downloads 341
1656 Plasmonic Nanoshells Based Metabolite Detection for in-vitro Metabolic Diagnostics and Therapeutic Evaluation
Authors: Deepanjali Gurav, Kun Qian
Abstract:
In-vitro metabolic diagnosis relies on designed materials-based analytical platforms for the detection of selected metabolites in biological samples, which plays a key role in disease detection and therapeutic evaluation in clinics. However, the basic challenge lies in developing a simple approach for metabolic analysis in bio-samples with high sample complexity and low molecular abundance. In this work, we report a designer plasmonic-nanoshell-based platform for direct detection of small metabolites in clinical samples for in-vitro metabolic diagnostics. We first synthesized a series of plasmonic core-shell particles with tunable nanoshell structures. The optimized plasmonic nanoshells, used as new matrices, allowed fast, multiplex, sensitive, and selective LDI-MS (laser desorption/ionization mass spectrometry) detection of small metabolites in 0.5 μL of bio-fluids without enrichment or purification. Furthermore, coupled with isotopic quantification of selected metabolites, we demonstrated the use of these plasmonic nanoshells for disease detection and therapeutic evaluation in clinics. For disease detection, we identified patients with postoperative brain infection through glucose quantitation and daily monitoring by cerebrospinal fluid (CSF) analysis. For therapeutic evaluation, we investigated drug distribution in the blood and CSF systems and validated the function and permeability of the blood-brain/CSF barriers during therapeutic treatment of patients with cerebral edema for a pharmacokinetic study. Our work sheds light on the design of materials for high-performance metabolic analysis and precision diagnostics in real cases. Keywords: plasmonic nanoparticles, metabolites, fingerprinting, mass spectrometry, in-vitro diagnostics
Procedia PDF Downloads 138
1655 Evaluation of Gene Expression after in Vitro Differentiation of Human Bone Marrow-Derived Stem Cells to Insulin-Producing Cells
Authors: Mahmoud M. Zakaria, Omnia F. Elmoursi, Mahmoud M. Gabr, Camelia A. AbdelMalak, Mohamed A. Ghoneim
Abstract:
Many protocols have been published for differentiating human mesenchymal stem cells (MSCs) into insulin-producing cells (IPCs) that secrete insulin hormone, with the aim of treating diabetes. Our aim is to evaluate relative gene expression for each independent protocol. Human bone marrow cells were derived from three volunteers who suffer from diabetes. After expansion of the mesenchymal stem cells, the cells were differentiated by three different protocols (the one-step protocol used conophylline protein, the two-step protocol relied on trichostatin-A, and the three-step protocol started with beta-mercaptoethanol). Gene expression was evaluated by real-time PCR for pancreatic endocrine genes, transcription factors, glucose transporter, precursor markers, pancreatic enzymes, proteolytic cleavage, extracellular matrix, and cell surface proteins. Insulin secretion was quantified by an immunofluorescence technique in a 24-well plate. Most of the genes studied were up-regulated in the in vitro differentiated cells, and insulin production was observed in all three independent protocols. Slight increases in endocrine mRNA expression and insulin production were seen with the two-step protocol. Thus, the two-step protocol proved more efficient at expressing pancreatic endocrine genes and producing insulin than the other two protocols. Keywords: mesenchymal stem cells, insulin-producing cells, conophylline protein, trichostatin-A, beta-mercaptoethanol, gene expression, immunofluorescence technique
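Relative expression from real-time PCR data of the kind reported here is commonly computed with the 2^(-ΔΔCt) method: the target gene's Ct is normalised to the housekeeping gene (GAPDH in comparable studies) and then to the control condition. A minimal sketch, assuming roughly 100% amplification efficiency; the Ct values below are illustrative, not from the study.

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    # 2^(-ΔΔCt): normalise to the housekeeping gene, then to the control sample
    d_sample = ct_target_sample - ct_ref_sample
    d_control = ct_target_control - ct_ref_control
    return 2 ** -(d_sample - d_control)

# illustrative values: target gene crosses threshold 2 cycles earlier
# in differentiated cells than in control, at equal reference Ct
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0, i.e. four-fold up-regulation
```

A fold change above 1 corresponds to up-regulation, as observed for most genes in the differentiated cells.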
Procedia PDF Downloads 215
1654 Experimental and Numerical Performance Analysis for Steam Jet Ejectors
Authors: Abdellah Hanafi, G. M. Mostafa, Mohamed Mortada, Ahmed Hamed
Abstract:
Steam ejectors are the heart of most desalination systems that employ vacuum. Systems driven by low-grade thermal energy sources, such as solar and geothermal energy, use the ejector to drive the system instead of high-grade electric energy. The jet ejector creates vacuum by employing the flow of steam or air and exploiting the severe pressure drop at the outlet of the main nozzle. The present work involves developing a one-dimensional mathematical model for designing jet ejectors and transforming it into computer code using Engineering Equation Solver (EES) software. The model receives the required operating conditions at the inlets and outlet of the ejector as inputs and produces the corresponding dimensions required to reach these conditions. The one-dimensional model has been validated against an existing ejector operating at the Abu-Qir power station. A prototype was designed according to the one-dimensional model and attached to a special test bench to be tested before use in the solar desalination pilot plant. The tested ejector will be responsible for the start-up evacuation of the system and for adjusting the vacuum of the evaporating effects. The tested prototype has shown good agreement with the results of the code. In addition, a numerical analysis has been applied to one of the designed geometries to illustrate the pressure and velocity distribution inside the ejector, and to show the difference in results between the two-dimensional ideal-gas model and the real prototype. The commercial edition of ANSYS Fluent v.14 software is used to solve the two-dimensional axisymmetric case. Keywords: solar energy, jet ejector, vacuum, evaporating effects
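A first step in a one-dimensional ejector design of this kind is typically sizing the primary nozzle throat from the choked-flow relation. The sketch below is a generic ideal-gas version, not the paper's EES model: treating steam as an ideal gas with an assumed specific-heat ratio and gas constant is a simplification, and all numerical inputs are illustrative.

```python
import math

def throat_area(m_dot, p0, t0, gamma=1.33, r_gas=461.5):
    """Throat area (m^2) for choked flow of an ideal gas.

    m_dot: mass flow rate (kg/s)
    p0:    stagnation pressure (Pa)
    t0:    stagnation temperature (K)
    gamma, r_gas: assumed ideal-gas properties for steam
    """
    # choked mass flux per unit area at the throat (Mach 1 condition)
    flux = p0 * math.sqrt(gamma / (r_gas * t0)) * \
        (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return m_dot / flux
```

As expected physically, the required area scales linearly with mass flow and shrinks as the motive stagnation pressure rises.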
Procedia PDF Downloads 621
1653 A Preparatory Method for Building Construction Implemented in a Case Study in Brazil
Authors: Aline Valverde Arroteia, Tatiana Gondim do Amaral, Silvio Burrattino Melhado
Abstract:
During the last twenty years, the construction field in Brazil has evolved significantly in response to its market growth and competitiveness. However, this evolution has faced many obstacles, such as cultural barriers and the lack of efforts to achieve quality at the construction site. At the same time, much of the information generated in the design or construction phases is lost due to the lack of effective coordination of these activities. Facing this problem, the aim of this research was to implement a French method named PEO, which stands for preparation for building construction (in Portuguese), seeking to understand the design management process and its interface with the building construction phase. The research method applied was qualitative, and it was carried out through two case studies in the city of Goiania, in Goias, Brazil. The research was divided into two stages: a pilot study at Company A and the implementation of PEO at Company B. After the implementation, the results demonstrated the PEO method's effectiveness and feasibility as a booster of quality improvement in design management. The analysis showed that the method aims to improve the design and to reduce the failures, errors, and rework commonly found in the production of buildings. Therefore, it can be concluded that PEO is feasible to apply in real estate and building companies. However, companies need to believe in the contribution they can make to the discovery of design failures in conjunction with other stakeholders, forming a construction team. The results of PEO can be maximized by adopting the principles of concurrent engineering and by inserting new computer technologies that use a three-dimensional model of the building within a BIM process. Keywords: communication, design and construction interface management, preparation for building construction (PEO), proactive coordination (CPA)
Procedia PDF Downloads 162
1652 Altered Expression of Ubiquitin Editing Complex in Ulcerative Colitis
Authors: Ishani Majumdar, Jaishree Paul
Abstract:
Introduction: Ulcerative colitis (UC) is an inflammatory disease of the colon resulting from an autoimmune response towards an individual's own microbiota. Excessive inflammation is characterized by hyper-activation of NFkB, a transcription factor regulating the expression of various pro-inflammatory genes. The ubiquitin editing complex, consisting of TNFAIP3, ITCH, RNF11, and TAX1BP1, maintains homeostatic levels of active NFkB through feedback inhibition and assembles in response to various stimuli that activate NFkB. TNFAIP3 deubiquitinates key signaling molecules involved in the NFkB activation pathway. ITCH, RNF11, and TAX1BP1 provide substrate specificity, acting as adaptors for TNFAIP3 function. Aim: This study aimed to determine the expression of members of the ubiquitin editing complex at the transcript level in inflamed colon tissues of UC patients. Materials and Methods: Colonic biopsy samples were collected from 30 UC patients recruited at the Department of Gastroenterology, AIIMS (New Delhi). The control group (n = 10) consisted of individuals undergoing examination for functional disorders. Real-time PCR was used to determine relative expression with GAPDH as the housekeeping gene. Results: Expression of members of the ubiquitin editing complex was significantly altered during active disease. Expression of TNFAIP3 was upregulated, while a concomitant decrease in the expression of ITCH, RNF11, and TAX1BP1 was seen in UC patients. Discussion: This study reveals that the increase in TNFAIP3 expression was unable to control inflammation during active UC. Further, insufficient upregulation of ITCH, RNF11, and TAX1BP1 may limit the formation of the ubiquitin editing complex and contribute to the pathogenesis of UC. Keywords: altered expression, inflammation, ubiquitin editing complex, ulcerative colitis
Procedia PDF Downloads 262
1651 Impact of Unusual Dust Event on Regional Climate in India
Authors: Kanika Taneja, V. K. Soni, Kafeel Ahmad, Shamshad Ahmad
Abstract:
A severe dust storm generated by a western disturbance over north Pakistan and adjoining Afghanistan affected the north-west region of India between May 28 and 31, 2014, resulting in significant reductions in air quality and visibility. The air quality of the affected region degraded drastically: the PM10 concentration peaked at around 1018 μgm-3 during the dust storm hours of May 30, 2014 at New Delhi. The present study examines aerosol optical properties monitored during the dust days using a ground-based multi-wavelength sky radiometer over the National Capital Region of India. A high aerosol optical depth (AOD) at 500 nm of 1.356 ± 0.19 was observed at New Delhi, while the Angstrom exponent (alpha) dropped to 0.287 on May 30, 2014. The variations in the single scattering albedo (SSA) and the real n(λ) and imaginary k(λ) parts of the refractive index indicated that the dust event shifted the optical state towards stronger absorption. The single scattering albedo, refractive index, volume size distribution, and asymmetry parameter (ASY) values suggested that dust aerosols predominated over anthropogenic aerosols in the urban environment of New Delhi. The large reduction in radiative flux at the surface level caused significant surface cooling. Direct aerosol radiative forcing (DARF) was calculated using a radiative transfer model during the dust period. A consistent increase in surface cooling was evident, ranging from -31 Wm-2 to -82 Wm-2, together with atmospheric heating increasing from 15 Wm-2 to 92 Wm-2 and top-of-atmosphere forcing ranging from -2 Wm-2 to 10 Wm-2. Keywords: aerosol optical properties, dust storm, radiative transfer model, sky radiometer
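The Angstrom exponent quoted above (a drop to 0.287 signals coarse dust particles dominating the size distribution) is derived from AOD measured at two wavelengths via the Angstrom power law, AOD(λ) proportional to λ^(-α). A minimal sketch; the wavelength pair and AOD values below are illustrative, not the study's retrievals.

```python
import math

def angstrom_exponent(aod1, lam1_nm, aod2, lam2_nm):
    # α from the Ångström power law: AOD(λ) ∝ λ^(−α)
    return -math.log(aod1 / aod2) / math.log(lam1_nm / lam2_nm)

# fine-mode aerosol falls off steeply with wavelength (α near 1 or above);
# coarse dust gives a nearly flat spectrum (α near 0)
print(angstrom_exponent(1.0, 500, 0.5, 1000))  # 1.0 (fine-mode-like example)
```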
Procedia PDF Downloads 377
1650 Zeolite Supported Iron-Sensitized TIO₂ for Tetracycline Photocatalytic Degradation under Visible Light: A Comparison between Doping and Ion Exchange
Authors: Ghadeer Jalloul, Nour Hijazi, Cassia Boyadjian, Hussein Awala, Mohammad N. Ahmad, Ahmad Albadarin
Abstract:
In this study, we applied Fe-sensitized TiO₂ supported on embryonic Beta (BEA) zeolite for the photocatalytic degradation of the antibiotic tetracycline (TC) under visible light. Four samples with TiO₂/BEA ratios of 20, 40, 60, and 100% w/w were prepared. The immobilization of sol-gel TiO₂ (33 m²/g) on BEA (390 m²/g) increased its surface area to 227 m²/g and enhanced its adsorption capacity from 8% to 19%. To extend the activity of the TiO₂ photocatalyst into the visible-light region (λ > 380 nm), we explored two different metal sensitization techniques with iron ions (Fe³⁺). In the ion-exchange method, the substitutional cations of the zeolite in TiO₂/BEA were exchanged with Fe³⁺ in an aqueous solution of FeCl₃. In the doping technique, sol-gel TiO₂ was doped with Fe³⁺ from an FeCl₃ precursor during its synthesis, before its immobilization on BEA. The Fe-TiO₂/BEA catalysts were characterized using SEM, XRD, BET, UV-Vis DRS, and FTIR. After testing the performance of the various ion-exchanged catalysts under blue and white light, only Fe-TiO₂/BEA 60% showed better activity than pure TiO₂ under white light, with 100 ppm initial catalyst concentration and 20 ppm TC concentration. Compared to ion-exchanged Fe-TiO₂/BEA, doped Fe-TiO₂/BEA gave higher photocatalytic efficiencies under both blue and white light. The 3% Fe-doped TiO₂/BEA removed 92% of TC, compared to 54% by TiO₂, under white light. The catalysts were also tested under real solar irradiation. This improvement in the photocatalytic performance of TiO₂ is attributed to the higher adsorption capacity provided by the BEA support, combined with the presence of iron ions that enhance visible-light absorption and minimize recombination of the charge carriers. Keywords: tetracycline, photocatalytic degradation, immobilized TiO₂, zeolite, iron-doped TiO₂, ion exchange
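Removal percentages such as those above are often reduced to a pseudo-first-order rate constant via ln(C0/C) = k·t, which lets catalysts be compared on a single number. A small sketch fitting k by least squares through the origin; the concentration-time data below are synthetic, not the study's measurements.

```python
import math

def pfo_rate_constant(times, concs):
    # slope of ln(C0/C) versus t, forced through the origin
    # (pseudo-first-order kinetics: C(t) = C0 * exp(-k t))
    ys = [math.log(concs[0] / c) for c in concs]
    return sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)
```

For example, a synthetic decay C(t) = 20·exp(-0.05 t) ppm recovers k = 0.05 per minute exactly.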
Procedia PDF Downloads 106
1649 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review
Authors: Faisal Muhibuddin, Ani Dijah Rahajoe
Abstract:
This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research investigates various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, naive Bayes, and K-nearest neighbours. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research. A detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal dimension of crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification. Keywords: data mining, classification algorithm, naive Bayes, K-means clustering, K-nearest neighbour, crime, data analysis, systematic literature review
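Of the techniques surveyed, naive Bayes is the most compact to sketch for categorical crime records. Below is a toy implementation with Laplace smoothing; the feature values ("night"/"downtown") and crime labels are invented purely for illustration, and a real study would use one of the reviewed toolkits rather than hand-rolled code.

```python
import math
from collections import Counter

class CategoricalNB:
    """Naive Bayes over categorical features with Laplace (add-one) smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = {c: math.log(y.count(c) / len(y)) for c in self.classes}
        self.counts = {c: [Counter() for _ in X[0]] for c in self.classes}
        self.n = {c: 0 for c in self.classes}
        for row, label in zip(X, y):
            self.n[label] += 1
            for j, v in enumerate(row):
                self.counts[label][j][v] += 1
        # distinct values per feature, for the smoothing denominator
        self.vocab = [len({row[j] for row in X}) for j in range(len(X[0]))]
        return self

    def predict(self, row):
        def score(c):  # log prior + smoothed log likelihoods
            s = self.prior[c]
            for j, v in enumerate(row):
                s += math.log((self.counts[c][j][v] + 1) /
                              (self.n[c] + self.vocab[j]))
            return s
        return max(self.classes, key=score)

nb = CategoricalNB().fit(
    [["night", "downtown"], ["night", "downtown"],
     ["day", "suburb"], ["day", "suburb"]],
    ["theft", "theft", "fraud", "fraud"])
```

`nb.predict(["night", "downtown"])` then assigns the class whose smoothed likelihoods best match the record.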
Procedia PDF Downloads 65
1648 Comparison between Some of Robust Regression Methods with OLS Method with Application
Authors: Sizar Abed Mohammed, Zahraa Ghazi Sadeeq
Abstract:
The classic least-squares (OLS) method for estimating linear regression parameters has good properties, such as unbiasedness, minimum variance, and consistency, when its assumptions hold. Alternative statistical techniques have been developed to estimate the parameters when the data are contaminated with outliers; these are robust (or resistant) methods. In this paper, three robust methods are studied: the maximum-likelihood-type estimator (M-estimator), the modified maximum-likelihood-type estimator (MM-estimator), and the least trimmed squares estimator (LTS-estimator), and their results are compared with the OLS method. These methods are applied to real data taken from the Duhok company for manufacturing furniture, and the results are compared using the criteria mean squared error (MSE), mean absolute percentage error (MAPE), and mean sum of absolute error (MSAE). Important conclusions of this study are: in the furniture-line data, the atypical values detected by the four methods were very close across methods, indicating that the distribution of the standard errors is close to normal; in the doors-line data, however, OLS detected fewer atypical values than the robust methods, which means that the distribution of the standard errors departs far from normality. Another important conclusion is that, for the doors line, the parameter estimates obtained by OLS are very far from those obtained by the robust methods; the LTS estimator gave better results under the MSE criterion, the M-estimator gave better results under the MAPE criterion, and under the MSAE criterion the MM-estimator was better. The programs S-PLUS (version 8.0, Professional 2007), Minitab (version 13.2), and SPSS (version 17) were used to analyze the data. Keywords: robust regression, LTS, M-estimator, MSE
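The M-estimator compared above can be illustrated with a small iteratively reweighted least-squares (IRLS) straight-line fit using Huber weights. This is a generic sketch, not the paper's exact procedure: the tuning constant c = 1.345 and the MAD-based scale estimate are standard conventions, and the data in the test are synthetic.

```python
def huber_line(xs, ys, c=1.345, iters=50):
    # IRLS for y = a + b*x with Huber weights (an M-estimator sketch)
    a = b = 0.0
    for _ in range(iters):
        res = [y - (a + b * x) for x, y in zip(xs, ys)]
        mad = sorted(abs(e) for e in res)[len(res) // 2]
        s = mad / 0.6745 if mad > 0 else 1.0     # robust residual scale
        # full weight for small residuals, downweight beyond c*s
        w = [1.0 if abs(e) <= c * s else c * s / abs(e) for e in res]
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
        a = (sy - b * sx) / sw
    return a, b
```

On data following y = 1 + 2x with one gross outlier, OLS is pulled far off the true slope while the Huber fit stays near it, which is the behaviour the comparison above is measuring.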
Procedia PDF Downloads 232
1647 Technical Aspects of Closing the Loop in Depth-of-Anesthesia Control
Authors: Gorazd Karer
Abstract:
When performing a diagnostic procedure or surgery in general anesthesia (GA), a proper introduction and dosing of anesthetic agents are one of the main tasks of the anesthesiologist. However, depth of anesthesia (DoA) also seems to be a suitable process for closed-loop control implementation. To implement such a system, one must be able to acquire the relevant signals online and in real-time, as well as stream the calculated control signal to the infusion pump. However, during a procedure, patient monitors and infusion pumps are purposely unable to connect to an external (possibly medically unapproved) device for safety reasons, thus preventing closed-loop control. The paper proposes a conceptual solution to the aforementioned problem. First, it presents some important aspects of contemporary clinical practice. Next, it introduces the closed-loop-control-system structure and the relevant information flow. Focusing on transferring the data from the patient to the computer, it presents a non-invasive image-based system for signal acquisition from a patient monitor for online depth-of-anesthesia assessment. Furthermore, it introduces a UDP-based communication method that can be used for transmitting the calculated anesthetic inflow to the infusion pump. The proposed system is independent of a medical device manufacturer and is implemented in Matlab-Simulink, which can be conveniently used for DoA control implementation. The proposed scheme has been tested in a simulated GA setting and is ready to be evaluated in an operating theatre. However, the proposed system is only a step towards a proper closed-loop control system for DoA, which could routinely be used in clinical practice. Keywords: closed-loop control, depth of anesthesia (DoA), modeling, optical signal acquisition, patient state index (PSi), UDP communication protocol
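A UDP link of the kind described, carrying the calculated anesthetic inflow from the control computer to the pump side, can be sketched with standard sockets. The message format (JSON with a timestamp and a rate field) and the port number are invented for illustration; a real pump interface would define its own protocol, and the paper's implementation lives in Matlab-Simulink rather than Python.

```python
import json
import socket
import time

def send_infusion_rate(rate_ml_h, host="127.0.0.1", port=5005):
    # hypothetical datagram: timestamp plus commanded infusion rate (mL/h)
    msg = json.dumps({"t": time.time(), "rate_ml_h": rate_ml_h}).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(msg, (host, port))  # fire-and-forget, as UDP is connectionless
    finally:
        sock.close()
    return msg
```

UDP keeps latency low, which matters for a control loop, but it is unacknowledged; a production system would need to handle lost or duplicated datagrams explicitly.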
Procedia PDF Downloads 217
1646 Metaphors of Love and Passion in Lithuanian Comics
Authors: Saulutė Juzelėnienė, Skirmantė Šarkauskienė
Abstract:
This paper aims to analyse the multimodal representations of the concepts of LOVE and PASSION in the Lithuanian graphic novel "Gertrūda" by Gerda Jord. The research is based on earlier findings by Forceville (2005) and Eerden (2009), as well as insights by Shihara and Matsunaka (2009) and Kövecses (2000). The target and source domains of LOVE and PASSION metaphors in comics are expressed by verbal and non-verbal cues. The analysis of non-verbal cues adopts the concepts of runes and indexes. A pictorial rune is a graphic representation of an object that does not exist in reality in comics, such as lines, dashes, and text "balloons"; a pictorial index is a graphically represented object of reality, a real symptom expressing a certain emotion, such as a wide smile or furrowed eyebrows. Indexes are often hyperbolized in comics. The research revealed that the most frequent source domains are CLOSENESS/UNITY, NATURAL/PHYSICAL FORCE, VALUABLE OBJECT, and PRESSURE. The target is the emotion of LOVE/PASSION, which belongs to the more abstract domain of psychological experience. In this kind of metaphor, the picture can be interpreted as representing the emotion of happiness. Data are taken from Lithuanian comic books and Internet sites where comics have been presented. The data and the analysis provided in this article aim to show that there are pictorial metaphors manifesting conceptual metaphors that are also expressed verbally, and that the methodological framework constructed for the analysis in the papers by Forceville et al. is applicable to other emotions and culture-specific pictorial manifestations. Keywords: multimodal metaphor, conceptual metaphor, comics, graphic novel, concept of love/passion
Procedia PDF Downloads 67
1645 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and may sometimes be conflicting. In this study, we present a framework that takes advantage of the capabilities of evidence theory (a.k.a. the Dempster-Shafer and Dezert-Smarandache theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated and related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed for combination, so that computational efficiency could be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes. Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time-series data
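The two ingredients named above, a KL-divergence distance and a CUSUM test, can be sketched for the simplest single-sensor case of a Gaussian mean shift. This toy version omits the evidence-theory combination step entirely; the parameters and threshold are illustrative.

```python
import math

def kl_gauss(mu0, s0, mu1, s1):
    # KL divergence D(N(mu0, s0^2) || N(mu1, s1^2)) between univariate Gaussians
    return math.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def cusum(samples, mu0, mu1, sigma, h):
    # CUSUM on the log-likelihood ratio of post- vs pre-change Gaussians;
    # returns the index of the first alarm, or None if no change is declared
    g = 0.0
    for t, x in enumerate(samples):
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        g = max(0.0, g + llr)
        if g > h:
            return t
    return None
```

On a stream that sits at the pre-change mean and then jumps to the post-change mean, the statistic stays pinned at zero before the change and climbs linearly after it, so the detection delay is set by the threshold h.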
Procedia PDF Downloads 334
1644 Building Biodiversity Conservation Plans Robust to Human Land Use Uncertainty
Authors: Yingxiao Ye, Christopher Doehring, Angelos Georghiou, Hugh Robinson, Phebe Vayanos
Abstract:
Human development is a threat to biodiversity, and conservation organizations (COs) are purchasing land to protect areas for biodiversity preservation. However, COs have limited budgets and thus face hard prioritization decisions that are confounded by uncertainty in future human land use. This research proposes a data-driven sequential planning model to help COs choose land parcels that minimize the uncertain human impact on biodiversity. The proposed model is robust to uncertain development, and the sequential decision-making process is adaptive, allowing land purchase decisions to adapt to human land use as it unfolds. A cellular automata model is leveraged to simulate land use development based on climate data, land characteristics, and the development threat index from the NASA Socioeconomic Data and Applications Center. This simulation is used to model uncertainty in the problem. The research leverages state-of-the-art techniques from the robust optimization literature to propose a computationally tractable reformulation of the model, which can be solved routinely by off-the-shelf solvers like Gurobi or CPLEX. Numerical results based on real data on the jaguar in Central and South America show that the proposed method reduces conservation loss by 19.46% on average compared to standard approaches such as MARXAN, which is used in practice for biodiversity conservation. Our method may better guide the decision process in land acquisition and thereby allow conservation organizations to maximize the impact of limited resources. Keywords: data-driven robust optimization, biodiversity conservation, uncertainty simulation, adaptive sequential planning
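The min-max flavour of the planning problem can be conveyed with a toy brute-force version: choose parcels within a budget so that the worst-case biodiversity loss, over a set of development scenarios, on the parcels left unprotected is as small as possible. The real model uses robust optimization reformulations solved by Gurobi/CPLEX and is sequential; the enumeration, costs, and scenario losses below are invented for a static illustration.

```python
from itertools import combinations

def robust_parcel_selection(costs, loss_scenarios, budget):
    """Brute-force min-max parcel choice.

    costs: purchase cost per parcel
    loss_scenarios: per scenario, the biodiversity loss each parcel
        suffers if it is left unprotected
    """
    n = len(costs)
    best, best_worst = frozenset(), float("inf")
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(costs[i] for i in subset) > budget:
                continue  # over budget
            # worst case over scenarios of the loss on unprotected parcels
            worst = max(sum(s[i] for i in range(n) if i not in subset)
                        for s in loss_scenarios)
            if worst < best_worst:
                best, best_worst = frozenset(subset), worst
    return best, best_worst
```

With two scenarios that each threaten a different cheap parcel, the min-max choice hedges by protecting both cheap parcels rather than betting on either scenario, which is exactly the behaviour a robust plan should exhibit.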
Procedia PDF Downloads 208
1643 A Theoretical and Experimental Evaluation of a Solar-Powered Off-Grid Air Conditioning System for Residential Buildings
Authors: Adam Y. Sulaiman, Gerard I.Obasi, Roma Chang, Hussein Sayed Moghaieb, Ming J. Huang, Neil J. Hewitt
Abstract:
Residential air-conditioning units are essential for quality indoor comfort in hot-climate countries. Nevertheless, because of their non-renewable energy sources and their ecologically unfriendly working fluids, these units are a major source of CO2 emissions in these countries. The utilisation of sustainable technologies is now essential to reduce the adverse effects of CO2 emissions by replacing conventional technologies. This paper investigates the feasibility of running an off-grid solar-powered air-conditioning bed unit using three low-GWP refrigerants (R32, R290, and R600a) to supersede conventional refrigerants. A prototype air-conditioning unit was built to supply cold air to a canopy connected to it. The system is powered by two 400 W photovoltaic panels, with battery storage supplying power to the unit at night. Engineering Equation Solver (EES) software is used to mathematically model the vapour compression cycle (VCC) and predict the unit's energetic and exergetic performance. TRNSYS software was used to simulate the electricity storage performance of the batteries, whereas IES-VE was used to determine the amount of solar energy required to power the unit. The article provides an analytical design guideline as well as a comprehensible process system. Combining a renewable energy source with an air-conditioning unit based on a VCC provides an excellent solution to the real problem of high energy consumption in warm-climate countries. Keywords: air-conditioning, refrigerants, PV panel, energy storage, VCC, exergy
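A basic sanity check in the exergetic analysis mentioned above is to compare the unit's actual COP against the Carnot (reversed-cycle) limit between the evaporating and condensing temperatures; their ratio is the second-law (exergetic) efficiency. A minimal sketch; the temperatures and COP in the example are illustrative, not the paper's results.

```python
def carnot_cop_cooling(t_evap_c, t_cond_c):
    # ideal (Carnot) cooling COP between evaporator and condenser temperatures
    t_evap = t_evap_c + 273.15  # convert deg C to K
    t_cond = t_cond_c + 273.15
    return t_evap / (t_cond - t_evap)

def second_law_efficiency(cop_actual, t_evap_c, t_cond_c):
    # exergetic efficiency: how close the real cycle comes to the Carnot limit
    return cop_actual / carnot_cop_cooling(t_evap_c, t_cond_c)
```

For instance, a unit achieving a COP of 3 between 7 °C evaporating and 45 °C condensing runs at roughly 40% of the Carnot limit, a typical figure for small vapour compression systems.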
Procedia PDF Downloads 175